Dataset fields: Unnamed: 0 (int64, 0 to 31.6k); Clean_Title (string, 7 to 376 characters); Clean_Text (string, 1.85k to 288k characters); Clean_Summary (string, 215 to 5.34k characters).
100
Impact on demersal fish of a large-scale and deep sand extraction site with ecosystem-based landscaped sandbars
The demand for marine sand in the Netherlands and worldwide is strongly increasing. In the Netherlands, approximately 24 million m3 of sand is used annually for coastal nourishments and for construction. An increase of annual coastline nourishments to 40-85 million m3 is expected to counteract the effects of future sea level rise. For the seaward expansion of the Port of Rotterdam, approximately 220 million m3 of sand was extracted between 2009 and 2013, with an average extraction depth of 20 m. In general, only shallow sand extraction, down to 2 m below the seabed and beyond the 20 m isobath, is allowed in the Netherlands. For Maasvlakte 2, however, the Dutch government permitted sand extraction deeper than the common 2 m, primarily to reduce the surface area of direct impact. Fish assemblages at the North Sea scale are mainly influenced by bottom water temperature, bottom water salinity, tidal stress and water depth. Furthermore, fish assemblages are linked to biotic and abiotic habitat characteristics and meso-scale bedforms. Ellis et al. found that species richness of infauna, epifauna and fish was greater in the silty troughs of sandbanks off the coast of the UK than on the crests. Large-scale sand extraction in the Yellow Sea was shown to have a negative impact on fish: a decline of more than 70% in the total number of fish and in the number of species was observed, together with direct and indirect damage to commercial fisheries. Since the start of aggregate extraction in the Eastern Channel Region, the majority of fish species have shown marked reductions in abundance, and draghead entrainment was identified as a possible cause. On the other hand, aggregate extraction may also lead to new habitats and may favour macrozoobenthos and fish. Ecosystem-based landscaping techniques are not commonly used to reduce the impact of sand extraction. In the UK, gravel-seeding techniques were tested to restore the seabed after gravel extraction. In the Maasvlakte 2 sand extraction site, two sandbars were artificially created by selective dredging, copying naturally occurring meso-scale bedforms to increase habitat heterogeneity and thereby possibly increasing post-dredging benthic and demersal fish species richness and biomass. In this study, we test the hypothesis that deep and large-scale sand extraction and ecosystem-based landscaping approaches lead to differences in fish assemblage, and we aim to answer the following questions: (1) Are there significant differences in fish species assemblage between the reference area and the sand extraction site, and within the extraction site? (2) Are there significant temporal differences in fish assemblage, macrozoobenthos and environmental variables during the monitoring campaign? (3) Which environmental variables determine the differences? (4) Are ecosystem-based landscaping techniques feasible and effective in influencing fish assemblages? The Maasvlakte 2 sand extraction site is situated in front of the Port of Rotterdam, the Netherlands, outside the 20 m depth contour. The sand extraction site is 2 km long and 6 km wide, with an average extraction depth of 20 m at an initial water depth of approximately 20 m below average sea level. Approximately 220 million m3 of sand was extracted between 2009 and 2013, of which 170 million m3 in the first two years. Two sandbars were created in the extraction site to investigate the applicability of ecosystem-based landscaping in sand extraction projects. One sandbar, parallel to the tidal current, was left behind in the seabed in spring 2010.
This parallel sandbar has a length of 700 m, a width at the crest of 70 m and slopes of 140 m length. The crest of the sandbar is located at a water depth of 30 m and the troughs are more than 40 m deep. In 2011, the second sandbar was completed, with an orientation oblique to the tidal current. Its length and width are similar to those of the parallel sandbar but, due to time constraints, the difference in depth between crest and trough is less pronounced. The crest is situated at a water depth of 28 m and the northern trough is 36 m deep. A narrow, 32 m deep trench separates the crest from the slope of the sand extraction site. The volume of each sandbar is approximately 1.25 million m3, with slopes of 1:7–1:10. During our surveys in 2011 and 2012, two trailing suction hopper dredgers were active in the centre of the sand extraction site, extracting approximately 2 million m3 of marine sand per week. The water depth there increased from 33 m to approximately 40 m, but the areas near the landscaped sandbars remained un-dredged after completion. A commercial fishing vessel, the Jan Maria (GO 29), was used, with a length of 23 m, less than 300 horsepower, equipped with a standard commercial 4.5 m beam trawl. The beam trawl was fitted with four tickler chains, five flip-up ropes and a diamond mesh size of 80 mm, and was fished at a speed of 4 knots. The ship's GPS system logged the position of the sampling locations, and water depth was determined with the ship's depth sounder. The maximum haul distance was one nautical mile in the reference area. Shorter hauls were planned within the sand extraction site; at the landscaped sandbars, hauls of approximately 700 m length were applied. Some of the hauls ended before the planned end coordinates because of difficulties with fishing inside the sand extraction site, due to large changes in seabed topography and sediment composition. In the surrounding reference areas, the fishing direction was generally perpendicular to the direction of naturally occurring seabed patterns, to ensure heterogeneous sampling of the crests and troughs of sand waves. In the sand extraction site, the fishing direction was generally parallel to the seabed structures, to enable comparisons between the different locations. We sampled in the reference area, at the slope of the sand extraction site, at two locations in the deep parts of the extraction site (the south-east and the north-west), in the troughs and at the crests of the sandbars.
The fish surveys were conducted on 14 July 2010, 27–29 July 2011 and 13–15 June 2012. In 2010, four reference fish samples and five sand extraction site samples were collected. In 2011, seven samples were collected in the reference area and thirteen samples in the extraction site. In 2012, four samples were collected in the reference area and thirteen in the extraction site. Fish were sorted and the length frequency distribution ‘to the cm below’ was determined. Abundance of fish was calculated by dividing the number of fish by the fished surface area and expressed as the number of fish per hectare. Species richness of a haul was determined, and published length–weight relationships were used to calculate the weight of fish. The average length of a fish species was calculated by summing the products of the size classes and their abundances and dividing by the total abundance. In 2012, stomach and intestine contents of on average 10 specimens of plaice, dab and shorthorn sculpin were taken from the south-eastern deep part and the troughs to obtain a rapid indication of fish diets. The macrozoobenthos and sediment sampling locations closest to the fish hauls were selected for linking the datasets. A boxcorer was used to sample macrobenthic infauna and sediment; this sampling was carried out by the Monitor Taskforce of the Royal Netherlands Institute for Sea Research on 29–30 June 2010, 2–5 May 2011 and 23–25 April 2012. The boxcorer surface area was 0.0774 m2, with a maximum penetration depth of 30 cm. Infaunal ash-free dry weight (AFDW) biomass was analysed by means of loss on ignition. A bottom sledge was used to sample larger macrobenthic fauna; these hauls were executed by the Institute for Marine Resources & Ecosystem Studies on 7–8 July 2010, 14–15 June 2011 and 6–7 June 2012. The sledge was equipped with a 5 mm mesh cage. On average, 15 m2 was sampled during each sledge haul. Wet weight of the larger fauna was measured directly. Sediment samples from the upper 5 cm were collected from untreated boxcorer samples and kept frozen until analysis. The sediment samples were freeze-dried, homogenised and analysed with a Malvern Mastersizer 2000 particle size analyser. Percentile sediment grain size and the grain size distribution among the different classes (clay, silt, mud, very fine sand, fine sand, medium sand and coarse sand) were measured as percentages of total volume. Sediment organic matter (SOM) was analysed in 2012 by means of loss on ignition, as a percentage of sediment mass. Areas surrounding the extraction site are labelled as reference area; Trecent denotes sampling directly after sand extraction, and T1 and T2 are used for one and two years after cessation. Fishing activity inside and outside the sand extraction site was derived from Vessel Monitoring through Satellite (VMS) data for the years of the fish surveys.
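For reference, the haul-level abundance and average-length calculations described above reduce to the following expressions (the notation is introduced here only for illustration and is not the authors'):

```latex
% n_i = number of fish in cm size class L_i, A = swept surface area of the haul (ha)
N_{\mathrm{ha}} = \frac{\sum_i n_i}{A},
\qquad
\bar{L} = \frac{\sum_i L_i\, n_i}{\sum_i n_i}
```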
Significance of differences in fish species composition between location and time after sand extraction was tested using analysis of homogeneity of multivariate dispersions (the betadisper function of the package ‘vegan’), followed by a permutation test and Tukey's HSD post hoc multi-comparison tests from the package ‘stats’. We applied Dufrêne-Legendre indicator species analysis, using the indval function of the package ‘labdsv’, to determine indicator species of the sub-locations. This analysis is based on the product of the relative frequency and the relative average abundance of a fish species at a certain sub-location. After checking normality and homogeneity of the univariate variables, a parametric two-way ANOVA with interaction, followed by Tukey's HSD test, was used. When the normality and homogeneity assumptions were violated, the non-parametric Kruskal–Wallis one-way multi-comparison test from the package ‘pgirmess’ was used to determine significant differences between locations. We applied non-metric multidimensional scaling (nMDS), using the metaMDS function in the package ‘vegan’ and based on Bray–Curtis dissimilarities of the fish abundance data, to visualize differences in fish assemblages in the extraction site and the reference area. Environmental variables were fitted linearly onto the ordination using the envfit function in the package ‘vegan’. We used the ordisurf function to plot a smooth surface onto the ordination in the case of non-linear relationships. When Spearman rank correlation coefficients between a pair of variables exceeded 0.9, one of the variables was dropped. We used the mantel.correlog function in the package ‘vegan’ to check for autocorrelation between the ecological distance matrix and the geographic distance matrix. For all analyses we used R: A Language and Environment for Statistical Computing, version 3.0.1.
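A minimal R sketch of this multivariate workflow is given below. The objects `fish` (a hauls × species abundance matrix), `env` (environmental variables per haul, including a hypothetical `D50` column and coordinates `lon`/`lat`) and `groups` (sub-location labels) are illustrative assumptions, not the authors' data or script; the function calls come from the ‘vegan’ and ‘labdsv’ packages named above.

```r
library(vegan)   # vegdist, betadisper, metaMDS, envfit, ordisurf, mantel.correlog
library(labdsv)  # indval

# Bray-Curtis dissimilarities of the fish abundance data (hauls x species matrix)
bray <- vegdist(fish, method = "bray")

# Homogeneity of multivariate dispersions between sub-locations, followed by a
# permutation test and Tukey's HSD post hoc comparisons
disp <- betadisper(bray, groups)
permutest(disp, permutations = 999)
TukeyHSD(disp)

# Dufrene-Legendre indicator species analysis per sub-location
iva <- indval(fish, groups)
summary(iva)

# nMDS ordination, linear environmental fits, and a smooth surface for a
# non-linearly related variable (here the hypothetical D50 column)
ord <- metaMDS(fish, distance = "bray", k = 2)
envfit(ord, env, permutations = 999)
ordisurf(ord ~ D50, data = env)

# Mantel correlogram: autocorrelation between ecological and geographic distances
mantel.correlog(bray, XY = cbind(env$lon, env$lat))
```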
In total, 32 fish species were identified. Fish assemblages in the reference area were dominated by dab, plaice, scaldfish, common sole, shorthorn sculpin and solenette. On average, 20.9 ± 12.2 kg WW ha−1 of fish and 13.1 ± 1.7 fish species haul−1 were caught. Species assemblage and biomass at the slope of the extraction site were not significantly different from the reference area. Plaice, dab, scaldfish, shorthorn sculpin and solenette were most abundant, and on average 20.2 ± 6.7 kg WW ha−1 and 14.2 ± 1.9 species haul−1 were caught. Turbot and brill were Dufrêne-Legendre indicator species for the slope of the extraction site due to a higher relative frequency and average abundance compared to the other sub-locations. At the crests of the sandbars, plaice, dab, sole, shorthorn sculpin and hooknose were most abundant and, on average, 93.8 ± 47.1 kg WW ha−1 and 11.4 ± 2.1 species haul−1 were caught. Tub gurnard was a Dufrêne-Legendre indicator species of the crests of the sandbars. Species assemblage at the crest of the oblique sandbar significantly differed from the assemblage of the reference areas in 2012. Biomass at the crests showed a nearly 4.5-fold increase compared to the reference area and was significantly different in the comparisons between crests and reference area. Plaice, dab, European flounder, sole and hooknose dominated the troughs of the sandbars. On average, 164.6 ± 205.1 kg WW ha−1 and 10.2 ± 2.5 fish species haul−1 were caught. The highest biomass values were found for fish samples 44 and 45 in the trough of the oblique sandbar, 522.1 and 484.0 kg WW ha−1 respectively. This is a significant 23-fold increase in biomass compared to the reference area in 2012. Increased biomass of white furrow shell was not detected in the accompanying infaunal sample, but in a sample from the trough west of the sandbar total infaunal biomass reached 61.9 g AFDW m−2, of which 24.9 g AFDW m−2 was A. alba. In 2012, species assemblage in the south-eastern deep area significantly differed from the troughs two years after cessation of sand extraction, whereas in 2011, and in troughs one year after cessation, significant differences were absent. This difference is also clearly visible in the nMDS ordination, as T2 samples from the trough ended up in the left region, surrounded by reference and deep north-western samples. Furthermore, fish biomass values in the troughs of the parallel sandbar differed significantly from the locations sampled one year after cessation, which harboured 175.1 and 413.9 kg WW ha−1. The bottom sledge samples accompanying the fish samples from the troughs of the parallel sandbar were characterised by a very high biomass of the serpent star Ophiura ophiura. The most dominant species in the south-eastern deep area of the sand extraction site were plaice, sole, European flounder, dab and shorthorn sculpin. European flounder and plaice are Dufrêne-Legendre indicator species there. In 2011 and 2012, fish biomass significantly increased 20-fold, reaching 413.9 and 409.5 kg WW ha−1, and 15 and 8.5 fish species were caught, respectively. Demersal fish biomass in the south-eastern deep area remained almost as high in 2012 as in 2011, but the abundance of plaice decreased from 3520.1 ind. ha−1 in 2011 to 1771.4 ind. ha−1 in 2012. In 2012, the length of plaice was larger than in 2011, 23.4 cm instead of 15.5 cm, which compensated for the lower abundance observed in 2012. A similar trend was found in the troughs; in 2012, the average length of plaice was 20.8 cm instead of 15.4 cm in 2011. The average length of plaice in the reference area in 2010, 2011 and 2012 was 17.82, 15.29 and 17.11 cm, which means that the deep areas of the extraction site attracted larger plaice specimens. The length of plaice in the reference area in 2011 was the smallest of the three years, which may explain the smaller length of plaice in the deep areas in 2011. No differences in length of the other dominant fish species were observed. Biomass in the north-western deep area remained relatively low but just above the reference level, 25.7 ± 15.5 kg WW ha−1. Species richness was significantly lower in the north-western deep area, 9 species haul−1 compared to 13.1 species haul−1 in the reference area. The most dominant species were plaice, dab, shorthorn sculpin, sole and hooknose. Fish assemblage and environmental dissimilarities revealed a significant association. Percentage coarse sand was dropped from the analysis because of collinearity with D50; percentage medium and fine sand and mud content were dropped because of collinearity with very fine sand. Mantel correlogram analysis showed that autocorrelation was below the significance level. All nMDS ordinations had stress values below 0.07, which indicates an excellent goodness of fit. For the 2010–2012 survey periods, the ordination showed a significant association with time after the cessation of sand extraction and water depth. Median grain size (D50), the fraction of very fine sand and infaunal white furrow shell biomass were just above the significance level. In 2010, only water depth showed a significant association with the ordination. In 2011, the very fine sand fraction, time after the cessation of sand extraction, water depth and D50 showed significant associations with the ordination. In 2012, infaunal white furrow shell biomass, water depth and the fraction of very fine sand showed a strong association with the ordination. The association of the ordination with time after the cessation of sand extraction was just above the significance level.
No significant associations were found for total epifaunal biomass or for specific species sampled with the bottom sledge, e.g. scavenging brittle stars and the predatory flying crab. Stomach and intestine contents of plaice were dominated by undigested crushed white furrow shell remains, dab stomachs and guts were filled with remains of brittle stars, and shorthorn sculpin stomachs were filled with whole swimming crabs. We observed significant changes in fish assemblage in the sand extraction site, which showed a strong association with sediment composition and white furrow shell biomass. For the fraction of very fine sand, significant differences were found between location and time after sand extraction. In 2011, a significant difference was found between the reference area and the troughs. In 2012, the fraction of very fines significantly differed between the reference area and the south-eastern deep area. In general, the fraction of very fines decreased at the crests of the sandbars and increased in the troughs and deep areas of the extraction site. D50 at the crests of the parallel sandbar increased from 165 μm in 2010 to 304 μm in 2012, and the very fine sand fraction decreased from 22.5% in 2010 to 6.1% in 2012. We observed the opposite for the sediment in the trough of the parallel sandbar, where D50 decreased from 321.8 to 133.9 μm. Significant differences in white furrow shell biomass were found between locations and years. In 2011, biomass of white furrow shell was significantly higher at the crests of the sandbars compared to the reference areas. White furrow shell was virtually absent in the reference areas and at the slope of the extraction site. After two years, white furrow shell biomass had increased to 20.6 and 97.4 g AFDW m−2 in the troughs of the sandbars and the south-eastern deep area, respectively. Based on Vessel Monitoring through Satellite data, seabed-disturbing fishing activity in the sand extraction site was virtually absent in 2010. In 2011, fishing activity mainly occurred in the northern area of the northern sand extraction site. In 2012, fishing activity in the sand extraction site exceeded the fishing activity level of the reference area. Furthermore, fishing activity in 2012 almost equalled the fishing activity level before sand extraction. Fish biomass significantly increased 20-fold in the south-eastern deep area of the Maasvlakte 2 sand extraction site and 5-fold on the crests of the landscaped sandbars compared to the reference areas and recently extracted areas. This increased biomass is associated with a significant increase in biomass of infaunal white furrow shell. In the north-western deep area, biomass values remained relatively low but just above the reference level, 25.7 ± 15.5 kg WW ha−1, probably due to ongoing sand extraction activities and the absence of an increase in white furrow shell biomass. The highest species richness, 14.2 species haul−1, was found at the slope of the extraction site, with turbot and brill being indicator species. At a length of 20 cm, these species are known to forage on mobile prey while relying heavily on eyesight. Rather than foraging under more turbid circumstances, the edge of the extraction site may be more suitable for these species, while they still benefit from the increased fish biomass in the surroundings. Species richness in the sand extraction site was lower compared to the reference area. Inside the extraction site, on average, 10.5 species per haul were found compared to 13.1 species haul−1 in the reference areas. The lowest species richness was found in the north-western deep area, 9.0 species haul−1, probably again due to continuing sand extraction.
Comparisons of species richness between the reference area and locations in the sand extraction site are potentially biased due to differences in sampled surface area. Based on a study in an extraction site in France, Desprez stated that sand extraction may in the long term create new habitats, such as the presence of boulders and a higher heterogeneity of the sediment, and favour an increase in the richness of benthic fauna and fish. A qualitative analysis of the fish assemblage was, however, lacking. In line with our results, large-scale sand extraction in the Yellow Sea and the resulting bottom disturbance were shown to have a more pronounced negative impact on fish species richness: a decline of more than 70% in the total number of fish and in the number of species was observed. In the Eastern Channel Region, the majority of fish species have shown marked reductions in abundance since the start of aggregate extraction, and draghead entrainment was identified as a possible cause. Next to differences in biomass and species richness, significant differences in fish species assemblage between the reference area and the extraction site were found. Dab and plaice were the most abundant species in the reference area, whereas in the sand extraction site plaice was more abundant. This difference is again possibly due to the increase in white furrow shell biomass, which may be a preferred prey item of plaice. In a smaller sand extraction site on the Belgian Continental Shelf with shallow sand extraction, dab was also more abundant than plaice, although a clear change in fish distribution was not observed, possibly due to continuous sand extraction or less pronounced differences in bathymetry and sediment characteristics. Epibenthos in the Belgian extraction site was dominated by the predatory common starfish Asterias rubens and the scavenging serpent star Ophiura ophiura. The fish assemblage differed significantly between the crests and troughs of the sandbars, and tub gurnard was an indicator species of the crests, which may be induced by differences in the macrozoobenthic assemblage. The difference in fish assemblage is a first indication of the applicability of landscaping techniques to induce heterogeneity of the seabed, although it remains difficult to draw a strong conclusion due to the lack of replication of the experiment. Comparable differences in species assemblage also occur on the crests of natural sandbanks, where early-life-history stages of the lesser weever were found to be more abundant compared to the troughs. Several environmental variables may be responsible for the differences in fish species assemblage. In general, fishing activity may also play a role but, in 2012, activity in the extraction site was almost equal to that in the reference area. Therefore, differences in fish assemblage and biomass between dredged and non-dredged areas are not induced by differences in fishing intensity. In 2010, only water depth showed a significant association with the ordination. In 2011, the very fine sand fraction, time after the cessation of sand extraction, water depth and D50 showed significant associations with the ordination. In 2012, infaunal white furrow shell biomass, water depth and the fraction of very fine sand showed a strong association with the ordination. In 2011, observed differences in the ordination were not yet associated with infaunal white furrow shell biomass, possibly due to the gap between the two sampling activities. Macrozoobenthos sampling occurred at the end of June 2010, the start of May
2011 and at the end of April 2012 while fishing occurred mid-July 2010, at the end of July 2011 and mid-June 2012.Sell and Kröncke concluded that fish assemblages on the Dogger Bank were linked to both biotic and abiotic habitat characteristics, abundance of specific fish species could be linked with individual in- and epifauna species.In the Western Baltic, white furrow shell comprised 24% of the diet of plaice and a comparison of present-day diet and the diet of plaice at the beginning of the 20th century suggested that the preponderance of polychaetes has increased and that of bivalves decreased.White furrow shell is a deposit feeding bivalve and tends to prefer fine-grained sediments with a median grain size between 50 and 250 μm and a mud content of 10–50%.An increase of Tellinid shellfish, plaice and common sole was found at deposition areas around aggregation sites in France.We found that stomach and intestines of plaice were mainly filled with undigested remains of white furrow shell.Stomach content analysis of the other fish species also revealed specific preference of prey items but this is not confirmed by our statistical analysis.Prey items of dab were dominated by brittle stars, a similar preference was also found by other researchers.All shorthorn sculpin stomachs contained swimming crabs and this preference was also found in other studies.Time after the cessation of sand extraction is an important variable.Directly after sand extraction, fish biomass values are similar to the reference area.Fish biomass values increased in 2011 and 2012.White furrow shell biomass is showing the same pattern, median grain size in the extraction site is decreasing and the fraction of very fines in the sediment is rising.For shallow sand extraction in the North Sea, recovery time of benthic assemblages is estimated to be six years.A study in a deep temporary sand extraction site with 6.5 million m3 sand extracted, an initial water depth of 23 m and extraction depth between 5 and 12 m revealed a sedimentation rate of 3 cm per year.Thatje et al. found a continuous sedimentation rate of around 50 cm per year inside a 700 m wide and 48 m deep natural seafloor crater 20 miles off the coast of Germany with an initial water depth of 34 m. 
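As a rough illustration only (our arithmetic, not the authors'), applying these two reported sedimentation rates to the roughly 20 m of deepening at Maasvlakte 2 gives:

```latex
t_{\text{backfill}} \approx \frac{20\ \text{m}}{0.03\ \text{m\,yr}^{-1}} \approx 670\ \text{yr}
\qquad \text{or} \qquad
t_{\text{backfill}} \approx \frac{20\ \text{m}}{0.5\ \text{m\,yr}^{-1}} = 40\ \text{yr}
```

Either bound is consistent with the expectation, discussed below, that backfilling will take decades or longer.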
Mud content inside the crater increased from 5% in 1963 up to 40% in 1995 and the benthic community was characterized as an urchin – brittle star association.Considering these sedimentation rates, backfilling of the Maasvlakte 2 sand extraction site may take decades or longer, resulting in a prolonged and more pronounced effect on macrozoobenthos and fish assemblages in the area.The increase in length of plaice in the troughs and south-eastern deep area may be the result of a residing cohort within the extraction site, plaice with body size 16 cm is 1.5 years old and with body size 22 cm is 2.5 years old.The increase in length of plaice may also be related to prey-size preferences of juvenile plaice.In one year specimens of Abra alba reach lengths of 12–14 mm, in two years 13–16 mm and maximum length of 20–25 mm.All sediment parameters remained in the same range except SOM values, which was on average 3.9% in the south-eastern deep area and 6.0% in the troughs of the parallel sandbar.However, the statistical analysis revealed no significant association of the ordination in 2012 with SOM.Dissolved oxygen levels are influenced by factors such as; SOM, water depth, temperature and water circulation.The increase in length may also be induced by avoidance of the south-eastern deep area by smaller plaice, which are more sensitive to reduced DO levels.DO levels may even be more reduced in the troughs resulting from the specific bathymetry and greater water depth, which may have led to the significant decrease in biomass values from the 23-fold increase in 2011 down to reference level and significant change in species assemblage in 2012.Fish biomass was found to be significantly correlated with oxygen concentration and a reduction in fish biomass was observed when oxygen concentration in the bottom water dropped below 3 mg l−1.Next to the difference in SOM and possible differences in DO levels, differences in macrozoobenthic assemblage were also present.The accompanying bottom sledge samples of the fish samples from the troughs of the parallel sandbar were characterised by very high biomass values of serpent star Ophiura ophiura.Studying the distribution of demersal fish for two years is not sufficient to understand the final impact of deep and large-scale sand extraction on demersal fish.Conclusions on sedimentary evolution are based on a relatively small sample size and short monitoring period.More sediment samples from the 2010–2012 surveys will be analysed in future work.On-going sedimentation and a rise of mud content up to 40% can be expected.The highest encountered mud content in the second Maasvlakte extraction site was 23.3% in the south-eastern deep part in 2012.Fish surveys were conducted mid-July 2010, at the end of July 2011 and mid-June 2012 while the oxygen concentration reached a minimum at the end of the summer and a maximum in May.The occurrence of temporal hypoxia and possible detrimental effects on fish and macrozoobenthos cannot be excluded with our data.We recommend monitoring of demersal fish and macrozoobenthic assemblage and accompanied sediment variables for a longer period, at least for a period of six years, i.e. 
the estimated recovery time after shallow sand extraction. Even six years of monitoring may be insufficient, given the larger differences resulting from large-scale and deep sand extraction. An increase of annual coastline nourishments to 40-85 million m3 per year is expected to counteract the effects of future sea level rise. For the seaward harbour extension of Maasvlakte 2, the Dutch government permitted deep and large-scale sand extraction to reduce the surface area of impact. It is questionable whether the seabed of the Maasvlakte 2 extraction site will return to its original state within decades. We showed that deep sand extraction leads to significant differences in fish biomass between the deep extraction site and the surrounding reference area. Our findings indicate that ecosystem-based landscaping techniques are feasible and effective in influencing fish assemblages. These findings should be included in future design and permitting procedures for large-scale sand extraction projects.
For the seaward harbour extension of the Port of Rotterdam in the Netherlands, approximately 220 million m3 sand was extracted between 2009 and 2013. In order to decrease the surface area of direct impact, the authorities permitted deep sand extraction, down to 20m below the seabed. Biological and physical impacts of large-scale and deep sand extraction are still being investigated and largely unknown. For this reason, we investigated the colonization of demersal fish in a deep sand extraction site. Two sandbars were artificially created by selective dredging, copying naturally occurring meso-scale bedforms to increase habitat heterogeneity and increasing post-dredging benthic and demersal fish species richness and biomass. Significant differences in demersal fish species assemblages in the sand extraction site were associated with variables such as water depth, median grain size, fraction of very fine sand, biomass of white furrow shell (Abra alba) and time after the cessation of sand extraction. Large quantities of undigested crushed white furrow shell fragments were found in all stomachs and intestines of plaice (Pleuronectes platessa), indicating that it is an important prey item. One and two years after cessation, a significant 20-fold increase in demersal fish biomass was observed in deep parts of the extraction site. In the troughs of a landscaped sandbar however, a significant drop in biomass down to reference levels and a significant change in species assemblage was observed two years after cessation. The fish assemblage at the crests of the sandbars differed significantly from the troughs with tub gurnard (Chelidonichthys lucerna) being a Dufrêne-Legendre indicator species of the crests. This is a first indication of the applicability of landscaping techniques to induce heterogeneity of the seabed although it remains difficult to draw a strong conclusion due the lack of replication in the experiment. A new ecological equilibrium is not reached after 2 years since biotic and abiotic variables are still adapting. To understand the final impact of deep and large-scale sand extraction on demersal fish, we recommend monitoring for a longer period, at least for a period of six years or even longer. © 2014 The Authors.
101
Longitudinal associations between parents’ motivations to exercise and their moderate-to-vigorous physical activity
Physical activity is associated with reduced risk of a variety of health outcomes, including heart disease, stroke, type 2 diabetes, several forms of cancer, and depression.Similarly, physical inactivity has been shown to be detrimental for health and well-being and has been identified as a source of great economic cost globally.As such, global physical activity guidelines recommend that adults undertake at least 150 min per week of moderate intensity activity, including additional muscle-strengthening activities at least twice a week, alongside the general aim of reducing their sedentary time.However, evidence suggests that between 15% and 43% of adults in western countries do not meet physical activity recommendations.From a public health perspective, it is clear that efforts need to be made to encourage more adults to be regularly active.Thirty-five percent of adults in the UK have dependent children.Parents of young children have been shown to engage in less moderate-to-vigorous physical activity than similar aged adults without children, with a noticeable decrease in physical activity at the point of transition to parenthood.Engaging in regular physical activity may be particularly challenging for parents with dependent children due to increased demands on their time, financial burdens, and a change in priorities compared to before parenthood.Yet, promoting parental engagement in physical activity could be beneficial for both parents and children in terms of health benefits, parenting behaviour, and energy levels.Additionally, if parents are active then they model active behaviour to their children, with some evidence suggesting a weak positive association between parent physical activity and child activity.A recent review of reviews highlighted that individual level variables are the most consistent correlates of physical activity therefore indicating that interventions should be either tailored to specific populations or should encompass ways of manipulating these variables to increase physical activity.Self-determination theory is a framework through which the motivational processes that underpin physical activity can be investigated.Within SDT, quality of motivation is placed upon a continuum whereby different types of motivation differ in the extent to which they are autonomous or controlled.Three types of motivation are said to be more autonomous in nature: Intrinsic motivation, the most autonomous form of motivation characterised by an individual inherently enjoying or gaining satisfaction from the activity; integrated regulation, when the behaviour aligns with an individual’s identity; and identified regulation, when an individual consciously values the behaviour.More controlled types of motivation are introjected regulation, when behaviour is controlled by self-imposed sanctions, such as shame, pride, ego, or guilt and external regulation, the most controlled form of motivation when behaviour is driven by external factors such as rewards, compliance and punishments.Additionally, a lack of either autonomous or controlled forms of motivation is classed as amotivation.In the context of physical activity, effortful and persistent behaviour is more likely to occur when an individual’s motivation is autonomous as opposed to controlled.Cross-sectional evidence consistently shows autonomous motivation for exercise to be positively associated with self-reported and accelerometer-assessed physical activity in healthy adults.Controlled motivation is generally shown to have little 
cross-sectional association with self-reported and accelerometer-assessed physical activity behaviour.However, when analysed separately, introjected regulation more frequently shows a positive cross-sectional association with physical activity whereas external regulation is more commonly negatively associated with physical activity.In addition to the cross-sectional evidence, there are a small number of studies that have examined, and provided evidence, for a small to moderate positive association between autonomous motivation and self-reported physical activity over periods of time ranging from 1 to 6 months.In support of these longitudinal associations, qualitative evidence aligns with the theoretical tenet that movement through the behavioural regulation continuum towards more autonomous motivation is central to physical activity adherence.To date, studies have shown no evidence for a longitudinal association between controlled motivation and physical activity.The limited number of studies assessing the associations between motivation and physical activity over time have used autonomous and controlled composites and not disaggregated the types of behavioural regulation in statistical models, thus limiting the study of the roles of qualitatively different types of motivation.Additionally, these longitudinal studies have also relied on self-reported measures of physical activity behaviour which are prone to bias.Therefore, further investigation of the longitudinal associations between behavioural regulation and physical activity, using more reliable behavioural estimates is warranted.Despite evidence indicating lower levels of physical activity in parents compared to the wider adult population, theoretical models have been seldom used to understand physical activity during parenthood.The quality of motivation for exercise may be particularly pertinent to parents’ physical activity due to extensive competing demands that may make converting some forms of motivation to behaviour more challenging for parents than non-parents,In support of this, in a study involving 1067 parents of children aged 5–6 years old, only identified regulation showed evidence of a cross-sectional association with MVPA after adjustment, suggesting that, for parents of younger children, identifying with personally meaningful and valuable benefits of exercise may be the strongest motivational driver.However, there are no studies of the motivation-physical activity associations amongst parents of older children or evidence for any longitudinal associations between motivation and physical activity behaviour during parenthood.The aims of this research were to 1) examine the cross-sectional associations between the behavioural regulations set forward in SDT and objectively-estimated physical activity in parents of children aged 8–9 years old and then two years later when the same child was 10–11 years old and 2) assess the longitudinal associations between behavioural regulation type and accelerometer-assessed physical activity in parents over a five-year period.The current analyses used data from the B-Proact1v project.The broader project is a longitudinal study exploring the factors associated with physical activity and sedentary behaviour in children and their parents throughout primary school.Briefly, data collection was conducted at three timepoints: between January 2012 and July 2013 when all participants had a child aged five to six years, between March 2015 and July 2016 when the same child was aged eight to nine 
years and between March 2017 and May 2018 when the same child was in year 6.A total of 57 schools consented to participate at time 1 and were subsequently invited to participate at times 2 and 3.Forty-seven schools participated at time 2 and 50 participated at time 3.Across all three timepoints, data were collected from 2555 parents from 2132 families: 1195 parents were involved at time 1, 1140 at time 2, and 1233 at time 3.A total of 546 parents took part across two timepoints and 246 across three timepoints.The study received ethical approval from the University of Bristol ethics committee, and written consent was received from all participants at each phase of data collection.Characteristics.Parents completed a questionnaire which included information about their date of birth, gender, ethnicity, height, weight, education level, and number of children.BMI was calculated from their self-reported height and weight.Parents also reported their home postcode, and this was used to derive Indices of Multiple Deprivation, based upon the English Indices of Deprivation.Higher scores indicate areas of higher deprivation.Motivation to exercise.The Behavioural Regulation in Exercise Questionnaire was used to assess motivation to exercise.The BREQ-2 consists of 19-items each assessing one of five forms of behavioural regulations: intrinsic, identified, introjected, external, and amotivation.Due to difficulties in empirically distinguishing between identified and integrated regulation, the BREQ-2 does not assess integrated regulation.Participants rated each item on a 5-point Likert scale ranging from 0 to 4.In the current study, the BREQ-2 subscales had good internal consistency in both the cross sectional and longitudinal samples.Physical activity.Participants were asked to wear a waist-worn ActiGraph wGT3X-BT accelerometer for five days, including two weekend days.Accelerometer data were processed using Kinesoft in 60-s epochs.In line with recommendations for monitoring habitual physical activity in adults, analysis was restricted to participants who provided at least three days of valid data including at least 1 weekend day.A valid day was defined as at least 500 min of data, after excluding intervals of ≥60 min of zero counts allowing up to 2 min of interruptions.The average number of MVPA minutes per day were derived for each participant using population-specific cut points for adults).The data analysis consisted of cross-sectional analyses of time 2 and time 3 data and a longitudinal analysis including data from all three timepoints.All analyses were conducted at the parent level, so where two or more parents/guardians from the same family were included in the project, each parent was treated as a separate participant.Participants were included in the cross-sectional analysis if they had valid accelerometer data, and BREQ-2 responses with not more than 1 missing item per subscale.Participants were included in the longitudinal analysis if they met the above criteria for at least one timepoint.Following recommendations for dealing with missing data, and in order to reduce bias and increase statistical power, multiple imputation using chained equations was used to impute missing data for participating parents cross-sectionally.The imputation models included parent gender, age, ethnicity, BMI, education level, IMD score, number of children, MVPA and the five subscales of behavioural regulation.For each, 20 imputed datasets were created using 20 cycles of regression switching and estimates were 
combined across datasets using Rubin’s rules.Five independent variables, reflecting the five motivation types, were treated as continuous variables.In the cross-sectional analyses, linear regression models were used to examine associations between behavioural regulation and mean MVPA minutes per day.To identify longitudinal associations between the motivation variables and physical activity, we used a multi-level model to capture how MVPA changes over time.We also included interaction terms between behavioural regulation variables and age to explore whether change in MVPA over time differs with changes in behavioural regulation.In line with evidence for their influence on MVPA in adults, all analyses were adjusted for age, gender, number of children in the household and IMD score.In all models, robust standard errors were used to account for school clustering in the study design, and parents were clustered within families, to account for family-level similarities.All analyses were performed in Stata version 15.At both time 2 and time 3, missing data among participating parents was minimal, and the distributions of observed and imputed characteristics were similar.At time 2, the sample consisted of 925 participants of whom 72% were female, with a mean age of 41.34 years, and mean BMI of 25.83 kg/m2.At time 3, the sample consisted of 891 participants, of whom 73% were female, with a mean age of 43.32 years, and mean BMI of 25.83 kg/m2.Mean IMD was consistent across time.Average daily MVPA increased slightly from 50.03 min at time 2–52.28 min at time 3.At both timepoints, means and standard deviations for all motivation variables were similar, with participants reporting higher levels of autonomous motivation than controlled motivation for exercise, with levels of identified regulation being higher than intrinsic regulation.Full descriptives for time 1 are reported elsewhere.A total of 2374 parents had valid accelerometer data for at least one timepoint.Of these, 463 had valid accelerometer data for two timepoints and 185 for three timepoints.Twenty-five parents provided no identifiable information at any timepoint and so were excluded from the analyses.Due to the study design, school attrition accounted for 244 families not taking part at time 2 and 167 families at time 3.At the family level, children moving to schools not involved in the project accounts for the drop out of 253 families.Further, as the same parent was not required to participate at every timepoint, in 227 families who had been involved at time 1, a different parent participated at time 2.A total of 302 families had a different parent participating at each timepoint.In these cases, all parents involved at any timepoint are included in the analysis.Consistent with baseline findings, fully adjusted cross-sectional regression models showed a positive association between identified regulation and MVPA, with a one-unit increase in identified regulation associated with a 5.4-min and 4.9 min increase in MVPA per day at time 2 and time 3, respectively.There was no evidence for an association between any other type of behavioural regulation and MVPA at time 2.At time 3, introjected regulation was negatively associated with MVPA, with a one-unit increase in introjected regulation associated with a 3.2-min decrease in MVPA per day.There was no evidence for an association between the other types of behavioural regulation and MVPA at time 3.The full multi-level model explores how parent MVPA changes across the three timepoints in 
relation to behavioural regulation.Daily MVPA increased by an average of 0.60 min per year.Identified regulation was positively associated with MVPA, with a one-unit increase in identified regulation being associated with an average of 3.96-min more MVPA per day per year.Due to the association between time and MVPA in model 1, in model 2 we investigated whether change in MVPA over time was associated with behavioural regulation.At time 1, identified regulation remained positively associated with MVPA with a one unit increase in identified regulation being associated with an 8.73-min increase in average daily MVPA per year, however there was no association with change in MVPA over time.Introjected regulation was not associated with MVPA at time 1 but had a small negative association with change in MVPA over time, with a one unit increase in introjected regulation being associated with an average decline in daily MVPA of 0.52 min per year.This study presents the first longitudinal analysis of the association between parents’ exercise motivation and accelerometer-estimated physical activity.The analyses indicate that high levels of introjected regulation can lead to a small decrease in MVPA over time.Substantiating the baseline findings from the B-Proact1v cohort, the cross-sectional analyses also show that identified regulation is consistently associated with higher levels of MVPA but was not associated with change in MVPA over time.Our cross-sectional findings corroborate previous evidence showing that identified regulation is the type of behavioural regulation most strongly associated with physical activity behaviours in adults.These also extend the baseline findings of the B-Proact1v cohort to show that, across all three phases of the project, parents who are motivated to be physically active due to personal value engage in higher levels of MVPA than parents who are motivated in other ways.However, despite being consistently associated with higher levels of physical activity, the multi-level models suggest that identified regulation is not associated with change in physical activity over time.This is consistent with the theoretical assumption that more autonomous motivation is associated with long-term behavioural engagement and could indicate that identified regulation is more pertinent to behavioural maintenance, as behaviours that align with an individual’s personal values are more likely to be sustained.Whilst the motivational continuum proposed within SDT suggests that intrinsic motivation is the strongest motivational driver of behaviour, it has been recognised in the wider literature that behaviours such as exercise or being active might be more strongly driven by what can be achieved through doing it rather than inherent enjoyment of the activity itself.In the present study, parents’ endorsement of intrinsic and identified types of motivation were similar, emphasising that enjoyment and satisfaction are important sources of motivation.However, the lack of association between intrinsic motivation and physical activity suggests that, for parents, enjoyment of physical activity is not sufficient to lead to action, and that personally valuing activity is a more stable motivation factor for underpinning behaviour.One explanation for this could be related to the parental role, where activities that do not directly align, and potentially compete, with core parenting duties are not prioritised and may even result in feelings of guilt and selfishness.The present cross-sectional findings 
show that when the benefits of physical activity are perceived to be personally relevant and valuable, engagement in physical activity is greater. Therefore, valuing the benefits of exercise as an individual and/or as a parent may be central to being a more physically active parent. The cross-sectional findings showed a differential change in the association between introjected regulation and parents' MVPA across the three timepoints, moving from a small positive association at time 1 to an increasingly negative association across times 2 and 3. The longitudinal results further highlight the impact of introjected regulation on behavioural outcomes, with a negative association between introjected regulation and change in MVPA over time. Within SDT, it is proposed that more controlled forms of motivation are detrimental to both behavioural and well-being outcomes, yet previous longitudinal studies have found no evidence for this association, and cross-sectional research has indicated that introjected regulation may be associated with higher levels of physical activity. These findings offer the first evidence of a long-term negative impact of introjected regulation on physical activity behaviour. Although further longitudinal research is warranted, given the universality of SDT we would anticipate similar associations in the wider adult population, with previous studies finding no association because they aggregated introjected and external regulations into a single controlled motivation variable. However, there may also be factors unique to parents that mean motivations grounded in feelings of guilt and shame have an increasingly negative effect on physical activity as their child gets older. Extending previous studies, we explored the individual association of each type of behavioural regulation with change in physical activity behaviour. A potential explanation for the negative effect of introjected regulation could be its association with maladaptive social comparisons. For example, parents may feel envy and resentment towards other parents who appear to be managing to be physically active, with such feelings having a more detrimental effect on behavioural engagement once the child is older and the parent might have more discretionary time. In terms of internal comparisons, parents of young children may accept that they cannot be physically active in the short term, hoping that they will be more active as their child gets older. If parents do not meet these self-imposed expectations when their child is older, this could result in a cyclical relationship between failure to be active and guilt. For parents, these effects might be particularly salient due to the competing demands on their time. Evidence suggests that, for adults, increasing daily MVPA by 5–10 min can have clinically meaningful health benefits. With this in mind, the data presented in this paper indicate that strategies encouraging parents to find personal value, relevance and importance in being physically active, whilst ensuring that feelings of guilt or shame are not induced, may be particularly important for promoting increased physical activity engagement, with the potential for a meaningful impact on health outcomes. This requires environments that support rather than thwart the three psychological needs of autonomy, competence and relatedness. Such environments are characterised by the provision of choice, optimal challenge, and strong connections with others. Additionally, and particularly relevant for the promotion of identified regulation, may be the provision of
a personally relevant and valuable rationale for being active, which has been shown to promote long-term behavioural engagement.Qualitative evidence suggests that long-term goals can be abstract and undermine physical activity and, as such, physical activity messages should emphasise that being active can also be immediately gratifying and help people cope with or manage their daily goals.Research on the content of exercise goals shows that goals such as health, social affiliation and development of exercise competence or skills are associated with greater autonomous motivation and psychological well-being.For parents, effective messages could include promoting physical activity as an important family activity that provides the opportunity to spend time together and interact with others, increase energy levels, and to relax and escape daily pressures.However, further qualitative work is required to inform the development of specific health messaging that aligns with identified regulation.The strengths of the study lie in the theoretically-grounded approach to motivation, longitudinal data collected from the same cohort on three occasions over a five-year period and the use of accelerometers to estimate levels of MVPA which when combined have made a novel contribution to the literature.However, limitations should be acknowledged.First, the sample was largely female, and therefore is more representative of mothers than parents in general.However, given the universality of SDT, and supporting evidence indicating no gender differences in behavioural regulation as measured through BREQ-2, we would not anticipate significant differences in findings in a more male-dominated sample.Additionally, participating parents were generally more active than average in the UK and had high levels of self-determined motivation.This may limit the generalisability of the findings to the wider parent population and may also have limited the extent to which we can observe change over time.Whilst we did not have the power in the present study, in future researchers may want to look at the differences in motivation between parents with low physical activity levels, and those with high physical activity levels.A further limitation is that the BREQ-2 questionnaire used to measure motivation does not assess integrated regulation which sits on the SDT continuum of motivation between identified and intrinsic motivation types and represents the assimilation of the behaviour with one’s values, goals, and sense of self.This study is the first to assess the longitudinal associations between parent’s behavioural regulation to exercise in relation to accelerometer-estimated physical activity.The results indicate that motivation grounded in the personal meaning and value of exercise is associated with higher levels of MVPA.Additionally, the results suggest that being motivated by feelings of guilt and shame impact negatively on MVPA over time.Therefore, interventions that promote greater enjoyment, personal relevance and value, whilst also ensuring that guilt is not promoted, may offer promise for the facilitation of greater long-term physical activity engagement in parents.We have no competing interests to report.This work was funded by the British Heart Foundation.
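The models above were fitted in Stata 15 with robust standard errors; purely as an illustrative analogue (not the authors' code), the longitudinal analysis could be sketched in R as follows, using hypothetical variable names for daily MVPA, years since baseline, the five BREQ-2 subscale scores, the covariates, and family and school identifiers:

```r
library(lme4)

# Multi-level model of mean daily MVPA across the three timepoints, with the
# five behavioural regulations entered separately (not as composites) and a
# random intercept per family to account for family-level similarity.
# School clustering, handled with robust standard errors in the original
# analysis, is approximated here with an additional random intercept.
# (Missing data were handled with multiple imputation in the original analysis.)
m1 <- lmer(mvpa ~ years + intrinsic + identified + introjected + external +
             amotivation + age + gender + n_children + imd +
             (1 | family_id) + (1 | school_id),
           data = parents)

# Model 2: regulation-by-time interactions, testing whether change in MVPA
# over the five years differs with each type of behavioural regulation.
m2 <- update(m1, . ~ . + years:(intrinsic + identified + introjected +
                                  external + amotivation))
summary(m2)
```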
Objectives: This study is the first examination of the longitudinal associations between behavioural regulation and accelerometer-assessed physical activity in parents of primary-school aged children. Design: A cohort design using data from the B-Proact1v project. Method: There were three measurement phases over five years. Exercise motivation was measured using the BREQ-2 and mean minutes of moderate-to-vigorous physical activity (MVPA) were derived from ActiGraph accelerometers worn for a minimum of 3 days. Cross-sectional associations were explored via linear regression models using parent data from the final two phases of the B-Proact1v cohort, when children were 8–9 years-old (925 parents, 72.3% mothers) and 10 to 11 years-old (891 parents, 72.6% mothers). Longitudinal associations across all three phases were explored using multi-level models on data from all parents who provided information on at least one occasion (2374 parents). All models were adjusted for gender, number of children, deprivation indices and school-based clustering. Results: Cross-sectionally, identified regulation was associated with 5.43 (95% CI [2.56, 8.32]) and 4.88 (95% CI [1.94, 7.83]) minutes more MVPA per day at times 2 and 3 respectively. In the longitudinal model, a one-unit increase in introjected regulation was associated with a decline in mean daily MVPA of 0.52 (95% CI [-0.88, −0.16]) minutes per year. Conclusions: Interventions to promote the internalisation of personally meaningful rationales for being active, whilst ensuring that feelings of guilt are not fostered, may offer promise for facilitating greater long-term physical activity engagement in parents of primary school age children.
102
Resilience through risk management: cooperative insurance in small-holder aquaculture systems
Aquaculture is one of the most diverse and fastest growing food production sectors on the planet.It is also a highly heterogeneous sector, with farm enterprises ranging from low-input, land-based ponds maintained by individual subsistence farmers to high-input coastal cages owned by transnational corporations.Economic risks are ubiquitous across the sector: aquaculture is inherently risky and often has higher variability in both yields and revenues relative to other food production systems.This is due in part to the growth of aquatic organisms being highly sensitive to changes in environmental conditions, and to the immaturity of the technology used by the majority of the aquaculture industry, relative to many agricultural and livestock producers.As a consequence of the inherent variability in fish-farm revenue, and the lack of demand and availability of risk management products, currently only a small fraction of the aquaculture industry is insured for losses.This is in stark contrast to agriculture, where economic risk-management tools like insurance are far more widespread.The lack of production and/or revenue insurance in aquaculture has helped create a number of economic and environmental problems.While the aquaculture industry as a whole is growing rapidly, growth in some countries is slow, and in a number of systems there is a high turnover of aquaculture producers, as many entities exit the industry after initial failures.Further, to mitigate the inherent risks associated with fish farming, aquaculturists will often employ inefficient management practices, such as the overuse or prophylactic use of therapeutants and antimicrobials.This can help sustain large and consistent yields in the near term, but it often comes with a longer-term environmental cost, which can ultimately lead to catastrophes that impact not only individual growers, but also their neighbors as these negative impacts spill over.Financial tools like insurance can help incentivize food producers, including aquaculturists, to adopt best management practices that reduce local and regional environmental impacts and hence diminish the risk of regional catastrophes.For example, when insurance policies for intensive shrimp pond operations require that farmers reduce the rate of water exchanged with surrounding water bodies, the amount of pollution from farm effluent is decreased, resulting in reduced environmental harm and reduced risk of disease exposure for other farms that utilize the same water body.Policies that require established best management practices to avoid undue stress often result in decreased disease prevalence and reduced need for excessive antimicrobial usage.However, these gains can only be realized if and when insurance products are designed appropriately for aquaculture production systems.Here, we have developed a cooperative form of aquaculture indemnity insurance aimed at harnessing the often strong social capital of small-holder food production systems.The approach hinges on the concept of a cooperatively managed mutual fund, where aquaculturists self-organize into a cooperative that self-insures using this fund.Members of the cooperative also monitor and verify losses.This approach to insurance is not new and it has been applied in numerous cases in agriculture and in a limited number of cases in aquaculture.Indeed, the cooperative model of insurance is at the heart of Protection and Indemnity Clubs, which have been active in the maritime transportation industry for over a century, 
as well as in new insurance companies like Lemonade and Friendsurance which provide home insurance to the public, and most recently the new decentralized approaches to insurance based on cryptocurrencies.We generalize this approach for application to the aquaculture industry, where fish farmers who wish to be compensated for downside risk do so by forming an insurance risk pool.We have designed a cooperative form of indemnity insurance, and explored its utility using numerical simulations parameterized with empirical economic data collected from a fish farming community in Myanmar, who are adversely affected by heavy rainfall and subsequent flooding.The methods below first introduce the social, biological, and physical characteristics of this case-study system.Then, our insurance theory is described, which we note is general to any group of food-producers aiming to self-insure against losses through a risk pool.Last, we describe numerous simulation experiments used to explore the benefits and costs associated with an aquaculture risk pool in Myanmar.The aquaculture sector in Myanmar serves as a valuable example of a rapidly expanding food-production sector providing an important source of food and income locally but that has little to no access to financial risk-management tools.Aquaculture production in Myanmar, which is almost exclusively for finfish, has grown rapidly over the last two decades and plays an increasingly important role in national fish supply."The sector's technical and economic characteristics have been studied in a recent survey – the Myanmar Aquaculture–Agriculture Survey.Here, we only provide a brief description of the main methodological steps in MAAS, as there are detailed reports elsewhere.MAAS was implemented in May 2016 and data were collected from a total of 1102 rural households for the preceding year, including crop farmers, fish farmers, and the landless, located in 40 village tracts in four townships in the Ayeyarwady and Yangon regions."All the village tracts surveyed lie in a zone within a radius of 60 km from Myanmar's largest city and main commercial center, Yangon.The households surveyed represent a total population of about 37,000 households.A subset of 242 fish farming households were interviewed in 25 village tracts, representing a total of 2450 fish farming households.The surveyed fish-farms represent 57% of the total area of inland fish ponds in Myanmar.Farms surveyed were selected to represent the entire population of fish farming households resident in the 25 village tracts."Given that 90% of Myanmar's inland fish ponds are located in the Ayeyarwady and Yangon regions, the sample can be considered to represent approximately half the area used for freshwater aquaculture in Myanmar.Two types of fish farms were surveyed: 1) specialized nurseries growing juvenile fish for sale to growout farms; and 2) “growout” farms producing food fish for the market.All subsequent analyses pertain to the growout farms, and the damage that flooding has on their production of fish, and hence revenue.Food-production by these aquaculturalists can be strongly affected by floods, which cause damage to the farming infrastructure and can literally flood the ponds, causing fish to escape.Data from the interviews with the aquaculturalists identify that floods can reduce annual revenue by up to 80%, and in 2015 approximately 40% of households reported some losses due to flooding, and 15% of households estimated that their losses amounted to more than 30% of their expected 
production.An important question is what happens if there is not enough money in the mutual fund to cover all the claims.In the risk pool scheme, payouts are effectively stopped if there is not enough money in the mutual fund.In this case, the expected net profit of the risk pool insurance scheme is negative.Hence, it will not be acceptable for a risk-neutral fish farmer to enter into the risk pool.Indeed, even a fish farmer that is risk-averse but close to risk-neutral, might not choose to enter the risk pool.But, in this model we assume that all fish farmers have a sufficient level of risk-aversion that makes this insurance scheme attractive.The risk pool theory presented above was used to develop several simulation experiments, from which we assessed the potential benefits and costs of a risk pool in the Myanmar aquaculture system.In these simulations, the dynamics of a region-wide risk pool, where all 130 Myanmar fish-farms are members, is integrated over a 25 year period at yearly intervals.This integration period reflects the typical life-time of a fish-farm in Myanmar.Following the risk pool theory, we assumed that the revenue before losses is constant through time, but heterogeneous across farms, with values taken directly from the MAAS data.We also assumed that the premiums are set using Equation, that for each simulation the model parameters are fixed throughout the integration period, and that there is no discounting/accumulation, i.e. interest rates are equal to zero.Last, for every simulation experiment we performed numerous realizations of each parameter set, because flooding and flood impacts were modeled probabilistically, which were then summarized using ensemble statistics.With these specifications, we explored simulation outcomes over a range of values for the probability of flooding p, and the fraction of revenue covered δ."The first simulations were designed to explore the probability of ruin as a function of different approaches to seeding the risk pool's starting capital. 
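The simulation design described above can be illustrated with a minimal Monte Carlo sketch. The following code is an assumption-laden illustration rather than the authors' implementation: the premium rule (an actuarially fair premium proportional to p, the coverage fraction δ and expected loss), the uniform loss fractions in flood years, the placeholder revenues (the paper uses the MAAS survey values) and the function name simulate_pool are all hypothetical.

```python
# Sketch of a risk pool integrated over 25 years, under stated assumptions:
# actuarially fair premiums, losses drawn as a random fraction of revenue in
# flood years, pro-rata payouts when claims exceed the fund, and placeholder
# revenues in place of the MAAS survey data.
import numpy as np

rng = np.random.default_rng(0)

def simulate_pool(revenues, p=0.3, delta=0.8, years=25, loss_frac=(0.1, 0.8),
                  seed_fund="premiums", seed_frac=0.1):
    """Integrate one realization of the mutual fund; return fund path and incomes."""
    n = len(revenues)
    expected_loss = p * revenues * np.mean(loss_frac)        # per-farm expected loss
    premiums = delta * expected_loss                         # assumed premium rule
    fund = premiums.sum() if seed_fund == "premiums" else seed_frac * revenues.sum()
    fund_path, incomes = [], np.zeros((years, n))
    for t in range(years):
        fund += premiums.sum()                               # premiums paid each year
        flooded = rng.random(n) < p                          # which farms flood
        losses = flooded * rng.uniform(*loss_frac, n) * revenues
        claims = delta * losses                              # fraction delta is covered
        total = claims.sum()
        # Pro-rata payouts: never pay out more than the fund currently holds.
        payouts = claims if total <= fund else claims * fund / total
        fund -= payouts.sum()
        incomes[t] = revenues - premiums - losses + payouts
        fund_path.append(fund)
    return np.array(fund_path), incomes

revenues = rng.lognormal(mean=8.0, sigma=0.5, size=130)      # placeholder farm revenues
fund_path, incomes = simulate_pool(revenues)
print("final fund:", round(fund_path[-1], 1), "median income:", round(np.median(incomes), 1))
```

Ruin probabilities and the payout-to-claim ratios discussed below could then be estimated by repeating such a simulation over many realizations for each combination of p and δ.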
"Next, following the theory above for the case where a risk pool's mutual fund is limited to values greater than or equal to zero, payouts can be less than claims, and we assessed the impact of a limited mutual fund using the ratio of payouts to claims in the p and δ parameter space.Last, we performed simulations to quantify the impact of a risk pool on individual farm cash flows under the “weak” and “strong” flooding scenarios.To measure this impact, we calculated the ratio of the 5th percentile of the income distribution with and without a risk pool.Essentially, this metric measures how much the risk pool brings up the downside of the income distribution.We calculated these values in the p and δ parameter space, and compared them to changes in the median income with and without a risk pool, and also the premium that must be paid in order for the risk pool to be operational.Initial simulation experiments were performed to identify important qualitative features of the risk pool model.In particular, we analyzed the evolution of risk pool mutual funds though time.For example, see Figure 3), where the blue trajectories identify mutual fund trajectories for different realizations of the same parameter sets.In these simulations, the mutual funds often go negative, as does the lower dashed red line, which identifies the lower level of the confidence band over realizations.A mutual fund with negative value is not an impossibility, as it represents a scenario where the fish-farmers have access to credit."However, is it highly unrealistic, and as a consequence we updated the implementation of the risk pool simulations to include constraints on the possible payouts following Equation, i.e. if the total claims exceeds the amount in the mutual fund, then all the remaining capital in the mutual fund is used, but not more, and payouts are provided to individual fish farms in proportion to their claim's size.This constraint limits the mutual fund to positive values only.One note about Figure 3: recall that the value of the mutual fund is increased at the beginning of each year by the sum of the premiums, so the value of the fund will be reset to at least the sum of the premiums at the beginning of each year.Ruin probabilities are largely controlled by the initial capital in the mutual fund, and the life-time of the risk pool."We explored two ways in which initial capital is provided to the mutual fund: when the initial capital is set as the sum of all premiums, as calculated following Eq., which is proportional to the probability of flooding, and when the initial capital is simply a fixed amount, for example the sum of a given fraction of each fish-farm's expected annual revenue.For the latter, expected annual net revenues were derived from the empirical MAAS data, and the fraction of this value committed to initialize the mutual fund was set to 10%.The following results were qualitatively consistent, regardless of the choice of this value.In the case where the starting capital is the sum of premiums, and hence proportional to the flood probability p, the mutual fund ruin probability is essentially invariant with the fraction of loss covered δ, and has a negative relationship with the probability of flooding.This is a non-intuitive result, as one would expect the probability of ruin to increase with the probability of flooding.However, from Equation we can see that premiums scale with the flood probability p, and hence as p increases so do premiums, and so does the starting capital in the mutual fund.In 
the case where the initial capital is a fixed amount, ruin probabilities have a very different relationship with the flood probability p and the fraction of loss covered δ.Now, the mutual fund ruin probability has a positive relationship with both factors, with the highest ruin probabilities occurring at high p and δ values.In the case where the risk pool's mutual fund is limited to non-negative values, which is when payouts may be less than what is claimed, another important metric that can be calculated is the ratio of payouts to claims, averaged over farms.This metric identifies situations when the risk pool mutual fund provides payouts that are less than what is claimed.In the case of the initial capital being the sum of premiums, the pattern in this metric mirrors that of ruin probabilities in the unlimited mutual fund simulations, that is, the difference between payouts and claims increases with flood probability, but is invariant with the fraction of loss covered δ.In the case of fixed initial capital, the difference between payouts and claims is highest at low values of δ and all values of p, but decreases as both factors increase.Interestingly, the fraction of a claim not provided is at most only around 3–4%, which may be an acceptable amount for the Myanmar fish-farmers.Every risk pool simulation creates an income distribution over time for each farm.Income is defined as the net revenue minus losses due to flooding.In the case where a risk pool is formed, premiums are an additional loss term when calculating income.In Figure 6A we show the income distributions for an individual farm that experienced weak floods for the no-pool and with-pool simulations.Several features become evident.The first is that in both cases income is composed of a mixture of distributions: a delta distribution describing income when there is no flood, and a broader loss-distribution describing income when floods are experienced.The with-pool delta distribution is to the left of the no-pool delta distribution, and this difference identifies the additional cost of the premium when joining a risk pool.We see that the no-pool loss distribution extends towards the zero-income point, while when there is a risk pool the loss distribution is constrained to larger positive values.This identifies the positive impact of the risk pool on the farmer's down-side risk.These results are entirely expected following the mathematics described previously.In Figure 6B we show these same distributions but for a scenario where flood impacts are high.In this case, the difference between the no-flood income delta distributions is accentuated.This is because the premiums are higher in this strong flood scenario.However, the differences in the loss-distributions are also greater, with the loss distribution for the case where there is a risk pool now much further to the up-side of the no-pool loss distribution, which is now much closer to the zero-income point.These differences between with-pool and no-pool cases are the same across all fish-farms, and taking advantage of this uniform impact of a risk pool, we measured its net benefit on the whole Myanmar fish-farming community as the ratio of the 5th percentile in the loss-distribution between with-pool and no-pool cases, averaged over all farms.We term this dimensionless ratio Δ5, and we quantified it for a range of flooding probabilities p and values of the fraction of loss covered δ, also for weak and strong flood impact scenarios.Intriguingly, we find a non-monotonic relationship: in the 
case of weak floods, there is a positive relationship between Δ5 and the probability of flooding p and the fraction of loss covered δ, but with a peak at intermediate flood probabilities.This peak in Δ5 is accentuated in the strong flood case, where maximal values are now shifted to the left, occurring when flood probabilities are around 0.5.Furthermore, maximal values of Δ5 are around 1.5–1.6 in the weak flood impact scenario, and 5–5.5 in the strong flood impact scenario.In other words, the risk pool fish-farmers are exposed to much less risk in their income when there is a flood.Two other important metrics are the ratio of the median of the loss-income distribution between cases with and without a risk pool, which we call Δ50, and the premium paid expressed as a fraction of expected revenue.Δ50 values have a positive relationship with the fraction of loss covered δ, and a negative relationship with the probability of flooding p.This identifies that most of the positive impact of a risk pool, in terms of Δ50, occurs when the probability of flooding is low and when the fraction of income covered is high.Δ50 in this part of the parameter space increases by a factor of 3 for strong flood impacts; for weak flood impacts, values are around 1.5.In contrast, premiums show a positive relationship with both p and δ, which is an outcome from Equation.At high values of p and δ, premiums are 20% of annual revenues.We have developed a mathematical insurance framework for quantifying the benefits and costs of cooperative indemnity insurance in small-holder aquaculture systems, using Myanmar as a case-study.Through ensemble simulations, we found that investing 5–20% of one's annual revenue into a risk pool's mutual fund can lead to large increases in the 5th percentile of income, or in other words a dramatic reduction in down-side risk.Interestingly, there is a non-monotonic relationship between this change in the downside of the income distributions and the probability of flooding p, and the fraction of loss covered δ.There are also benefits to individual fish farmers in terms of the change in median income experienced during a flood-year, which also increases, but shows a very different qualitative relationship with flood probability and fraction of loss covered.While we have focused solely on income, the data we have allow us to quantify the gross margin (profit) that the Myanmar fish farmers are likely to gain.Indeed, these data show that the margins are slim, with many fish-farmers accruing a loss in 2015, the year the MAAS interviews focused on.This highlights the importance of managing down-side risk in small-holder aquaculture with financial instruments like insurance.This is true for the fish farmers in Myanmar, but also for small-holder aquaculture communities around the world.For instance, in Bangladesh fish farmers are exposed to similar levels of flooding, and in Australia it is drought that can have large negative impacts on fish-farm production.Although aquaculturists are exposed to great risk, except for a few recent pilot studies there is little evidence that financial risk management tools are available for aquaculture enterprises around the world.This places a key limit on the growth of the industry.Perhaps the largest assumption in our simulation experiments was that every fish-farm in the Myanmar system formed one large risk pool.The size of the risk pool is important because, like all other forms of insurance, the larger the insurance pool, the more available capital there is to 
provide payouts and the more widely risk is spread, diminishing the likelihood of ruin.How then might risk pools initially form, when small pools are likely to be unattractive?,There are various answers to this question.Perhaps the initial set of fish-farmers have sufficiently strong social ties and are willing to commit large fractions of their annual revenue to the mutual fund, knowing that as the pool grows over time, this cost will diminish.Another viable option would be for small risk pools to seek some form of subsidy, in the form of government support for example.This is common in other food-production systems.For example, fishermen are known to commonly form cooperatives that make decisions as a collective on where and how much to fish, and for dealing with risk.In many of these fisheries cooperatives, outside assistance is often required to get individuals to join the cooperative, for example through co-management with fisheries management agencies or indeed through subsidy.Indeed, the viability of cooperative insurance schemes is a subject of much study in agriculture and aquaculture, and a focus for technology innovation, suggesting that while currently rare, cooperative risk pools in aquaculture will become more popular in the near future.Risk pools offer a means to overcome some of the main challenges associated with traditional insurance.In particular, the main cost of insurance is typically in verifying claims, which is normally done via in-person monitoring.However, in the risk pool model claims could be verified by members of the pool.Further, making a false claim is disincentivized because risk pools are formed around the often strong social ties in fish-farming communities, and as a consequence the threat of social ostracism can deter bad behavior.The problem of false claims is related to the broader challenge of moral hazard in insurance.This is where fish farmers adopt more risky behaviors once insured, and it is currently unknown how moral hazard might manifest in a fish-farming risk pool.Like verifying claims, the problem of moral hazard could be solved, in part, by monitoring of farms and farmers by members of the cooperative.Another key challenge associated with insurance is adverse selection, and risk pools are not immune to this problem: even though they rely on the social capital of fish-farming communities, members will still have an incentive to select against risky individuals, either by kicking out risky members or by denying risky farmers entry to the pool.Solutions to the problem of adverse selection include offering a menu of insurance contracts, thus separating fish farmers with different risk profiles, including deductibles, and using index triggers for payments.In addition to forming an insurance risk pool, one very important step that the fish-farmers in Myanmar could take is to reduce the probability of flooding p.In the region of Myanmar that we have studied, the vast majority of ponds are located in an area that is already protected by flood defenses.These were constructed during the 1990s in order to make the area cultivable.This land is an area of low-lying flood plain located between two of the major distributaries of the Ayeyarwady River.As such, during extreme flood events, existing flood control infrastructure is inadequate to prevent this area becoming inundated.Individual farms can invest in raising dykes.We envision that in addition to receiving direct payouts from the mutual fund to cover any losses, another useful scheme would be for 
fish-farming communities to invest part of their mutual fund in developing flood protection infrastructure.In the long run this would reduce the probability of flooding and/or limit losses due to floods, and ultimately reduce the premiums required of the members of the risk pool.In small-holder agriculture, weather index or parametric insurance policies have grown in popularity due to efficiencies in overcoming challenges associated with false claims.This is when the probability of flooding and the occurrence of a flooding event are calculated and identified from historical and real-time remotely sensed data, respectively.Weather index insurance is attractive in small-holder food production systems because it can be tailored to specific down-side events and it removes the problem of moral hazard and false claims.This approach could be useful for small-holder fish farmers too, and there are several initiatives dedicated to the analysis of remotely sensed data that is specific to quantifying risk in aquaculture production.The main challenge here is minimizing basis risk, which is essentially the error in the empirical relationship between an index derived from remotely sensed data and the probability of loss.Basis risk in aquaculture can come from many sources: there will be error in the relationships between fish production and precipitation, and between fish production and temperature, for example.However, basis risk can be effectively managed by many means, for example when indexes are used to trigger payouts for extreme weather events only.Possibly the most difficult aspect of weather-related food-production insurance is the stationary and non-stationary nature of the local and regional environment.In Myanmar precipitation is known to exhibit long-term oscillations, and this could engender strategic entry and exiting behaviors amongst fish farmers.Further, we know that in the coming decades there are likely to be large changes in our climate and weather, with specific implications for aquaculture.This means that, in addition to the various choices over the heterogeneity in flood probabilities and how to model flood impacts, these factors are also changing in time.There are several methods for dealing with non-stationarity, caused for example by climate change, the simplest being a moving-window assessment of the risk pool parameters.However, in the limit of extreme climate change and long time scales, there are situations where a risk pool that is "viable" now, in terms of the premiums that fish-farmers are willing to pay and the payouts they receive, will cease to be viable in 10–20 years.It is an open question as to how to deal with these situations, but it will likely involve using additional financial tools to help risk pool members transition to new sources of income and food.The modeling framework we have developed, while specific to small-holder aquaculture in Myanmar, could be applied to any other system composed of individually owned food production facilities that are affected by adverse weather.These could be crop or cattle farms, fishermen, vineyards, or even ski resorts.This is because the risk pool framework is built from simple advances to general insurance mathematics.Indeed, the motivation for this work came from Protection and Indemnity Clubs, which are some of the oldest forms of cooperative insurance.More broadly, there appear to be numerous opportunities to borrow financial tools from other fields and economic sectors and apply them to the aquaculture industry.Financial risk management tools, like 
futures and forward contracts and other forms of insurance, are widely used in agriculture and other food production sectors.But they are relatively little used in aquaculture, which is currently the fastest growing food-production sector on the planet.We are at a critical point in the development of the aquaculture industry because these financial tools, if designed poorly, can incentivize environmentally harmful behavior.A key next-step in this work, and in the progression of the aquaculture industry more generally, is to identify how new financial tools can be designed for the opposite effect.In doing so, the use of financial tools like cooperative insurance, could help steer the aquaculture industry towards sustained growth and multiple wins, including enhanced financial risk management and long-term profitability for aquaculture enterprises and reduced environmental intensity of the production through adoption of best management practices.Aquaculture is a rapidly growing industry, providing food and income to many millions of people around the world.However, it is an immature sector relative to other food producing sectors, especially in terms of the risk management tools available.Here, we have developed a cooperative indemnity insurance scheme that is tailored to a fish-farming community in Myanmar.The insurance scheme revolves around members of a community pooling funds to protect against losses incurred by floods, which are common in the region.The scheme greatly reduces the downside risk of fish-farmers, and ultimately provides resilience to the community through smoothed income.These forms of cooperatively managed and self-organized insurance are a promising route by which the aquaculture industry can maintain a positive trajectory in terms of its growth, while achieving economic gains through reduced risk for the producer, and environmental wins through incentivizing best management practices.James R. Watson, Fredrik Armerin, Dane H. Klinger, Ben Belton: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.This work was supported by the NSF Dynamics of Coupled Natural-Human Systems project GEO-1211972, the USAID Food Security Policy Project AID-482-LA-14-00003, the Livelihoods and the Food Security Trust Fund Agrifood Value Chain Development in Myanmar project: Implications for Livelihoods of the Rural Poor and The Global Economic Dynamics and the Biosphere Programme at the Royal Swedish Academy of Sciences.The authors declare no conflict of interest.No additional information is available for this paper.
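The two downside metrics reported above, the ratio of the 5th percentile of flood-year income with and without a risk pool (Δ5) and the corresponding ratio of medians (Δ50), can be computed directly from paired simulation outputs such as those produced by the earlier simulation sketch. The snippet below is illustrative only; the array names and the synthetic inputs are assumptions standing in for the paired with-pool and no-pool income samples.

```python
# Illustrative computation of the downside metrics discussed above: Delta_5 and
# Delta_50 compare flood-year income distributions with and without a risk pool.
# The synthetic inputs are placeholders for paired simulation outputs.
import numpy as np

def downside_ratios(income_with_pool, income_no_pool):
    d5 = np.percentile(income_with_pool, 5) / np.percentile(income_no_pool, 5)
    d50 = np.median(income_with_pool) / np.median(income_no_pool)
    return d5, d50

rng = np.random.default_rng(2)
no_pool = rng.uniform(1_000, 10_000, 5_000)      # placeholder flood-year incomes
with_pool = no_pool * 0.95 + 2_000               # toy effect of premiums plus payouts
d5, d50 = downside_ratios(with_pool, no_pool)
print(f"Delta_5 = {d5:.2f}, Delta_50 = {d50:.2f}")
```

Values of Δ5 greater than 1 indicate that the risk pool lifts the lower tail of the income distribution, which is the sense in which the scheme reduces down-side risk.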
Aquaculture is a booming industry. It currently supplies almost half of all fish and shellfish eaten today, and it continues to grow faster than any other food production sector. But it is immature relative to terrestrial crop and livestock sectors, and as a consequence it lags behind in the use of aquaculture-specific financial risk management tools. In particular, insurance instruments to manage weather-related losses are little used. In the aquaculture industry there is a need for new insurance products that achieve both financial gains, in terms of reduced production and revenue risk, and environmental wins, in terms of incentivizing improved management practices. Here, we have developed a cooperative form of indemnity insurance for application to small-holder aquaculture communities in developing nations. We use and advance the theory of risk pools, applying it to an aquaculture community in Myanmar, using empirical data recently collected from a comprehensive farm survey. These data were used to parameterize numerical simulations of this aquaculture system with and without a risk pool. Results highlight the benefits and costs of a risk pool for various combinations of key parameters. This information reveals a path forward for creating new risk management products for aquaculturalists around the world.
103
Occult HIV-1 drug resistance to thymidine analogues following failure of first-line tenofovir combined with a cytosine analogue and nevirapine or efavirenz in sub Saharan Africa: a retrospective multi-centre cohort study
Combination antiretroviral therapy can lead to declining mortality and HIV incidence in high prevalence settings.1,2,Virological failure occurs after 12 months in 15–35% of patients treated with thymidine analogue-containing first-line regimens, with most cases of resistance to non-nucleoside reverse transcriptase inhibitors and lamivudine occurring in regions without access to routine viral load monitoring.3,4,HIV-1 drug resistance could be responsible for nearly 425 000 AIDS-related deaths and 300 000 new infections over the next 5 years.5,WHO has recommended first-line tenofovir disoproxil fumarate instead of thymidine analogues since 2012.6,Of the 17 million people accessing first-line ART in 2016,7 roughly 3·5 million were treated with a thymidine analogue.8,During the process of programmatic tenofovir substitution in ART-treated individuals, confirmation of viral suppression before the regimen change is rarely done in sub-Saharan Africa because of poor access to viral load testing.Given the potential substantial prevalence of unrecognised virological failure and drug resistance in this setting,4,9–11 programmatic single-drug substitutions risk more rapid acquisition of high-level drug resistance not only to NNRTIs and cytosine analogues, but also to tenofovir.12,Importantly, NNRTI resistance and thymidine analogue resistance mutations can be transmitted to uninfected individuals who are subsequently at increased risk of ART failure themselves.13,Evidence before this study,We did a systematic review using PubMed and Embase, searching from Jan 1, 2000, up to Aug 15, 2016, without language limitations.Manuscripts of interest were also identified from the reference lists of selected papers, clinical trials registries, and abstracts from the Conference on Retroviruses and Opportunistic Infections and International AIDS Society.We used the search terms “HIV” AND “Tenofovir” AND “thymidine analogue” OR “stavudine” OR “zidovudine” OR “AZT” OR “d4T”.We found no studies reporting the implications of previous thymidine analogue use on outcomes following tenofovir-based antiretroviral therapy.One study investigated the implications of transition from thymidine analogue to tenofovir by use of a cross sectional survey in Myanmar before the introduction of tenofovir.The investigators tested viral loads in more than 4000 patients after 12 months of thymidine analogue-based ART to avoid substitutions in viraemic patients.They noted that a substantial proportion of patients were having treatment failure, in whom direct tenofovir substitution for the thymidine analogue would not be appropriate.Added value of this study,Our results show that tenofovir-based first-line regimens are failing in a substantial proportion of patients who have evidence of previous exposure and drug resistance to older nucleoside analogues such as zidovudine and stavudine in sub-Saharan Africa.These individuals are likely to have developed drug resistance to the non-nucleoside reverse transcriptase inhibitor as well as the cytosine analogue, and therefore have high-level resistance to at least two of the three drugs present in tenofovir-based first line ART.Our data show that these individuals with thymidine analogue mutations have lower CD4 counts and therefore are at greater risk of clinical complications than are those without previous ART exposure.Implications of all the available evidence,Cheap and effective viral load monitoring, resistance testing, or both could prevent the transition of patients with virological failure 
onto tenofovir-based first-line ART and also identify individuals with pre-existing drug resistance to first line agents arising from undisclosed prior ART.These individuals could then be treated with second-line regimens.A further complication to the introduction of tenofovir in sub-Saharan Africa is shown by data suggesting that individuals presenting as treatment naive often do not disclose previous ART exposure, which is most likely with thymidine analogue-based ART.14,Accordingly, we have previously reported unexplained TAMs in patients after viral failure of tenofovir-containing first-line regimens.15,In this Article, we characterise the prevalence, determinants, and implications of TAMs in patients after virological failure of tenofovir-containing first-line regimens in sub-Saharan Africa.We identified patients from within the TenoRes collaboration, a multicountry retrospective study examining correlates of genotypic drug resistance following failure of tenofovir-containing combination ART.Data in this report cover seven countries with baseline measurements taken between 2005 and 2013.The original TenoRes collaboration spans 36 counties with baseline measurements between 1998 and 2015.Our methods have been described previously.15,Briefly, we collected data from cohorts with documented virological failure after first-line ART consisting only of tenofovir plus either lamivudine or emtricitabine plus either efavirenz or nevirapine, with no previously known exposure to additional nucleoside reverse transcriptase inhibitors such as zidovudine or stavudine.Virological failure was defined as a viral load greater than 1000 copies per mL, except for two studies in which the definition was viral load greater than 2000 copies per mL.Patients needed to have had a successful resistance test result associated with virological failure of combination ART and been on tenofovir-based ART for a minimum of 4 months before virological failure.We collected information on baseline characteristics, and HIV genotype following virological failure.In our previous report,15 we excluded patients with TAMs because of concerns that they might represent pre-treated rather than first-line patients, although identical information was collected on patients irrespective of the presence or absence of TAMs at the resistance test.We defined tenofovir resistance as the presence of Lys65Arg/Asn or Lys70Glu/Gly/Gln mutations in reverse transcriptase.Although the presence of three or more TAMs inclusive of either the Met41Leu or Leu210Trp mutation has also been shown to compromise tenofovir clinically,12 no individuals in this study had such a profile.TAMs were defined as Met41Leu, Asp67Asn, Lys70Arg, Leu210Trp, Thr215Phe/Tyr, or Lys219Gln/Glu.Our definition of TAMs also included the revertant mutations Thr215Ser/Cys/Asp/Glu/Ile/Val, although only two patients presented with such a mutation without the presence of at least one other TAM.TAM revertants are indicative of previous TAM Thr215Phe or Thr215Tyr mutations in the individual, and have been associated with increased risk of treatment failure if a thymidine analogue drug is used.16,We restricted our analysis to study sites from sub-Saharan Africa because we specifically wanted to investigate the large-scale programmatic shifts in tenofovir use that are currently occurring in this region in the absence of intensive viral load monitoring and baseline resistance testing.Studies were included if they had resistance data on ten or more patients, although in sensitivity 
analyses that included all available data, the conclusions were not altered.We interpreted drug resistance mutations using the Stanford HIV Drug Resistance Algorithm version 7.0.In cohorts spanning multiple countries, each country within the cohort was treated as a separate study for the purposes of our meta-analyses, to ensure that within-study associations were not confounded by between-country differences.To compare baseline characteristics according to TAM resistance, we used Mann-Whitney U tests or χ2 tests.We did three main analyses.First, we calculated prevalence estimates within each study separately and used Clopper-Pearson exact 95% CIs."Second, we graphically compared the study-level prevalence of TAMs and other drug-resistance mutations and used Spearman's rank correlation coefficients to assess the strength of association between the two.Third, we calculated odds ratios for drug-resistance mutations in patients with and without TAMs.We pooled estimates across studies using fixed-effects meta-analyses with Mantel-Haenszel weighting.We chose this strategy because there was no evidence of any between-study heterogeneity, and Mantel-Haenszel weighting works well in scenarios with zero-cell counts.All analyses were done with STATA version 11.2.The funders of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report.RKG and JG had full access to all the data in the study and had final responsibility for the decision to submit for publication.We assessed 34 studies and excluded 14 because they contained fewer than ten patients.We identified 712 patients who had viral failure with WHO-recommended, tenofovir-based first-line regimens in 20 studies across sub-Saharan Africa.Most patients were from southern Africa, with 159 patients from eastern Africa and 92 from west and central Africa.481 of 712 infections were with HIV-1 subtype C.Median age at baseline was 35·0 years and 413 patients were women.The median year of initiation was 2011, and patients were followed up for a median of 18 months.Where available, the overall median baseline CD4 count was 92 cells per μL and median viral load was log10 5·23 copies HIV-1 RNA per mL.Patient characteristics were broadly similar between patients with and without TAMs, with the exception of baseline CD4 count, which was roughly 30 cells per μL lower in patients with TAMs in all regions.We noted that usage of emtricitabine was 10% lower in patients with TAM compared to those without.33 of 209 women with available data on single-dose nevirapine had known previous exposure to single-dose nevirapine.Prevalence of NNRTI resistance was 88% in patients with single-dose nevirapine exposure and 82% in those without single-dose nevirapine exposure.For many patients, it was not known whether or not they had received single-dose nevirapine, including men, for whom single dose nevirapine use was always answered as no.TAMs were detected in 115 of 712 patients.The prevalence of TAMs was similar in eastern Africa, southern Africa, and west and central Africa.TAMs were less common in patients with HIV-1 subtype D than in patients with other subtypes.Despite individual studies tending to have only a small number of patients, all but four of the 20 included studies reported a prevalence of TAMs between 5% and 25%.Asp67Asn was the most common TAM and was present in 50 of 712 patients; it was more common in southern and eastern Africa than in west and central Africa.The next most common TAMs were Lys219Glu 
and Met41Leu.20 patients had two or more TAMs and seven patients had three or more TAMs.In crude comparisons across the entire study population, patients with TAMs were more likely to have tenofovir resistance, as well as resistance to cytosine analogues and nevirapine or efavirenz, with consistent findings across all regions.Of the 115 patients with TAMs, 93 had Lys65Arg/Asn or Lys70Glu/Gly/Gln, whereas in the remaining 597 patients without TAMs, 352 patients had these tenofovir resistance mutations.Tenofovir resistance mutations at Lys65 or Lys70 were present in 92 of 107 patients with TAM mutations without Thr215Phe/Tyr, and one of eight patients with TAM mutations with Thr215Phe/Tyr.We found a significant association between TAMs and tenofovir resistance both at the study-level and the individual-level."Studies with the highest prevalence of TAMs tended to also have the most tenofovir resistance.For example, in the ten studies in which less than 15% of patients had TAMs, tenofovir resistance was present in 112 of 216 patients, whereas in the ten studies with more than 15% of patients with TAMs, tenofovir resistance was present in 333 of 496 patients.We found similar associations for other drug resistance mutations, such as higher levels of nevirapine or efavirenz resistance and cytosine analogue resistance in patients with TAMs.Within the study, patients with a TAM were more likely to also have tenofovir resistance.The association was maintained among patients stratified by co-administered cytosine analogue, co-administered nevirapine or efavirenz, sex, baseline viral load, or baseline CD4 count.Notably, OR for tenofovir resistance was not affected by the possibility of within study drug substitution of thymidine analogue for tenofovir.We found similar, although slightly weaker, within-study associations of TAM mutations with both nevirapine or efavirenz resistance and cytosine analogue resistance.We assessed studies for potential within-programme drug substitutions and whether viral load confirmation was sought beforehand.We found that thymidine analogue substitution for tenofovir had occurred and that suppression was rarely confirmed before the change in treatment.Three studies implemented resistance testing before initiating tenofovir, although none excluded patients with drug resistance from initiating first line ART.We found TAMs that are specifically selected by zidovudine or stavudine in roughly 16% of patients with failure of tenofovir-based first-line antiretroviral regimens.TAMs were associated with greater drug resistance to all components of WHO recommended, tenofovir-containing first-line treatment.The prevalence of resistance to tenofovir reached 80% in individuals with TAMs, a result that is concerning and very much unexpected given that the tenofovir mutation Lys65Arg and TAMs are thought to be antagonistic to one another.17,Patients with TAMs tended to have lower CD4 counts than did patients without TAMs, which is consistent with longer duration of infection or faster disease progression.Our drug resistance prevalence estimates represent prevalence for participants with documented virological failure.Although it is important to know the prevalence of drug resistance among all participants treated with first-line therapy, this was not possible, mostly because of the absence of a clear denominator in many sites.A large international meta-analysis11 reported that 15–35% of patients initiating ART in sub-Saharan Africa have virological failure by 12 months.In view of our 
prevalence estimate of 16% of patients with virological failure having TAMs, we estimate that between 2% and 6% of individuals treated with tenofovir plus cytosine analogue plus efavirenz will have TAMs and 2–5% would have drug resistance to thymidine analogues, tenofovir, cytosine analogues, and the NNRTIs nevirapine and efavirenz within 1 year of treatment initiation under current practices in sub-Saharan Africa.As previously reported,15 an additional 8–18% of patients are likely to have resistance to tenofovir, cytosine analogues, and NNRTIs, but without thymidine analogue resistance.There are three possible sources of TAMs in patients on first-line tenofovir.The first is transmitted drug resistance, which is unlikely to account for the majority of cases in this study because transmitted drug resistance of TAMs is rare.18,19,Additionally, TAMs and Lys65Arg are antagonistic at the level of the viral genome;17 our findings showing co-existence of TAMs and Lys65Arg in patients with virological failure possibly result from these mutations occurring on different viral genomes after sequential therapies.Because transmission is usually with a single viral variant, transmitted drug resistance with TAM would translate to a viral population within an individual that consists entirely of TAM-containing viruses.Under this scenario antagonism with Lys65Arg would be active and we would therefore not expect to see Lys65Arg and TAMs together in the same individuals.The second possibility is programmatic substitution, wherein tenofovir was used to replace a thymidine analogue at a time when the patient had occult treatment failure.Under this scenario the most likely sequence of events would be, first, acquisition of cytosine analogue resistance, TAM, and NNRTI mutations during prolonged viral failure, followed by a switch to tenofovir and subsequent emergence of Lys65Arg that confers tenofovir resistance.Therefore, prevention of the development of Lys65Arg mutation could only be achieved by viral load suppression confirmation before the switch in treatments.Effective viral load monitoring has been identified as a priority area20 and would trigger adherence counselling and then a possible switch to a second-line protease inhibitor-based regimen instead of continuation of a failing first-line regimen with the substitution of a thymidine analogue for tenofovir.A large study in Myanmar has monitored viral loads in more than 4000 patients after 12 months of thymidine analogue-based ART, with the aim of avoiding substitutions in viraemic patients.The investigators found that 13% of patients had viral loads greater than 250 copies per mL, which was halved after adherence counselling was done, reinforcing the need for viral load monitoring before drug substitution.21,However, the second scenario cannot account for many of the TAMs identified in the present study because we detected TAMs in cohorts in which no programmatic substitution had occurred and tenofovir-based ART was used at the outset in apparently untreated patients.22,23,The third possibility, which we believe could account for most of the TAMs in the present study is previous undisclosed ART use with undocumented viral failure and drug resistance.This hypothesis is supported by the lower CD4 counts detected in patients with TAMs.Moreover, significant variation has been reported in viral load monitoring practices between rural and urban settings in South Africa,24 possibly explaining how unrecognised viral failure and drug resistance during tenofovir 
substitution could occur in settings where viral load monitoring is centrally funded and part of national guidelines.To prevent drug resistance due to undisclosed previous ART use, accessible point-of-care baseline resistance screening25 could be used to assist in the identification of patients with resistance to the components of first-line ART.We have previously identified key mutations that could be used in such assays, including Lys65Arg, Lys103Asn, Val106Met, Tyr181Cys, Gly190Ala, and Met184Val,26 and on the basis of the present study, Asp67Asn and Lys219Gln/Glu could be added to this list.If HIV-1 drug resistance is detected with such assays, second-line ART could be initiated, taking into account the mutations identified.If they become sufficiently cheap and reliable, drug resistance assays could be used in place of viral load monitoring at treatment initiation or switches.Our study has some limitations.The sampling was not systematic and therefore prevalence estimates might not be fully representative of countries and regions.Our drug resistance prevalence estimates represent prevalence for participants with documented virological failure.We can only estimate the overall number initiating treatment, because it was not systematically assessed.Using data from WHO and Uganda on the prevalence of virological failure,11,23 we calculate that if 15% of people initiating ART have failure at 1 year, then our data represent about 4750 patients initiating tenofovir-based first-line ART.Although none of the studies overtly used targeted viral load testing in individuals suspected of having treatment failure, such targeting might have occurred at the clinical level, potentially biasing our estimates of TAM resistance upwards.Conversely, Sanger sequencing can miss drug resistance mutations in 30% or more of patients.27,Additionally, we did not assess thymidine analogue resistance conferred by mutations in the connection domain between HIV-1 reverse transcriptase and RNAseH that are known to be selected by zidovudine,28 leading to further underestimation of drug resistance.Notably, stavudine selects not only for TAMs, but also for Lys65Arg in up to 20% of patients who have failure of stavudine.9,29–31,However, given that TAM and Lys65Arg are not selected together by a single stavudine-based regimen,9,29,32 exposure to stavudine would probably not explain the genotypes with both TAM and Lys65Arg that were seen in our study.This study has important policy implications for the limitation of drug resistance as tenofovir becomes more widely used both as treatment8 and pre-exposure prophylaxis.33,First, a single point-of-care viral load test could be implemented to prevent substitution of first line zidovudine for tenofovir in patients with virological failure.Regular viral load monitoring has been advocated in the past for treatment monitoring and could identify early virological failure in patients with previously undisclosed ART and drug resistance.However, this regular monitoring might be less cost effective than targeted viral load measurement.Second, simple resistance test kits could both assist in screening for drug resistance before ART initiation and also contribute to population level surveillance of HIV-1 drug resistance25 in both treated and untreated populations—a priority in sub Saharan Africa given the substantial mortality now recognised to be associated with HIV-1 drug resistance.5,These proposals should be part of a multipronged approach and subjected to cost effectiveness assessment 
in the wider context of other interventions that aim to limit the burden of the HIV epidemic.
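The two statistical building blocks named in the methods above, Clopper-Pearson exact 95% confidence intervals for per-study TAM prevalence and a Mantel-Haenszel fixed-effects pooled odds ratio for tenofovir resistance in patients with versus without TAMs, can be sketched as follows. The per-study counts in the example are illustrative placeholders, not the TenoRes data.

```python
# Sketch of the statistical steps described in the methods: exact binomial CIs
# for prevalence and a Mantel-Haenszel pooled odds ratio across studies.
# Counts below are illustrative placeholders, not the TenoRes data.
import numpy as np
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

def mantel_haenszel_or(tables):
    """tables: list of 2x2 arrays [[a, b], [c, d]] per study (rows: exposed/unexposed; cols: outcome yes/no)."""
    num = sum(a * d / (a + b + c + d) for (a, b), (c, d) in tables)
    den = sum(b * c / (a + b + c + d) for (a, b), (c, d) in tables)
    return num / den

# Illustrative per-study counts: (patients with TAMs, total patients with failure)
for k, n in [(12, 80), (5, 45), (20, 110)]:
    lo, hi = clopper_pearson(k, n)
    print(f"TAM prevalence {k}/{n} = {k/n:.2f} (95% CI {lo:.2f}-{hi:.2f})")

# Illustrative 2x2 tables per study: rows = TAMs yes/no, cols = tenofovir resistance yes/no
tables = [np.array([[10, 2], [40, 28]]),
          np.array([[4, 1], [20, 20]]),
          np.array([[15, 5], [60, 30]])]
print("Mantel-Haenszel pooled OR:", round(mantel_haenszel_or(tables), 2))
```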
Background HIV-1 drug resistance to older thymidine analogue nucleoside reverse transcriptase inhibitor drugs has been identified in sub-Saharan Africa in patients with virological failure of first-line combination antiretroviral therapy (ART) containing the modern nucleoside reverse transcriptase inhibitor tenofovir. We aimed to investigate the prevalence and correlates of thymidine analogue mutations (TAM) in patients with virological failure of first-line tenofovir-containing ART. Methods We retrospectively analysed patients from 20 studies within the TenoRes collaboration who had locally defined viral failure on first-line therapy with tenofovir plus a cytosine analogue (lamivudine or emtricitabine) plus a non-nucleoside reverse transcriptase inhibitor (NNRTI; nevirapine or efavirenz) in sub-Saharan Africa. Baseline visits in these studies occurred between 2005 and 2013. To assess between-study and within-study associations, we used meta-regression and meta-analyses to compare patients with and without TAMs for the presence of resistance to tenofovir, cytosine analogue, or NNRTIs. Findings Of 712 individuals with failure of first-line tenofovir-containing regimens, 115 (16%) had at least one TAM. In crude comparisons, patients with TAMs had lower CD4 counts at treatment initiation than did patients without TAMs (60.5 cells per μL [IQR 21.0–128.0] in patients with TAMS vs 95.0 cells per μL [37.0–177.0] in patients without TAMs; p=0.007) and were more likely to have tenofovir resistance (93 [81%] of 115 patients with TAMs vs 352 [59%] of 597 patients without TAMs; p<0.0001), NNRTI resistance (107 [93%] vs 462 [77%]; p<0.0001), and cytosine analogue resistance (100 [87%] vs 378 [63%]; p=0.0002). We detected associations between TAMs and drug resistance mutations both between and within studies; the correlation between the study-level proportion of patients with tenofovir resistance and TAMs was 0.64 (p<0.0001), and the odds ratio for tenofovir resistance comparing patients with and without TAMs was 1.29 (1.13–1.47; p<0.0001) Interpretation TAMs are common in patients who have failure of first-line tenofovir-containing regimens in sub-Saharan Africa, and are associated with multidrug resistant HIV-1. Effective viral load monitoring and point-of-care resistance tests could help to mitigate the emergence and spread of such strains. Funding The Wellcome Trust.
104
Three dimensional characterisation of chromatography bead internal structure using X-ray computed tomography and focused ion beam microscopy
Liquid chromatography systems consist of porous, micro-spherical beads that are packed into a cylindrical column, with the three dimensional structure of both the packed beds and individual beads being important to key performance metrics.The surface area of a chromatography bead is maximised by having an internal structure composed of intricate pore networks, with various materials of construction used as the backbone for size exclusion or chemical based separation processes.Chromatography beads have previously been characterised for several important aspects such as porosity and tortuosity in addition to performance based metrics.Both imaging and non-imaging approaches have been used, with Inverse Size Exclusion Chromatography being commonly used to determine internal pore sizes.Another available method for pore size investigations is mercury porosimetry, which is also used for porosity calculations.Tortuosity has been relatively more difficult to define for internal chromatography bead structures, particularly using imaging techniques; however, methods such as Bruggeman relationships, dilution methods and other equation based approaches have been the most common means of doing so.Two main imaging approaches have been extensively used for both visualisation and quantification of chromatography bead structure: confocal laser scanning microscopy and electron microscopy.CLSM has been demonstrated to be capable of imaging the internal structure of a chromatography bead without the need for physical sectioning, however CLSM lacks the resolution capabilities for defining internal bead pores, whilst electron microscopy can display the detailed porous structure at the surface but has no natural sample penetration beyond thin sliced samples.This has made both visualisation and subsequent quantification of the detailed microstructure of an entire chromatography bead difficult using existing imaging approaches, as these techniques either lack sufficient resolution or internal structure visualisation, requiring a method to physically cut through bead material for nano-scale imaging.Microtomy has been demonstrated in other studies to be capable of cutting through chromatography resins.However, difficulties in producing a series of thin nano-slices from the softer chromatography materials resulted in microtomy being excluded from this study; although use of approaches such as serial block face microtomy may be a more viable alternative for successfully applying microtomy to beads.While parameters such as porosity have been extensively characterised for a wide array of industrially relevant resins using non-imaging techniques, tortuosity has been a point of contention in terms of both the most representative method of evaluation and the actual range of tortuosity for chromatography resins.Reported values range from 1.3 to 6 across various resin types, a vast difference in the estimation of tortuosity that influences key performance metrics such as transfer and diffusivity within a chromatography bead.Therefore, imaging approaches for visualisation and quantification of internal chromatography bead structure are presented here, achieving resolution superior to CLSM whilst enabling sub-surface imaging not available when using conventional electron microscopy.The issue of penetrating material whilst attaining a sufficient pixel size and quality was the main criterion for technique selection and optimisation, with X-ray Computed Tomography and Focused Ion Beam microscopy selected to image agarose, cellulose and ceramic 
Tomographic imaging has been used in other fields to provide a method for simulating tortuosity factor where, as in chromatography studies, the methods used have typically relied on empirical or equation-based derivations. In a previous study, X-ray CT was used to investigate the packed-bed, inter-bead structure of cellulose and ceramic based columns, although the pixel size and field of view requirements were of different scales. X-ray ‘nano-CT’ has been used to represent other porous structures, albeit of different materials to those investigated here, and so was deemed appropriate to image and reconstruct the 3D internal structure of conventional chromatography beads. Focused ion beam microscopy was also used, a technique that relies on milling via a gallium ion beam followed by sample imaging using electron microscopy to generate a sequence of two dimensional images, which can be reconstructed into a 3D structure or used to produce samples for TEM or X-ray CT. Fig. 1 displays overall schematics for the X-ray CT and FIB imaging used to provide the basis for 3D bead structural representation. Each technique has relative advantages and disadvantages, but they provide distinctly different methods of producing 3D structures at high resolution, both in terms of the pixel sizes achievable and the approach required to obtain tomographic data-sets of sufficient quality for visualisation and quantification of structural geometry.

Using two different tomographic approaches for 3D bead visualisation and quantification enabled comparisons both between the results obtained for each bead type and of overall technique suitability. Important considerations for determining the capability of tomographic approaches for visualisation and quantification of bead internal structure included the accuracy of results when compared to established literature techniques, in addition to general ease of use and the feasibility of applying 3D imaging to relevant chromatography beads of different materials. Consideration included both the quantifiable results obtained after imaging and processing and the requirements for imaging using X-ray CT and FIB. Porosity, tortuosity factor, surface area to volume ratio and pore size of each sample are discussed in relation to the technique used and material examined, in addition to identifying relevant advantages and disadvantages of using X-ray CT and FIB microscopy for bead visualisation and evaluation. Comparisons to values obtained using established techniques would enable determination of the suitability of X-ray CT and FIB microscopy for visualising and characterising the 3D structure of chromatography beads. Tortuosity evaluation of the internal pore network was of particular interest given the relative difficulty in accurately measuring this aspect, despite its importance in relation to mobile phase flow paths through internal bead structure.

Agarose beads used in this study were Capto Adhere resin from GE Life Sciences. Cellulose and ceramic materials were provided by Pall Biotech in the form of CM Ceramic HyperD™ F or MEP HyperCel™ 100 mL sorbent containers in 20% ethanol storage buffers before drying processes were performed in parallel. Investigations were performed in parallel for each bead type, and so the materials are referred to collectively as samples or beads. Average bead diameters for agarose, cellulose and ceramic beads were found to be 70 μm, 86 μm and 53 μm respectively, based on optical imaging of a small sample as a reference on size, with whole-bead X-ray slices shown in Fig. 2.
Initial sample preparation was performed by dehydrating each material type from the original 20% ethanol concentration to 100% ethanol, as a requirement for drying. Subsequent critical-point drying was performed using a Gatan critical-point dryer to displace the ethanol with carbon dioxide, as performed by Nweke et al. on beads. After critical-point drying, samples were sub-divided for X-ray computed tomography and focused ion beam microscopy, which required further preparation. For X-ray CT samples, an individual bead was isolated and held in place on top of a sharp pin using contact adhesive and stored in a sealed container for 24 h before use to ensure that the bead had been correctly set in place before scanning. For FIB preparation, approximately 100 of each bead type were inserted into a Struers 25 mm mould, with a brass subdivide used to separate and isolate the agarose, cellulose and ceramic samples. EpoFix epoxy and hardener were added in a 15:2 ratio to fill the mould, before vacuum desiccation of the sample for 24 h to remove trapped air. The embedded puck was then removed from the desiccator and the sample surface smoothed to expose beads using silicon carbide sheets of increasing grit rating (360, 600, 1200, 2400 and finally 4000), followed by diamond paste polishing and finishing with gold coating using an Agar Scientific coater to increase sample conductivity and reduce charging. Prepared samples were adhered to a 25 mm aluminium stub using conductive Leit C cement, with a silver bridge added to ensure conductivity between the sample and stub.

A pin-mounted bead was placed in a Zeiss Xradia 810 Ultra at the Electrochemical Innovation Laboratory in UCL, with an accelerating voltage of 35 kV and a chromium target used in each case. The sample was rotated through 180° during imaging. Large Field Of View (LFOV) mode was used to image the entire bead, achieving a 63 nm pixel size. This was improved to 32 nm using High Resolution (HRES) mode by applying binning mode 2 to a 16 nm original pixel size; however, this restricted the field of view to the top 16 μm of the sample, which often required further cropping. Stub-mounted samples were inserted into a Zeiss XB1540 ‘Crossbeam’, with an accelerating voltage of 1 kV used in secondary electron detection mode for imaging and the stage tilted to 54° for crossbeam alignment. After selecting a suitable bead, a 500 nm thick platinum deposition was performed over the area of interest to provide a smooth protective surface for precise milling over the internal bead volume to be subsequently imaged. A preparatory trench approximately 30 μm deep was milled using the gallium ion beam at a current of 1 nA to expose the protrusion capped by deposited platinum, with block face polishing performed at 200 pA. Subsequent ‘slice and view’ imaging and milling of the block face at 100 pA at set intervals was used to generate a series of JPEG images for each sample. For ceramic samples, a cubic voxel size of 15 nm was used, whilst for agarose and cellulose beads a 20 nm width and height at a 40 nm depth was achieved in both cases. A Helios NanoLab 600 was used instead for the cellulose beads as a replacement system; however, the approach taken was in line with the settings used for the other samples. As with image processing performed in a previous study, either 2D images or 3D TXM files were loaded into Avizo®. For FIB microscopy image sequences, the StackReg plugin for ImageJ was used to align all slices correctly before insertion into Avizo for processing and analysis.
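Slice-to-slice alignment of this kind can also be scripted outside ImageJ; the following is a minimal sketch of drift correction by phase cross-correlation in Python, offered only as an illustration of the registration step (StackReg was the tool actually used here, and the file paths and parameters below are hypothetical).

```python
# Illustrative FIB-SEM stack alignment by phase cross-correlation (an alternative
# to the StackReg/ImageJ workflow described above; file names are hypothetical).
import glob
import numpy as np
from skimage import io
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

slices = [io.imread(f, as_gray=True).astype(float)
          for f in sorted(glob.glob("fib_slices/*.jpg"))]

aligned = [slices[0]]
cumulative = np.zeros(2)
for prev, curr in zip(slices, slices[1:]):
    # Estimate the (row, col) drift of each slice relative to the previous one.
    drift, _, _ = phase_cross_correlation(prev, curr, upsample_factor=10)
    cumulative += drift
    aligned.append(nd_shift(curr, cumulative, order=1, mode="nearest"))

stack = np.stack(aligned)          # (z, y, x) volume ready for segmentation
io.imsave("aligned_stack.tif", stack.astype(np.float32))
```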
The main objective of the processing stages was to produce an accurate representation of internal bead structure by segmenting the material and void phases, in addition to removing artefacts. For X-ray CT samples, the same bead was used for LFOV and HRES imaging, where extraction of a sub-volume at the relevant coordinates enabled generation of a LFOV volume in the same position as the HRES counterpart for comparison purposes; this new volume is referred to as ‘adjusted’ or ‘ADJ’, with Table 1 displaying the approaches used. Geometric porosity, geometric tortuosity, available surface area to volume ratio and average pore diameter were calculated within Avizo, with the tortuosity factor in each case determined using the MATLAB® plugin TauFactor applied to 3D TIFF files.

To evaluate the porous structure of an individual chromatography bead using X-ray CT, two different modes were used, considering the trade-off between optimising pixel size and the total sample volume imaged. This was performed to determine the impact of improving pixel size on both the capability of X-ray CT to accurately visualise the intricate structure of the bead and the quantification of parameters such as porosity and tortuosity. Fig. 2 displays slices of cellulose and ceramic bead samples using the LFOV and HRES approaches at the best available cubic voxel size in each case, with respective dimensions of 63 nm and 32 nm. All 2D slices in Fig. 2 display an internal porous structure for agarose, cellulose and ceramic beads, with the characteristic shell visible around the ceramic sample. Large voids, which would have been difficult to find without the use of 3D imaging techniques, were observed within the internal structure of all materials and were also visible in the microtome slices of cellulose beads presented by Angelo et al., with penetration of the adherent epoxy also obscuring some structure.
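Segmentation was performed in Avizo; purely as an illustration of the threshold-and-clean idea described above, a minimal Python sketch is given below (the input file name is hypothetical, and Otsu thresholding is an assumption rather than the documented Avizo workflow).

```python
# Minimal segmentation sketch (the study used Avizo; this only illustrates the
# same threshold-and-clean idea in Python; the input file name is hypothetical).
import numpy as np
from skimage import io, filters, morphology

grey = io.imread("aligned_stack.tif")             # (z, y, x) reconstructed volume
threshold = filters.threshold_otsu(grey)
solid = grey > threshold                          # True = bead material (assumes material is brighter)
# Remove isolated specks in both phases that are likely noise or artefacts.
solid = morphology.remove_small_objects(solid, min_size=64)
solid = ~morphology.remove_small_objects(~solid, min_size=64)
pore = ~solid                                     # True = void phase
print(f"Voxel porosity estimate: {pore.mean():.3f}")
```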
The high resolution images were found to visualise a more intricate porous structure with smaller features relative to their large field of view counterparts, which was particularly noticeable for the cellulose samples: the larger pores were visible at both scales, but the high resolution images also displayed the smaller surrounding pore networks. This indicated that improving the pixel size from 63 nm to 32 nm enabled a greater degree of chromatography bead internal structure to be identified, and the result would thus be considered more representative of the porous geometry within each bead, particularly for the agarose and cellulose slices. However, using HRES mode also limited the field of view to the top of the bead in each case, preventing analysis of the entire sphere with this approach and requiring a LFOV sub-volume to be produced in order to provide a direct comparison between pixel sizes at the same coordinates for each of the materials investigated. X-ray CT was demonstrated to be capable of imaging the 3D porous structure of various chromatography materials without having to physically section the beads. This also enabled multiple acquisitions of the same volume without destroying the sample, for optimisation purposes and for comparisons between resolution and field of view. The main disadvantage of X-ray CT was the pixel size available: even when achieving 32 nm in HRES mode, alternative techniques such as ISEC used on similar materials suggest that the finest structure may not have been identified, because the pore sizes are smaller than the pixel dimensions achieved by X-ray CT and FIB imaging, requiring a higher resolution 3D approach.

FIB has previously been used as a basis for analysing porous materials analogous to chromatography bead internal structure, and so was selected to achieve an improved pixel size relative to X-ray CT, given the differences observed between the resolution and field of view images. The difference in pixel size between the two X-ray CT modes was approximately a factor of 2, and this ratio was kept constant for the higher resolution FIB imaging by targeting a 15 nm pixel size. Cubic voxels were preferred despite the further pixel size gains potentially available using FIB, as pursuing these would compromise the overall volume that could be imaged for each sample and would present further imaging issues. Whilst a 15 nm pixel size was achieved for ceramic imaging, the softer agarose and cellulose displayed stability issues and so required the block face pixel size to be relaxed to 20 nm and the slice depth to be increased to 40 nm. This was undesirable both in terms of losing pixel size and in preventing direct parity across all FIB volumes in terms of voxel dimensions; however, it was a necessary compromise for stable slice-and-view imaging. Important considerations for sample preparation before imaging included ensuring that as much air as possible was removed from the sample during epoxy embedding in order to minimise disruptions to the continuous epoxy phase. Imaging difficulties at this stage would require artefact removal during digital processing, in addition to potentially compromising milling quality in the local area by causing issues such as streaking effects.
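The 20 nm × 20 nm × 40 nm voxels used for the softer beads are referred to later by an ‘average’ voxel dimension of roughly 25 nm; assuming that this average is the cube root of the voxel volume (the text does not state the definition), a quick check is:

```python
# Equivalent cubic voxel edge for the anisotropic FIB voxels used for the softer
# beads, assuming the quoted "average" voxel dimension is the cube root of the
# voxel volume (an assumption; the definition is not given in the text).
dx, dy, dz = 20.0, 20.0, 40.0                    # nm: block face pixel size and slice depth
equivalent_edge = (dx * dy * dz) ** (1.0 / 3.0)
print(f"equivalent cubic voxel edge = {equivalent_edge:.1f} nm")   # ~25.2 nm
```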
Fig. 3 displays a sample puck containing two different bead types, an overhead view of a bead after trench milling, and block face slices for the agarose, cellulose and ceramic beads. Fig. 3 shows that structure can be identified embedded within the epoxy for all materials, with the characteristic shell again visible for the ceramic bead, as was the case for X-ray CT imaging. The platinum deposition that formed a smooth surface on top of the sample can also be seen; it was used to increase conductivity and to reduce the streaking artefacts that distort the block face in each slice, with the epoxy impregnation performed under vacuum to minimise air pockets. Whilst artefact reduction before reconstruction was successful, some instances still occurred and required digital correction afterwards.

Both techniques were demonstrated to be capable of producing visual representations of agarose, cellulose and ceramic chromatography bead structure, although each had relative advantages and disadvantages. The main advantage of FIB compared to X-ray CT was that the achievable pixel size was superior to either X-ray CT mode, potentially enabling smaller features in the structure to be identified, which would result in more accurate measurements of characteristics such as porosity and pore size when compared against techniques such as ISEC. However, a FIB approach did have several drawbacks, including being a destructive technique, which meant that the sample could only be imaged once, unlike X-ray CT where the same bead could be examined multiple times, enabling comparative optimisation. The second disadvantage of FIB was the increased sample preparation requirement, which could result in undesirable changes to the sample itself, with the epoxy puck inherently susceptible to air pockets and streaking artefacts that were minimised but not eliminated entirely. X-ray CT was also capable of imaging the entire bead, whilst a FIB approach limited the overall volume that could be prepared and then milled. Overall, the superior pixel size achieved by FIB was countered by various attributes that make X-ray CT relatively more convenient to use whilst still being able to resolve chromatography bead internal structure. This highlights that suitable technique selection relies on various factors that need to be considered in relation to the sample itself and the final imaging requirements, of particular interest being the pixel size achievable in relation to the expected feature sizes. Both techniques performed considerably better for ceramic beads than for the softer agarose and cellulose samples, as stability issues were encountered with both FIB and X-ray CT imaging, in particular for the agarose and cellulose beads.

The reconstructed volumes were processed in Avizo in order to segment the bead and void phases and to remove any artefacts that had occurred due to sample preparation or imaging. Digitally processed geometries were then analysed for porosity, tortuosity factor, surface area to volume ratio and average pore diameter. For X-ray CT LFOV samples, cubic volumes with 40 μm dimensions were analysed, whilst for the HRES and FIB volumes dimensions of 10 μm–15 μm were obtained for structural quantification. Using a 3D approach enabled visualisation of key aspects relating to chromatographic structure, with Fig. 4 displaying outputs based on cellulose HRES X-ray CT imaging.
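As an indication of how two of these metrics can be derived from a segmented volume, a minimal Python sketch is shown below, continuing from the boolean `pore` array of the earlier segmentation sketch (the study itself used Avizo; the voxel size and the choice of solid-phase volume as the denominator of the surface area to volume ratio are assumptions).

```python
# Minimal sketch of porosity and surface-area-to-volume quantification from a
# segmented volume (the study used Avizo; the voxel size here is an example value).
import numpy as np
from skimage import measure

voxel_nm = 32.0                        # HRES X-ray CT voxel edge length (example)
porosity = pore.mean()                 # void voxels / total voxels

# Surface area of the pore-solid interface via marching cubes, in physical units.
verts, faces, _, _ = measure.marching_cubes(pore.astype(np.uint8), level=0.5,
                                            spacing=(voxel_nm,) * 3)
surface_area_nm2 = measure.mesh_surface_area(verts, faces)
solid_volume_nm3 = (~pore).sum() * voxel_nm ** 3
sa_to_vol = surface_area_nm2 / solid_volume_nm3   # nm^-1, used as a relative comparison metric

print(f"porosity = {porosity:.3f}, SA:V = {sa_to_vol:.4f} nm^-1")
```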
Producing 3D representations of chromatography bead structure enabled visualisation of important geometric aspects such as void-distance maps, aiding understanding of bead structure and pore geometry, with Fig. 5 showing results for porosity and pore size across the different materials and approaches used. To provide a direct comparison between LFOV and HRES X-ray CT imaging for each bead, a sub-volume with identical co-ordinates was produced, the only difference being the pixel size achieved in each case; this is referred to as the ‘adjusted’ volume, or ‘ADJ’. The X-ray CT porosity readings in Fig. 5 for each material were similar between the 63 nm and 32 nm pixel size approaches, with agarose and cellulose close to 70% in each case and ceramic at 65%. Ceramic beads of the same HyperD family have previously been determined to have an average porosity of 61% using Maxwell-derived equations based upon the available cross-sectional area, suggesting that the tomographic representation was accurately determining porosity values for the overall ceramic structure. However, for agarose and cellulose beads porosity readings are typically reported in the 80%–90% range using a variety of established techniques such as ISEC on popular and commercially available resins, although porosities below 70% have also been reported. Therefore, whilst the average porosities presented here lie within these ranges, the tomographic approaches displayed a considerably lower porosity than the values typically observed, albeit dependent on the variation between the different types of agarose and cellulose beads available. By using an improved average pixel size of between 15 nm and 25 nm via the focused ion beam slice-and-view approach, increased porosities closer to 80% were observed, nearer the values expected from other methods such as ISEC.

Whilst similarities in results were observed for overall bead porosity between imaging techniques, clear disparities were apparent when evaluating average pore sizes. In all cases, the 63 nm X-ray CT volumes were found to have a much larger average pore size than their high resolution and FIB counterparts, despite the similar overall porosities. This was attributed to the inferior pixel size being unable to discern the finest chromatography bead structural features, supported by the relative surface area to volume ratios displayed in Table 2 being considerably higher for the improved resolution approaches. Average pore sizes suggested in the literature using established techniques cover a vast range for the relevant bead materials, from below 10 nm determined using ISEC up to 100 nm, illustrating the difficulty in accurately determining pore size. Whilst the large field of view and adjusted counterparts displayed results above 130 nm in all cases, the higher resolution approaches suggested values between 60 nm and 100 nm across each material, which was within the expected range and order of magnitude, albeit at the higher values. Angelo et al. discuss the potential for SEM imaging of cellulose beads to give an approximate average pore size of 50 nm at the surface. Differences were expected between tomographic imaging methods when determining average pore size due to the differing pixel dimensions, where the minimum theoretical pore size would be one pixel. By obtaining an improved pixel size, finer porous networks could be resolved, as can be seen in Fig. 2 when comparing the LFOV and HRES X-ray CT visualisations of chromatography bead structure.
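One simple way to obtain a void-distance map and a rough pore-size estimate from the segmented void phase is via a Euclidean distance transform, sketched below; this is only an approximation for illustration (a full local-thickness or maximal-inscribed-sphere calculation, or the Avizo pore analysis actually used, would be more rigorous), and the voxel size is again an example value.

```python
# Sketch of a void-distance map and a simple pore-size estimate from the segmented
# void phase (an approximation for illustration, not the exact Avizo procedure).
import numpy as np
from scipy import ndimage

voxel_nm = 32.0                                    # example voxel size
# Distance from each void voxel to the nearest solid voxel, in nm.
dist_map = ndimage.distance_transform_edt(pore) * voxel_nm
# Crude local pore diameter estimate: twice the mean wall distance over void voxels;
# this tends to underestimate relative to a maximal-inscribed-sphere calculation.
mean_pore_diameter = 2.0 * dist_map[pore].mean()
print(f"approximate mean pore diameter: {mean_pore_diameter:.1f} nm")
```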
Whilst the ceramic results displayed a decreasing average pore size as the average voxel size improved, the smallest average pore sizes for the agarose and cellulose counterparts were determined by X-ray CT. This was attributed to the 40 nm slice thickness required for the softer bead materials: despite a superior average voxel dimension of approximately 25 nm, compared with 32 nm for HRES X-ray CT, the compromise in slice depth limited the smallest pore structure that could be resolved. Overall, tomographic quantification demonstrated that for aspects such as average pore size evaluation, achieving the best possible pixel size was favourable for obtaining more representative results, using either high resolution X-ray CT or FIB. However, for overall porosity measurements there was no major difference between X-ray CT images of the same bead, even at different pixel sizes. This suggested that the technique used for tomographic imaging should also be based upon the desired outcomes, as higher resolution methods can entail compromises such as loss of field of view.

Whilst aspects such as porosity are relatively straightforward to characterise using existing non-imaging methods for chromatography beads, others such as tortuosity have been both ill defined and inconsistently quantified, despite their inherent importance to liquid flow paths and thus transfer between phases. In other fields, a tomographic approach has been found to be an effective way to evaluate tortuosity, with continued efforts to standardise and better represent this factor. Two methods were therefore selected for this study, geometric tortuosity and tortuosity factor, both based upon the 3D volumes produced from imaging. The geometric variant, which is commonplace, was determined by relating the average path length through the segmented porous volume to the shortest possible distance. The tortuosity factor was evaluated using the TauFactor software, which simulates steady-state diffusion through the tomographic structure in a manner compatible with existing fundamental relationships. This enabled a more complex evaluation of tortuosity than the geometric variant, which relies on slice-to-slice positional movement without consideration of geometry-based flux constrictions. Results for both the geometric tortuosity and tortuosity factor variants are displayed in Fig. 6.
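For orientation, the geometric variant can be illustrated with a short Python sketch that measures geodesic path lengths through the void phase with a breadth-first wavefront and compares them with the straight-line distance; this is a simplified, face-connected approximation for illustration only and is not the TauFactor diffusion simulation used for the tortuosity factor (the boolean `pore` array is assumed from the earlier segmentation sketch).

```python
# Geometric tortuosity along the z axis of the segmented void phase, estimated by
# a breadth-first wavefront (face-connected steps); a simplified illustration, not
# the diffusion-based TauFactor calculation used for the tortuosity factor.
import numpy as np
from scipy import ndimage

dist = np.full(pore.shape, np.inf)        # geodesic distance in voxel steps
front = np.zeros_like(pore, dtype=bool)
front[0] = pore[0]                        # seed every void voxel on the inlet face
dist[front] = 0.0
step = 0
while front.any():
    step += 1
    grown = ndimage.binary_dilation(front) & pore & np.isinf(dist)
    dist[grown] = step
    front = grown

outlet = dist[-1][pore[-1]]               # path lengths reaching the opposite face
outlet = outlet[np.isfinite(outlet)]      # drop void voxels not connected to the inlet
tau_geometric = outlet.mean() / (pore.shape[0] - 1)
print(f"geometric tortuosity (z direction): {tau_geometric:.2f}")
```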
Tortuosity results for both measurement approaches were found to be below 2, which is at the lower end of the range reported by six other studies of chromatography bead tortuosity using other methods. A highly porous structure reconstructed from tomographic imaging, as was obtained in each case here, would be expected to give low tortuosity readings; however, the method used to determine tortuosity is a major factor to consider, particularly given the relatively lower porosities found here compared to other methods. As expected, the tortuosity factor was found to be greater than its geometric counterpart for all materials and tomographic methods, with an average difference of 0.22 for the softer agarose and cellulose volumes and 0.07 for the ceramic counterparts. This was attributed to the tortuosity factor considering neighbouring pixels of the same phase, which gives greater weight to the increased tortuosity in regions with finer pore sizes; such regions are less well represented by geometric tortuosity, which relies upon a scalar flow through pores regardless of size and is affected solely by relative void position and slice-to-slice movement. However, inconsistencies between Avizo and TauFactor results have been documented by Cooper et al. and so may also be a contributing factor here.

A major advantage of using a tomographic approach for 3D imaging and reconstruction of chromatography bead internal structure was that the digital volume could be quantitatively analysed for various important geometric characteristics. This also enabled comparison of the results with those obtained in the literature using established techniques, which have relied on either alternative imaging techniques or non-imaging methods including ISEC, BET and mercury porosimetry; these comparisons for porosity, tortuosity and average pore size are given in Table 3. They suggested that further improvements to pixel size would improve the accuracy of pore size determination by tomographic techniques for conventional chromatography beads; however, the soft materials commonly used for resins presented issues that required compromises, to aspects such as resolution, in order to obtain stable imaging. Achieving an optimal or relevant pixel size relies on knowing the smallest feature sizes in the structure and is important for producing truly accurate representations of the 3D structure at sufficient resolution, particularly if aspects such as average pore size are to be investigated, which rely heavily on being able to resolve even the smallest pores. However, these approaches were found to require several compromises in order to obtain high quality 3D representations, compared to large field of view X-ray CT scanning. The first of these was field of view, where an entire bead could be imaged when using X-ray CT at a 63 nm pixel size, but for the high resolution counterpart only the very top of the spherical sample could be imaged due to the field of view constraints. The most credible way to image an entire bead of approximately 50 μm in diameter would be to perform mosaic scans, where many data-sets are acquired using HRES mode and then digitally stitched together to produce an overall volume covering the entire bead whilst maintaining a 32 nm voxel size. However, this approach was deemed impractical because of the vast amount of time it would require, which is particularly problematic for the agarose and cellulose beads that displayed stability issues when exposed to the X-ray beam for any considerable amount of time.
Another problem with mosaic imaging at such high quality is that, in order to image the very centre of the sphere, a considerable amount of surrounding material would obscure the beam, detrimentally impacting the signal-to-noise ratio of the imaging and also presenting issues when accurately determining volume boundaries. FIB lift-outs for X-ray CT could be attempted to alleviate this issue; however, bead-epoxy definition would be required and the overall process would be more intensive than imaging using FIB itself. Simulating the tortuosity factor in different orientations for the volumes examined did not produce notably different results, so the pore structure was not observed to have major directional disparities in tortuosity, with distance maps such as that displayed in Fig. 4B useful for visualising chromatography bead structure. Tomographic approaches also enabled consideration of pore geometry and morphology, although the main value of interest here was average pore size, for comparison with results obtained using ISEC and other approaches.

Table 3 displays comparisons of porosity, tortuosity and average pore size against existing literature values based upon established methods; BET has also been commonly used to evaluate the available surface area of internal bead structure, which was investigated here in relative terms between the tomographic techniques. ISEC has been used for all three bead materials to quantify porosities and pore sizes, where an overall bed porosity that includes inter-bead voidage had been determined. This could be quantified using 3D imaging by combining the porosities obtained here with values obtained in a previous study. The aforementioned lower porosities obtained using the various tomographic approaches resulted in correspondingly reduced overall column porosities, although the exact bed geometry in each study was not identical. Whilst pore sizes were typically higher than those from other methods such as ISEC and mercury porosimetry, the same order of magnitude was achieved and the results were in line with values reported when imaging bead surfaces using electron microscopy. Overall, these results suggested that the pixel sizes used were suitable for imaging bead internal structure, although the higher resolution X-ray CT and FIB approaches were more appropriate for quantification of characteristics such as pore size due to their inherent sensitivity to the smallest features, giving results closer to those suggested by orthogonal methods. In contrast, aspects such as tortuosity did not show a definitive or reliable change when using higher resolution approaches, suggesting that visually identifying the major pore networks would be sufficient to approximate a tortuosity factor for the material, without the necessity of achieving a pixel size capable of accurately imaging the smallest features, which may present other imaging considerations and obstacles.

X-ray CT and FIB have been demonstrated to be effective methods for imaging the 3D internal structure of three chromatography bead materials, yielding quantitative results that are relatable to established approaches for measurement. The different pixel sizes achieved were compared both between and within the tomographic techniques explored here, highlighting the benefits of using nano-scale resolution approaches to both visualise and evaluate bead structure, in addition to the requirements for representative imaging. Limitations, particularly for the softer bead types, resulted in constraints and compromises that left a greater degree of the smallest porous structures obscured.
These trade-offs may be possible to overcome as the technology advances. Future areas of interest include expanding the technique and material portfolio, as well as investigating how chromatographic use and application affect bead structure. This would be greatly enhanced by improvements in X-ray CT or FIB technology, either by further improving the attainable pixel sizes whilst reducing the associated constraints, or through the availability of new techniques and technologies that enable new approaches to obtaining high quality tomographic representations of chromatography beads, including the smallest feature sizes. This would provide greater insight into how bead structure relates to important geometric factors such as tortuosity.
X-ray computed tomography (CT) and focused ion beam (FIB) microscopy were used to generate three dimensional representations of chromatography beads for quantitative analysis of important physical characteristics including tortuosity factor. Critical-point dried agarose, cellulose and ceramic beads were examined using both methods before digital reconstruction and geometry based analysis for comparison between techniques and materials examined. X-ray ‘nano’ CT attained a pixel size of 63 nm and 32 nm for respective large field of view and high resolution modes. FIB improved upon this to a 15 nm pixel size for the more rigid ceramic beads but required compromises for the softer agarose and cellulose materials, especially during physical sectioning that was not required for X-ray CT. Digital processing of raw slices was performed using software to produce 3D representations of bead geometry. Porosity, tortuosity factor, surface area to volume ratio and pore diameter were evaluated for each technique and material, with overall averaged simulated tortuosity factors of 1.36, 1.37 and 1.51 for agarose, cellulose and ceramic volumes respectively. Results were compared to existing literature values acquired using established imaging and non-imaging techniques to demonstrate the capability of tomographic approaches used here.
Alternative mechanisms for talin to mediate integrin function
In multicellular organisms, cells adhere to extracellular matrices (ECMs) to migrate and resist mechanical force. ECM adhesion is generally mediated by integrins, transmembrane receptors connecting the ECM to the actin cytoskeleton via multiple intracellular linker proteins. One intracellular adaptor, talin, is particularly critical for this connection, being uniquely essential for all integrin adhesive functions within developing organisms. Talin is a large multidomain molecule that makes numerous protein interactions and has at least two separable functions: modulating integrin affinity and linking integrins to actin. The N-terminal “head” domain is a modified FERM domain with four subdomains, F0–F3. An F2-F3 fragment binds the integrin β subunit cytoplasmic tail, with integrin-binding site 1 (IBS1) within F3, and is necessary and sufficient for “inside-out” integrin activation, increasing ECM binding. The head also contains membrane-binding sites in F1 and F2, and binds actin and other proteins. The rest of talin, the C-terminal “rod,” is composed of α-helical bundles, which include binding sites for vinculin, integrin, and actin. The vinculin-binding sites (VBSs) are buried within the helical bundles but are exposed by force across talin, contributing to the force dependency of vinculin recruitment. These findings led to a model where talin binds integrins via the head domain, activating integrins; the C-terminal actin-binding domain (ABD) binds to actin; and force from actin polymerization or myosin contraction stretches talin, exposing VBSs that recruit vinculin, providing additional links to actin. In addition to vinculin and actin, talin recruits other integrin-associated proteins, providing a scaffold for protein complex assembly. This model agrees with superresolution microscopy showing talin oriented perpendicular to the plasma membrane, with the head bound to integrin and the ABD to actin. However, it does not explain how IBS1-mutant talin is still recruited to adhesions, how the isolated C terminus of the talin rod can mediate cell proliferation, or why in Drosophila, IBS2 is required for more integrin-mediated processes than IBS1. Moreover, site-directed talin mutants retain partial activity, which varies with the developmental event examined. Thus, it is likely that talin function is more complex: different domains of talin may operate independently; different tissues or developmental stages may express “redundant” proteins that substitute for distinct talin subfunctions; or talin may function by more than one molecular mechanism, with different domains being more or less important for each mechanism. Our findings show that indeed, within the different cells of an organism, the way that talin assists integrins to mediate adhesion varies dramatically.

To identify key residues required for talin function, we exploited Drosophila genetics to generate cells homozygous for randomly generated mutations just in the wing and selected mutants impairing integrin adhesion. From 50,000 mutants screened, 39 talin mutants were isolated. To our surprise only two changed a single residue, and one of these changed the initiating methionine, preventing translation. The other altered a key residue in IBS1, changing R367 to histidine, similar to the talinR367A mutant we generated previously to impair integrin activation. The other 37 mutations were truncations caused by stop codons or frameshifts, providing an invaluable deletion series from the C terminus, which enabled the mapping of key activities, as described below.
For comparison, 19 of the 38 other mutants from the screen were single-residue changes. This suggests that there are few single residues critical for talin function or structure. To complement this series of C-terminal deletions, we generated a site-directed GFP-talinΔhead allele, expressed from the talin promoter and tagged with GFP, as well as the wild-type control construct GFP-talin, and combined them with a null allele in the endogenous gene. GFP-talin fully rescued the null allele, whereas Δhead was lethal, with the phenotypes described below. None of the mutant talins caused dominant effects; all are recessive alleles. We used them to assay the function of different regions of talin in three distinct integrin-mediated developmental processes: muscle attachment in the embryo; epidermal morphogenesis during early embryogenesis; and adhesion between the two epithelial cell layers of the adult wing. Surprisingly, each process required different talin domains.

The most prominent embryonic integrin-adhesion structures are the muscle attachment sites; without integrin or talin, the muscles fully detach. Many talin mutants retained some muscle attachment, quantified by measuring shortening of dorsal muscles. Three phenotypic classes were statistically distinct: null, partial loss of function, and wild-type, shown by bar color. Deletion of the head completely inactivated talin; Δhead protein levels were normal and the remaining rod fragment was recruited but had no detectable function. This is much stronger than the point mutant in IBS1, consistent with the head having other activities in addition to binding and activating integrin, such as membrane binding. The most C-terminal truncated protein, talin2509, which lacks half of the ABD dimerization helix, still had some function in muscle adhesion. Deletion of the whole ABD in talin2120 did not impair talin function further, consistent with the dimerization helix being essential for actin binding, and possibly only necessary for this function, because a point mutant that inactivates actin binding but not dimerization is equivalent to one that impairs both. The deletion that also removes IBS2 retained the same level of partial activity, even though a site-directed IBS2 mutant caused muscle detachment. Further deletion from the C terminus revealed an abrupt transition from partial activity to no activity when the last VBS was deleted, going from talin646 to talin511. This transition did not correlate with protein levels, because talin511 was expressed similarly to talin759 but caused stronger detachment. Thus, C-terminal deletions revealed two steps: talins lacking the ABD retained partial function, which was lost only when the last VBS was deleted.

This suggested that vinculin binding compensates for ABD deletion, so we tested vinculin’s contribution. To avoid any concern of partial vinculin activity in the existing Vinculin mutant, we generated a deletion removing all of the Vinculin coding sequence, ΔVinc, which is viable and does not cause any visible phenotype in the adult. Removal of vinculin from talin mutants that lacked the ABD but contained one or more VBSs caused the loss of the residual talin function. This was not due to a nonspecific additive effect, as removing vinculin did not enhance every talin mutant with partial activity. We therefore conclude that in the muscles, vinculin partially compensates for the absence of the ABD, possibly by using its own ABD. In summary, the muscle phenotype of the new talin mutants fully fits the model of talin function in focal adhesions outlined in the Introduction, as the head is critical and there is some overlap in the function of the ABD and bound vinculin.
However, this is not the case for other developmental processes. We next investigated the contribution of talin domains to the morphogenetic process of germband retraction (GBR) of the embryo, which reverses the elongation of the germband that occurred during gastrulation. Quantifying embryos with GBR defects showed that the alleles caused one of two effects, either indistinguishable from the null talin allele or wild-type. In embryos with the talin gene completely deleted, 38% failed to undergo GBR, showing that talin makes an important contribution to this process, but there must be a compensating factor that allows many embryos lacking talin to undergo GBR. In contrast to the muscle, loss of the head had no effect on talin’s contribution to GBR, whereas the most C-terminal truncated protein, talin2509, had no GBR activity. These findings were consistent with previous work showing that specific disruption of actin binding caused a null GBR defect, but contrasted with the null GBR defect seen in embryos expressing headless-talinGFP, a construct similar to our Δhead. The difference could be caused by the GFP tag inserted at the C terminus of headless-talinGFP, which may partially impair actin binding. As expected, the failure of talin2509 to mediate GBR was not worsened by removing vinculin, but surprisingly Δhead lost all its activity. Vinculin is not known to bind integrins, suggesting that vinculin is substituting for another function of talin’s head. Both talin head and vinculin bind actin and the membrane, suggesting that one of these activities is essential for GBR.

We next examined talin mutant function in wing adhesion. Because talin is required for viability, these experiments were performed by inducing homozygous mutant cells within the developing wing and assaying the wing blister phenotype. Quantitation of all talin mutations revealed four statistically distinct phenotypic classes, indicated by three bar colors and the absence of a bar, and showed that many talin truncations retained some adhesive function. Intriguingly, the requirement for particular talin domains was different from muscle or GBR. In contrast to both the roles for talin head in muscle and GBR, Δhead had partial activity in the wing. The C-terminal deletions that just impair the ABD had partial activity, similar to Δhead. Of interest, talin2120 had more activity than truncations up to talin2167, suggesting an inhibitory domain between 2120 and 2167. Uniquely in this tissue, we observed the abrupt transition from partial to null activity at the transition from talin2120 to talin2049. Notably, the 71-residue region between these deletion endpoints contains α helix 50, which has residues critical for IBS2 function and is a VBS. This suggests that binding of integrin, vinculin, or another molecule is critical, although the existence of many other VBSs in this truncation argues against it being vinculin. These results suggest that both IBSs contribute to talin function in the wing. We then tested whether they needed to be in the same molecule by measuring whether the partial blister phenotype of Δhead could be ameliorated by combining it with a truncation producing talin head, talin646, but it was not. This demonstrates that for full function the head and rod must be in the same molecule. The remaining function of truncations lacking the ABD required vinculin, similar to muscle, but in contrast to GBR the remaining function of Δhead did not require vinculin.
This finding was also important because it showed that removing vinculin does not enhance every talin mutant that retains partial activity. To summarize, each developmental process requires a unique set of talin regions. Three key mutants reveal these differences: Δhead completely inactivated function in muscle, was fully functional for GBR as long as vinculin was present, and had partial function in the wing, regardless of vinculin’s presence; the most C-terminal truncated protein, talin2509, which impairs actin binding, had partial vinculin-dependent function in muscle and wing and no function in GBR; and the mutant talin lacking the ABD and IBS2/α helix 50, talin2049, retained the partial activity of ABD deletions in muscle, had the same null defect as ABD deletions in GBR, and eliminated the partial activity in the wing. These differences suggested that the mechanism of talin function in each process could be different. We therefore considered alternative models of talin function to explain these differences and focused on the differences between muscle and wing, because they both involve clear integrin-containing adhesive structures that mediate strong adhesion between tissue layers. We first checked that these differences in activity of mutant talins are not caused by altered protein stability at different developmental stages. Phenotypic differences in muscle versus wing for Δhead and talin646 were not explained by reduced talin levels in the tissue with the stronger phenotype.

In the wing, both the residual activity of Δhead, which lacks IBS1, and the importance of IBS2 support a key role for IBS2 binding to integrin. One way to explain the results is if, in muscles, a single talin molecule lacking its ABD and IBS2 can link an integrin to actin with IBS1 and vinculin; in contrast, this does not work in the wing, where instead each talin molecule must bind two integrins. This latter point arises because we note that every talin mutant that retained partial activity in the wing can make a talin dimer/monomer with two IBSs: Δhead still has the dimerization helix and so can make a homodimer with two IBS2s, whereas deletion of the ABD results in a monomer containing IBS1 and IBS2. It also fits with our finding that for full function, both IBSs have to be in the same molecule. We therefore tested whether the proximity between integrin and IBS2 varied in the two tissues by measuring fluorescence resonance energy transfer (FRET) within the whole animal. We quantified FRET by fluorescence lifetime imaging (FLIM), which measures the reduction in lifetime of the donor fluorescence when FRET occurs between two fluorescent molecules less than 10 nm apart. Fortuitously, a gene trap insertion was isolated that permits the insertion of mCherry in-frame into talin, 18 amino acids C-terminal to IBS2/α helix 50. In addition, we generated an integrin βPS subunit tagged with GFP at the C terminus by homologous recombination, and genomic rescue constructs encoding vinculin tagged with GFP or red fluorescent protein (RFP) at the C terminus. The fluorescent tags did not impair function, as the insertions into the integrin and talin genes were homozygous viable and fertile with no visible defect, and the tagged vinculins tightly colocalized with integrins. The βPS-GFP/talinIBS2-mCherry pair did not show FRET in muscles, but showed substantial FRET in wing adhesions. Thus, talin’s IBS2 is in closer proximity to integrin in wing versus muscle, supporting the stronger phenotype we observed when IBS2 was deleted in the wing but not in muscle.
The degree of proximity varied between different wing adhesions, suggesting a dynamic interaction. The pattern varied from wing to wing, and this variability was found in live wings as well as at earlier and later pupal stages. We then examined whether vinculin was in close proximity to talin head or IBS2 by analyzing two FRET pairs: vinculin-GFP/talinIBS2-mCherry and GFP-talin/vinculin-RFP. Vinculin’s C terminus was in close proximity to IBS2 in both tissues, demonstrating that we can detect FRET at muscle adhesions, and therefore there is no technical reason for not detecting FRET there between integrin and IBS2. Vinculin’s C terminus was also in close proximity to talin head, but only in the wing, consistent with distinct molecular architectures in the two tissues. The FRET of these pairs showed a similar level of variability in the wing as βPS-GFP/talinIBS2-mCherry, suggesting integrin adhesions are generally more dynamic in wing versus muscle. Finally, the GFP-talin/talinIBS2-mCherry pair did not show FRET in either wing or muscle, indicating that talin head is not close to IBS2, and confirming that the FRET we did observe in the wing is not due to any nonspecific crowding effect.

The lack of IBS2 proximity to integrin in muscles does not explain the previous result that an IBS2 point mutant has a strong muscle phenotype. To resolve this contradiction, we hypothesized that, in the muscle, talin initially binds to integrin via IBS2, and then actin binding via the ABD and vinculin pulls the talin C terminus away from the membrane. This prompted a number of new experiments to determine the extent of the separation between IBS2 and integrins, and to test whether actomyosin activity and vinculin are involved in this separation. We used superresolution 3D structured illumination microscopy (3D-SIM) and observed at muscle attachment sites (MASs) a clear separation between βPS-GFP and talinIBS2-mCherry and between the two ends of talin, GFP-talin/talinIBS2-mCherry. In contrast, no separation was detected between a combination of vinculins C-terminally tagged with GFP or RFP. 3D-SIM has a resolution of 120 nm, consistent with separation of talin ends by >250 nm in mammalian cells, which is stretched relative to the ∼60-nm length seen by electron microscopy. This indicates that talin is stretched perpendicular to muscle ends, resulting in the separation of IBS2 from integrins. In contrast, in the wing, we never observed a separation between βPS-GFP and talinIBS2-mCherry or GFP-talin and talinIBS2-mCherry. This fits with the fact that IBS2 contributes to function in the wing and suggests that talin head is localized close to integrins at the membrane. Thus, these observations show that the differences in the regions of talin that are crucial in the two tissues are reflected by a difference in the configuration of talin, suggesting that talin is oriented perpendicular to the membrane in muscles and parallel in wings.

The separation between integrins and IBS2 at MASs could result from forces exerted on the rod of talin, pulling it away from the membrane. When we disrupted the contractile apparatus of muscles by removing muscle myosin, we could now detect FRET between βPS-GFP and talinIBS2-mCherry, showing that they have moved closer together. We hypothesized that actomyosin’s contribution could be mediated directly via talin’s ABD and/or indirectly via vinculin’s ABD. Supporting the latter, removing vinculin also resulted in integrin and IBS2 coming together, comparable to the FRET observed in muscle myosin mutants.
It appears that only a fraction of talin becomes oriented with IBS2 close to integrin, because βPS-GFP and talinIBS2-mCherry remained separated at MASs in vinculin mutants when visualized with superresolution microscopy. It proved impossible to perform 3D-SIM in muscle myosin mutants, because the βPS-GFP/talinIBS2-mCherry fluorescence intensity was too low. An alternative way that loss of vinculin could increase the fraction of talins with IBS2 in close proximity to integrin is if vinculin competes with integrins to bind α helix 50/IBS2, as this helix is also a VBS. To test whether vinculin competes with integrins for IBS2, we determined whether removing vinculin increased βPS-GFP/talinIBS2-mCherry FRET in the wing or increased IBS2-GFP recruitment to MASs, and found that it did not. The lack of competition may suggest that the vinculin-GFP/talinIBS2-mCherry FRET signal derives from the close proximity between vinculin-GFP bound to another VBS and the mCherry inserted near IBS2. Altogether, our data support a mechanism by which actomyosin contractions and vinculin separate IBS2 from integrins in muscle, most likely by exerting force on the C terminus of talin that pulls it away from integrins.

We have presented key findings that change our view of talin function: talin is needed for every integrin adhesion event in fly development, each with variable dependence on individual talin interaction sites; the IBS2 of talin is separated from integrins in muscle but not in wing, and this partly requires myosin activity and vinculin; and even though the absence of vinculin is tolerated, vinculin is required for certain mutant talins to retain their residual function. Vinculin’s maintenance through evolution in Drosophila was at odds with the lack of a mutant phenotype, especially as vinculin mutants are lethal in other organisms. However, vinculin mutants have recently been observed to cause mild muscle detachment in late-stage fly larvae, and here we show that vinculin is required for the partial activity of talin mutants. Thus, vinculin supports normal functions of talin by adding additional actin/membrane-binding sites. Activated vinculin increases focal adhesion size, slows talin turnover, and maintains stretched talin in an unfolded conformation, and so vinculin may also increase the stability of mutant talins at adhesion sites. The ability of vinculin to aid mutant talin function is somewhat paradoxical if stretch between head and ABD is required to expose VBSs: how, therefore, do talins that lack the C-terminal ABD recruit vinculin? Possible explanations include: some VBSs are exposed in unstretched talin; other interactions stretch and expose VBSs; truncation exposes VBSs; and activation of vinculin drives binding to truncated talins, because artificially activated vinculin can recruit talin. Our finding that the C terminus of vinculin was in close enough proximity to talin to show FRET was surprising, because the talin-binding domain of vinculin is at its N terminus and therefore the actin-binding C terminus would be expected to extend away from talin. In all our other ongoing experiments, we only detect FRET by FLIM when the tag is adjacent to the interaction site. The close proximity therefore suggests that vinculin becomes aligned with talin. In muscle and wing, this alignment would be in the same direction, with vinculin binding a VBS N-terminal to IBS2, resulting in vinculin’s C terminus being in close proximity to the mCherry inserted C-terminal to IBS2. This is consistent with actin-mediated forces pulling the C-terminal ABDs of talin and vinculin away from integrins and talin head, respectively.
The FRET indicates that some vinculin is pulled in the opposite direction in wings but not muscles, bringing vinculin’s C terminus near talin’s N terminus. This difference fits talin’s parallel orientation in the wing, where the cortical actin meshwork could pull vinculin in a variety of directions. It is also possible that talin’s head and vinculin’s C terminus are brought into proximity by membrane binding. Our results provide additional support for binding of IBS2 to integrins, consistent with results showing that mutating IBS2 and mutating the IBS2-binding site on the βPS integrin subunit cytoplasmic domain have similar phenotypes. We show that continued interaction between IBS2 and integrins is context dependent, with lack of IBS2 proximity to integrins at MASs, as in focal adhesions, and retention of proximity in the wing. Our finding that IBS2 was not required in the embryo for the residual function of talin lacking the ABD, or for talin/PINCH maintenance in this mutant, seems inconsistent with the defects caused by an IBS2 site-directed mutation, including muscle detachment and separation of talin and PINCH from integrins. Furthermore, we need to explain how IBS2 can be required for talin to remain bound to integrins but not remain in close proximity. One explanation is to hypothesize that IBS2-integrin binding strengthens the interaction of talin’s head with another integrin or the plasma membrane, so that it can resist the pulling forces on the ABD and vinculin that separate IBS2 away from integrins. When IBS2 is mutated, the interaction between talin head and integrins/membrane is weakened, such that the full-length protein is pulled off, but a protein lacking the ABD remains attached sufficiently to provide some function. This suggests that IBS2 should be in close proximity to integrins during early stages of adhesion formation in muscles, but we were unable to detect any FRET. It could therefore be a transient interaction, or IBS2 may bind another protein in muscles.

We propose three distinct models for the mechanisms adopted by talin to mediate integrin adhesion, and these explain all our findings. First, in muscle, talin appears to work as presented in the Introduction, with talin dimers bound to integrins or membrane with their heads and to actin directly with the C-terminal ABD and indirectly with vinculin. Actomyosin activity and vinculin likely exert force on the rod of talin, each separating a fraction of the IBS2s from integrins. Second, in the wing, talin is oriented parallel to the membrane, with each talin dimer binding four integrins using all IBSs. Alternatively, talin heads are bound to the membrane or cortical actin, and the IBS2s are bound to two integrins. Actin is bound directly with the C-terminal ABD and indirectly with vinculin.
Third, during GBR, we suggest that talin dimers are bound to cortical actin or membrane directly with the head and indirectly with vinculin. Because IBS2 is critical for GBR, we further suggest that talin dimers bind to integrins with IBS2s and to actin with the C-terminal ABD. In these models, we have opted for the simplest explanation, in which IBS2 binds directly to integrins, but we have not ruled out that there are intermediate adaptor proteins. In the wing, the proximity between IBS2 and integrins could result from insufficient actomyosin activity perpendicular to the membrane, but such a “passive” mechanism could not explain why IBS2 was critical in some tissues. The requirement for both talin head and IBS2 in the wing and during GBR suggests new parallel orientations of talin that could sense stretching forces within the adhesion plane, similar to EPLIN at cell-cell adhesions. In the wing, stretch would occur between integrins, and in GBR between integrin and membrane or actin. It is also possible that talin senses stretch between the membrane and cortical actin, as organisms lacking integrins have talin. The different orientations will also impact integrin density and integrin:talin stoichiometry. In the wing, the distance between integrins can be fixed by talin, whereas in the muscle, integrin density would vary, depending on the flexibility of the talin dimer. It will be of interest to find out whether a parallel orientation of talin is found in the epithelia of other organisms. Finally, our results emphasize that when mutant versions of a protein are found to work better in some cell types than others, this may indicate different mechanisms of action, a possibility that could resolve apparently contradictory findings.

Details on the generation of new rhea and Vinculin alleles can be found in Supplemental Experimental Procedures. For wing blister quantification, mitotic clones were generated in the wings of heterozygous flies by crossing rhea mutant males to w; PVg P; P2A females. Embryonic phenotype quantification was performed on mutant embryos lacking both maternal and zygotic wild-type talin and/or vinculin, as they were obtained from germline clones generated in heterozygous mutant females by crossing rhea mutant females to P1, y w; P3L P2A or ΔVinc w; P38/CyO; P3L P2A males. Heat shocks were performed twice, for 1 hr and 15 min each, at 37°C at the L1 and L2 larval stages. TalinIBS2-mCherry was kindly provided by H.J. Bellen. The myosin heavy chain mutant used was Mhc, kindly provided by S.I. Bernstein.
Bernstein.IBS2-GFP recruitment to muscle attachment sites was performed with UAS::IBS2-GFP expressed in muscles with P3.Details on the generation of genes expressing fluorescently tagged talin, vinculin, and βPS integrin subunit are in Supplemental Experimental Procedures.Immunostainings were carried out according to standard procedures, as fully described in Supplemental Experimental Procedures.Primary antibodies were rabbit anti-talin N terminus , rabbit anti-GFP, mouse anti-muscle myosin , and rat anti-αPS2 .Samples were scanned with an Olympus FV1000 confocal microscope using a 20×/0.75 NA objective with 1.2× zoom for whole-embryo pictures or a 60×/1.35 NA objective with 2× zoom for muscle attachments.The images were processed with ImageJ and Adobe Photoshop.The lengths of embryonic dorsal muscles were measured with ImageJ from raw z stacks.The average muscle shortening and standard deviation for each genotype were obtained from five embryos, in each of which five dorsal muscles were measured to calculate a mean length per embryo.Each dorsal muscle length was normalized by the mean length of the embryo and compared to wild-type to calculate the percentage of shortening for each genotype.Germband retraction defects were scored by counting embryos stained with anti-talin N terminus, which exhibits a background staining outlining the epidermis.The quantitation of IBS2-GFP recruitment to MAS was performed on dorsal MASs of 13–15 live 0- to 1-hr-old larvae.Two five-frame stacks per larvae were imaged and analyzed with MATLAB.Statistical differences in muscle shortening were determined by Student tests using Excel.Statistical differences in the frequencies of wing blisters or GBR defects were determined by chi-square tests using Prism software.FRET-FLIM experiments were repeated at least twice, and ANOVA was used to test statistical significance between different populations of data.Sixteen- to 20-hr-old embryos and 48-hr-old pupal wings were fixed with 4% formaldehyde, using standard procedures, for 20 min or 2 hr at room temperature.For FRET-FLIM, samples were incubated 15 min in NaBH4 to reduce autofluorescence and mounted with FluorSave reagent.Details of imaging FRET-FLIM and 3D-SIM are in Supplemental Experimental Procedures.For each genotype analyzed by FLIM, n > 10 samples were imaged and only one image was analyzed per sample.All pixels within a single image were averaged to a single value, and the n values per genotype were used to calculate the mean FRET efficiency and SEM.Lifetime image examples shown are presented using a pseudocolor scale whereby blue depicts normal GFP lifetime and red depicts reduced GFP lifetime.For each genotype analyzed by 3D-SIM, n > 5 samples were imaged and only one image was analyzed per sample.B.K. performed all experiments except the FLIM imaging, which M.P. advised on and performed.S.L.H. and N.H.B. performed the genetic screen.J.W. generated βPS-GFP and Vinc-GFP/RFP.S.L.H. and R.J. generated and mapped ΔVinc.N.H.B. generated the GFP-talin and Δhead constructs.B.K., S.L.H., and N.H.B. designed the experiments.B.K. and N.H.B. wrote the paper.
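The muscle-shortening quantification described above lends itself to a short worked example. The following Python sketch is not the authors' code; it assumes a hypothetical data layout (five dorsal muscle lengths per embryo, five embryos per genotype, plus an overall embryo length used as the normalizer) and shows how a per-genotype percentage of shortening relative to wild type could be computed.

```python
# Minimal sketch, not the authors' code: per-genotype muscle shortening from
# five dorsal muscles per embryo and five embryos per genotype, with each
# embryo's mean muscle length normalized by an assumed overall embryo length.
from statistics import mean, stdev

# hypothetical measurements: genotype -> per-embryo lists of five muscle lengths (um)
muscle_lengths = {
    "wild_type": [[52.1, 50.8, 51.5, 53.0, 49.9],
                  [51.2, 52.4, 50.1, 51.9, 52.8],
                  [50.5, 51.0, 52.2, 50.9, 51.7],
                  [53.1, 52.0, 51.4, 50.6, 52.3],
                  [51.8, 50.2, 51.1, 52.6, 51.3]],
    "mutant_x":  [[44.2, 43.5, 45.1, 42.8, 44.0],
                  [43.0, 44.6, 42.5, 43.8, 44.9],
                  [45.0, 43.2, 44.1, 42.9, 43.6],
                  [44.8, 43.9, 42.7, 44.3, 43.1],
                  [43.4, 44.0, 42.6, 43.7, 44.5]],
}
embryo_length = {"wild_type": [420.0] * 5, "mutant_x": [410.0] * 5}  # assumed normalizer (um)

def normalized_means(genotype):
    """One normalized mean dorsal muscle length per embryo."""
    return [mean(muscles) / length
            for muscles, length in zip(muscle_lengths[genotype], embryo_length[genotype])]

wt = mean(normalized_means("wild_type"))
for genotype in muscle_lengths:
    values = normalized_means(genotype)
    shortening = 100.0 * (1.0 - mean(values) / wt)   # percent shortening vs. wild type
    print(f"{genotype}: shortening = {shortening:.1f}%, SD = {stdev(values):.4f}")
```

The FLIM summary described above follows the same pattern: the pixel-averaged lifetime of each image gives one value per sample, and those n values per genotype yield the mean FRET efficiency and its SEM.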
Cell-matrix adhesion is essential for building animals, promoting tissue cohesion, and enabling cells to migrate and resist mechanical force. Talin is an intracellular protein that is critical for linking integrin extracellular-matrix receptors to the actin cytoskeleton. A key question raised by structure-function studies is whether talin, which is critical for all integrin-mediated adhesion, acts in the same way in every context. We show that distinct combinations of talin domains are required for each of three different integrin functions during Drosophila development. The partial function of some mutant talins requires vinculin, indicating that recruitment of vinculin allows talin to duplicate its own activities. The different requirements are best explained by alternative mechanisms of talin function, with talin using one or both of its integrin-binding sites. We confirmed these alternatives by showing that the proximity between the second integrin-binding site and integrins differs, suggesting that talin adopts different orientations relative to integrins. Finally, we show that vinculin and actomyosin activity help change talin's orientation. These findings demonstrate that the mechanism of talin function differs in each developmental context examined. The different arrangements of the talin molecule relative to integrins suggest that talin is able to sense different force vectors, either parallel or perpendicular to the membrane. This provides a paradigm for proteins whose apparent uniform function is in fact achieved by a variety of distinct mechanisms involving different molecular architectures.
106
Miocene biostratigraphy and paleoecology from dinoflagellates, benthic foraminifera and calcareous nannofossils on the Colombian Pacific coast
The biostratigraphy and paleoecology of Neogene dinoflagellate cysts have been widely reported in high and mid latitudes.However, there are only few published papers about tropical Neogene dinoflagellates.Furthermore, there are no publications on dinoflagellates from western Colombia."In this paper, we present the palynological data, primarily dinoflagellates, of the exploratory well Buenaventura 1-ST-P, drilled in western Colombia by the Agencia Nacional de Hidrocarburos-ANH.Since we are dealing exclusively with fossil dinoflagellate cysts, we will refer to them as dinoflagellates.This section includes strata accumulated from latest early Miocene to late Miocene in the southern part of the Chocó Basin.Additionally, we provide an integrated biostratigraphy using dinoflagellates together with calcareous nannofossils, and pollen and spores.We also present the paleobathymetric evolution of the section studied based on lithology, benthic foraminifera and palynomorphs.We propose a chronostratigraphic framework for western Colombia, and correlate the changes observed in the dinoflagellate assemblages, calcareous nannofossils, and benthic foraminiferal assemblages with geologic and paleoceanographic events that affected this region of South America.The geologic history of western Colombia covers Late Cretaceous to Recent times, and is closely related to subduction of the oceanic Nazca plate under South America.One of the main events of this interaction is the collision of the oceanic Panamá-Chocó volcanic arc with South America.This event started in late Oligocene to early Miocene times, and uplifted the mountain ranges in the northwestern corner of South America.Afterwards, this collision caused the Neogene uplift of Central America and the separation of the Caribbean and Pacific Basins.The Chocó basin, is part of the so-called Chocó Block in Colombia, and originated by extension in the Panamá-Chocó forearc.To the north, this basin reaches to Panama, to the south it is limited by the Garrapatas fault system, to the east by the Western Cordillera, to the northwest by the Baudó arc and to the southwest by the Pacific Ocean.Several authors have subdivided the Chocó Basin into two basins, the southern San Juan Basin and the northern Atrato Basin, separated by the structurally deformed Istmina-Condoto high.The most complete paleogeographic model of the Neogene evolution of NW South America describes the geological and paleobathymetric characteristics, as well as some aspects of the paleoceanographic circulation.This model proposes that one part of the Colombian northwestern region, together with the Isthmus of Panama, formed the Chocó Block, or terrane.Three main structural features characterize this terrane; these are the Dabeiba and Baudó arcs, separating the Atrato and Chucunaque basins, and the Istmina-Condoto high.Duque-Caro reported a chronostratigraphic framework for the Atrato Basin to the north of Buenaventura.This author includes the upper Oligocene to lower Miocene limestones of the Truando Group, and the middle Miocene to lower Pliocene clastics of the Rio Salado group.For the San Juan Basin, the Neogene stratigraphic column reported by Cediel et al. 
rests unconformably on the Eocene Iró Formation and includes the lower Miocene limestones of the Istmina Formation and the La Mojarra Conglomerates and the middle Miocene fine to coarse clastics of the Condoto Formation; the Pliocene Raposo and Mayorquín Formations unconformably overlie this column.The stratigraphic column for the San Juan Basin includes the Cretaceous Dagua and Diabásico groups forming the basement, which is conformably overlain by the Paleocene-Eocene Iró Formation.This Paleogene unit is in turn unconformably overlain by the Oligocene Sierra Formation, and unconformably over the Sierra Fm. lies the lower-middle Miocene San Juan Group, comprising the Istmina Formation, the La Mojarra Conglomerates and the Condoto Formation.Overlying this group is the upper Miocene Munguidó Formation, which consists of sandstones, siltstones and mudstones.At the top of the column are the Pliocene-Quaternary Raposo and Mayorquín Formations, which are composed of fine to coarse clastics and were drilled by the Buenaventura 1-ST-P well.To the south of Buenaventura, a low-resolution chronostratigraphic framework of the Neogene was interpreted from seismic data of the continental margin in the Colombian Pacific area.These authors assign an upper Miocene unit to the Naya Formation, which unconformably underlies the Plio-Pleistocene Mayorquín and Raposo Formations.Samples for this study come from the exploratory well Buenaventura 1-ST-P, located onshore near the city of Buenaventura, Department of Valle del Cauca, Colombia.This well was drilled by the ANH in 2012 and reached a depth of 3699 m.We studied 50 samples that cover 1987 m of stratigraphic thickness.The cored intervals studied include 1372–1411 m, 1502–1683 m, and 2152–2216 m.We chose the interval from 1070 to 3057 m for the palynological analyses because it contains mainly finer-grained lithologies and marine microfossils.Of the 50 samples analyzed, 38 are cuttings and 12 correspond to core samples.The combined analysis of the gamma ray well logs and the lithological well reports is shown in Fig.
2.Samples were processed with the normal palynological technique that involves acid attack to remove the mineral fraction.This attack included 37% HCl for 12 h to remove carbonates, and 70% HF for 24 h to remove silicates.The residue was then rinsed and sieved on 106 μm and 10 μm meshes, to eliminate coarse and fine material respectively.The material >10 μm was mounted on a slide with polyvinyl alcohol and sealed with Canada balsam.The identification and counting of dinoflagellates of each sample was done under an Olympus CX-31 microscope at 40× and 100× magnification.Dinoflagellate taxonomy follows Williams et al.To assign ages to the section studied, we considered the known stratigraphic ranges of the species of calcareous nannofossils, dinoflagellates, pollen and spores in each sample.The basic framework is based on the calcareous nannofossil biochronozones and their calibration with paleomagnetic chronostratigraphy and radiometric dates.The biostratigraphic ranges of the calcareous nannofossils are those reported by Young et al.The stratigraphic ranges of the continental palynomorphs were taken from the zonation by Jaramillo et al.Since there are no zonations for dinoflagellates from tropical latitudes, we used their ranges as calibrated with calcareous nannofossils and planktonic foraminiferal biochronozones, or those reported by Duffield and Stein; de Verteuil and Norris, de Verteuil, Harland and Pudsey; Williams et al., Louwye et al.We integrated our results with data from calcareous nannofossils, benthic foraminifera, and pollen and spores observed in splits of the same sampled intervals.Paleoecological interpretations from benthic foraminifera follow those proposed by Ingle, Resig, and Nwaejije et al.Benthic foraminifera observed include mostly calcareous hyaline taxa, a few porcelaneous and no arenaceous forms.We assigned a paleobathymetric range to each sample according to the following criteria; transitional to inner neritic facies contain benthic foraminifera like Ammonia tepida and Amphistegina sp., and dinoflagellates like Polysphaeridium zoharyi, Tuberculodinium vancampoae and Homotrybliun spp., combined with medium to coarse granulometry, and by ~ 20% of marine palynomorphs.Inner to middle neritic environments are characterized by the foraminifera Hanzawaia boueana, Lenticulina cf. cultrata, Lenticulina spp., Nonion commune, Nonion sp. and Nonionella miocenica, combined with medium to fine-grained clastics, and with abundance of dinoflagellates between ~20–40% of the total palynomorphs.Middle to outer neritic facies contain the foraminifera Bolivina cf. imporcata, Brizalina interjuncta-bicostata, Bulimina costata, Bulimina elongata and Bulimina marginata, combined with medium to very fine granulometry and an abundance of marine palynomorphs from ~30–50%.Outer neritic to upper bathyal environments are characterized by the benthic foraminifera Globocassidulina oblonga, Gyroidina orbicularis, Uvigerina cf. acuminata, Uvigerina cf. 
isidroensis, Uvigerina hootsi, Globobulimina oblonga, Brizalina interjuncta-bicostata, and Valvulineria glabra and Valvulineria spp., associated with fine to very fine grained clastics, and >40% of the dinoflagellate abundance.The studied samples contain 69 dinoflagellate species that belong to 30 genera.Of these species, four are considered as conferred and another four are considered as affinity, due to their poor state of preservation.Out of the 61 positively identified species, 41 correspond to the order Gonyaulacales, 18 to the order Peridiniales, one belongs to the Gymnodiniales and another one is Incertae sedis.Species used in the biostratigraphic analyses and those found commonly in our samples are shown in Plates 1 and 2.The dinoflagellate association indicates that the studied interval was deposited from the early to the late Miocene.The integration of these results with calcareous nannofossils, pollen and spore data reported from the Buenaventura 1-ST-P well improves the biostratigraphic resolution attained, and helps calibrate the biostratigraphic ranges of some of the dinoflagellate species, which are common in tropical areas.The bioevents used to assign ages to the section studied include the stratigraphic ranges of each species indicated in Ma in parentheses.We did not include data on reworking material from Cretaceous and Paleogene material, because it is uncommon.The following are the intervals dated, notes on the markers used and other important points.Taxa mentioned correspond to different microfossil groups and are indicated as follows: * = calcareous nannofossil; # = dinoflagellate or acritarch, and + = pollen or spore.The deepest sample studied at 3057 m indicates the age of the oldest limit by the presence of *Reticulofenestra pseudoumbilicus.The age of the upper limit of this interval is indicated by the presence of * Helicosphaera vedderi at 2484 m.This interval most likely contains the Burdigalian-Langhian boundary, which has been dated as 15.97 Ma.The age of the upper limit of this interval is also indicated by the presence of *Helicosphaera vedderi at 2484 m.The age of the oldest limit is indicated by the presence of # Labyrinthodinium truncatum in the sample at 2984 m.This middle Miocene interval is subdivided by age into two parts, as indicated by the presence of +Crassoretitriletes vanraadshooveni in the sample at 2612 m.The upper part almost certainly contains the boundary between Langhian and Serravallian times, which has been dated at 13.8 Ma.The upper part also contains the lowest stratigraphic presence in the column studied, of #Selenopemphix dionaeacysta at 2569 m.This interval was almost certainly deposited during the Serravallian, which ranges from 13.82 to 11.62 Ma.The age of its upper limit is indicated by the presence of *Helicosphaera walbersdorfensis in the sample at 2265 m, while the age of the lower limit is still indicated by the presence of +Crassoretitriletes vanraadshooveni in the sample at 2612 m.This interval also contains the lowest stratigraphic presence, in the column studied, of #Trinovantedinium xylochoporum at 2313 m.This interval is subdivided by age into three parts.The age of its upper limit is indicated by the presence of *Catinaster coalitus at 1966 m, while the presence of *Helicosphaera stalis at 2265 m indicates the age of the lower limit.The limit between the upper and middle parts is indicated by the presence of *Coccolithus miopelagicus in the sample 2157 m, while the presence *Helicosphaera walbersdorfensis at 2228 m 
indicates the limit between the middle and the lower parts.The lower and middle parts were most likely deposited during the Serravallian, and the Serravallian-Tortonian boundary is probably located in the lower third of the upper part.The upper part of this interval, contains the presence of #Selenopemphix minusa, below *C. coalitus, and the highest stratigraphic presence, in the column studied, of #L. truncatum at 2051 m. #S. minusa has been reported from, but our data extends the lower stratigraphic range of this species from 8.64 to at least 9.0 Ma.The age of the upper limit of this interval is indicated by the presence of *Discoaster loeblichii at 1663 m, while the age of the lower limit is indicated by the presence of *Catinaster coalitus in the sample 1966 m.This interval contains the highest stratigraphic presence, in the column studied, of #Hystrichosphaeropsis obscura at 1683 m.The age of the upper limit is indicated by the concurrent presence of # Trinovantedinium ferugnomatum in the uppermost sample analyzed for dinoflagellates, while the age of its lower limit is indicated by the presence of *Discoaster loeblichii at 1663 m.This interval is subdivided by age into two parts, and the limit between them is indicated by the presence of *Discoaster berggrenii in a core sample at 1167 m.The Tortonian-Messinian boundary, which has been dated at 7.25 Ma, is most probably located in the upper half of the lower part, while the upper part was in all likelihood deposited during Messinian times.This interval is characterized by the presence of #S. minusa at 1100 m and #T. xylochoporum in the sample 1131 m, and by the highest stratigraphic appearance, in the column studied, of #T. ferugnomatum and #Q. condita at 1527 m.Interpretation of the paleobathymetric curve is based mostly on lithology, and integrating benthic foraminifera and palynology.This combination of data indicates three well-developed transgressions that reached upper bathyal depths.The basal conglomeratic interval shows a slight transgressive trend from the bottom of the section drilled to ca. 2610 m, where shales and siltstones were deposited.The high relative abundance of continental palynomorphs, together with the scarce presence of dinoflagellates indicate that the conglomeratic interval was deposited in continental to transitional environments.The silty-shaly interval around 2610 m also contains dinoflagellates, which are probably in situ and most likely indicate a transitional to inner neritic environment.The oldest well-developed transgression began above the conglomeratic interval, sometime between 14.18 and 13.4 Ma, and reached its maximum depth during the interval from 13.53 to ~9 Ma.This maximum depth interval is represented by shales with the benthic foraminifera G. orbicularis, Valvulineria glabra, G. oblonga, U. cf. acuminata, and U. cf. isidroensis.This cycle ended at approximately 10 Ma, at the base of the conglomeratic-sandy strata at ~1900 m.The age of the maximum transgression of this cycle coincides with that of the TOR-1 third order cycle at 10.5 Ma, as proposed by Haq et al. and updated by Gradstein et al.The second T-R cycle began in transitional environments, reaching its maximum depth at about 8 Ma, and it ended at approximately 7.4 Ma, at the base of the sandstone beds at ~1600 m.The maximum depth interval of this cycle is characterized by shales containing the benthic foraminifera U. cf. acuminata and U. 
hootsi.The age of the maximum transgression of this cycle coincides with that of the TOR-2 third order cycle at 7.7 Ma, as proposed by Haq et al. and updated by Gradstein et al.Finally, the third cycle started in inner neritic environments, and reached its maximum depth at approximately 6 Ma.This maximum depth is represented by shales and siltstones with the benthic foraminifera G. oblonga, V. glabra, U. hootsi, and U. cf. acuminata.This cycle ended sometime near 5 Ma, at the base of the sandstone beds at ~1050 m.The age of the maximum transgression of this cycle coincides with that of the ME-1 third order cycle at 5.97 Ma, as proposed by Haq et al. and updated by Gradstein et al.The basal part of the well presents very low abundance of dinoflagellates, which is due to the coarse-size lithology of this interval.These deposits indicate high energy during sedimentation and removal of fine material, such as dinoflagellate cysts.Despite the poor recovery, this interval shows a slight predominance of autotrophic over heterotrophic dinoflagellates.Higher up, the oldest maximum flooding surface, presents a considerable increase in the abundances of dinoflagellates.It also shows a dominance of G over P, indicating stable paleoceanographic conditions, despite changes in paleobathymetry.In this transgression, there is a predominance of autotrophic genera and species, among which are blooms of L. machaerophorum, Spiniferites, and Achomosphaera, which indicate warm superficial water masses.The Spiniferites group has been related to warmer water temperatures.Also, the presence of the autotrophic L. polyedrum, has been associated with relaxation of upwelling, and increase in surface water temperature.Therefore, the presence of these taxa in this interval indicates warm and stratified superficial water.Afterwards, in the interval corresponding to the regressive portion of this cycle, and in most of the second transgression, there are fewer dinoflagellates, even in the shaly facies.After the maximum flooding at ~8 Ma, there is a slight predominance in the abundance of P over G. Finally, in the upper, third T-R cycle, dinoflagellate assemblages are dominated by heterotrophic species, namely Selenopemphix nephroides, Selenopemphix quanta, Selenopemphix brevispinosa and Selenopemphix dionaeacysta, which indicate the presence of colder, superficial water with higher levels of nutrients.This interval also contains common specimens of the acritarch Quadrina condita.Since the studied interval covers the early to late Miocene interval, we propose a correlation of the changes shown by the dinoflagellate assemblages with different regional and global events.As previously mentioned, the basal portion of the study interval, from 2527 m to 3100 m, contains thick packages of conglomerates, that are likely related to a regional or local tectonic event.Our data indicate, that the age range of this interval goes from early to middle Miocene, between 17.9 and 13.4 Ma.As stated by Duque-Caro, the collision of the Chocó Block with NW South America took place during this interval, followed by the closure of the Isthmus of Panama.It has even been proposed by Montes et al. 
that sometime between 13 and 15 Ma, at least one segment of the Panama Arc was already positioned and emerged somewhere near northwestern South America, indicating tectonic activity.However our results do not seem to have a relation with the closure of Panama.According to our data and chronostratigraphic framework, the presence of the basal conglomerates in the drilled section is correlatable with the local compressive event related to the collision of the Chocó Block with NW South America, which most likely took place in the upper part of the Burdigalian age, sometime between 18 and 16 Ma.The Carbonate Crash is a paleoceanographic event recognized in the sedimentary record of tropical regions of the Pacific, Atlantic and Indian Oceans, and in the Caribbean Sea.This event took place between 11 and 8 Ma, and is characterized by a marked decrease in the concentration of calcium carbonate in the sediments, and poor preservation of calcareous microfossils.Two processes have been proposed to explain the decrease in CaCO3.The first one suggests an increase in the dissolution of carbonates in deep water, due to the restriction on the circulation of these waters during the early stages of the closure of the Isthmus of Panama.The restricted circulation intensified the speed of deep currents such as the Deep North Atlantic Current, therefore increasing its corrosive action and causing the dissolution of carbonates.In contrast, the second process proposes a decrease in surficial productivity.Supporters of this decrease in primary productivity in the Eastern Equatorial Pacific propose that it was caused by the strengthening of eastward warm equatorial currents due to the partial closure of the Strait of Indonesia.This closure began with the northward displacement of Australia, which closed circulation between the Western Pacific and Indian Ocean.The closure of the Strait of Indonesia originated the appearance, or the increase of the Equatorial Undercurrent from the Western Pacific.Such a warm water mass was the origin of the poor biogenic primary productivity in the Central and EEP.A low biogenic productivity from ~10.4 to 8 Ma, in the EEP, is supported by very low recovery of dinoflagellates, diatoms and calcareous nannofossil indices, which indicate the scarcity of these microfossils.In addition, there are low values of total organic carbon, and calcium carbonate reported in core samples from ODP Site 1039, off the Pacific coast from Costa Rica.Simultaneous decrease of calcareous, siliceous and organic microfossils in the marine sedimentary record are more easily explained by reduced marine primary productivity, than by corrosive chemical conditions.In the Buenaventura well from Colombia, intervals with low recovery of dinoflagellates, calcareous nannofossils and benthic foraminifera reflect this paleoceanographic event.These intervals with few fossils were observed between 1923 m and 1683 m depth, which correspond to an age of ~10.9–7.6 Ma.Some samples are barren of calcareous microfossils, which indicate a decrease in the primary productivity during this time.The paleobathymetries interpreted for this interval, of <500 m, rule out the deposition below the Carbonate Compensation Depth.The GBB is a paleoceanographic event distinguished by an increase in abundance of marine biogenic material, which has been explained by reactivation of the primary productivity.Uplifting pulses in the Andes and the Himalayan ranges increased the rates of weathering, and the supply of nutrients to the ocean, thus 
reactivating the productivity.In the late Miocene, an increase in sea level caused a connection between the Western Pacific and the Indian Ocean, and weakened the equatorial warm currents in the EEP.These conditions should have reactivated upwelling in the EEP during this time, creating oceanographic conditions similar to the current “La Niña” conditions.In the EEP, this event has been recognized at approximately 7.4–4.5 Ma.The interval from 1320 to 1070 m depth in the Buenaventura well, which corresponds to the upper part of the Third T-R cycle, represents the GBB.This interval has the highest recovery values of benthic foraminifera and one of the highest of calcareous nannofossils.In addition, the interval contains an average of 77.6% of peridinioid dinoflagellates, which proliferate under high primary productivity conditions.The biostratigraphic range proposed for S. minusa is 8.64–7.15 Ma.In our results, S. minusa had its lowest stratigraphic occurrence at 2094 m, sometime between 13.53 and 9.0 Ma, and its highest stratigraphic occurrence at 1100 m, within the 8.3 to 5.33 Ma interval.Therefore, the earliest ocurrence of S. minusa in the EEP, is extended to at least 9.0 Ma.The interval from 1070 to 3057 m of the Buenaventura well has an early to late Miocene age.The basal conglomeratic portion was deposited between <17.9 and ~13 Ma, and probably corresponds to the collision of the Chocó Block with northwestern South America.Overlying these conglomerates, there are three complete Transgressive-Regressive sedimentary cycles with environments ranging from transitional to outer neritic/upper bathyal.In the oldest T-R cycle, from 14.18 to <10.9 Ma, the autotrophic gonyaulacoid dinoflagellates dominate, indicating warm and stratified superficial water masses.The lowest recovery of marine microfossils characterizes the intermediate T-R cycle, from <10.9 to <9.53 Ma, indicating low productivity.The low fossil content of this interval can be linked to the “Carbonate Crash” event.Finally, the youngest T-R cycle, from <9.53 to 5.33 Ma, shows very good recovery of marine microfossils and dominance of heterotrophic peridinioid dinoflagellates, indicative of high productivity and cold superficial water.This interval can be correlated with the “Global Biogenic Bloom” event.
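The paleoecological argument above rests on the relative abundance of heterotrophic peridinioid (P) versus autotrophic gonyaulacoid (G) cysts in each sample (for example, the average of 77.6% peridinioid cysts in the Global Biogenic Bloom interval). The sketch below is an assumed illustration of that bookkeeping, not the authors' workflow: the genus lists, the counts, and the 50% cutoff used to label the productivity regime are placeholders.

```python
# Assumed illustration, not the authors' workflow: percentage of heterotrophic
# peridinioid (P) cysts relative to all counted dinoflagellate cysts per sample.
# Genus lists, counts and the 50% cutoff are placeholders.
PERIDINIOID = {"Selenopemphix", "Trinovantedinium", "Lejeunecysta"}                   # assumed P genera
GONYAULACOID = {"Spiniferites", "Achomosphaera", "Lingulodinium", "Operculodinium"}   # assumed G genera

def percent_peridinioid(counts):
    """Percent P cysts of all P + G cysts counted in one sample."""
    p = sum(n for genus, n in counts.items() if genus in PERIDINIOID)
    g = sum(n for genus, n in counts.items() if genus in GONYAULACOID)
    return 100.0 * p / (p + g) if (p + g) else float("nan")

# hypothetical counts: sample depth (m) -> genus -> cyst count
samples = {
    1100: {"Selenopemphix": 62, "Trinovantedinium": 15, "Spiniferites": 18, "Achomosphaera": 5},
    2569: {"Spiniferites": 70, "Lingulodinium": 22, "Selenopemphix": 8},
}
for depth, counts in sorted(samples.items()):
    pct = percent_peridinioid(counts)
    regime = "high productivity, cooler surface water" if pct > 50 else "warm, stratified surface water"
    print(f"{depth} m: {pct:.1f}% peridinioid -> {regime}")
```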
We present dinoflagellate assemblages contained in samples from the exploratory well Buenaventura 1-ST-P, located on the Colombian Pacific coast. The biostratigraphic model includes data from calcareous nannofossils and dinoflagellates studied from the same samples. The paleobathymetric evolution for the stratigraphic column is interpreted from its lithology, and content of benthic foraminifera and palynomorphs. We propose a chronostratigraphic framework based on the correlation of the integrated biostratigraphy, and this framework allows identification of regional tectonic and paleoceanographic events in the section studied. Our results indicate that the studied interval was deposited from early to late Miocene times (<17.9–5.33 Ma). The basal conglomeratic portion of the section was deposited between <17.9 and 13.4 Ma, and may reflect the collision of the Chocó Block with northwestern South America. These conglomerates contain very few dinoflagellates, calcareous nannofossils and benthic foraminifera. Overlying these conglomerates, ~1500 m of mainly shales, represent three complete transgressive-regressive (T-R) sedimentary cycles with environments ranging from transitional to upper bathyal. Autotrophic gonyaulacoid dinoflagellates dominate the lowest T-R cycle (~14.18 to <10.9 Ma), indicating warm and stratified superficial waters. The second T-R cycle (<10.9 to <9.53 Ma) is characterized by the lowest presence of marine microfossils, indicating low primary productivity. This cycle coincides with the Carbonate Crash event in the eastern tropical Pacific. Finally, the youngest T-R, from <9.53 to ~5.33 Ma, shows an increase in marine microfossils, and a dominance of heterotrophic peridinioid dinoflagellates, which indicate high productivity in cooler superficial waters. This cycle coincides with the late Miocene Global Biogenic Bloom event.
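The biostratigraphic reasoning used throughout the section above (an interval's age is constrained by the published ranges of the marker taxa present in it) can be expressed compactly. The Python sketch below is illustrative only; the ranges given for the markers are placeholders, not the calibrated values used in the study.

```python
# Illustrative only: bracket the age of a sample interval by intersecting the
# published ranges (Ma, given as (oldest, youngest)) of the marker taxa present.
# The ranges below are placeholders, not the calibrated values used in the study.
KNOWN_RANGES = {
    "Helicosphaera walbersdorfensis": (14.9, 10.7),      # assumed nannofossil range
    "Labyrinthodinium truncatum": (15.0, 7.5),           # assumed dinoflagellate range
    "Crassoretitriletes vanraadshooveni": (14.2, 0.0),   # assumed pollen range
}

def bracket_age(markers_present):
    """Return the (oldest, youngest) age in Ma compatible with all markers present."""
    oldest = min(KNOWN_RANGES[m][0] for m in markers_present)
    youngest = max(KNOWN_RANGES[m][1] for m in markers_present)
    if youngest > oldest:
        raise ValueError("marker ranges do not overlap: check identifications or reworking")
    return oldest, youngest

oldest, youngest = bracket_age(["Helicosphaera walbersdorfensis", "Labyrinthodinium truncatum"])
print(f"interval constrained to {oldest}-{youngest} Ma")
```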
107
Cancer risks from chest radiography of young adults: A pilot study at a health facility in South West Nigeria
The data contains radiation doses and incidence of cancer risks among young adult females who underwent chest radiography for school admission purposes.Radiation protection of patients in diagnostic radiology is a subject of global concern.Concerted efforts to minimize patient dose have led to the generation of datasets .Justification of radiographic examinations and optimization of the procedures have been the emphasis for the protection of patients .Data on some experiences leading to the discouragement of requests for chest radiography used for school admission and employment purposes can be found in .Data on the risks of cancer induction from low dose ionizing radiation can be found in .Beyond cancer induction, other radiation risks have been reported .The patient parameters, technical factors, radiation doses and incidence cancer risks are presented in Tables 1, 2 and 7.Descriptive analysis of patient parameters and technical factors is presented in Table 2, and the descriptive analysis of radiation doses and cancer risks is reported in Table 7.The influence of patient parameters and technical factors on entrance surface dose is reported in Tables 3–6 and Fig. 2.Fig. 1 compares the entrance surface dose with world data.The cancer risk ratio is presented in Table 7.Data was collected during chest radiography of young adult females at the x-ray unit of the Radiology Department of Obafemi Awolowo University Teaching Hospital Complex Ile-Ife, Osun State, Nigeria.The participants were students admitted into one of the Schools of the University Teaching Hospital for the year 2017.Consent was obtained from each participant before the commencement of the examination.Entrance surface doses were determined using thermoluminescent dosimeters (TLDs) from RadPro International GmbH, Poland.Each TLD chip was enclosed in a labelled black polythene pack.A total of three coded chips were used to measure the entrance surface dose during the procedure in order to obtain the mean and enhance precision.The chips were attached to an elastic tape and placed in the centre of the x-ray field, where the beam intersected the irradiated part of the patient.Patients' clinical information and exposure parameters were recorded using a self-structured form.The x-ray machine output parameters were determined using MagicMax quality control kits.The TLD chips were oven-annealed using a Carbolite oven made in England.Irradiation of TLD chips for calibration was conducted at the Secondary Standard Dosimetry Laboratory of the National Institute of Radiation Protection and Research, Ibadan.TLD chips were read using a Harshaw reader at the Department of Physics, Obafemi Awolowo University Ile-Ife.The bone marrow dose, breast dose, lung dose and effective doses were evaluated from the measured entrance surface dose using PCXMC software.Thereafter, BEIR VII model software was used to estimate the incidence cancer risk.The hospital is the only federal tertiary healthcare institution in the State, with a population of about 4.7 million .It provides tertiary, secondary and primary healthcare services to all the neighbouring States.The hospital serves as the teaching hospital of the Medical School of Obafemi Awolowo University Ile-Ife and has six other schools under its jurisdiction.
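The dose bookkeeping described above reduces to a small amount of arithmetic: each participant's entrance surface dose is the mean of three calibrated TLD chip readings, the cohort is then summarized descriptively, and the estimated excess cancer risk can be expressed as a 1-in-N ratio. The Python sketch below illustrates this with invented readings and an assumed calibration factor; it is not the study's processing chain, and the organ-dose and risk modelling steps (PCXMC, BEIR VII) are not reproduced.

```python
# Invented values, not the study's data: each participant's entrance surface dose (ESD)
# is the mean of three calibrated TLD chip readings; the cohort is then summarized and
# an assumed excess cancer risk is expressed as a 1-in-N ratio.
from statistics import mean, stdev

CALIBRATION_FACTOR = 1.0    # assumed mGy per raw reader unit (from the SSDL calibration)

raw_readings = {            # hypothetical reader outputs, three chips per participant
    "P01": [0.102, 0.098, 0.105],
    "P02": [0.087, 0.091, 0.089],
    "P03": [0.120, 0.118, 0.124],
}

esd = {pid: CALIBRATION_FACTOR * mean(chips) for pid, chips in raw_readings.items()}
values = list(esd.values())
print(f"mean ESD = {mean(values):.3f} mGy, SD = {stdev(values):.3f} mGy, "
      f"range = {min(values):.3f}-{max(values):.3f} mGy")

excess_risk = 5e-5          # assumed per-person risk; the dataset reports ~1:20,000 on average
print(f"incidence cancer risk ~ 1 in {round(1 / excess_risk):,}")
```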
The recommendation of chest radiography for school admission and employment purposes should be discouraged due to the risks of radiation especially cancer induction. It is therefore imperative to keep diagnostic radiation doses as low as possible. This dataset presents the entrance surface dose, effective dose, bone marrow dose, breast dose, lung dose and the incidence cancer risks from chest radiography of 40 young adult females. The mean incidence cancer risk to participants is 1: 20,000 for solid cancers. The data revealed the significant factors influencing the entrance surface dose and incidence cancer risks.
108
Key terms for the assessment of the safety of vaccines in pregnancy: Results of a global consultative process to initiate harmonization of adverse event definitions
The concept of maternal immunization – vaccinating pregnant women in order to protect women themselves and their newborn infants from serious infectious diseases – emerged along with the development of the first vaccines in the early 20th Century .Routine vaccination of pregnant women with tetanus toxoid has been successfully implemented worldwide since the 1960s for the prevention of maternal–neonatal tetanus .In some countries, the recognition of severe influenza disease in pregnant women has led to the recommendation to vaccinate women with influenza vaccine .The resurgence of pertussis disease in the United States and the United Kingdom has led those countries to recommend vaccination of pregnant women to prevent pertussis in infants .Since the 1980s, the United States National Institutes of Health has funded clinical studies of vaccines in pregnancy .Worldwide, studies evaluating the safety, immunogenicity, and efficacy of various licensed and investigational vaccines in pregnancy against influenza, tetanus, Haemophilus influenzae type b, pneumococcus, meningococcus, group B streptococcus, Bordetella pertussis and respiratory syncytial virus have been completed or are underway .Although many studies and surveillance systems have collected information on reported adverse events following immunization in both mothers and their infants, there is variability in the terms and definitions of the events observed and assessed for a potential causal association.Since 2000, the Brighton Collaboration, an independent professional network with the mission to enhance the science of vaccine research by providing standardized, validated objective methods for monitoring safety profiles and benefit–risk ratios of vaccines has provided investigators with case definitions of AEFI .In 2004, the Brighton Collaboration was requested by WHO to develop a guidance document harmonizing safety assessment during maternal and neonatal vaccine trials.This document has been updated repeatedly in response to the rapidly evolving field .In 2011, the NIH convened a series of meetings of experts with the goal of producing guidance to researchers in the field of maternal immunization, including recommendations concerning adverse events .These NIH guidance documents were designed with high resource settings in mind, where research on maternal vaccines mostly had been conducted.Further attention to maternal immunization has been given by WHO which recently recommended that pregnant women receive influenza and pertussis vaccination under certain circumstances .Highlighting the urgency and need for tools to standardize assessment of vaccine safety in pregnancy in all resource settings, large studies of vaccines for pregnant women, against influenza, pertussis, GBS, and RSV are now being planned or implemented in low- and middle- income countries .No consensus vaccine safety monitoring guidelines or adverse event definitions to meet the need of concerted safety monitoring during the life cycle of vaccines for global access in rapidly emerging immunization in pregnancy programs exist.This report describes the process pursued by BC and the WHO Initiative for Vaccine Research to advance the development of these necessary vaccine safety monitoring tools.In 2014, Brighton Collaboration, together with WHO, convened two taskforces to conduct a landscape analysis of current practice, available terms, and case definitions and to develop and to propose interim terminology and concept definitions for the evaluation of the safety of 
vaccines administered to pregnant women.One taskforce reviewed maternal and obstetric events, and the other reviewed fetal and newborn events.Taskforce membership reflected diverse geographic and professional backgrounds, as well as broad expertise in clinical research, epidemiology, regulatory and immunization implementation requirements, maternal immunization, obstetrics, and pediatrics.Members represented academia, the pharmaceutical industry, regulatory agencies, clinical investigators, private and public organizations.The taskforces gathered relevant information from a systematic review of published literature on the safety of vaccination during pregnancy in mothers and infants as well as from a global stakeholder survey of relevant terms and safety assessment methods.The objective of the systematic literature review was to determine the extent and variability in AEFI definitions and reporting in maternal immunization studies.The methods and results of the review were reported separately .The objective of the global stakeholder survey was to identify existing case definitions of key events in pregnant women and newborns, as well as to describe existing methods for the assessment of safety of vaccines used in pregnancy.We developed an expansive list of national and international obstetric and pediatric professional societies, government agencies, regulatory agencies, research institutions, local and international organizations, and pharmaceutical companies that could be involved in work relevant to our objectives.We sent each institution an electronic survey and asked them to describe activities that collected information on key events during pregnancy and the newborn period.We also searched for information in existing standard terminology criteria documents, healthcare databases, population-based surveys, pregnancy registries, active and passive surveillance reporting systems, meeting and study reports, ongoing interventional and non-interventional studies, and the Brighton Collaboration network of vaccine safety experts.Through these efforts, we established an inventory of stakeholders and a repository of existing adverse event terms, case definitions, protocols, practice guidelines, and manuscripts with data pertinent to the assessment of safety of vaccines in pregnant women and their infants.The taskforces held regular meetings to define procedures, to review progress of information gathering, to prioritize event terms, and to recommend definitions of terms for further review at a larger expert consultation.The taskforces identified “key terms”—defined as the most important adverse event terms based on frequency of occurrence, severity or seriousness, public health relevance, potential for public concern, and measurability or comparability with existing data.Key terms were organized with their synonyms, and existing definitions with bibliographic sources.When a taskforce identified more than one existing definition, it proposed a best definition based expert assessment of definition applicability and positive predictive value.These key terms, synonyms, and proposed definitions were presented at the expert consultation for further discussion.The expert consultation took place at WHO in Geneva, Switzerland, July 24–25, 2014 and it included taskforce participants and other invited experts .The objectives of the consultation were: to review existing relevant obstetrical and pediatric adverse event case definitions and guidance documents; to prioritize terms for key events for 
continuous monitoring of immunization safety in pregnancy; to develop concept definitions for these events; and to recommend a core data set of key terms of events to be collected when monitoring the safety of immunization in pregnancy.The terms and definitions were intended to be used specifically in vaccine safety monitoring.They were not intended to be used for diagnosis or treatment of patients, nor in non-vaccine clinical epidemiologic studies.The taskforces proposed key terms and concept definitions.For each term, the full consultation determined whether the term was important for the assessment of safety of vaccines in pregnancy, identified potential synonyms, determined whether there was consensus agreement on concept definitions, and considered the applicability of the term and concept definitions in different resource settings.This led to a list of key terms with synonyms and short descriptions of the respective disease concept, recognizing that this was a first critical step towards globally harmonized safety monitoring.It was acknowledged that an approach to reducing misclassification of reported events and to promoting data comparability in globally concerted safety monitoring would require more elaborate standardized case definitions.Such definitions should allow the classification of events based on objective, as well as measurable criteria at different levels of diagnostic certainty to serve the needs of monitoring the safety in diverse cultural and resource settings during the vaccine life cycle.Selected key terms were further classified as “priority outcomes” if they were considered to be the most important terms for the assessment of safety of the vaccine in pregnancy, “outcomes” if there were considered important but not critical, and “enabling” if the term was used to assist in the assessment of other outcomes or priority outcomes.Overall, for organizational and reporting purposes, key terms were classified in broad conceptual categories.Key terms for the safety assessment related to immunization of pregnant women were sub-classified as: pregnancy-related, complications of pregnancy, complications of labor and delivery, and maternal health terms.Key terms for the assessment of safety in the fetus and newborn were sub-classified as: events of delivery, physical examination and anthropometric measurements, and neonatal complications classified by organ system.The results of the systematic literature review are reported in detail separately .Briefly, among 74 studies included in the review, 10 were clinical trials, 54 were observational studies, and 10 were reviews.Most studies were related to influenza vaccine, followed by yellow fever vaccines, and then Tdap.A total of 240 different types of AEFI were reported on in these studies.Of these, 230 were systemic and 10 were injection site reactions.Considerable variability of the event terms used and lack of consensus on the definitions used for the assessment of AEFI reported in immunization in pregnancy studies was identified, rendering meaningful meta-analysis or comparison between studies and products challenging.WHO contacted 446 individuals, and Brighton Collaboration contacted 500 individuals.Overall, 41% of individuals responded, and 40% of institutions responded.Individuals represented 427 institutions, of which 57% were based in the EURO and PAHO WHO regions.Of the institutions that responded, 81% were from the EURO and PAHO WHO regions.Respondents confirmed the lack of standardized definitions for the assessment 
of safety of vaccines in pregnant women and reported their adverse event reporting to be based on classifications of events and terms used for various medical purposes or developed for use in a given organization of research network.Individuals shared the actual case definitions, protocols, and manuscripts used, when available.The survey identified relevant information from a wide variety of groups, including Brighton Collaboration documents on safety of vaccines, WHO documents addressing vaccine safety and surveillance of AEFI , the National Institutes of Health Toxicity Tables, publications and studies , the US National Children Study project , the Global Alliance on Prevention of Prematurity , reports from GAVI Alliance, UNICEF and WHO , established terminology databases including the International Classification of Diseases , Common Terminology Criteria for Adverse Events , Medical Dictionary for Regulatory Activities and pregnancy and birth defect registries and guidance documents , vaccine safety and pharmacovigilance surveillance systems including the CIOMS report on vaccine pharmacovigilance , vaccine safety active surveillance programs , the American College of Obstetrics and Gynecology practice guidelines , investigators of current and planned clinical trials, and the pharmaceutical industry working on candidate vaccines for pregnant women.Based on the findings of the landscape analysis and predefined criteria as described above, a total of 45 key terms describing medical events of significance for the assessment of safety of vaccines in pregnant women were identified by the maternal and obstetric event taskforce.A total of 62 key terms were identified by the neonatal and fetal event taskforce.The participants of the consultation recommended the elaboration of disease concepts into standardized case definitions with sufficient applicability and positive predictive value to be of use for monitoring the safety of immunization in pregnancy globally.Overall, 39 key terms were reviewed, prioritized and agreed upon by the participants of the consultation and consensus concept definitions were endorsed for immediate use.A summary of all key terms are described in Tables 1 and 2, respectively.A complete repository including additional suggested terms and interim concept definitions suggested by the taskforces is available at the Brighton Collaboration website .In addition, the expert consultation identified and recommended critical steps to further improve safety monitoring of immunization in pregnancy programs, including the development of guidance for data collection, analysis and presentation of safety data; tools for harmonized data collection, classification, and data sharing; and globally concerted secondary use of health care datasets to strengthen active surveillance to enable evidence based local and global response to safety concerns .We conducted a literature review, global stakeholder survey, and expert consultation to assess key events related to safety monitoring of immunization in pregnancy.We identified substantial heterogeneity of event definitions and assessment methods in current practice, and described a structured approach to initiating globally concerted action towards the ascertainment of the safety of mothers and their children following immunization in pregnancy.The systematic literature review was a hallmark of this consensus process, highlighting the opportunities for improvement.The strengths and limitations of this effort are discussed in detail elsewhere 
.The findings directly informed decision making and prioritization both at the taskforce and consultancy levels, and provide a useful baseline assessment for monitoring and re-evaluation of globally concerted actions in this rapidly evolving field of research.The stakeholder survey was the second hallmark of consensus formation.Given the thorough approach, we interpret the response rate and geographic distribution of responses to be reflective of the actual availability of event terminologies, case definitions, and guidance documents in the regions where most of the structured research into the safety of drugs and vaccines administered during pregnancy have thus far been conducted.The expert consultation recommended efforts be made to increase involvement from low- and middle-income countries, particularly in Africa and Asia, as trials and immunization programs are increasingly occurring in these regions.WHO and the Brighton Collaboration continue to monitor emerging case definitions and guidance documents, as well as validation efforts informing best practice and harmonization efforts for upcoming vaccines and programs of immunization in pregnancy.The authors recognize that despite the taskforces’ efforts to capture existing definitions for key safety events in pregnancy, it is likely we have not identified all definitions available or needed.Thus, we encourage readers to share available information not captured or adequately represented in this publication by contacting the WHO and Brighton Collaboration.While challenging, the development of a common language through harmonized definitions will facilitate efforts in the research and implementation of vaccines for maternal immunization.The consistent use of definitions of key events related to immunization in pregnancy will enhance comparability of safety outcomes monitored during the vaccine life cycle from pre-licensure to post-licensure clinical trials, as well as from observational studies.Harmonization of terms, disease concepts and the development of standardized case definitions of key events related to safety monitoring of immunization in pregnancy is a challenging exercise, specifically in view of the need for applicability in high- and low-income settings and the multiple stakeholders involved.We employed a structured approach building on the recognized standard Brighton Collaboration process to arrive at interim terminology and concept definitions for immediate use, while planning for collaborative development and validation of standardized case definitions with investigators and stakeholders in the near future.We recognize that the establishment of a core set of terms, disease concepts, and definitions is an important step towards this aim, while acknowledging that not all pertinent events may be identified and defined in anticipation.However, with an established network and processes among globally collaborating investigators, additional ad hoc definitions may be developed rapidly as the need arises.Therefore, an important aspect of this effort was the broad net that was cast to identify relevant methods, terms, and definitions available from all resource settings.The early involvement and contributions by a large group of stakeholders with diverse backgrounds and the global expertise within the taskforces and in the consultation strengthened the harmonization process from its inception.Broad representation and face-to-face discussion encouraged increasing information exchange and collaboration, while minimizing duplication 
of efforts.The harmonization exercise and consultancy also helped foster discussion on the necessary way forward given current limitations.The participants identified additional obstacles and needs.Recommendations included the development of tools to standardize and increase the efficiency of safety data collection in clinical trials and observational studies.Further, robust data on background rates of key events related to immunization in pregnancy, and pooled safety analyses based on international data sharing, would better inform decision making on maternal immunization programs, and enhance patient, regulator, and provider decision making and comfort with vaccination offered to protect pregnant women and their children from preventable diseases and possible death.Maternal immunization is an evolving field, and adaptation of standards and tools to specific vaccines, protocols, populations, geographic regions, and other factors is necessary when evaluating the safety of vaccines in pregnancy.The Brighton Collaboration has established a collaborative network dedicated to addressing this continuing need: the Global Alignment of Immunisation Safety Assessment in Pregnancy .The aim of the GAIA project is to provide standards and tools to establish a globally shared understanding of outcomes and approaches to monitoring them, with specific focus on the needs and requirements of low- and middle-income countries.GAIA will build on the efforts of this initial work and develop standardized case definitions for selected key terms through the standard Brighton process, as well as guidance and tools harmonizing data collection in clinical trials and observational studies.The process described in this paper outlines a format that successfully initiated active discussion and sharing of information between stakeholders and investigators in view of rapidly evolving immunization programs for pregnant women.This approach could serve as a model for future efforts aiming at early harmonization of the safety assessment of specific vaccines and global immunization programs, leading to sustainable collaboration and concerted action while minimizing fragmentation and duplication of efforts, in line with the Global Vaccine Safety Blueprint, the strategic plan of the WHO Global Vaccine Safety Initiative .Philipp Lambach and Justin Ortiz work for the World Health Organization.The authors alone are responsible for the views expressed in this publication.The findings, opinions, and assertions contained in this document are those of the individual scientific professional contributors.They do not necessarily represent the official positions of each contributor's organization.Specifically, the findings and conclusions in this paper are those of the authors and do not necessarily represent the views of the authors' organizations.There is no conflict of interest for this work.
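A harmonized core set of key terms is easier to share if each term carries its classification explicitly. The sketch below is one possible data structure for that purpose, not the GAIA or Brighton Collaboration format: it records a term's category, sub-classification, priority level, synonyms, and a short concept definition, with example entries that are illustrative rather than the endorsed list.

```python
# Illustrative data structure, not the GAIA/Brighton format: one record per key term with
# its category, sub-classification, priority level, synonyms and short concept definition.
# The example entries are placeholders, not the endorsed list.
from dataclasses import dataclass, field

@dataclass
class KeyTerm:
    term: str
    category: str        # "maternal/obstetric" or "fetal/neonatal"
    subcategory: str     # e.g. "complications of pregnancy", "events of delivery"
    priority: str        # "priority outcome", "outcome" or "enabling"
    synonyms: list = field(default_factory=list)
    concept_definition: str = ""   # short disease concept pending a full case definition

registry = [
    KeyTerm("Preterm birth", "fetal/neonatal", "events of delivery", "priority outcome",
            synonyms=["premature birth"],
            concept_definition="live birth before 37 completed weeks of gestation"),
    KeyTerm("Gestational age", "fetal/neonatal", "events of delivery", "enabling",
            concept_definition="time elapsed since the first day of the last menstrual period"),
]

priority_outcomes = [t.term for t in registry if t.priority == "priority outcome"]
print(priority_outcomes)
```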
Background: The variability of terms and definitions of Adverse Events Following Immunization (AEFI) represents a missed opportunity for optimal monitoring of safety of immunization in pregnancy. In 2014, the Brighton Collaboration Foundation and the World Health Organization (WHO) collaborated to address this gap. Methods: Two Brighton Collaboration interdisciplinary taskforces were formed. A landscape analysis included: (1) a systematic literature review of adverse event definitions used in vaccine studies during pregnancy; (2) a worldwide stakeholder survey of available terms and definitions; (3) and a series of taskforce meetings. Based on available evidence, taskforces proposed key terms and concept definitions to be refined, prioritized, and endorsed by a global expert consultation convened by WHO in Geneva, Switzerland in July 2014. Results: Using pre-specified criteria, 45 maternal and 62 fetal/neonatal events were prioritized, and key terms and concept definitions were endorsed. In addition recommendations to further improve safety monitoring of immunization in pregnancy programs were specified. This includes elaboration of disease concepts into standardized case definitions with sufficient applicability and positive predictive value to be of use for monitoring the safety of immunization in pregnancy globally, as well as the development of guidance, tools, and datasets in support of a globally concerted approach. Conclusions: There is a need to improve the safety monitoring of immunization in pregnancy programs. A consensus list of terms and concept definitions of key events for monitoring immunization in pregnancy is available. Immediate actions to further strengthen monitoring of immunization in pregnancy programs are identified and recommended.
109
Predictors of first-year statin medication discontinuation: A cohort study
Statins are one of the most widely studied and evidence-based medications1 and are an essential component of cardiovascular disease prevention.Statins are well tolerated, safe, and inexpensive.Despite these well-documented benefits, poor adherence to statins and an extreme form of it, discontinuation—that is, quitting statin medication use2—is common among primary and secondary prevention patients.3–6,In clinical trials, the discontinuation rates range from 4% to 11%,7–9 but in routine care, the rates are much higher; between 11% and 53%.2,3,10,11,According to studies using electronic medical records, approximately 25% to 50% of patients discontinue statin use within six months to one year after initiating their use.4,5,12,The number of patients continuing therapy falls sharply in the first few months of treatment, followed by a more gradual decline.4,13,Discontinuation is commonly attributed to statin-related adverse events, but, because most patients who reinitiate statin use can tolerate this medication long-term,2 many of these events may have had other etiologies.Previous studies on the determinants of discontinuation have reported mixed results.Some found a greater tendency to discontinue statin treatment among the young or old,11,14–16 with high co-payment,2,3,17,18 for primary prevention patients,3,5,15,19–21 and intensive dose therapy.22,An increased risk of statin discontinuation has also been found for smokers,20 and patients with diabetes20,23 despite the current guidelines recommending statin medication for nearly all patients with type 2 diabetes.24,On the contrary, one study found diabetes to be associated with the continuation of lipid-lowering drugs.25,However, other studies have shown no association between discontinuation and age,19,20,26–29 diabetes,6,16,27,29 or smoking.27,This study was aimed at identifying patient groups with an increased likelihood to discontinue statins in a large prospective cohort linked to prescription registers.A better understanding of the determinants of adherence to statin treatment is important because discontinuation is common, and it significantly increases both the incidence of cardiovascular and cerebrovascular events30 and, among high-risk patients, also all-cause mortality.31,32,Because the decision concerning the continuation of statin-use is commonly made during the first year of treatment,3,33 we restricted our analysis to predictors of discontinuation during the first year of medication.The data used in this study came from the Finnish Public Sector Study,34 a prospective study of all local government employees of 10 towns and all employees in 21 public hospitals with a ≥6-month job contract in 1991–2005.We initially included the 80,459 identifiable participants who responded to a survey in 1997–1998, 2000–2002, 2004, or 2008.The questionnaires involved demographic characteristics, lifestyle factors, and health status, and the average response rate was 70%.We linked the survey data to data from national health registers using unique personal identification numbers as in our earlier studies.34,35,Among the respondents, there were 11,949 participants who had initiated statin medication between 1 January, 1998 and 31 December, 2010.Of them, we included all the 9285 participants who had completed a survey before the statin therapy began and had not been dispensed statins in the previous two years.From the initiators, we identified the 1142 discontinuers.Follow-up data were available until 31 December 2011.In cases of repeated surveys 
before initiation, we selected the most recent response.The mean lag between the response and statin initiation was 3.4 years.In Finland, statins are available by prescription only.The National Health Insurance Scheme provides prescription drug coverage for all community-dwelling residents.All reimbursed prescriptions are registered in the Finnish Prescription Register managed by The Social Insurance Institution of Finland.36,Reimbursed medicines can be supplied to a patient for three months per purchase.For each drug, reimbursement-related factors including the dispensing date, the World Health Organization Anatomical Therapeutic Chemical code,37 the quantity dispensed, and co-payment are recorded.From this register, we identified all statin users, based on filled prescriptions with the ATC code C10AA.All the patients who initiated statin therapy were assumed to require treatment for the rest of their lives.Discontinuation of statin therapy was considered to take place when, after the first filled prescription, no more statin prescriptions were filled within the subsequent 12 months.We also recorded co-payment and the year of statin initiation due to major changes in prescribing practices and statin costs over time.38,We assessed lifestyle factors using standard questionnaire measurements.34,35,We requested the participants' smoking status and calculated body mass index using self-reported weight and height.We classified body mass index into the following three groups: normal weight, overweight, and obese.We defined a risky alcohol user as a participant with either a high mean alcohol consumption or having passed out due to heavy alcohol consumption at least once during the 12 months, or both.Physical activity was measured by the Metabolic Equivalent Task index; the sum score of the Metabolic Equivalent Task hours was used to identify active, moderate, or low physical activity.We identified cardiovascular comorbidities using special reimbursement and hospital discharge registers.34,35,Information on cancer diagnosis within five years before statin initiation came from the Finnish Cancer Registry.39,Antidepressant purchases during the 36 months preceding statin initiation were captured from the Prescription Register and served as a proxy for depression.Information on self-rated health and marital status was obtained from the survey responses.Data on sex and age came from the employers' administrative registers.Information on co-payment per first statin purchase came from the Finnish Prescription Register.Statistics Finland provided information on education, which was classified as high, intermediate, or basic.40,We used a logistic regression analysis to estimate the association of discontinuation with demographic characteristics, comorbidities, lifestyle factors, and co-payment.Only the respondents with complete data on all the predictors were included.The first model was adjusted for the year of statin initiation.All the significant predictors of statin discontinuation found in the first model were then simultaneously entered into the second model to examine their independent effects on discontinuation (a minimal sketch of this model structure is given after the study text below).The data were analyzed with SAS software, version 9.2.The study was approved by the ethics committee of the Hospital District of Helsinki and Uusimaa.The participants were predominantly women and highly educated, and they were aged 55.7 years on average.Almost one third of them had vascular comorbidity, and one fifth was obese.Behavioral health risks were common: 31% were physically inactive, 17% were current
smokers, and 14% were risky alcohol users.Of the 9285 statin initiators, 88% continued medication and 12% discontinued it.Table 1 shows the baseline characteristics of the discontinuers and continuers, and Table 2 gives the odds ratios and their 95% confidence intervals for discontinuation adjusted for the year of statin initiation.Of the demographic factors, only high age was associated with a decreased odds of discontinuation, whereas sex, education, and marital status were not.Of the health measures, vascular comorbidity was associated with decreased odds of discontinuation, whereas suboptimal perceived health, use of antidepressants, and cancer history were not.Of behavior-related risk factors, overweight and obesity predicted reduced odds of discontinuation among all the participants.Finally, high co-payment predicted increased odds of discontinuation.In the sex-stratified analyses, age and co-payment were associated with discontinuation among the women only, and the corresponding association of obesity and former smoking was observed among the men only.However, none of these sex differences were significant.The only significant difference between the sexes was observed for risky alcohol use, with an OR of 0.66 for the men and 1.32 for the women.Table 3 presents the independent associations of the significant predictors found in Table 2, adjusted for each other, and the year of statin initiation.The results from this fully adjusted model were substantially similar to the model adjusted for the year of statin initiation only.The association of former smoking with discontinuation disappeared for the men in this fully adjusted model.The difference between the sexes observed for risky alcohol use was almost unchanged.In our observational study involving a large cohort of public sector employees, we found that older age, vascular comorbidity, and overweight or obesity were associated with a decreased odds of discontinuation of statin therapy.In contrast, high patient co-payment for the first statin purchase was associated with an increased odds of discontinuation.Among the women, but not the men, risky alcohol use was additionally associated with an increased risk of discontinuation.The rate of discontinuation was 12%, which is within the range reported earlier.2,3,6,10,11,41,42,Some previous studies have found that patients with cardiovascular comorbidities are less likely to discontinue statin use than those free of such comorbidities.3,20,43,However, a recent Danish population-based study of 161,646 new statin users reported discontinuers to have a slightly higher prevalence of almost all the examined diagnoses of comorbidity, including cardiovascular disease.11,A major limitation of that study was its reliance on registered data only.As a result, it was not possible to control for co-existing behavior-related risk factors, such as obesity, smoking, and alcohol use, which could have affected the risk of discontinuation.In our study, we were able to control for these health risk behaviors as they appeared before the initiation of statin treatment, and we found that discontinuation was less likely for patients with previous cardiovascular comorbidities than for patients free of them.These findings suggest that patients who are the most likely to benefit from statin therapy are the most likely to continue it.This group of patients may be more motivated than others due to a better understanding of the need for statin treatment.44,It is possible that patients discontinue therapy because of
poor drug effectiveness or the development of adverse effects.However, in the West of Scotland Coronary Prevention Study, adverse effects accounted for only 2% of discontinuations, with an overall discontinuation rate of 30% at five years.45,In a retrospective cohort study of 107,835 patients, more than half of the study patients had their statin discontinued, but only 3.9% of them reported an adverse reaction as the reason for discontinuation.2,This potentially unnecessary discontinuation of statins3 may lead to preventable cardiovascular events.Indeed, a three times higher risk of myocardial infarction has been found among discontinuers than among patients who continued statin treatment.46,In addition to comorbidities, higher age, overweight, and obesity, and, for men, risky alcohol use were significantly associated with a decreased odds of discontinuation.Contrary to these factors, risky alcohol use among the women was associated with an increased rate of statin discontinuation.Previous studies support some of our findings: younger age11,14,23,47 and alcohol misuse48 have been shown to be associated with an increased risk for statin nonadherence.In addition, obesity in a male population has been shown to decrease the odds of nonadherence.49,Consistent with previous research, in which lower out-of-pocket expenses had a positive impact on persistence with therapy,38 we found that a high level of patient co-payment was an independent factor for increased statin discontinuation.Especially in a high-risk secondary prevention group with a greater total number of medications, overall co-payment can be high, and thus unnecessary statin-related costs should be avoided.Discontinuation is an extreme form of nonadherence.In a previous register study from Finland, half of all statin users discontinued statins for at least 180 days during 10 years of follow-up.Of the discontinuers, 47% restarted statins within one year, and 89% by the end of the follow-up.50,In our study, when extending the follow-up period by another year, we found that, of the 1142 participants who discontinued statin medication during the first year of therapy, 18% filled at least one prescription during the second year of statin treatment.Thus, it is likely that a substantial proportion of statin discontinuers drop out of treatment permanently or for a long period.This is a longitudinal study based on register data linked to questionnaires involving demographic characteristics, lifestyle factors, and health status.Our study has a number of strengths.First, it has a large sample size with excellent follow-up.Second, the generalizability of our findings is expected to be greater than in clinical trials as our study involves a large cohort of unselected statin initiators in real-world practice.Third, the Finnish Public Sector cohort contains detailed health status information and lifestyle factors rarely available in prescription claims databases.Finally, owing to the universal drug reimbursement system in Finland and the availability of statins by prescription only, the prescription register provided comprehensive and valid data on statin purchases.All statins were assessed; thus switching to another statin was possible and would not have been wrongly interpreted as discontinuation.In spite of these strengths, our study also has some limitations.First, we have no information on the reasons for discontinuation of statin therapy or on drug-related adverse effects.Second, we used prescription register information to estimate actual pill
intake.This practice meant that primary discontinuers were not included.Moreover, as with any pharmacy claim database study, we could only determine that a prescription was filled, not that a patient actually took the medicine.51,Third, as we did an all-statin analysis instead of looking at individual statins, we do not know if there are differences in discontinuation between individual statins.Fourth, factors related to the health care system and physicians' performance were not available, although they can be associated with discontinuation.43,52,Fifth, self-reporting tends to underestimate obesity and overweight53 as well as smoking and alcohol use.54,55,Finally, our study did not include any measurement of serum lipid levels or an assessment of patients' total cardiovascular risk, which may have affected the perceived need for statin therapy and the discontinuation of it.In clinical practice, many patients for whom statins are prescribed discontinue the use of the drug within a year, which is likely to reduce any benefit of medication and increase the risk of cardiovascular events.56,Our study suggests that statin discontinuation is common, but several predictors of discontinuation are readily assessable and can provide information with which to identify those with an increased risk of nonadherence.Further intervention studies are needed to assess whether increased efforts to motivate treatment adherence in risk groups would reduce discontinuation and cardiovascular events.
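The regression approach described above can also be illustrated outside SAS. The minimal Python sketch below (using pandas and statsmodels) shows the structure of the two models, each predictor adjusted for the year of statin initiation and then all significant predictors entered simultaneously; the file name, column names, and category codings are hypothetical stand-ins for the cohort variables described in the Methods, not the authors' actual dataset.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cohort table: one row per statin initiator, with a binary outcome
# (1 = no statin prescription filled within 12 months of the first purchase)
# and baseline predictors analogous to those described in the Methods.
df = pd.read_csv("statin_initiators.csv")  # hypothetical file name

# Model 1: a single predictor adjusted only for the year of statin initiation.
m1 = smf.logit("discontinued ~ C(age_group) + C(init_year)", data=df).fit()

# Model 2: all predictors found significant in Model 1 entered simultaneously,
# still adjusting for the year of initiation.
m2 = smf.logit(
    "discontinued ~ C(age_group) + vascular_comorbidity + C(bmi_group)"
    " + high_copayment + risky_alcohol + C(init_year)",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals, analogous to Tables 2 and 3.
or_table = np.exp(pd.concat([m2.params, m2.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)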
Background The discontinuation of statin medication is associated with an increased risk of cardiovascular and cerebrovascular events and, among high-risk patients, all-cause mortality, but the reasons for discontinuation among statin initiators in clinical practice are poorly understood. Objective To examine factors predicting the early discontinuation of statin therapy. Methods In this prospective cohort study, participants with baseline measurements before the initiation of statin treatment were linked to national registers and followed for the discontinuation of statins during the first year of treatment (no filled prescriptions after statin initiation within the subsequent 12 months). Results Of all the 9285 statin initiators, 12% (n = 1142) were discontinuers. Obesity, overweight, vascular comorbidities, and older age were independently associated with a reduced risk of discontinuation [odds ratios (OR) = 0.82 (95% confidence interval [CI], 0.69–0.99), 0.85 (95% CI, 0.73–0.98), 0.80 (95% CI, 0.68–0.93), and 0.82 (95% CI, 0.68–0.99), respectively]. In contrast, high-patient cost-sharing was associated with an increased odds (OR = 1.29; 95% CI, 1.03–1.62) for discontinuation. The only significant difference between the sexes (P = .002) was observed among the participants with risky alcohol use, which was associated with a decreased odds for discontinuation among the men (OR = 0.69; 95% CI, 0.49–0.98) and an increased odds among the women (OR = 1.28; 95% CI, 1.02–1.62). Conclusions The discontinuation of statin therapy during the first year after initiation is common. Lowering out-of-pocket expenditures and focusing on low-risk patient groups and women with risky alcohol use could help maintain the continuation of medication.
110
The immunogenicity and tissue reactivity of Mycobacterium avium subsp paratuberculosis inactivated whole cell vaccine is dependent on the adjuvant used
Johne's disease is a chronic enteritis caused by Mycobacterium avium subspecies paratuberculosis.Typically, it is spread within and between herds/flocks of ruminants by the faecal-oral route.The disease results in weight loss and mortality and can cause significant economic impact for farmers .While treatment for JD is not feasible, vaccination is being used as one of the key control measures, especially in sheep .Vaccination against JD has been in use since the 1920s with mixed success.Current commercial vaccines are effective in reducing clinical disease occurrence by up to 90%, giving farmers an important disease control tool .Vaccinated animals can still become infected and shed MAP in their faeces .The current vaccines, including the Gudair® vaccine, are based on killed whole MAP cells mixed with an oil adjuvant .One of the major concerns with these vaccines is their tendency to result in lesions at the site of injection in a proportion of animals .Also of concern to users of these vaccines is human safety, because recovery from accidental self-injection may take months and require multiple medical treatments, often involving surgical intervention .The adjuvant portion of a vaccine plays an important role in its efficacy .In the case of JD vaccines these have been mineral oils, although their exact composition is not disclosed due to commercial considerations.Mineral oil adjuvants, when mixed with whole mycobacterial cells, can often lead to injection site responses similar to those seen when using Freund's complete adjuvant .Therefore, it is hardly surprising that injection site lesions in sheep are prevalent after vaccination with the current commercial JD vaccines.Recently, highly refined mineral oil emulsion adjuvants have become available.They are of several types: water in oil, water in oil in water and oil in water .A newly developed killed whole cell vaccine for JD, Silirium®, uses a highly refined mineral oil adjuvant which should result in fewer injection site lesions than Gudair®, but like Gudair® it does not prevent infection .Most recent novel vaccination studies against MAP infection have examined one or two adjuvants, generally from different adjuvant classes such as alum and saponin .In a study of the immunogenicity of a recombinant M.
bovis antigen in cattle it was noticed that different classes of adjuvants, mineral oils and cationic liposome-based formulations, resulted in different immune response profiles .The mineral oil-based adjuvants resulted in an effector and a central memory response while the cationic, liposome-based formulations resulted in strong central memory responses.Differences in the immune response were also observed amongst the several mineral oil adjuvants used .In this study, we aimed to characterise the immunological responses to MAP antigens associated with a range of adjuvants.The immunogenicity of formulations containing heat killed MAP mixed with one of seven different adjuvants administered to sheep with or without a booster dose was examined.Ninety Merino wethers aged 24–36 months were sourced from a flock in Armidale, New South Wales, an area that has no prior history of JD.Absence of JD was confirmed through repeated whole flock faecal tests and antibody enzyme linked immunosorbent assays .The animals were moved to a JD-free quarantine farm at the University of Sydney Camden and maintained under conventional Australian sheep farming conditions by grazing on open pasture.Heparinised blood was stimulated in a 48-well plate with 0.5 mL of mycobacterial purified protein derivative antigen at 20 μg/mL.The negative control for each sample consisted of blood with 0.5 mL of culture medium while the positive control had 0.5 mL of media with pokeweed mitogen added at 10 μg/mL.After 48 hr incubation at 37 °C in air supplemented with 5% CO2, the culture supernatant was collected and stored at −20 °C.The ELISA was carried out and the OD data were converted to sample to positive percent as described by Begg et al 2010 .Descriptive analyses were initially conducted and included creation of frequency tables for categorical variables and calculation of summary statistics for quantitative variables.Incidence of injection site lesions was calculated as the proportion of animals in each group at the start of the trials that developed injection site lesions."Relative risk was calculated to compare incidence risk between different vaccine formulations, and the significance of differences in proportions was determined using Fisher's exact test followed by two-sided two sample binomial tests.Sizes of injection site lesions between treatment groups were compared using the non-parametric Kruskal-Wallis test because the distribution of injection site lesions was skewed invalidating assumptions of parametric tests.Further pairwise two-sample Wilcoxon comparisons were made to compare lesion sizes between pairs of different vaccine formulations.MAP-specific IFN-γ and antibody responses were compared between the different vaccine formulations using the linear mixed modelling approach by including IFN-γ and antibody responses as outcomes in their respective models: vaccine formulations, time and their interactions as fixed effects; and animals as a random effect to account for multiple observations for each animal.IFN-γ and antibody responses were log transformed to meet the assumption of normality and homoscedasticity of variance was evaluated using residual diagnostics.Unless otherwise stated the analyses were conducted using the SAS statistical program.All p-values reported in the manuscript are two-sided.All animal experiments were conducted with the approval of the University of Sydney Animal Ethics Committee.Sheep were allocated into 18 groups, with five sheep per group.The first eight groups were allocated for a 
single dose of the novel vaccines.The remaining eight groups were allocated a primary and a booster dose of the novel vaccines.The booster dose was administered 4 weeks after the primary dose.One group, the positive control, was given the commercially available vaccine Gudair® in a single dose as recommended by the manufacturer.A negative control group comprised sheep that were not vaccinated.The treatment groups and vaccine formulations are described in Table 1.Adjuvants from the MontanideTM ISA series used in the novel vaccine formulations for this study included five W/O, a W/O/W and a polymeric gel.These were MontanideTM ISA 50V, MontanideTM ISA 50V2, MontanideTM ISA 61VG, MontanideTM ISA 70 M VG, MontanideTM ISA 71 VG, MontanideTM ISA 201 VG and MontanideTM Gel 01 PR.Eight vaccine formulations were used in this study, 1 = ISA 50V, 2 = 50V2, 3 = 61VG, 4 = 70M VG, 5 = 71VG, 6 = 201VG, 7 = Gel 01, 8 = No Adjuvant.The Gudair® vaccine comprised killed MAP cells in a mineral oil adjuvant as prepared by the manufacturer.A single dose of the novel formulations contained approximately 1 × 108 organisms of MAP.MAP inactivation was confirmed by liquid culture .The antigen and adjuvant components were mixed at a ratio of 60:40 vol/vol under aseptic conditions and emulsified by vortexing the mixture for 2 mins.All novel vaccines were tested for sterility by aerobic culture on sheep blood agar incubated at 37 °C for 48 hours, prior to use.The vaccines were administered by subcutaneous injection high on the neck, behind the ear as a 1 mL dose.All vaccines were given on the right side of the neck.At 4 weeks post primary administration, groups requiring a booster dose were given a second dose of the same vaccine formulation."Gudair® vaccine was administered only as a single dose, according to the manufacturer's instructions.Blood samples were collected by jugular venepuncture into tubes without anticoagulant from all animals immediately before vaccination and at 2, 3, 4, 5, 6, 7, 8, 10, 14, 18, 22 and 26 weeks post primary vaccination.The blood tubes were centrifuged at 1455 x g and serum was aspirated into screw-capped tubes.Blood samples for the IFN-γ assay were collected pre-vaccination and then monthly for 6 months by jugular venipuncture into vacuum collection tubes containing lithium heparin.Serum samples were stored at −20 °C until required while heparinised blood was held at room temperature prior to stimulation with antigens for the IFN-γ assay.The site of injection was monitored weekly until 10 weeks post vaccination and then monthly until 6 months post vaccination.The area around the injection site was palpated and visually inspected for the presence of swelling, and open lesion or abscess formation.Injection site lesions were defined as having a diameter greater than 0.5 cm, measured in one axis.Smaller lesions were detected by palpation, but not frequently or consistently, and were therefore not included in the data set.Injection site lesion data are presented on a group basis for sheep in each treatment aggregated across all the observations.An indirect ELISA incorporating a complex MAP antigen was employed to detect MAP-specific antibody in serum .Results were expressed as the mean optical density signal from two replicates.Sheep given a single dose of Gudair® vaccine developed injection site lesions that tended to be larger, persisted longer and were more common than in sheep given a single dose of most of the other formulations."The overall Fisher's exact test was significant; for all 
groups except 50V, sheep given a single dose of Gudair® had a significantly greater probability of developing an injection site lesion than sheep given the other MAP vaccine formulations; the relative risk was 1.67–5 times for a single dose and 1.25 to 5 times for a double dose of the other formulations.Sheep that were given two doses of the novel MAP vaccines were significantly more likely to develop an injection site lesion than animals that received only one dose.The IFN-γ responses of the sheep given different vaccine formulations were monitored over time.There were no significant differences between the IFN-γ response attributable to the number of doses of vaccine given for any of the formulations containing adjuvant.Vaccine formulations 70MVG and Gel01 did not stimulate an antigen-specific IFN-γ response after vaccination with either one or two doses and were not significantly different from the No adjuvant and unvaccinated groups.Gudair® vaccinated animals had a significantly greater antigen specific IFN-γ response than animals given the formulations 70MVG, Gel01, 201VG, 61VG, No adjuvant or those left unvaccinated.There was no significant difference between the IFN-γ response from the Gudair® vaccinated animals and sheep given the formulations 50V, 50V2 and 71VG.Vaccine formulations 50V2, 71VG and 201VG showed a trend towards increased antigen-specific IFN-γ response compared to No adjuvant in sheep given a second dose of the same formulation four weeks after the primary dose.Vaccine formulations 50V and 61VG resulted in a trend towards lower antigen specific IFN-γ production after the booster vaccination compared to the group that were given a single dose.This was most evident for formulation 50V, with 4 of the 5 sheep that were given a booster dose having a lower antigen-specific IFN-γ response than the sheep given a single dose.The antigen-specific antibody levels were significantly greater in sera collected from Gudair® vaccinated sheep compared to sheep vaccinated with the other vaccine formulations.Animals that were given two doses of Gel 01, 201VG, 61VG, and 70M VG vaccines had increased serum antigen-specific antibody levels compared to sheep given a single dose of the same formulation.Sheep given a single dose of the formulations 61VG, 70M VG and 201VG produced low levels of specific antibody, similar to animals given no adjuvant.For the two vaccine formulations where the booster vaccination led to reduced IFN-γ responses, the antigen-specific antibody responses were similar or significantly increased compared to those seen in the single dose vaccinated sheep.Overall, the results indicated that the immune response profile to heat-killed MAP antigen was altered by the adjuvant in the formulation.With no adjuvant, the heat-killed MAP did not induce a significant elevation in either the serum antibody or antigen-specific IFN-γ memory response compared to the unvaccinated sheep.Immune response patterns ranging from biased cell mediated to biased humoral immunity were found with different formulations.For example, an immune bias towards IFN-γ was generated using adjuvant 50V with a single dose and an antibody/humoral immune bias was seen when using adjuvant 71VG as a single dose.The formulation comprising adjuvant 50V2 and heat-killed MAP given in 2 doses created a mixed response with elevated IFN-γ and antibody levels.The development of injection site lesions was not always associated with a strong immune response.Of the sheep vaccinated with a single dose of killed MAP with 
adjuvant Gel 01, injection site lesions were observed at 33% of the recordings, predominantly in 2 sheep.These two sheep had low levels of antigen-specific IFN-γ and antibodies but accounted for 23–33% of injection site lesions across all observations.We have demonstrated that immunisation of sheep with formulations comprising heat killed MAP and different adjuvants results in different immunological profiles.The immune response was also altered by the use of a second dose of the same vaccine; in some cases this resulted in a lower cell mediated immune response compared to a single dose.Testing the immunogenicity of a mycobacterial antigen with different adjuvants is logical, especially with regard to recombinant antigens .In this study, the testing of highly refined mineral oil adjuvants with a complex whole cell mycobacterial antigen led to a range of unexpected results.The theoretical optimal immune profile proposed for protection against mycobacterial infections including MAP is a cell mediated/IFN-γ biased response .The commercially available MAP vaccines provide incomplete protection against JD, but result in a strong mixed cellular IFN-γ and humoral immune response .Others are examining how best to develop vaccines with a bias towards a Th1/IFN-γ response .This study has shown that by altering the adjuvant, different immunological profiles can be achieved, ranging from cellular IFN-γ to humoral or mixed responses.Such widely differing immune responses to the same antigen have not previously been observed in JD vaccine development, probably due to the limited number of adjuvants that were tested previously .This finding has significance for JD vaccine development: novel antigens should be tested with a wider range of adjuvants.Furthermore, previously tested poorly immunogenic antigens may need to be re-examined to determine whether a preferred immune profile and possible protection can be established using different mineral oil adjuvants.The use of a single or double dose of the formulations also resulted in unexpected alterations to the immune profile for some of the novel vaccines.Typically, only one dose of mycobacterial vaccine is administered to ruminants, for example Gudair® in sheep.Other types of inactivated vaccines are given to sheep but typically these require a primary dose and a booster dose to achieve optimal immune responses.In this study, giving a second dose of formulations 50V and 61VG did not boost the immune response as expected but resulted in a reduced IFN-γ response.This may be due to a negative feedback loop, via release of immunoregulatory cytokines such as IL-10 or IL-4, or preferential activation of T regulatory cell subsets rather than effector cells.The concomitant activation of T regulatory and effector cell phenotypes and expression of the immune checkpoint molecule Programmed cell death protein 1 by antigen-specific CD4+ T cell populations has been seen post-vaccination in other mycobacterial diseases .A detailed assessment of MAP-specific CD4+ T cell populations, including cytokine and cell surface markers, is required to conclusively determine the mechanism of this post-booster effect.While the second dose of 61VG resulted in a slight increase in antibody response, no difference was seen in the antibody response of sheep given a booster dose of formulation 50V.This indicates that the immune response may have been inhibited by a second dose of vaccine, but further examination was beyond the scope of this study.All of the vaccines tested in this
study resulted in fewer injection site lesions compared to Gudair®.It is thought that the injection site lesions associated with Gudair® and other killed MAP mineral oil vaccines are due to the Freund's-like nature of the vaccines .The use of highly refined mineral oils and emulsification protocols is the most probable reason for the reduced injection site lesions in this study.Another possible explanation for the reduced injection site lesions could be a disparity in the number of killed MAP in the formulations, however this cannot be confirmed as the number of killed MAP in Gudair® is not disclosed.A new commercial MAP vaccine, Silirium®, uses highly refined mineral oils in the adjuvant with the aim of producing fewer injection site lesions, however this vaccine is also not fully protective .Vaccination site lesions are considered to be due to the interaction between the adjuvant, the antigen and the immune response of the host.However, for a number of formulations, lesions were found in animals or groups with a low systemic immune response.One of the adjuvants, Montanide Gel 01, a polymeric gel, resulted in injection site lesions but the acquired immune response was negligible.This raises the possibility that this formulation induces an inflammatory response that is not specific to the antigen.It is possible that there were significant immune responses not measured in this study that may have affected lesion formation.Caution must also be taken when interpreting these results, as the trial has not been replicated.This study indicates that the adjuvant mixed with killed MAP influences the immune response and the incidence of injection site lesions.Although we did not investigate the effect of the strain of MAP, others have shown that this can make a difference to the immune response and pathology that develops during an active MAP infection .Currently the critical parts of the protective immune response induced by commercial mycobacterial vaccines are unknown and it may now be possible to uncouple protective immunity from excessive tissue reactivity.With this knowledge it may also be possible to formulate and test the efficacy of vaccines that produce targeted immunological profiles suited to protection against other pathogens, i.e.
those for which a bias towards cellular or humoral immunity would be advantageous based on understanding of pathogenesis.D J Begg: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.O Dhungye: Conceived and designed the experiments; Contributed reagents, materials, analysis tools or data.A Naddi: Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data.N K Dhand: Analyzed and interpreted the data.K M Plain: Conceived and designed the experiments; Contributed reagents, materials, analysis tools or data.K de Silva: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data.A C Purdie: Conceived and designed the experiments.R J Whittington: Conceived and designed the experiments; Analyzed and interpreted the data; Wrote the paper.This work was supported by Meat and Livestock Australia and by Cattle Council of Australia, Sheepmeat Council of Australia and Wool Producers Australia through Animal Health Australia.The authors declare no conflict of interest.No additional information is available for this paper.
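As an aside for readers reproducing the lesion-incidence comparison described in the statistical methods, the short Python sketch below computes incidence risks, a relative risk, and a two-sided Fisher's exact test for a single hypothetical pairwise comparison (Gudair® versus one novel formulation); the counts are invented purely for illustration and are not the trial data.

from scipy.stats import fisher_exact

# Hypothetical counts of sheep with and without injection site lesions
# (five sheep per group in the trial design); illustrative values only.
gudair_lesion, gudair_no_lesion = 5, 0
novel_lesion, novel_no_lesion = 2, 3

# Incidence risk of injection site lesions in each group and their ratio.
risk_gudair = gudair_lesion / (gudair_lesion + gudair_no_lesion)
risk_novel = novel_lesion / (novel_lesion + novel_no_lesion)
relative_risk = risk_gudair / risk_novel

# Two-sided Fisher's exact test on the 2 x 2 table.
odds_ratio, p_value = fisher_exact(
    [[gudair_lesion, gudair_no_lesion], [novel_lesion, novel_no_lesion]],
    alternative="two-sided",
)
print(f"RR = {relative_risk:.2f}, Fisher exact p = {p_value:.3f}")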
Johne's disease (JD) is a chronic enteritis caused by Mycobacterium avium subspecies paratuberculosis (MAP). Current commercial vaccines are effective in reducing the occurrence of clinical disease although vaccinated animals can still become infected and transmit MAP. Many vaccinated sheep develop severe injection site lesions. In this study a range of adjuvants (Montanide™ ISA 50V, ISA 50V2, ISA 61VG, ISA 70 M VG, ISA 71 VG, ISA 201 VG and Gel 01 PR) formulated with heat-killed MAP were tested to determine the incidence of injection site lesions and the types of immune profiles generated in sheep. All the novel formulations produced fewer injection site lesions than a commercial vaccine (Gudair®). The immune profiles of the sheep differed between treatment groups, with the strength of the antibody and cell mediated immune responses being dependent on the adjuvant used. One of the novel vaccines resulted in a reduced IFN-γ immune response when a second "booster" dose was administered. These findings have significance for JD vaccine development because it may be possible to uncouple protective immunity from excessive tissue reactivity, and apparently poorly immunogenic antigens may be re-examined to determine if an appropriate immune profile can be established using different adjuvants. It may also be possible to formulate vaccines that produce targeted immunological profiles suited to protection against other pathogens, i.e. those for which a bias towards cellular or humoral immunity would be advantageous based on understanding of pathogenesis.
111
Concept of hydrogen fired gas turbine cycle with exhaust gas recirculation: Assessment of process performance
The most efficient way to produce power at large scale from gaseous fuel is by using gas turbine engines.High hydrogen content gas fuel can be found in three possible applications: Integrated Gasification Combined Cycle power plants; power plants using pre-combustion CO2 capture in a Carbon Capture and Sequestration context; power plants in a fully developed renewable energy based society, where hydrogen is used as energy storage in case of excess wind, solar, or other intermittent renewable power.Although CO2 free, the combustion of hydrogen generates high levels of nitrogen oxides (NOx), which are strongly regulated because they play a major role in the atmospheric pollution leading to smog and are responsible for acid rain.For example, the NOx emission limit for stationary gas fired gas turbines of 10 MW and above is 9 ppm @ 15% O2 in California and 24 ppm in Europe.Many studies report a doubling or more of NOx emissions when switching a low NOx burner from methane to pure hydrogen .Indeed, a known characteristic of hydrogen combustion is that it has a high flame temperature.One of the main chemical contributions to the formation of NOx is through a kinetic pathway where the nitrogen of the air is oxidized by oxygen at high temperature.This mechanism is strongly sensitive to temperature and is called the thermal NOx mechanism for this reason .A small increase in the higher range of temperature results in an exponential increase in NOx production.High NOx values in excess of 200 ppm @ 15% O2 dry have been reported in Todd et al. in an 85–90% hydrogen fuelled GE 6FA test combustor and even 800 ppm @ 15% O2 dry in Brunetti et al. .These values, compared to the typical emission limits applied to gas turbines, highlight how inadequate the current combustor technology is and therefore the need for innovative solutions.In modern conventional fossil fuel based gas turbines, the high temperature regions in the flame are avoided by premixing the fuel and air prior to combustion to the point that the adiabatic flame temperature of the mixture is much lower than that of the stoichiometric mixture.These burners are known as lean premixed burners or Dry Low NOx burners.The technology initially struggled because the required degree of air–fuel premixing leads to issues related to combustion stability: flashback, extinction, and thermo-acoustic instabilities .The technology is however now commercial and the major gas turbine manufacturers offer engines that achieve NOx emission levels within the regulated values without the need for end-of-pipe abatement installations such as SCR or SNCR.Nevertheless, the application of this technology to high hydrogen content fuels still struggles because of the specific characteristics of hydrogen combustion: wide flammability limits, much higher reaction rates, preferential diffusion and higher flame temperatures leading to short auto-ignition times and high flame speed .As a consequence of these particular properties, combustion occurs promptly before air and hydrogen have had the time to be fully premixed.This problem is referred to as flashback, i.e.
unwanted propagation of flame in the premixer region not designed for the presence of flame with the risk of component damage.In existing IGCC plants and those with pre-combustion CO2 capture where hydrogen is the major fuel component, the NOx formation problem is tackled by using simple diffusive type burners combined with large amounts of diluent gas.Nitrogen and steam are both potential diluent candidates because they are available at relatively low cost on site of IGCC plants.Wu et al. experimentally showed that steam is more effective than nitrogen at reducing NOx formation because of the higher heat capacity of steam, hence the larger reducing effect it has on adiabatic flame temperature.For example, a steam to fuel ratio of unity was shown to halve the NOx emissions from 800 ppm @ 15% O2 dry in Brunetti et al. .Nevertheless, nitrogen is preferred in practice, firstly because steam significantly affects the heat transfer properties of the hot exhaust gas flow and reduces component life .Secondly, nitrogen is a readily available by-product of the Air Separation Unit present on site of an IGCC plant for producing oxygen for the gasifier.The use of diluents in industrial cases with syngas as fuel on diffusion type combustors has shown good emission results in the above references.However, although available at low cost, using nitrogen as diluent induces an expense of up to 20%–30% of the total auxiliary power consumption due to the required compression work for injection at the combustor stage.For comparison, this share is even higher than that of the CO2 compression power in the case of a pre-combustion plant .From a cost perspective, the compressor unit is expensive and bulky.Gazzani et al. showed that dilution used in combination with diffusion type combustors imposes an efficiency penalty of 1.5 %-points as compared to the reference combined cycle plant if the amount of nitrogen dilution is that required to reach a flame temperature similar to that of a natural gas flame.The penalty becomes 3.5 %-points in the case of steam dilution.The selected dilution degree and corresponding efficiency decrease must be balanced against NOx emissions since these are exponentially proportional to combustion temperature.The implementation of DLN combustors would avoid the inert dilution to reduce NOx emissions.However, to counteract the aforementioned excessive flashback propensity, high injection velocity and therefore high pressure drop would be needed, which in turn has an efficiency cost as shown in Gazzani et al. .Consequently, DLN burners have not been achieved to date for fuels with hydrogen content larger than approximately 60% without some kind of dilution.In addition, even if lean premixed combustion of hydrogen were achieved through DLN burners, Therkelsen et al. evidenced experimentally that at the same flame temperature, measured NOx emissions were still higher in a hydrogen than in a methane flame.They attributed this effect to the higher propensity of the hydrogen–air chemical kinetics to produce NO through the low temperature NNH pathway .There is therefore a real need for innovative concepts in combustion technology to cope with hydrogen fuels.The recent review from du Toit et al. on the use of hydrogen in gas turbines describes several burner technologies available and still points out the remaining R&D challenges of tackling the high temperature and NOx emissions.In search of alternative ways to burn pure hydrogen, Ditaranto et al.
suggested tackling this challenge not with yet another burner technology, but by setting up a power process that inherently avoids high temperature.The gas turbine cycle concept proposed includes exhaust gas recirculation and has potential for low NOx emissions without the need for either fuel dilution or a burner technology breakthrough.Further, the authors showed through a first order combustion analysis that the oxygen depleted air entering the combustor - due to EGR - naturally limits the combustion temperature and NOx formation.With this concept, the burner and combustor can be of diffusion type, i.e. simple and reliable, and would avoid the high cost and risks associated with the development of complex DLN systems for high hydrogen content fuels.The concept of EGR is a common and mature technology in internal combustion engines, mostly diesel, with the aim of reducing NOx formation .For gas turbine applications however, it is only known in two cases related to the CO2 capture context.One is as a means of increasing the CO2 concentration in natural gas combined cycle exhaust gas, with the aim of making post-combustion CO2 capture more efficient .The other is in the oxy-fuel CO2 capture scheme where CO2 replaces air as the gas turbine working fluid and the cycle is therefore semi-closed .For power cycles based on hydrogen fuels, however, it has, to the knowledge of the authors, not been evaluated in the scientific literature, apart from their above-mentioned preliminary studies.In Ditaranto et al. the combustion assessment showed that flame stability could be achieved at high EGR rates, high enough to maintain low NOx emissions even without dilution of hydrogen.It brought the idea through TRL 1 and the present study aims at validating TRL 2 by an evaluation of the concept from a process and thermodynamic cycle perspective in order to assess whether the concept is worth further development towards TRL 3.The power cycle under investigation is an IGCC plant with CO2 capture with coal as primary fuel.The basic layout of the IGCC power plant is that of the European Benchmark Task Force, which has been designed with the objective of serving as a reference.The main components of the power cycle are the pressurized gasifier producing syngas, which undergoes first a two stage shift reaction, followed by Acid Gas Removal and H2S Removal units, and then a CO2 separation step.CO2 is then compressed and delivered at the battery limit ready for transport and geological storage, while the hydrogen rich syngas is burned in the power island composed of a gas turbine and bottoming steam cycle.As described in the Introduction, the gas turbine when fuelled with syngas or hydrogen fuels generally operates with a stream of dilution nitrogen coming from the ASU in order to control temperature in the combustor and avoid excessive NOx emissions.The goal of the present study is to demonstrate that applying EGR to the gas turbine can replace the dilution stream and save its compression power.The layout of the proposed concept in its simplest form, i.e. with dry EGR, is depicted in Fig.
1 in two possible application cases: 1) as a natural gas or coal fired IGCC power plant with CO2 capture, 2) as integrated in a renewable energy system with hydrogen as energy storage.In addition to the dry EGR case, the study considers two other variations of the cycle with wet EGR and wet EGR with cooling.In wet EGR, the recirculated exhaust gas goes directly into the compressor without any form of condensation or cooling after exit from the HRSG unit, while in the wet EGR with cooling, the recirculated exhaust gas is cooled down to its dew point temperature to optimize compression efficiency.In the EBTF, the overall cycle was optimized by heat integration of various components such as the air separation unit delivering oxygen to the gasification process generating the syngas, the gasifier, and the shift reactor.Applying EGR and suppressing the nitrogen dilution call for some adjustments to this reference system, as will be described in the following section.In the work presented, the process components external to the power island are not modelled in this simulation.The streams of mass flows and energy to these components are considered as inputs and outputs unchanged from the reference cycle, only proportionally adjusted to the syngas flow rate.A modification to the ASU did, however, need to be implemented due to the modified gas turbine operating conditions, as will be explained in detail in the following section.The power plant setup is shown in Fig. 1 with the high hydrogen content syngas fired gas turbine and the bottoming cycle with the HRSG and steam turbine divided into four turbine stages.All water and steam streams scale with the same factor when the lowest of these streams are changed to match the outlet temperature of the exhaust.The setup and all its variations have been modelled in Aspen HYSYS code.The power island of the IGCC plant with pre-combustion CO2 capture of the EBTF has also been modelled, the results of which are used as the reference case for this study.It is noted that the composition of some streams differs slightly from the ones given in EBTF since more accurate values were obtained in the working document of the European project, which generated the EBTF reference.In the reference power plant, integration of the ASU with the gas turbine is done in order to improve the efficiency of the overall cycle, such that 50% of the air used in the ASU comes from the GT compressor while the remaining air is drawn into a separate smaller compressor that is a part of the ASU.Given that the air compressor is the largest energy user in the ASU , shifting the compression energy cost of 50% of the air to the gas turbine with a larger and more efficient compressor means that at most 3 MW is saved due to the integration, corresponding to an efficiency improvement of approximately 0.3 %-point on the power island efficiency.Because the present concept necessarily implies that the O2 concentration in the air flowing through the GT compressor is decreased, the amount of oxygen in the air going to the ASU would therefore also be decreased and therefore it cannot be expected that the same gain would be obtained by such an integration, as the optimal amount of integrated air would probably differ from that in the reference cycle.In order not to unnecessarily complicate this assessment study, no air is extracted from the gas turbine compressor for the ASU and therefore no potential benefit from any integration is considered.Coincidentally, this amount of air kept in the compressor
compensates for the nitrogen flow used in the reference case to dilute the syngas, but avoided in our concept.The overall mass flow through the turbine is thus closer to that of the reference case and permits a fair comparison from a hardware point of view.Omitting the integration results in an increased power consumption in the ASU.Since the integration concerned 50% of the air, the extra energy consumption at the ASU cannot exceed the reference integrated ASU power consumption and is probably lower.In fact, assuming a conservative specific energy consumption of 200 kWh/t of oxygen, the production of oxygen needed for the gasifier considered in this study would be of 22.05 MW.Nevertheless, the power consumption for the ASU in the simulation of the EGR cases has been doubled from that of the reference case, thus the results can be considered as conservative or worst-case scenario.Since the syngas production rate varies from case to case as explained latter, the ASU power consumption was further assumed to be proportional to that rate.The typology of gas turbine considered in EBTF is a large-scale “F class” 50 Hz.It is a reference F-class large-scale gas turbine averaged from the largest manufacturers at the time of publication of the EBTF.The pressure drop inferred by the air inlet filter was imposed with a valve.The total mass flow of air into to the GT process is 642.1 kg/s.A part of the air bypasses the combustion chamber and is used as cooling air in the turbine.The pressure ratio across the compressor is 18.11 bar.By setting the temperature outlet to be 409 °C in the reference setup the polytropic efficiency could be backwards determined to be 93%.In the simulations this PTE has been applied to the compressor which gives an outlet temperature of 408 °C, considered close enough given the precision of PTE given.The combustion process was modelled as a Gibbs reactor which minimizes the Gibbs energy of the mixture of its inlet streams to produce the outlet stream.A pressure drop of 1.08 bar was set across the reactor, as used in the reference system.In the reference case, a stream of N2 is injected in the combustor for syngas dilution.The EBTF syngas mass flow is 22.45 kg/s and the N2 mass flow is 80 kg/s, which when mixed with the air in the Gibbs reactor gives a temperature of 1306 °C after combustion.The syngas and N2 mass flows were proportionally lowered to 22.34 kg/s and 79.58 kg/s respectively, in order to reach a temperature of 1302 °C after combustion as found in the reference document.After mixing with the cooling air, which is injected as coolant of the first turbine stage, a temperature of 1205 °C is reached.These two temperatures were fixed and used to determine the flow rates of syngas and cooling air in the parametric study.The dilution nitrogen was compressed from 1 bar and −134 °C to 25 bar and 200 °C.In our reference setup this compression power consumption corresponds to 26.28 MW, although in EBTF the pressure after compression is 36 bar with a corresponding power consumption of 27.82 MW.Such a high pressure was deemed unnecessary and the study kept a nitrogen compression power consumption of 26.28 MW as a reference.This choice is further supported by the study of Gazzani et al. 
who used a dilution nitrogen pressure of 27.1 bar at the mixer inlet.Therefore, the power island efficiency found in our simulated reference is expected to be better than in EBTF.The gas at the combustor exit was mixed with the cooling air before entering the turbine.By using an expansion of 15.98 bar, as found in the reference system, and setting the turbine outlet temperature to 571 °C, a PTE of 84% could be determined which was then specified in our set up to release the condition on TOT.The TOT was calculated to be 569.9 °C, which as for the compressor, was considered to be close enough.The chemical composition and thermodynamic values of the exhaust gas can be found in Table 1.The temperatures and pressures of the streams in the steam cycle were specified as indicated in the reference system except for some cases where they had to be changed slightly to have vapour fractions.The water and steam mass flows can be scaled, all with the same factor.This effectively gives the steam cycle one degree of freedom that can be specified to make the exhaust gas reach exactly 100 °C at the cycle outlet by extracting the right amount of heat from it in such a way that the steam cycle operates the same way as described in the reference system.The exhaust gas leaves the bottoming cycle at 1.02 bar as a result of each heat exchanger pressure drop being specified as found in the source documents .To make it reach 100 °C the water inlet flow had to be increased from 129.5 kg/s in the original system to 131.72 kg/s.The PTE of the steam turbine stages were determined by the specified temperatures and pressures at the inlets and outlets, as found in the reference system.The high pressure stage takes in the high pressure superheated steam and sends a part of its outlet to the gasification process.The rest is combined with the intermediate pressure superheated steam and reheated before entering the intermediate pressure ST stage.A part of this outlet is also sent to the gasification process, while the rest is sent to the next ST stage.This outlet is then combined with the superheated low pressure steam and sent to the last ST stage where it expands below its dew point, leaving with a vapour fraction of 0.899.The steam is then condensed completely before it is pumped into a tank where it is combined with feedwater from other processes to return into the steam cycle inlet.The pumps in the steam cycle are implemented as described in the reference material, so this energy consumption is accounted for in the total power calculations.The exhaust gas from the gas turbine is led through a series of heat exchangers transferring heat to a steam cycle with a four-stage steam turbine.The steam cycle in the reference system is integrated with the gasification and shift reactors processes in that they exchange water and steam from and to the bottoming part of the power island at certain temperatures and pressures.In the reference plant, this carries 49.27 MW of net heat into the steam cycle.It is not desired to simulate the entire power plant including the components external to the power island as that would require to run an entire optimization process on all these components combined together with the power island, with the consequence of losing track of the benefits or losses of the studied concept.Nonetheless, it is necessary to keep the power island boundary conditions equivalent to allow for a common comparison.Decoupling completely the power island from the external processes, would also require a complete 
redesign of the bottoming cycle, and again the basis for comparison would be altered. For example, most of the high pressure water is evaporated in the gasification process and the corresponding latent heat comes, so to speak, "free" for the bottoming cycle. If this heat were to be extracted from the exhaust gas as a result of decoupling of the gasification process, the temperature drop in the exhaust gas would increase significantly and the temperature would become too low to provide sufficient heat to the water in the following heat exchanger stages, and so forth. In order to keep track of the energy flows between the bottoming cycle and the external components, every stream into and out of the steam cycle was brought to a reference point of 15 °C and 1.01 bar. Water was therefore heated or cooled to the appropriate temperatures and pressures given at the inlets and outlets of the EBTF simulations. This heating and cooling was done with the HYSYS heater and cooler components, respectively. Since this was done on all streams entering the steam cycle, the difference in energy flows measured between the heaters and the coolers gives the total net energy flow into the steam cycle from the virtual external processes. Because of the variation in total mass flow of water and steam through the steam cycle for the different EGR cases, this net energy flow varies. However, it is not necessarily proportional to the syngas mass flow, so the net energy extracted from the surroundings in the HYSYS simulation is not necessarily the same as the heat delivered from a calculation using proportionality between syngas mass flow and net heat supply from the gasification and water shift processes. This proportionality constant was calculated in the reference case as the measured net flow of energy divided by the syngas mass flow rate. The difference between the measured net energy flow and the calculated energy flow available from the production of syngas was then multiplied by the weighted average of the efficiencies of the steam turbine stages and the efficiency of the electric generator to find the amount of energy that had been supplied to the total output of the steam turbine, but could not have been provided by the real syngas production. This difference represents an additional virtual energy flow that must be subtracted from the total energy generated by the steam cycle to calculate the corrected efficiency. It is recognized that this correction is a limitation of the comparison work; however, this power correction does not represent more than 0.2%, 2% and 1.4% of the gross power output for the dry EGR, wet EGR without cooling, and wet EGR with cooling cases, respectively. It will further be seen that the gains observed in terms of efficiency are larger than this imprecision. To calculate the combustion properties of the mixtures in all the EGR and reference cases, a one-dimensional adiabatic freely-propagating, premixed flat flame reactor case was set up and solved in the kinetic calculation code Cantera under Python. The GRI 3.0 chemical mechanism was used. The fuel and compressor gas composition and temperature calculated by the process simulations at the inlet of the combustor were used as inputs. The adiabatic temperature, laminar flame speed, and NOx concentrations in the burned gases region were calculated under stoichiometric conditions, which is representative of the flame front condition in a simple diffusion flame and considered to be the least favourable condition from a NOx formation perspective.
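A minimal Cantera sketch of this type of calculation is given below: a one-dimensional, adiabatic, freely-propagating premixed flame solved with GRI 3.0 at stoichiometric conditions, returning the laminar flame speed, adiabatic temperature and NO level in the burned gases. The fuel and oxidizer compositions and the inlet temperature and pressure are placeholders, not the stream data from the process simulations.

```python
# Sketch of the flame calculations described above (Cantera >= 2.5, GRI 3.0 mechanism).
# Compositions and inlet conditions are placeholders, not the simulated combustor streams.
import cantera as ct

fuel = "H2:0.85, CO:0.05, N2:0.10"                   # placeholder H2-rich syngas
oxidizer = "O2:0.14, N2:0.78, H2O:0.04, CO2:0.04"    # placeholder O2-depleted air (EGR)

gas = ct.Solution("gri30.yaml")
gas.TP = 700.0, 18.0e5                               # placeholder inlet T [K] and p [Pa]
gas.set_equivalence_ratio(1.0, fuel, oxidizer)       # stoichiometric flame-front condition

flame = ct.FreeFlame(gas, width=0.02)                # 1D freely-propagating premixed flame
flame.set_refine_criteria(ratio=3, slope=0.07, curve=0.14)
flame.solve(loglevel=0, auto=True)

s_l = flame.velocity[0]                              # laminar flame speed [m/s]
t_ad = flame.T[-1]                                   # adiabatic flame temperature [K]
x_no = flame.X[gas.species_index("NO"), -1]          # NO mole fraction in burned gases
print(f"S_L = {s_l * 100:.1f} cm/s, T_ad = {t_ad:.0f} K, NO = {x_no * 1e6:.0f} ppm")
```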
The reference case was reproduced and simulated, and then adapted to include the different cases of EGR. Two types of EGR are considered: dry EGR and wet EGR, with the EGR rate being defined as the ratio of the volume of the recycled exhaust to the total exhaust volume, as in Ref. Nitrogen dilution of the fuel is only present in the reference case and not in the EGR cases, even at low EGR rates. It is therefore expected that the EGR cycles at 0% EGR have a higher efficiency than the reference case, corresponding to the saving in dilution compression work. The EGR rate was varied between 0 and 65% in increments of 5 %-points. As described in the previous section, the air stream from the compressor directed to the ASU has been removed as well as the N2 dilution stream; however, the latter is not fully compensated by the former and there is a mass flow reduction of about 17.5 kg/s through the combustor. Therefore, the mass flow of syngas needs to be reduced to achieve the correct TIT compared to the reference case. Due to the changing heat capacity with the varying EGR rate, the syngas mass flow, and sometimes the cooling air mass flow, had to be adjusted for each EGR value to achieve a constant TIT value. The power island efficiency is calculated by dividing the net output power by the coal fuel power input based on the lower heating value. Since the syngas mass flow rate varies with EGR rate, the mass flow of coal has to be adjusted accordingly. The gasifier is not simulated in this study, which instead focuses mainly on the power island; a mass-based syngas-to-coal ratio equal to that used in the reference case, 1.688, has therefore been assumed. The heat for coal drying is assumed to be 0.85% of the coal LHV power used, again equal to that used in the reference cycle. In all plant arrangements studied, the ASU power consumption is assumed to be twice as large as that given in Ref. and serves as a "worst case scenario", as argued earlier. The efficiency of the power island calculated with Eq. for the reference case simulated in the present setup gives a value of 43.2%, which compares with 41.7% when using data reported in Ref. The discrepancy between the two values is due to the necessary adjustments made to the cycle setup as explained above, in particular the use of a lower N2 compression work as explained in §3.2. The reference for comparison in the remainder of the study always refers to the cycle calculated with our plant setup, which has the exact same basis. In the dry EGR case, water was extracted from the exhaust stream before entering the compressor such that the mole fraction of vapour is at saturation and the same as in ambient air: 1.01%, even though this may not be the optimal trade-off between acceptable moisture content for the turbomachinery and the expense of auxiliary power for cooling, as will be discussed later. This moisture level requires the exhaust to be cooled down to 7.55 °C. The cooling from 100 °C to the ambient temperature of 15 °C is in principle free energy-wise. Further cooling down to 7.55 °C requires some form of heat pump, which implies a certain energy cost. A conservative heat pump coefficient of performance of 4 was chosen to provide a measure of the power consumption related to the cooling. The two stages of cooling were therefore separated into two cooler components of different kinds. In addition, a water pump was inserted in the model to remove the residual water by increasing its pressure by 1 bar.
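The auxiliary power charged for this sub-ambient cooling can be sketched as follows; the recycled mass flow and specific heat are placeholders, and the latent heat of the water condensing below 15 °C is neglected in this simplification, so the figure is indicative only.

```python
# Sketch of the dry-EGR exhaust conditioning: cooling from 100 °C to 15 °C is taken as
# free, while cooling from 15 °C to the 7.55 °C target (1.01% vapour mole fraction) is
# charged to a heat pump with a coefficient of performance of 4.
COP = 4.0
T_AMBIENT_C = 15.0
T_TARGET_C = 7.55

def egr_cooling_power_mw(m_recycle_kg_s, cp_kj_per_kg_k=1.1):
    """Heat-pump power for the sub-ambient cooling of the recycled exhaust (sensible heat
    only; condensation enthalpy below 15 °C is neglected in this sketch)."""
    q_mw = m_recycle_kg_s * cp_kj_per_kg_k * (T_AMBIENT_C - T_TARGET_C) / 1.0e3
    return q_mw / COP

# Example with placeholder figures: 50% EGR of a ~640 kg/s turbine inlet flow
print(f"Cooling power ~ {egr_cooling_power_mw(0.5 * 640.0):.2f} MW")
```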
The net and gross efficiencies for the dry EGR case are shown in Fig. 2. The gross efficiency is based on the power generated by the turbines, while the net efficiency takes into account the auxiliary power consumption. At 0% EGR the net efficiency is 44.5%, i.e. 1.3 %-points higher than in the reference case. The efficiency penalty stemming from nitrogen dilution for NOx emission control in an IGCC plant is therefore around 1.3 %-points. This value compares well with the study of Gazzani et al., who showed that 1.5 %-points of energy efficiency were lost due to nitrogen dilution. For the sake of comparison, ENEL's Fusina hydrogen demonstration project, operating a smaller 16 MWe gas turbine fuelled with pure hydrogen, achieved 41.6% in combined cycle mode, but did not have to concede the power expense of an ASU for gasification purposes because hydrogen was supplied as a by-product from elsewhere on the industrial site at no energy cost. The gross efficiency is rather constant over the range of EGR rates, while the net efficiency drops 0.2 %-points from 0 to 65% EGR. The decrease in efficiency with increasing EGR is therefore due to the energy penalty related to the various causes of auxiliary power consumption of the power island. Fig. 3 shows clearly that the exhaust cooling is a parasitic power consumer. However, the major contributor to the efficiency decrease remains the ASU, independently of EGR rate, which accounts for more than 75% of the auxiliary power consumption. The cooling power, based on a coefficient of performance of 4, also increases with EGR, but has little impact on the overall efficiency. The results of two power plant cases with wet EGR are presented. In one case, the air and exhaust gas mixture is cooled down to the dew point to allow for the lowest possible compressor inlet temperature. In the other scenario there is no cooling provided, and the compressor inlet temperature is thus allowed to have a higher value, as shown in Fig. 4. Fig. 5 shows the net and gross electric efficiencies for the wet EGR case with cooling. As for the dry EGR case, the net efficiency of the power island declines with EGR rate, but so does the gross efficiency. The presence of moisture in the cycle therefore affects the efficiency of the power island, especially that of the gas turbine cycle, as shown in Fig. 6. The decrease in power generated in the gas turbine is nonetheless compensated, but only partially, by an increase in power generated in the steam cycle. It is noted that for wet EGR with cooling, it is not possible to achieve 65% EGR since at recirculation rates above 60% there is not enough oxygen in the working fluid to burn the fuel, which is not a reasonable way of operating a power plant. The reason this happens only in the case with cooling is that, when cooled, the working fluid needs more fuel to reach the specified TIT. In the cases of 0 and 5% EGR with cooling, the air and exhaust gas mixture had to be cooled below 15 °C, but this energy cost has not been accounted for as it is negligible. Figs. 7 and 8 show the results for the case of wet EGR without cooling. The results show generally the same behaviour as previously described, but with a greater expense in terms of lost power in the gas turbine when the EGR rate increases. The steam cycle, on the other hand, remains rather unaffected by the cooling step.
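Because the recycle closes a loop, the compressor-inlet composition is the fixed point of a per-pass balance rather than a single air/exhaust mix: writing x = (1 - r)*x_air + r*x_exhaust with x_exhaust = x - d (O2 consumed per mole of working fluid) or x + p (H2O produced) gives x = x_air - d*r/(1 - r) and x = x_air + p*r/(1 - r). The sketch below illustrates this trend; d and p are placeholders, not values from the simulations, so the numbers will not reproduce the reported results exactly.

```python
# Sketch of the steady-state compressor-inlet composition under EGR. The per-pass O2
# consumption (d_o2) and water production (d_h2o), expressed per mole of working fluid,
# are placeholders; mole-number changes and dry-EGR water knock-out are neglected.
X_O2_AIR, X_H2O_AIR = 0.2074, 0.0101

def inlet_o2(egr, d_o2=0.045):
    """Fixed point of x = (1 - r)*x_air + r*(x - d_o2): oxygen depletes as EGR rises."""
    return X_O2_AIR - d_o2 * egr / (1.0 - egr)

def inlet_h2o(egr, d_h2o=0.085, dry=False):
    """Wet EGR lets water build up; dry EGR holds it at the ambient level."""
    return X_H2O_AIR if dry else X_H2O_AIR + d_h2o * egr / (1.0 - egr)

for egr in (0.0, 0.45, 0.55, 0.65):
    print(f"EGR {egr:.0%}: O2 ~ {inlet_o2(egr):.3f}, H2O (wet) ~ {inlet_h2o(egr):.3f}")
```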
The simple gas turbine efficiencies calculated on the basis of the syngas LHV are shown in Fig. 9, with all the EGR scenarios falling between the efficiencies of the reference case calculated with and without the nitrogen compression work for syngas dilution. Here too, the dry EGR case shows the highest efficiency of all the EGR cases, with values very close to that of the reference gas turbine without compression work. As a comparison, ENEL's Fusina hydrogen project had a measured gas turbine net efficiency of 31.1% at 11.4 MWe delivered power. Fig. 10 shows the comparison between the different cases studied. All EGR cases exhibit a better overall efficiency than the reference case, showing that avoiding the nitrogen dilution is beneficial. However, not all EGR cases can satisfactorily replace the reference case since, at low EGR rates, the adiabatic flame temperature in the combustor is high and so are the NOx emissions. The comparison should therefore be made with the EGR rates providing an adiabatic flame temperature outside the potentially high-NOx zone. In addition, these conditions must be compatible with flame stabilization properties equivalent to or better than those of the reference case. If such a region exists, Fig. 10 shows that the power plant will always have a better overall efficiency, as even at 65% EGR there is nearly a 1 %-point difference. A first order combustion assessment of combustor technology applicable to this concept was made in Ditaranto et al. based on two characteristics: 1) the calculated flame speed of the different fuel and oxidizer mixtures, which is the conventional property used to assess flame stability, hence burner viability; and 2) NOx formation. The results showed that premixed technology in dry EGR mode would be possible within a given working fluid distribution, with an EGR rate of approximately 50% depending on PZ distribution. For the wet case, the domain was found to be larger, but limited to higher EGR rates if low NOx emissions are to be achieved. Combustion calculations corresponding to a diffusion combustor technology have been updated and are presented in Figs. 11 and 12. The NOx emissions from a real burner-combustor system are design and hardware dependent and cannot be predicted in general terms from the sole knowledge of fuel and air input compositions and temperature. Kinetic calculations in laminar conditions, such as those presented herein, can only indicate the potential for achieving low NOx emissions, which in the case of hydrogen combustion is dominated by the thermal mechanism, hence temperature. Gazzani et al. and Chiesa et al. suggest that if the adiabatic stoichiometric flame temperature of a mixture does not exceed 2300 K, state-of-the-art diffusive combustor technology is able to produce low-NOx burners, based on ENEL's practical experience in using hydrogen-containing fuels in GE gas turbines. In their reference case with dilution, they assume that a diffusive combustor operating with an SFT of 2200 K could achieve NOx emissions below 20 ppmvd. The reference case with nitrogen dilution used in this study is in agreement with that limit, as the corresponding calculated SFT is 2190 K. It is stressed again that, from a kinetic point of view, this temperature is very high and would form very high NO concentrations, but it is an indicative flame temperature at which state-of-the-art burner design would manage to "beat" equilibrium and generate low NOx. For the sake of comparison, the SFT of methane in the same combustor conditions is 2464 K. The 2190 K temperature limit, marked in Fig.
11, is achieved above 45% and 55% EGR rates for the wet and dry cases, respectively. Reporting these rates in Fig. 10 indicates that the efficiencies are up to 1.0 %-point higher than in the reference case for all EGR cases. In natural gas fired gas turbines, it is accepted that a maximum EGR rate of 30%–35% could be achieved below which combustion can maintain full combustion efficiency and stability. Nevertheless, Fig. 12 indicates that the reactivity of hydrogen at the high EGR rates required is still sufficient to maintain stable combustion, with the laminar flame speed still as high as in the reference case. Noting that the laminar flame speed in these conditions is approximately 110 cm/s also suggests that EGR rates could be pushed further, since that value is more than twice that of methane in conventional gas turbine conditions. Although all cases perform approximately equally better than the reference case, albeit at somewhat lower EGR rates in the wet scenarios, dry EGR is probably the better practical choice from a gas turbine point of view. Indeed, the working fluid has thermodynamic properties that are quasi-identical to those of a conventional gas turbine, whereas the moisture content in wet EGR could imply a different choice of materials and a small loss in polytropic efficiency of the turbomachinery. This applies particularly at the high EGR rates required, where the water vapour concentration at the compressor inlet reaches 10% at 45% EGR, whereas in dry EGR the water vapour concentration is deliberately brought to that of the reference case at all EGR rates. Several assumptions were made in this study to make sure not to over-estimate the potential benefits of the concept. These were the following. First, the ASU has been attributed double the power consumption of the reference case with integration, even though we showed that, with a conservative specific energy consumption of 200 kWh/t oxygen, the ASU power consumption would be less than that. Second, the need for cooling to lower than ISO ambient temperature to keep humidity levels down in the dry EGR case. The background for this assumption was to make sure that the reference compressor component could be utilized without any modifications and without impairing lifetime and maintenance frequency. In Kakaras et al., simulations showed that the net effect of increasing moisture content is a reduction in the power output, resulting in a gas turbine efficiency reduction of 0.28 %-point for an increase from 0% to 100% humidity, which could thus justify the cooling of the EGR stream. In terms of materials, humidity can represent a corrosion issue for the compressor; however, it must be highlighted that the increased water content is due to hydrogen burning, in other words clean water without traces of impurities or mineral content. Given that the acceptable relative humidity limit is strongly dependent on the local concentration of acidic contaminants, which are not present in the water stemming from the EGR, only efficiency losses should be considered and evaluated in this context. In addition, the deficiency in oxygen in the EGR air stream would probably decrease the aggressiveness of any potential corrosion issue. In the reference case the working fluid through the turbine has a concentration of 12.25% steam affecting the turbine stage, as is well described in Gazzani et al.
At 0% EGR the dry scenario would give approximately 11% steam in the turbine stream, decreasing as the EGR rate increases. Therefore, the steam content in the EGR concept would not bring more severe degradation than that potentially encountered in the reference scenario with nitrogen dilution. Third, the efficiency of the heat pump used to cool the exhaust gas mixed with the air before the inlet to the compressor has been given a conservative value. Fourth, due to the boundary conditions imposed between the power island and the rest of the power plant, a correction for the net heat generated by the syngas production and input to the bottoming cycle had to be applied. This is a recognized but unavoidable limitation of the comparison work; however, this correction represents only between 0.1% and 1.9% of the gross power output depending on the case and EGR rate, and only 0.2% for the cases which have been considered as optimal. It is therefore not believed that this limitation alters the conclusions of the study. Fifth, the laminar flame speed calculated and shown in Fig. 12 suggests that the EGR rates assumed to be necessary for achieving NOx emission performance equivalent to the reference case could be pushed to higher values, and the limitation would come from oxygen availability rather than from stability issues. Relaxing even partially some of these assumptions would provide room for improvement on the gain of the concept presented herein, and the results can therefore be considered as conservative. To circumvent the difficulties of achieving low NOx emissions when burning high hydrogen content fuels in gas turbines, a concept where exhaust gas is recirculated is studied from a process perspective. By forcing exhaust gas back to the compressor inlet, the oxygen concentration in the air decreases as the EGR rate increases, to the point where the flame temperature, which controls NO formation, is limited. The simulations of the power cycle concept were analysed with three options: dry EGR, wet EGR without cooling, and wet EGR with cooling of the exhaust gas, all of them built upon and compared against an IGCC power plant with pre-combustion carbon capture and nitrogen dilution as the reference cycle. The main findings of the study can be summarized as follows: (1) conventional nitrogen dilution for NOx control in the reference cycle costs 1.3 %-points of power island efficiency; (2) implementing any of the three EGR options investigated represents a gain in efficiency when nitrogen dilution is eliminated; nevertheless, the EGR rate needs to be at least 45% and 55% in the wet and dry EGR cases, respectively, to meet the same adiabatic flame temperature as in the reference case with dilution and therefore potentially achieve similar NOx levels; (3) the gain in efficiency in these EGR conditions is 1 %-point, thus recovering 75% of the efficiency loss caused by conventional nitrogen dilution of the fuel; (4) the high laminar flame speed even at high EGR rates indicates that flame stability should not be impaired even under these very oxygen-depleted air conditions; (5) since the assumptions used are considered conservative, the results indicate that using EGR instead of nitrogen dilution could very likely increase the total power plant efficiency without compromising on NOx emissions.
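As a complement to the flame-temperature criterion invoked in these conclusions, the adiabatic stoichiometric flame temperature of a given fuel/oxidizer pair can be checked with a constant-pressure equilibrium calculation in the same Cantera/GRI 3.0 setup used above; the compositions and inlet conditions below are placeholders, not the simulated streams.

```python
# Minimal check of the stoichiometric flame temperature (SFT) criterion: mixtures whose
# SFT stays below roughly 2200-2300 K are taken as compatible with low-NOx diffusive
# combustion. Compositions and inlet conditions are placeholders.
import cantera as ct

def sft_kelvin(fuel, oxidizer, t_in=700.0, p=18.0e5):
    gas = ct.Solution("gri30.yaml")
    gas.TP = t_in, p
    gas.set_equivalence_ratio(1.0, fuel, oxidizer)
    gas.equilibrate("HP")          # adiabatic, constant-pressure equilibrium
    return gas.T

fuel = "H2:0.85, CO:0.05, N2:0.10"                       # placeholder H2-rich syngas
cases = {"air": "O2:0.21, N2:0.79",
         "O2-depleted (EGR)": "O2:0.13, N2:0.75, H2O:0.08, CO2:0.04"}
for label, ox in cases.items():
    print(f"{label}: SFT ~ {sft_kelvin(fuel, ox):.0f} K")
```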
High hydrogen content fuels can be used in gas turbines for power generation with CO2 capture, in IGCC plants or with hydrogen from renewables. The challenge for the engine is the highly reactive combustion properties of such fuels, making dilution necessary to mitigate NOx emissions at the expense of a significant energy cost. In the concept analysed in this study, a high Exhaust Gas Recirculation (EGR) rate is applied to the gas turbine to generate oxygen-depleted air. As a result, the combustion temperature is inherently limited, keeping NOx emissions low without the need for dilution or unsafe premixing. The concept is analysed by process simulation based on a reference IGCC plant with CO2 capture. Results with dry and wet EGR options are presented as a function of EGR rate. Efficiency performance is assessed against the reference power cycle with nitrogen dilution. All EGR options are shown to represent an efficiency improvement. Nitrogen dilution is found to have a 1.3 %-point efficiency cost. Although all EGR options investigated offer an improvement, dry EGR is considered the preferred option despite the need for a higher EGR rate compared with wet EGR. The efficiency gain is calculated to be 1 %-point compared with the reference case.
112
Large-scale otoscopic and audiometric population assessment: A pilot study
Otitis media and its most consequential complication, chronic suppurative otitis media, are disproportionally overrepresented in developing countries, particularly in Asia and sub-Saharan Africa. The burden of disease from CSOM includes hearing loss, tympanic membrane perforation, otorrhea, cholesteatoma and intratemporal and intracranial complications, which in turn have important downstream social, educational and vocational impact. OM global health initiatives and clinical research in these populations mandate accurate epidemiologic assessments in LMIC. However, large-scale otoscopic and audiometric assessment of ear disease in children in LMIC is difficult due to logistic constraints, including a lack of portable equipment and of trained audiologists and otolaryngologists with the time and interest for these studies. A survey of published reports in the past 15 years with relevance to OM through population studies in LMIC and developing countries has revealed one or more methodologic deficiencies; namely, small sample size, a lack of either otoscopic or audiometric evaluation and, particularly, of assessment of CSOM. Two recent studies have overcome these apparent deficiencies, including the documentation of frequencies of CSOM through large-scale population surveys in at-risk regions including Asia and Africa. Yet both of these studies relied heavily on onsite and/or local highly-trained otolaryngologists and audiologic personnel. In the Indonesian study, locally trained certified audiologists conducted hearing testing. In the Kenyan study, local medical officers were utilized to carry out clinical assessments and audiometric assessments. While frequencies of CSOM were assessed, both required significant locally trained audiologic and/or clinical personnel with limited Western-trained audiologist and pediatrician/otolaryngologist supervision and monitoring. Both the Indonesian and Kenyan studies were carried out by research teams visiting schools and conducting cross-sectional studies over 3–5 days at each school, lasting several months in total. However, there have been no recent population-based studies that could extend beyond the confines in terms of sample size and resource commitment described in these studies. Our research team recently received funding to assess the long-term audiometric and otologic status as well as cognitive development of a cohort of over 12,000 teenagers. They were previously enrolled in a randomized clinical trial that evaluated the safety and efficacy of an 11-valent pneumococcal conjugate vaccine for radiographically confirmed pneumonia when these children were less than 2 years of age. These research subjects lived in Bohol, Philippines, an island with a land mass of 4821 km2. The enormity of this research undertaking defied all cited conventional methods. A new methodology had to be developed that would allow local research personnel to conduct these otologic and audiometric assessments using portable testing equipment that could be transported to remote municipalities on the island of Bohol and allow periodic Internet/cloud uploads to facilitate ongoing monitoring of data quality and eventual data analysis from Manila and Denver. With the advances in audiometric testing equipment suitable for field deployment and cloud-based technology, our team hypothesized that it was possible to develop a methodology based on new technologies to conduct this large-scale population-based otoscopic and audiometric assessment in the Philippines following hands-on training provided by a U.S.
audiologist/otolaryngologist team for local field workers. Furthermore, we wished to demonstrate that this methodology is capable of allowing continued supervision and monitoring. This paper describes the otoscopic/audiometric methodologic aspects of the larger study through the analysis of a pilot sample. IRB approval for the overall study was obtained from the Research Institute of Tropical Medicine, Manila, Philippines, and the COMIRB at the University of Colorado School of Medicine, Aurora, USA. The methodology is described in 4 sections: the overall construct for data collection, storage and analysis; the training of local field workers; the promulgation of the "do no harm" precept; and continuous data quality monitoring and troubleshooting. The overarching goal was to have trained local nurses, as discussed below, collect both objective and subjective data in Bohol, Philippines, and to be able to analyze the data in the U.S. It was also to establish a mechanism to provide continued support for the local field workers and maintain data quality. This necessitated a robust training program for the field workers by a U.S. team, as described below. It also required a willingness of the U.S. investigators to answer all queries generated by the field workers in a timely manner, based on uploaded data, via email. All subjects underwent otoscopy using an operating otoscope. Once the entire circumference of the tympanic membrane was in full view, images were recorded using a video otoscope. This unit uses an SD card for image storage in the JPEG format. Each subject then underwent tympanometry and DPOAE recordings. This unit allows tympanometry to be recorded at 266 Hz and DPOAE recorded in 4 frequencies sequentially without removal of the probe during testing. Although the unit stored the raw data, the output available to the research nurses was a pass/fail binary outcome for both tympanometry and DPOAE. Data were stored within the unit until the time for download. Finally, each subject then underwent screening audiometry using noise cancelling headphones linked to a handheld audiometer operated through an Android device manufactured by HearScreen. This unit allows pure tone frequencies from 0.5 to 15 kHz to be tested and it also generates a pass/fail binary outcome, with a fail defined as >35 dB HL at any frequency in either ear. The Maximum Permissible Ambient Noise Level for audiometric testing with this system ranges from 27 dB at 500 Hz to 44 dB at 2000 Hz. The unit generates a green light when ambient sound levels are below the thresholds or a red light when the thresholds are exceeded; local nurses were trained to only conduct screening audiometry when the green light was on. Binary outcome and raw data are directly uploaded through cloud technology to HearScreen. All data with the exception of the screening audiometry data were downloaded at the local research facility first onto a personal desktop computer from their respective storage sources. Screening audiometry data were downloaded from the HearScreen cloud storage site. Backup files were made and were stored on an external hard drive. All downloaded data were then uploaded in batches to a REDCap database set up at the University of Colorado School of Medicine, Aurora, CO, USA (a minimal sketch of this batch upload is given at the end of this article). All otoscopic data, with the aid of tympanometric data, were classified into one of the following categories: normal, otitis media with effusion, myringosclerosis, perforation, healed perforation and retraction pocket/cholesteatoma.
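The screening audiometry decision rule and the ambient-noise gate described above can be sketched as follows; the intermediate MPANL value at 1000 Hz and the example thresholds are placeholders, as only the 500 Hz and 2000 Hz limits are quoted above.

```python
# Sketch of the screening logic: fail if any tested frequency in either ear exceeds
# 35 dB HL, and only test when ambient noise is below the maximum permissible level.
FAIL_THRESHOLD_DB_HL = 35
MPANL_DB = {500: 27, 1000: 35, 2000: 44}   # 1000 Hz limit is a placeholder

def ambient_ok(ambient_db_by_freq):
    """True (green light) only if ambient noise is below the limit at every frequency."""
    return all(ambient_db_by_freq[f] < limit for f, limit in MPANL_DB.items())

def screening_result(thresholds_left, thresholds_right):
    """Binary outcome: 'fail' if any threshold in either ear exceeds 35 dB HL."""
    worst = max(list(thresholds_left.values()) + list(thresholds_right.values()))
    return "fail" if worst > FAIL_THRESHOLD_DB_HL else "pass"

left = {500: 20, 1000: 25, 2000: 15, 4000: 30}    # hypothetical thresholds [dB HL]
right = {500: 45, 1000: 25, 2000: 20, 4000: 25}
print(screening_result(left, right))              # -> "fail" (45 dB HL at 500 Hz, right ear)
```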
For the sample described in this paper, the diagnoses were made by the first author. For the analysis of the complete dataset, U.S. otolaryngology advanced practice providers will be trained in the future to interpret the video-otoscopic findings. The training methodology included didactic and hands-on training of local nurses led by a U.S. otolaryngologist/audiologist team. The main objective of the nurses was to collect the data described; they themselves did not participate in diagnostic decisions. The only decisions made by the nurses related to referral to the local otolaryngologists and to full audiologic evaluation, as dictated by the research protocol. The training included teaching the nurses to remove cerumen using an operating head otoscope and to use all of the equipment described above. Maintenance and calibration of all equipment, as well as the storage and transfer of all data, were taught. Written standard operating procedures were developed to ensure standards are kept in case of personnel changes. They included referral criteria to Bohol Hearing Center and to the local otolaryngologist for otorrhea. Accommodations were made in the methodology to refer the research subjects to local healthcare professionals. The local field workers were instructed to refer subjects to Bohol Hearing Center for failed DPOAEs, tympanometry, and/or screening audiometry and to the local otolaryngologist for ear pathology. The nurses rotate their duties on a weekly basis, both to build proficiency through repetition and to minimize boredom. The supervisory nurse is responsible for coordination of all study subjects and for backing up all of the data on a daily basis. Weekly quality control monitoring is done by an onsite data manager, and periodic quality checks have been performed by one of the investigators at every site visit over the past year. Ongoing training of the field workers is achieved by the U.S. investigators monitoring the data quality through periodic REDCap database sampling. Queries generated by the field workers are resolved by the U.S. investigators via email. A U.S.
otolaryngologist/audiologist team provided a 1-day instruction for 5 Filipino nurses through didactic lectures on OM and basics in audiology as well as hands-on training for all otoscopic and audiometric procedures.On-the-job training continued for the subsequent 3 days.47 Filipino children were enrolled and the nurses were able to perform all procedures independently with decreasing need for assistance from the instructors during the course of the 4-day period.Raw data were reviewed and categorized on site at the local research facility by the team with the nurses for teaching purposes.The identical dataset once uploaded to the RedCap was again reviewed and categorized in Colorado once the US team returned home.Of the 94 ears in the cohort, cerumen sufficient to occlude the full peripheral view of the tympanic membrane was found in 40 ears.Cerumen in 6 ears could not be removed by the nursing team and these subjects were sent home with mineral oil.They returned the following week and all 6 ears were cleared of cerumen and completed the assessments.The diagnoses of the 94 ears were categorized in Table 1.Otoscopic findings of the 94 ears included: normal, otitis media with effusion, myringosclerosis, healed perforation, perforation and retraction pocket/cholesteatoma.Abnormal audiometric findings included: tympanogram, DPOAE and screening audiometry.Three of the four subjects with either perforation or retraction pocket/cholesteatoma failed either the screening tympanometry or OAE or both.One subject with a retraction pocket had normal tympanogram and DPOAE.None failed the screening audiometry.Per protocol the 4 subjects with abnormal audiometric findings were referred for formal audiogram.Two individuals were found to have chronic otorrhea despite being seen by the local otolaryngologist and have not had an audiogram at the time of preparation of this manuscript.The subject with a unilateral perforation was found to have a corresponding conductive hearing loss.The subject with a unilateral abnormal DPOAE was found to have normal hearing threshold bilaterally.The number of emails between the Filipino field workers and U.S. 
investigators were tabulated based on a unique incident or patient from November 2016 to April 2018. These email exchanges included 3 instrument breakages, 2 instrument calibration issues, 17 research protocol-related and 16 patient-related queries. All queries were resolved satisfactorily; some required more than 1 cycle of emails. Prior to this effort, most large-scale studies of CSOM and its sequelae in school-aged children in LMIC have either been conducted at otolaryngology clinics, or as cross-sectional studies in schools or preschools, and rarely in population-based studies from random samples or door-to-door surveys. In the otolaryngology clinic setting, standard methods were used for otoscopy and diagnostic audiometry. In the school-based studies, for the most part trained otolaryngologists conducted the otoscopy and screening audiometry was done in the schools. Diagnostic audiometry, when done, was rarely performed in the schools but most often at a referral hospital. Only our prior studies used tympanometry, but DPOAE was not used for audiometric screening. All of these studies utilized otolaryngologists and/or trained audiologists in the field to do the studies. While trained ear health research officers and nurses have been used in the past to conduct community- and school-based surveys of Aboriginal Australian children using tympanometers, voroscopes and video-otoscopes, followed by later specialist review in country, our study has extended these methodologies using more robust, appropriate technology that can be used in LMIC and other developing countries. In this study, we were able to use trained nurses to do most of these procedures, with the help of i) robust modern technology, ii) the ability to train nurses to record abnormalities on video-otoscopy and to automate DPOAE, tympanometry and screening audiometry results to pass or fail, and iii) the ability to collect all of these data in a digital manner and to download all of the data in real time onto an Internet-based database. This study affirms the robustness of several key components of the methodology. Local Filipino nurses with proper didactic and hands-on training were capable of recording still and video images on the first pass, which included cleaning 88% of the ears with cerumen. Furthermore, the nurses were able to use semi-automated audiometric equipment to obtain tympanograms, DPOAE and screening audiograms in all 47 subjects. The ability to transfer otoscopic and audiometric data, both in terms of data storage at the local research facility in the Philippines and uploading them to the REDCap database in Colorado, was determined to be both feasible and reliable. These features of the methodology allowed not only data analysis for the present study but ongoing monitoring of data quality for the full project. Although it is impossible to extrapolate the effects of the pneumococcal vaccine on the prevalence of CSOM and its disease burden in the large cohort of children at this time, the random sampling of 47 children suggests that both cerumen impaction and complications of OM might be higher than what one would expect in a U.S. sample in everyday clinical practice. The major limitation of the study lies in the duration of the didactic and on-the-job training for the nurses in preparation for the data collection phase of the study. It would have been ideal to spend more time on the training of the nurses, but resource limitations in terms of the time availability of the U.S.-trained personnel and
additional monetary needs to execute a longer training program precluded this option. In a global health research environment, the authors strongly assert that ongoing support in clinical decision-making, adherence to the research protocol and upkeep of research equipment has made this methodology sufficiently rigorous to go forward in evaluating the larger cohort. The other limitation of this methodology is its dependence on fairly expensive equipment, the necessity for continuous power to run much of the equipment, the necessity for good high-speed Internet access to upload data, especially video-otoscopic data, which on average ranges between 35 and 50 MB, and the necessity for a rapid response from an otolaryngologist to interpret video-otoscopic data. We did encounter occasional problems such as power outages, malfunction of the video otoscope and limited immediate availability of U.S. clinical personnel for troubleshooting. However, the major advantage of this methodology, now validated by over 4500 children having been screened in the last year with near-complete data collection on every child, outweighed these disadvantages. We propose that this methodology could be used, with minimal training of nursing staff, for larger-scale population-based studies that do not require intense intervention from busy local otolaryngologists in LMIC. There are only 3 otolaryngologists in Bohol serving a population of over 1.3 million, and our research methodology worked well in that scenario. A novel methodology has been developed and field-tested and is deemed sufficiently robust for assessing the otologic and audiometric status of a cohort of Filipino teenagers using advanced portable audiometric equipment and cloud technology that is currently deployed in a large-scale project. This methodology likely has applicability in other LMIC for large-scale population-based studies. This study was supported by the Bill & Melinda Gates Foundation and Colorado CTSA Grants UL1TR002535, KL2TR002534, and TL1TR002533. The contents of this report are solely the responsibility of the authors and do not necessarily represent the official views of their institutions or organizations or of the sponsors. The funders did not participate in any aspect of the study, including study conduct, data collection, analyses of the data or the write-up of the manuscript. The authors certify that they have no conflicts of interest.
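The batch upload to REDCap referred to in the Methods can be sketched with the standard REDCap record-import API; the URL, token and record field names below are placeholders, and chunking and error handling would be adapted to the project in practice.

```python
# Sketch of a batch import of records into a REDCap project over its HTTP API.
import json
import requests

REDCAP_URL = "https://redcap.example.edu/api/"   # placeholder
API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"         # placeholder

def upload_batch(records, chunk_size=100):
    """Import records (list of dicts keyed by REDCap field names) in chunks."""
    for start in range(0, len(records), chunk_size):
        payload = {
            "token": API_TOKEN,
            "content": "record",
            "format": "json",
            "type": "flat",
            "data": json.dumps(records[start:start + chunk_size]),
        }
        response = requests.post(REDCAP_URL, data=payload, timeout=60)
        response.raise_for_status()   # raise if the import was rejected

# Hypothetical record with made-up field names
upload_batch([{"record_id": "BOH-0001", "tymp_result": "pass", "dpoae_result": "fail"}])
```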
Objective: Large-scale otoscopic and audiometric assessment of populations is difficult due to logistic impracticalities, particularly in low- and middle-income countries (LMIC). We report a novel assessment methodology based on training local field workers, advances in audiometric testing equipment and cloud-based technology. Methods: Prospective observational study in Bohol, Philippines. A U.S. otolaryngologist/audiologist team trained 5 local nurses on all procedures in a didactic and hands-on process. An operating otoscope (Welch-Allyn®) was used to clear cerumen and view the tympanic membrane, images of which were recorded using a video otoscope (JedMed®). Subjects underwent tympanometry and distortion product otoacoustic emission (DPOAE) testing (Path Sentiero®), and underwent screening audiometry using noise cancelling headphones and a handheld Android device (HearScreen®). Sound-booth audiometry was reserved for failed subjects. Data were uploaded to a REDCap database. Teenage children previously enrolled in a 2000–2004 Phase 3 pneumococcal conjugate vaccine trial were the subjects of the trainees. Results: During 4 days of training, 47 Filipino children (M/F = 28/19; mean/median age = 14.6/14.6 years) were the subjects of the trainee nurses. After the training, all nurses could perform all procedures independently. Otoscopic findings by ears included: normal (N = 77), otitis media with effusion (N = 2), myringosclerosis (N = 5), healed perforation (N = 6), perforation (N = 2) and retraction pocket/cholesteatoma (N = 2). Abnormal audiometric findings included: tympanogram (N = 4), DPOAE (N = 4) and screening audiometry (N = 0). Conclusion: Training of local nurses has been shown to be robust and this methodology overcomes challenges of distant large-scale population otologic/audiometric assessment.
113
The breast cancer oncogene EMSY represses transcription of antimetastatic microRNA miR-31
The amplification of EMSY has been found in 17% and 13% of sporadic ovarian and breast cancers, respectively, and it is associated with a poor outcome. The EMSY gene maps to 11q13-11q14, a locus that harbors several known and potential oncogenic drivers frequently amplified in breast cancer, most notably in the estrogen-receptor-positive luminal subtype. EMSY was shown to silence BRCA2 transcriptional activity and localize to sites of repair after DNA damage. Although the precise cellular function of EMSY remains unknown, numerous lines of evidence suggest that EMSY plays a role in transcriptional regulation. First, the N terminus of EMSY has an evolutionarily conserved EMSY N-terminal domain that structurally resembles the DNA binding domain of homeodomain proteins. Second, EMSY binds chromatin-regulating factors such as HP1. Third, EMSY was found as a component of multiprotein complexes linked to transcriptional control in D. melanogaster. Furthermore, EMSY was involved in the repression of interferon-stimulated genes in a BRCA2-dependent manner. Finally, recent reports identify EMSY as part of the Nanog interactome as well as the SIN3B complexome. MicroRNAs are conserved small noncoding RNAs that function as key negative regulators of gene expression. Over the past years, a number of miRNAs have been found to behave as tumor suppressor genes or oncogenes. Classically, miRNAs are defined as tumor suppressor genes when they target an oncogene. Recent studies have provided evidence for widespread deregulation of miRNAs in cancer, leading to cell invasion, migration, and metastasis. Like other human genes, miRNA expression can be altered by several mechanisms, such as chromosomal abnormalities, mutations, defects in their biogenesis machinery, epigenetic silencing, or the deregulation of transcription factors. Here, we identify several miRNAs whose expression varies with EMSY expression levels. Among these, we find that miR-31, an antimetastatic microRNA involved in breast cancer, is repressed by EMSY. Chromatin immunoprecipitation experiments show that EMSY binds to the miR-31 promoter. We demonstrate that the TF ETS-1 recruits EMSY and that EMSY binding correlates with JARID1b/PLU-1/KDM5B occupancy at the target promoter. Altogether, our results identify EMSY as a regulator of miRNA gene expression and provide insights into the molecular mechanisms by which EMSY contributes to the initiation or progression of breast cancer. To determine the basis of EMSY's contribution to breast cancers, we asked whether EMSY can induce malignant transformation. First, we tested this by stably transfecting immortalized mouse fibroblast NIH 3T3 cells with the full-length human EMSY coding sequence. Figure 1A shows that EMSY overexpression significantly confers on NIH 3T3 fibroblasts the ability to form colonies in soft agar. Then, given that EMSY amplifications are observed predominantly in luminal breast tumors, we engineered the luminal breast cancer MCF-7 cell line, which harbors normal levels of EMSY, to stably overexpress EMSY and similarly tested the oncogenic activity of several clones using the soft-agar assay. We confirmed EMSY overexpression by western blotting and quantitative real-time PCR. Figure 1B shows that EMSY overexpression significantly enhances anchorage-independent growth of MCF-7 cells. This effect is unlikely to be due to an increase in proliferation, given that no differences in growth rate could be detected between the MCF-7 and MCF-7-EMSY cell lines. To further investigate the tumorigenic capacity of EMSY, we
set out to determine whether EMSY affects tumor growth in vivo. To this end, equal numbers of mock- and EMSY-expressing MCF-7 cells were orthotopically implanted into the mammary fat pad of athymic nu/nu mice, and the animals were monitored twice a week over 34 days. MCF-7 cells are poorly invasive and metastatic, but EMSY-transformed cells produced tumors that were first measurable after 12 days and continued to grow in size until the termination of the experiment. In contrast, no tumors were detected in MCF-7 control cells. Finally, EMSY was demonstrated to increase the metastatic potential of MCF-7 cells, given that mice injected in the tail vein with MCF-7-EMSY cells developed more lung micrometastases than those injected with control cells. Histological staining of the lungs was examined and micrometastases were quantified. Altogether, these results establish EMSY as a potent breast cancer oncogene in vitro and in vivo. In order to understand the mechanisms by which EMSY induces oncogenic transformation, and given its potential to regulate transcription, we sought to identify genes regulated by EMSY. We considered the possibility that EMSY may regulate the expression of critical miRNAs, given that certain miRNAs are associated with poor breast cancer outcome, similar to EMSY amplification. First, we performed an unbiased screen to identify miRNAs whose expression may vary with EMSY levels. Using a qPCR-based array, we profiled the expression of 88 miRNAs known or predicted to alter their expression during breast cancer initiation and/or progression in MCF-7 cells depleted of EMSY using small interfering RNA. We found 38 significantly deregulated miRNAs. Then, we asked whether these potential EMSY targets would also be deregulated in tumors where EMSY is amplified. Thus, we investigated miRNA expression within the Molecular Taxonomy of Breast Cancer International Consortium cohort of human breast cancer samples. The integrated analysis of genomic copy number and gene expression revealed an ER+ subgroup with cis-acting aberrations at the 11q13-11q14 locus and extremely poor outcome. In line with our previous findings, the 11q13-11q14 cis-acting group was enriched for EMSY amplification. EMSY copy-number alterations occur predominantly in ER+ tumors. The miRNA expression levels were available for 1,283 samples from this cohort. We compared EMSY-amplified versus neutral cases from the METABRIC data set and found that 12 miRNAs were significantly deregulated. Given that miR-31 was the only common target identified from these two approaches and that miR-31 has been previously reported as a microRNA involved in the suppression of metastasis in breast cancer, we considered the possibility that miR-31 is an EMSY target gene and that it may be part of the mechanism underpinning EMSY's oncogenic potential. Therefore, we decided to focus our interest on this microRNA. Next, we set out to further confirm the observation that miR-31 expression levels are significantly lower in EMSY-amplified versus neutral tumors. Given that the expression probe for EMSY on the Illumina HT12 v3 array was nonresponsive, we profiled miR-31 expression levels by qRT-PCR in a representative subset of 98 primary tumors from the METABRIC cohort. This analysis corroborates our previous findings, given that EMSY copy-number and expression levels were highly correlated. We found that EMSY expression strongly anticorrelated with miR-31 expression. Then, we further examined the relationship between EMSY and miR-31 expression levels.
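The kind of comparison described above (EMSY-amplified versus copy-number-neutral tumors, with multiple-testing correction, intersected with the knockdown screen hits) can be sketched generically as follows; this is an illustration under assumed inputs, not the statistical pipeline used in the study.

```python
# Generic sketch: per-miRNA differential expression between EMSY-amplified and neutral
# samples with Benjamini-Hochberg correction, then intersection with screen hits.
import pandas as pd
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def differential_mirnas(expr: pd.DataFrame, is_amplified: pd.Series, alpha: float = 0.05):
    """expr: samples x miRNAs expression matrix; is_amplified: boolean Series per sample."""
    pvals = pd.Series({
        mirna: mannwhitneyu(expr.loc[is_amplified, mirna],
                            expr.loc[~is_amplified, mirna],
                            alternative="two-sided").pvalue
        for mirna in expr.columns
    })
    reject, qvals, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return pd.DataFrame({"pval": pvals, "qval": qvals, "significant": reject})

# Hypothetical intersection with the siRNA knockdown screen hits:
# results = differential_mirnas(mirna_expr, amplified_mask)
# candidates = set(results.index[results.significant]) & screen_hits
```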
The expression of miR-31 was monitored by qRT-PCR in MCF-7 cells overexpressing EMSY in comparison to control cells. This analysis revealed that miR-31 levels were significantly reduced in cells stably overexpressing EMSY, whereas the expression of other miRNAs such as miR-181a-2 and miR-198 remained unchanged. Conversely, and consistent with these findings, miR-31 was markedly upregulated in EMSY-depleted MCF-7 cells, in comparison to miR-181a-2 and miR-198. Finally, we evaluated miR-31 levels in tumors obtained from the mammary fat pad experiment presented in Figure 1C. Figure S2D shows that miR-31 expression is lower in tumors generated from the MCF-7-EMSY cells than from MCF-7 control cells. Altogether, these results support an inverse correlation between EMSY and miR-31 expression levels. Given that miR-31 affects cell invasion and migration via its pleiotropic regulation of prometastatic target genes, we tested whether EMSY expression impacts on miR-31 target genes. To test this, we analyzed MCF-7 cells stably overexpressing EMSY for the expression of a set of validated miR-31 target genes. Figure 2E shows that MCF-7-EMSY cells display higher levels of transcripts reported to be under the control of miR-31 in comparison to control cells. This effect is specific, given that non-miR-31 target transcripts, namely B2M, CXCL12, and ALAS1, remain unaffected. Three miR-31 target genes, namely ITGA5, RDX, and RhoA, are crucial for the antimetastatic response of miR-31. Then, we set out to test whether EMSY overexpression impacts on the invasive and migratory capability of MCF-7 cells. Using a Boyden chamber assay, we observed that the overexpression of EMSY increases MCF-7 migratory capacity when compared to control cells. These findings are consistent with the results obtained from the mammary fat pad and tail vein injection experiments described in Figure 1. Then, to establish whether miR-31 mediates, at least in part, the effects of EMSY overexpression, we performed a "rescue" experiment wherein miR-31 was exogenously expressed in the context of EMSY overexpression. MCF-7 cells stably overexpressing EMSY were transfected with a vector encoding miR-31. Enforced expression of miR-31 significantly reversed EMSY-mediated induction of cell migration. Given that miR-31 has been described to affect invasion and migration, we tested whether miR-31 inhibition alters the initial acquisition of a transformed phenotype. We used antisense oligonucleotides to deplete MCF-7 cells of miR-31. We found that a loss of miR-31 significantly enhanced the invasion and migration capacity of MCF-7 cells. However, when miR-31 sponges were used to stably deplete miR-31 from MCF-7 cells, we observed that the resulting cells formed colonies in soft agar as efficiently as control MCF-7 cells. Thus, the loss of miR-31 from MCF-7 cells did not enhance transformation. These data indicate that EMSY overexpression induces MCF-7 cell transformation, and, in a second step, EMSY functions to downregulate miR-31, leading to the progression of the transformed phenotype. Importantly, these data are completely consistent with those reported by Valastyan et al.
and led us to propose a model for the EMSY-miR-31 interaction.Then, we asked whether the re-expression of miR-31 in MCF-7 cells stably overexpressing EMSY could abrogate EMSY-induced oncogenic activity using the soft-agar assay.Figure 3D shows that miR-31 re-expression profoundly reduces the ability of the MCF-7-EMSY cells to form colonies in soft agar.Next, having shown that EMSY and miR-31 levels inversely correlate in human breast cancers, we investigated whether the EMSY-miR-31 interaction is also seen in cell lines harboring amplification of the EMSY-11q13-11q14 locus.We used two such cell lines, MDA-MB-175 and MDA-MB-415, and confirmed that both have high EMSY and very low miR-31 levels in comparison to MCF-7 cells.Figure 3E shows that miR-31 was markedly upregulated in MDA-MB-175 cells upon EMSY depletion, consistent with our results in MCF-7 cells.Similar results were obtained in MDA-MB-415 cells.Then, we tested the oncogenic activity of these cells in vitro using the soft-agar assay.EMSY depletion significantly reduces the anchorage-independent growth of both cell lines.Moreover, increased expression of miR-31 phenocopies the effect of EMSY knockdown in these two EMSY-amplified cell lines.Finally, we monitored the invasion and migration abilities of MDA-MB-175 and MDB-MB-415 cells after the depletion of EMSY or overexpression of miR-31.EMSY depletion significantly reduced the invasion and migration abilities of the cell lines.Exogenous expression of miR-31 reduced the invasive and migrative rates of both cells lines.Therefore, we conclude that miR-31 expression phenocopies the effect of EMSY depletion.Altogether, these results indicate that the invasion and migration features of EMSY-amplified breast cancer cells are dependent on EMSY levels.These results also demonstrate that miR-31 is an important antagonist of EMSY’s function in breast cancer.EMSY is not listed as a putative miR-31 target, and its expression remained constant upon miR-31 transfection.In contrast, the depletion of EMSY led to increased levels of primary miR-31 transcripts suggesting that EMSY affects the transcription of the miR-31 gene rather than the processing of its transcripts.These observations prompted us to investigate whether EMSY directly represses transcription of miR-31 by binding to its promoter region.We identified the promoter of miR-31 using 5′ rapid amplification of complementary DNA ends experiments.ChIP analyses indicated that EMSY associated with the promoter of miR-31 but did not bind the regions upstream of two control miRNA genes, miR-181a-2 and miR-198.Furthermore, we found that RNA polymerase II occupancy on the miR-31 promoter increased upon the downregulation of EMSY, consistent with EMSY repressing transcription.Finally, to address whether EMSY amplification could further repress miR-31, as predicted by our previous results, we compared EMSY and RNA polymerase II occupancy between MCF-7 cells and MDA-MB-415 and MDA-MB-175 cells.EMSY occupancy on the miR-31 promoter was higher in cells with EMSY amplification, whereas RNA polymerase II occupancy was lower.Altogether, these observations indicate that EMSY associates with the promoter of miR-31 and represses its expression.To further understand the mechanisms by which EMSY silences miR-31, we sought to decipher how EMSY is recruited to the miR-31 promoter.Given that EMSY has no obvious DNA binding domain, we considered the possibility that EMSY is recruited to the promoter of miR-31 via a DNA binding TF.To probe this hypothesis, we 
tested whether BRCA2, a known binding partner of EMSY, may be involved in EMSY's recruitment. The depletion of BRCA2 had no effect on miR-31 expression, indicating that BRCA2 is not directly involved in the EMSY/miR-31 pathway. Then, we examined the DNA sequence upstream of the miR-31 transcription start site for TF binding sites. Figure 5A shows that putative binding sites for ETS family members can be found in the miR-31 regulatory region but not within the analogous regions of miR-198 and miR-181a-2, whereas a GATA1 site is present within miR-31 and miR-198. The position weight matrix for the ETS-1 TF binding motif was identified from the JASPAR database. The presence of the ETS-1 binding motif in the miRNA promoters was confirmed with the matchPWM function in the Biostrings package with a minimum matching score of 100%. Importantly, ETS-1 is a classic DNA binding protein implicated in transcriptional regulation, and its expression correlates with a higher incidence of metastasis and poorer prognosis in breast and ovarian carcinoma. In order to validate the TF binding site predictions, we depleted MCF-7 cells of each of the TFs with a putative binding site in the miR-31 promoter and assessed the expression of miR-31. Figures 5B and S5A show that, of the three TFs, only ETS-1 downregulation led to the specific upregulation of miR-31. Importantly, it did so without affecting the expression of miR-198 and miR-181a-2. To further confirm these results and validate the predictions from the bioinformatic analyses, we tested the ETS-1 motifs in the miR-31 cis-regulatory element in functional transcription assays. We constructed reporter vectors containing the miR-31 promoter sequence upstream of the firefly luciferase cDNA using the pGL4 plasmid. Figure 5C shows promoter activity for the miR-31 promoter region containing the 2 ETS-1 motifs. Site-directed mutagenesis of each of the ETS-1 sites led to a significant increase of promoter activation. These results demonstrate that each of the ETS-1 binding sites is relevant to miR-31 expression and that ETS-1 functions as a repressor in this context. The mutation of both ETS-1 binding sites did affect miR-31 promoter activity more than each of the single mutants. Importantly, we confirmed by ChIP that the ETS-1 protein binds directly to the miR-31 promoter within the region containing the predicted ETS binding site. This analysis also showed that ETS-1 did not bind to other regions within the miR-31 promoter, nor did it bind to the promoters of miR-181a-2 or miR-198. Furthermore, depletion of ETS-1 resulted in reduced binding of both ETS-1 and EMSY to the miR-31 promoter without affecting the levels of EMSY protein in MCF-7 cells. Notably, downregulation of EMSY does not affect the binding of ETS-1 to the miR-31 promoter, supporting a model in which ETS-1 recruits EMSY to the miR-31 promoter. EMSY has been found to be a component of complexes involved in transcriptional control that contain, among other proteins, members of the histone H3K4me3 demethylase family. Given that JARID1b/PLU1/KDM5B has been shown to be expressed in 90% of invasive ductal carcinomas and that KDM5B represses transcription by demethylating H3K4, we examined whether KDM5B associates with EMSY. Figure 5E shows that KDM5B coimmunoprecipitates with endogenous EMSY from MCF-7 cell extracts. Moreover, ChIP experiments showed that KDM5B binds to the miR-31 promoter at the same position as EMSY and ETS-1. Consistent with its role as a histone demethylase, the depletion of KDM5B resulted in an increase of
H3K4 trimethylation on the miR-31 promoter and a concomitant increase in the expression of miR-31.Depletion of KDM5B did not affect the association of either EMSY or ETS-1 to the miR-31 promoter but did affect its own binding in that region.Finally, we asked whether ETS-1 and KDM5B contribute to EMSY’s capacity to increase the migration of MCF-7 cells.Using the Boyden chamber assay, we observed that the increased migrative ability of MCF-7 cells overexpressing EMSY is significantly reduced when cells are depleted for ETS-1, KDM5B, or a combination of both ETS-1 and KDM5B.Altogether, these results support a model whereby ETS-1 directly binds to an ETS binding motif within the miR-31 promoter to recruit EMSY and KDM5B to repress transcription.The EMSY gene is amplified and overexpressed in a substantial proportion of sporadic cases of poor prognosis breast cancer.However, determining the direct contribution of EMSY to breast cancer has been difficult because the gene resides within the 11q13-11q14 locus, containing a cassette of genes, including several known and potential breast cancer drivers.This has meant that EMSY’s oncogenic contribution has been difficult to assess.Our previous studies have shown that the EMSY protein interacts with BRCA2 and has a role in chromatin remodeling.The data presented here show that EMSY itself possesses the hallmarks of an oncogene: ectopic expression of EMSY transforms NIH 3T3 fibroblasts, confers anchorage-independent growth to MCF-7 cells in soft agar, and potentiates tumor formation and metastatic features in vivo.In light of the recent demonstration that amplification of the 11q13-11q14 locus is associated with a particularly high-risk outcome, the results presented here support the conclusion that EMSY is not simply a passenger of the locus but rather a significant oncogenic driver.Our results also identify a subset of miRNAs whose expression is affected by EMSY levels in primary breast tumor samples.One of them, miR-31, is a key regulator of breast cancer metastasis.Examination of the large METABRIC cohort highlights a negative correlation between EMSY and miR-31 expression.In breast cancer cell lines, the overexpression of EMSY reduces the expression of the miR-31 gene, increases the expression of miR-31 target genes, and induces invasion and migration.We also find that loss of miR-31 did not enhance transformation in MCF-7 cells, suggesting a two-step process, wherein EMSY overexpression induces cell transformation and then EMSY functions to downregulate miR-31, leading to the progression of the transformed phenotype.Moreover, restoration of miR-31 significantly inhibited the invasion, migration, and colony-formation abilities of cells overexpressing EMSY or harboring EMSY amplification, a result which phenocopied the effects of EMSY depletion in these cells.Thus, the regulation of the miR-31 pathway by EMSY can explain, at least in part, its oncogenic behavior and association with poor prognosis.Moreover, miR-31 is a critical target of EMSY in breast cancer.Our results also identify an EMSY pathway in which the BRCA2 protein does not seem to contribute.Therefore, the work presented here provides evidence for the existence of at least two distinct EMSY activities: one where EMSY acts in concert with BRCA2, and one where EMSY functions in a BRCA2-independent manner.Our data also provide mechanistic insights into the function of EMSY at the molecular level.EMSY is recruited to the miR-31 promoter by the ETS-1 TF, and, together with the KDM5B histone 
demethylase, these factors repress miR-31 expression.The ability of EMSY to function via the recruitment of the chromatin regulator KDM5B is consistent with its previously suspected involvement in chromatin regulation.Furthermore, it is interesting to note that two components of this pathway, namely ETS-1 and KDM5B, have themselves been implicated in breast cancer.Of particular relevance is the fact that ETS-1 is defined as a candidate breast cancer oncogene involved in the regulation of the expression of genes involved in tumor progression and metastasis and that the expression of ETS-1 correlates with higher incidence of metastasis and poorer prognosis in breast and ovarian carcinoma.The second factor implicated in the pathway, which we identified as KDM5B, is an H3K4me2- and H3K4me3-specific demethylase belonging to the JmjC-domain-containing family of histone demethylases.KDM5B has been shown to be overexpressed in several cancers, such as breast, prostate, and lung cancer, and is required for mammary tumor formation in xenograft mouse models.The pathway identified here may not provide the unique and complete molecular explanation for the association between EMSY amplification and poor prognosis.EMSY’s contribution to tumorigenesis may not be solely due to the silencing of miR-31, given that other, as yet unidentified, EMSY target genes may also play a role.As such, the identification of additional EMSY mRNA and miRNA targets will be important for understanding the complex mechanisms underlying EMSY’s role in breast cancer.Moreover, the EMSY/ETS-1/KDM5B pathway may not be the only route for miR-31 transcriptional regulation.Indeed, recent reports have identified other genetic and epigenetic axes influencing miR-31 expression.For example, in adult T cell leukemia, Polycomb proteins have been shown to contribute to miR-31 downregulation.Interestingly, a functional interplay between KDM5B and Polycomb proteins has recently been implicated in mouse development.In addition, another epigenetic modification, DNA methylation, is also involved in decreasing miR-31 expression in breast cancer.The interplay between EMSY and these factors, in addition to other targets and/or signals such as estrogen, and how they may contribute to malignant progression are of significant interest for future work.The data presented here, which are derived from multiple different molecular and cellular assays as well as animal studies together with correlative studies involving human breast cancer patients, collectively and consistently support the conclusion that EMSY represses transcription of the noncoding miR-31 gene.This pathway represents one mechanism by which EMSY exerts its oncogenic and metastatic potential in breast cancer, resulting in the poor prognosis of EMSY-amplified breast tumors.Given the convergence of four breast-cancer-associated genes with overlapping biological roles, this pathway offers a number of avenues for therapeutic intervention.The details of all siRNAs, plasmids, synthetic RNAs, primers, and antibodies used in this study are provided in the Supplemental Information and/or are available upon request.Mouse NIH 3T3 and human breast cancer cell lines MCF-7, MDA-MB-415, and MDA-MB-175 were purchased from ATCC and cultured in Dulbecco’s modified Eagle’s medium supplemented with 10% fetal bovine serum and 1% penicillin and streptomycin and grown at 37°C under 5% CO2.MCF-7 cells were harvested by rinsing with PBS, and total extracts were incubated at 4°C overnight with the antibodies in IPH
buffer.Standard procedures were used for western blotting.For control western blotting of RNAi-treated cells, antibodies against EMSY, ETS-1, KDM5B, and β-tubulin were used.Finally, membranes were incubated with secondary antibodies conjugated with horseradish peroxidase, and signals were visualized with enhanced chemiluminescence.Images were scanned and processed with Adobe Photoshop CS, and slight contrast was applied equally across entire images.2.5 × 10⁴ cells were plated in serum-free media in the upper chamber with either non- or Matrigel-coated membranes for transwell migration and invasion assays, respectively.The bottom chamber contained DMEM with 10% FBS.After 24–48 hr, the bottom of the chamber insert was fixed and stained with crystal violet.Cells on the stained membrane were counted under a microscope.Each membrane was divided into four quadrants, and an average from all four quadrants was calculated.Each assay was performed in biological triplicate.Statistical analysis was carried out with Microsoft Excel or in the R statistical computing language in order to assess differences between experimental groups.Statistical significance is expressed as a p value.In this study, 5-week-old female athymic nu/nu mice housed under specific pathogen-free conditions were used.Mice were anesthetized and ovariectomized 2 days before the implantation of 3.0 mm pellets containing 17β-estradiol.Cells were injected 2 days after pellet implantation.For tumor growth analysis, 4–5 × 10⁶ cells were injected into the mammary fat pads of mice.Tumor development was monitored twice a week, and tumor width and length were measured.Tumor volume was estimated according to the formula V = π/6 × L × W².Mice were sacrificed 15–30 days after injection, and then tumors were excised and weighed.For the experimental metastasis assay, 1–2 × 10⁶ cells were injected into the tail vein.Metastases were examined later and analyzed macroscopically and by hematoxylin and eosin tissue staining.At least ten animals were tested for each experimental condition.All experiments with mice were approved by the Bellvitge Biomedical Research Institute Animal Care and Use Committee.Total RNA and enriched miRNA fractions were obtained with miRNeasy Mini and mirVANA kits according to the manufacturer’s instructions.DNase treatment was carried out in order to remove any contaminating DNA.RNA was reverse transcribed with a miScript Reverse Transcription Kit according to the manufacturer’s instructions.The real-time qRT-PCR for the quantification of miRNAs as well as pri-miRNAs was carried out with miScript Primer Assays and the miScript SYBR Green PCR Kit on an ABI 7500 Fast Real-Time PCR System.Transcripts of RNU5A and RNU1A small RNAs were also amplified in order to normalize the levels of miRNAs.All reverse transcriptions and no-template controls were run at the same time.qRT-PCR was used to determine the expression levels of EMSY, BRCA2, ETV4, GATA1, ETS-1, KDM5B, KDM5A, FZD3, MMP-16, RHOA, RDX, ITGA5, M-RIP, CXCL12, B2M, and ALAS1.B2M was amplified as an internal control in order to monitor the amount and integrity of the cDNA.All reactions were performed in duplicate.The ΔΔCt method was used for analysis.Fold change expression in a sample = 2^(−ΔΔCt) (a short worked sketch of this calculation and of the tumor-volume formula is given at the end of this section).Primer sequences are available upon request.Gene expression changes are presented as relative fold change compared to the values in empty vector cells.MCF-7 cells were analyzed for the presence and differential expression of a panel of 88 cancer-related miRNAs with cancer RT2
miRNA PCR arrays according to the manufacturer’s instructions.Data analysis was performed with the web-based software package for the miRNA PCR array system.In order to examine the relationship between EMSY amplification, miR-31 expression levels, and the expression of miR-31 targets, we employed the METABRIC cohort of ∼2,000 primary breast tumors for which paired Affymetrix SNP 6.0 copy-number data and Illumina HT-12 expression data were available.Data were processed and summarized as described by Curtis et al.The genotype and expression data are available at the European Genome-phenome Archive, which is hosted by the European Bioinformatics Institute, under accession numbers EGAS00000000083.The raw noncoding RNA microarray data are available under accession number EGAS00000000122.Here, we report on a subset of 1,283 cases for which we also had miRNA expression profiles based on a custom Agilent platform.In brief, miRNA data were processed by iteratively removing the two most extreme outliers followed by cubic spline normalization.Using these data, we evaluated the relationship between EMSY amplification and miR-31 expression levels as well as the association between these events and clinical outcome.EMSY copy-number states were determined as previously described.miR-31 expression levels were binned into low and high states on the basis of the lower and upper 15% of expression values, respectively.Statistical significance was evaluated with the log rank test.All analyses were performed in the R statistical computing language.The Wilcoxon rank-sum test was used to evaluate whether expression levels varied significantly depending on EMSY copy-number state.A representative subset of 98 primary tumors were selected from the METABRIC cohort in order to include cases with EMSY amplification, gain, neutral copy number, or heterozygous deletion.Cases were also selected to be copy-number neutral for IFNE1 and MTAP, which flank the miR-31 locus.Then, qRT-PCR was performed in order to assay EMSY and miR-31 expression levels as well as the expression of the control miRNAs, miR-191, and miR-93.Triplicate aliquots of cDNA were subjected to real-time qPCR on the ABI PRISM 7900HT system.Relative expression values accounting for differences in amplification efficiency were calculated by automated software using the linear regression of a standard titration curve included for each plate.Expression was normalized for each sample by dividing the relative expression of each gene by the geometric mean of the relative expression values of multiple internal reference genes.RACE was carried out with an GeneRacer RLM-Race Kit according to the manufacturer’s instructions.In brief, cDNAs were made from total RNA with random primers according to the standard protocols.The 5′ ends of the miRNAs were amplified with a gene-specific primer.Amplicons were reamplified successively with nested-gene-specific primers.All PCRs were performed with Platinum TaqDNA Polymerase High Fidelity.Amplified PCR products were purified and cloned into pCR4-Topo vectors according to TOPO-TA cloning protocols, sequenced, and analyzed by BLAST.Human miR-31 promoter was amplified by PCR from genomic DNA of MCF-7 cells.PCR products were digested and ligated into pGL4.10.Mutations were introduced by site-directed mutagenesis of the ETS-1 binding sites according to the manufacturer’s instructions for site-directed mutagenesis.The pTK-luc vector was cotransfected with the pGL4 vectors in order to normalize the transfection efficiency.After 48 hr 
of transfection, the promoter activity was measured with a Dual-Glo Luciferase Assay System according to the manufacturer’s protocol.Chromatin was prepared from MCF-7 cells, and immunoprecipitations were performed as described previously and with the ChIP-IT High Sensitivity Kit according to the manufacturer’s instructions.Chromatin was immunoprecipitated with 2 μg of specific antibodies.Reverse crosslinking of DNA was followed by DNA purification with the ChIP DNA Clean & Concentrator kit.The amount of DNA immunoprecipitated with each antibody was measured in duplicate by qPCR.Primer sequences are listed in Supplemental Experimental Procedures.E.V. conceived and coordinated the study, designed and carried out experiments, interpreted data, assembled figures, and wrote the manuscript.T.K. designed experiments, interpreted data, and wrote the manuscript.C. Curtis, A.G., and C. Caldas performed experiments, analyzed data, and edited the manuscript.V.D. and M.E. contributed to animal work.A. Villanueva and A. Vidal performed the pathological analysis.S.R. contributed to TF binding site identification.S.A. provided access to tumor data.
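The fold-change and tumor-volume calculations quoted in the Experimental Procedures (2^(−ΔΔCt) and V = π/6 × L × W²) can be illustrated with a minimal Python sketch.This is not the authors’ code; the Ct values, the gene, and the reference-RNA choice below are placeholders used only to show the arithmetic, not data from this study.

import math

def delta_delta_ct(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Fold change by the 2^(-ddCt) method: dCt = Ct(target) - Ct(reference);
    ddCt = dCt(sample) - dCt(control)."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

def tumor_volume(length_mm, width_mm):
    """Ellipsoid approximation used above: V = pi/6 * L * W^2 (mm^3)."""
    return math.pi / 6.0 * length_mm * width_mm ** 2

# Hypothetical numbers, for illustration only: miR-31 Ct in EMSY-overexpressing
# cells vs empty-vector control cells, each normalized to a small-RNA reference.
fold = delta_delta_ct(ct_target_sample=27.4, ct_ref_sample=22.1,
                      ct_target_control=25.0, ct_ref_control=22.0)
print(f"miR-31 fold change vs control: {fold:.2f}")   # < 1 indicates repression

print(f"Tumor volume for L = 9 mm, W = 6 mm: {tumor_volume(9, 6):.0f} mm^3")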
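The ETS-1 motif search described earlier was performed in R with the JASPAR position weight matrix and the Biostrings matchPWM function at a 100% minimum matching score.The sketch below shows the same kind of scan in Python; the toy matrix, the GGGGAT consensus it implies, and the example promoter fragment are assumptions for illustration only, not the actual JASPAR ETS-1 matrix or the miR-31 promoter sequence.

# Toy position frequency matrix for an ETS-like core (columns = motif positions).
# Rows: counts for A, C, G, T. A real analysis would use the JASPAR ETS-1 entry.
PFM = {
    "A": [ 2,  1,  0,  0, 20,  0],
    "C": [ 6,  2,  0,  0,  0,  2],
    "G": [10, 15, 20, 20,  0,  3],
    "T": [ 2,  2,  0,  0,  0, 15],
}
WIDTH = len(PFM["A"])
MAX_SCORE = sum(max(PFM[b][i] for b in "ACGT") for i in range(WIDTH))

def window_score(window):
    # Sum the matrix count for the observed base at each position.
    return sum(PFM[base][i] for i, base in enumerate(window))

def scan(promoter, min_fraction=1.0):
    """Return (position, window) pairs scoring >= min_fraction of the best
    possible score; min_fraction=1.0 mimics a 100% minimum matching score."""
    hits = []
    for i in range(len(promoter) - WIDTH + 1):
        window = promoter[i:i + WIDTH]
        if set(window) <= set("ACGT") and window_score(window) >= min_fraction * MAX_SCORE:
            hits.append((i, window))
    return hits

# Hypothetical promoter fragment containing one perfect-consensus site.
promoter = "TTACCAGGGGATCCATTAGC"
print(scan(promoter))   # -> [(6, 'GGGGAT')]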
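For the METABRIC analysis described above (miR-31 expression binned into low and high states from the lower and upper 15% of values, survival compared with the log-rank test, and expression versus EMSY copy-number state tested with the Wilcoxon rank-sum test), a rough Python approximation of the R workflow might look like the following.The column names and the use of the lifelines and SciPy packages are assumptions; the original analysis was performed in R.

import numpy as np
import pandas as pd
from scipy.stats import ranksums
from lifelines.statistics import logrank_test

# df is assumed to hold one row per tumor with columns:
# 'mir31_expr', 'emsy_cn_state', 'surv_months', 'event' (1 = death observed).
def bin_mir31(df, frac=0.15):
    lo, hi = df["mir31_expr"].quantile([frac, 1 - frac])
    out = df.copy()
    out["mir31_group"] = np.where(out["mir31_expr"] <= lo, "low",
                         np.where(out["mir31_expr"] >= hi, "high", "mid"))
    return out

def compare_survival(df):
    low = df[df["mir31_group"] == "low"]
    high = df[df["mir31_group"] == "high"]
    res = logrank_test(low["surv_months"], high["surv_months"],
                       event_observed_A=low["event"], event_observed_B=high["event"])
    return res.p_value

def expression_vs_copy_number(df):
    # Wilcoxon rank-sum test of miR-31 expression between copy-number states.
    amplified = df.loc[df["emsy_cn_state"] == "amplified", "mir31_expr"]
    neutral = df.loc[df["emsy_cn_state"] == "neutral", "mir31_expr"]
    return ranksums(amplified, neutral).pvalue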
Amplification of the EMSY gene in sporadic breast and ovarian cancers is a poor prognostic indicator. Although EMSY has been linked to transcriptional silencing, its mechanism of action is unknown. Here, we report that EMSY acts as an oncogene, causing the transformation of cells in vitro and potentiating tumor formation and metastatic features in vivo. We identify an inverse correlation between EMSY amplification and miR-31 expression, an antimetastatic microRNA, in the METABRIC cohort of human breast samples. Re-expression of miR-31 profoundly reduced cell migration, invasion, and colony-formation abilities of cells overexpressing EMSY or harboring EMSY amplification. We show that EMSY is recruited to the miR-31 promoter by the DNA binding factor ETS-1, and it represses miR-31 transcription by delivering the H3K4me3 demethylase JARID1b/PLU-1/KDM5B. Altogether, these results suggest a pathway underlying the role of EMSY in breast cancer and uncover potential diagnostic and therapeutic targets in sporadic breast cancer. © 2014 The Authors.
114
Innate Recognition of Intracellular Bacterial Growth Is Driven by the TIFA-Dependent Cytosolic Surveillance Pathway
Innate recognition of pathogen-associated molecular patterns, invariant molecules broadly present in microbes yet different from self, alerts the host to a microbial presence and, depending on the context in which they are recognized, elicits an immune response that is commensurate with the microbial threat presented.The cellular compartment in which the microbial product is sensed is particularly important, as contamination of the cytosol serves as an indicator for virulence, often eliciting a more robust response than the same PAMP sensed on the cell surface.The appropriate ranking of microbial threats is especially critical in the epithelium, which forms the physical and functional barrier between the host and external environment.Intestinal epithelial cells, through mechanisms that remain poorly defined, maintain immune homeostasis with trillions of commensal microbiota, yet mount robust inflammatory responses to pathogenic infections.In this respect, IECs act as frontline sentinels for invasive pathogens.A classic model for invasive intestinal pathogens is Shigella flexneri, a foodborne bacterium that invades the colonic epithelium and causes shigellosis in humans.Shigella crosses the epithelial barrier via M cells, then invades IECs basolaterally via its type III secretion apparatus.Following internalization, Shigella escapes the entry vacuole and readily multiplies in the cytoplasm, using actin-based motility to spread to adjacent cells.The innate response to Shigella is characterized by massive nuclear factor κB-driven production of the chemokine interleukin-8, which recruits neutrophils to control the infection at the intestinal level, but also contributes to immunopathogenesis characteristic of shigellosis.Most of the IL-8 originates from infected IECs and is believed to depend entirely on intracellular recognition of Shigella by the peptidoglycan sensor NOD1.While evidence that invasive Shigella activates NOD1 is abundant, whether additional pathways are also required to drive NF-κB activation is unclear.However, the lingering Shigella-induced cytokine responses in mice deficient for RIP2, the adaptor essential for both NOD1 and NOD2 signaling, hint at the existence of additional NF-κB-activating sensors for invasive Shigella.We recently demonstrated that a metabolic intermediate in lipopolysaccharide biosynthesis, heptose-1,7-bisphosphate, represents a new PAMP specific to Gram-negative bacteria.Host recognition of HBP requires its liberation from the bacterial cytosol, which can occur through active release by Neisseria species or, in the case of enteric bacteria, through extracellular or intraphagosomal bacteriolysis.Sensing of liberated HBP occurs within the cytosol following internalization by endocytosis and requires the TRAF-interacting forkhead-associated protein A as the central mediator of a cytoplasmic surveillance pathway specific for HBP.Contamination of the cytosol with HBP induces TIFA phosphorylation-dependent oligomerization, which recruits and activates the E3 ubiquitin ligase TRAF6 to initiate an NF-κB-dependent pro-inflammatory transcriptional response.Although epithelial cells can internalize extracellular HBP, the requirement for cytosolic sensing indicated that the real function of the TIFA pathway may lie in surveilling the cytosol for intracellular bacteria.However, whether the TIFA pathway plays a role in detecting cytosol-invasive bacteria is unclear.Here, we report that the TIFA cytosolic surveillance pathway constitutes a NOD-independent mechanism for
detecting invasive Gram-negative bacteria.Whereas NOD1 mediated bacterial sensing during the initial breach of the cytoplasm, TIFA conferred the ability to detect replicating bacteria in the host cytosol.Both invasive Shigella and a vacuole-escaping Salmonella mutant released HBP during cytosolic growth and triggered a dynamic TIFA-dependent inflammatory response that gauged the rate of intracellular bacterial proliferation.These findings define TIFA as an innate sensor of intracellular bacterial growth and extends our understanding of cytosolic pathogen recognition beyond the NOD proteins.We hypothesized that TIFA had a role in sensing cytosol-invasive bacteria in IECs and chose Shigella as a model pathogen with which to test this hypothesis.Considering the abundance of literature on NOD1-dependent detection of Shigella, we began by definitively assessing whether NOD1 and NOD2 could account for the entirety of the IL-8 response to invasive Shigella.We generated human HCT116 colonic epithelial cells in which either NOD1, NOD2, or both these sensors were knocked out via CRISPR/Cas9.Surprisingly, a substantial portion of the IL-8 produced 6 hr after infection was independent of both NOD1 and NOD2.Knock down of MyD88 confirmed that the NOD-independent IL-8 was not derived from TLRs.In contrast, there was a dramatic decrease in Shigella-induced IL-8 in HCT cells knocked out for TIFA.TIFA KO cells were not compromised in their response to the NOD1 agonist C12-iE-DAP or the cytokine tumor necrosis factor alpha.TIFA was also essential for the inflammatory response to HBP and invasive Shigella in HeLa and T84 colonic epithelial cells.Transcriptional analysis revealed that the TIFA response was most apparent at late time points, suggesting a kinetic requirement to TIFA activation.To ascertain whether TIFA signaling was independent of NOD1, we conducted small hairpin RNA knock down of TIFA or NOD1 in NOD1 or TIFA knockout cells, respectively.In each case, depletion of both sensors further reduced IL-8 production, indicating that TIFA and NOD1 signal independently.We then asked whether the TIFA deficiency affected Shigella-induced NF-κB activation.Phosphorylation of transforming growth factor β-activated kinase 1, IκBα, and nuclear translocation of NF-κB p65 induced by Shigella was markedly reduced in TIFA KO cells.Extracellular stimulation with soluble bacterial lysate or cell-free supernatant derived from Shigella did not induce significant TIFA-dependent NF-κB activation or IL-8 production confirming that TIFA was not activated by extracellular HBP.In line with this, all cell lines were refractory to non-invasive Shigella BS176.To determine if the TIFA response was driven by the bacterial metabolite HBP, we generated S. 
flexneri strains deleted for hldE, the gene that synthesizes HBP, or waaC, a gene downstream of HBP in the ADP-heptose biosynthesis pathway.The ΔwaaC mutant is a control for the LPS phenotype of the ΔhldE mutant because both have the same truncated “deep-rough” LPS structure and thus differ solely by the ability to synthesize heptose metabolites downstream of HldE.Strikingly, HBP-deficient Shigella ΔhldE failed to elicit IL-8 production from HCT cells despite a similar bacterial load to Shigella ΔwaaC.These results demonstrate that TIFA acts independently of NOD1 and NOD2 to sense heptose metabolites delivered to the cytosol by invasive Shigella.To further explore the pathogenic signature required to activate TIFA, we infected wild-type or TIFA KO cells with bacteria that establish their niche in differing cellular compartments.Whereas Shigella immediately escapes into the cytoplasm, Salmonella enterica Typhimurium replicates in the phagosome, using a T3S to translocate effector proteins and maintain a stable bacteria-containing vacuole.We found that the potent TIFA-dependent NF-κB response apparent during Shigella infection was absent during infection with Salmonella even though soluble lysates from both species had comparable levels of TIFA-stimulating activity when introduced into permeabilized cells.Loss of the effector SifA causes vacuole rupture with concomitant release of Salmonella into the cytosol.Consistent with TIFA as a sensor of cytosolic bacteria, Salmonella ΔsifA induced significantly more TIFA-dependent NF-κB activation and TIFA oligomerization than wild-type bacteria.IECs express endogenous levels of TLR5, confounding analysis during infections with flagellated bacteria like Salmonella.Indeed, knock down of MyD88 markedly amplified the TIFA-dependent difference in NF-κB induced by Salmonella ΔsifA.Moreover, allowing the infection to proceed for 20 hr when some wild-type Salmonella enter the cytoplasm further uncoupled TIFA-mediated Salmonella recognition from that which occurs through TLR5.To determine if invasion alone was sufficient to activate TIFA, we considered whether invasive Gram-positive bacteria, which are naturally heptose-deficient, initiated a similar response.Infection with Listeria monocytogenes did not elicit a TIFA response.However, HBP could recapitulate the Gram-negative bacterial response, because repeating the L.m infections in the presence of HBP induced a dramatic TIFA-driven NF-κB response without affecting interferon-beta production.Together, these observations suggest TIFA is activated by Gram-negative bacteria that escape the vacuole and identifies HBP as an important pro-inflammatory mediator during infections with invasive Gram-negative bacteria.NOD1 and NOD2 are localized to cell membranes, meaning they are positioned to rapidly respond to invading microbes at the site of bacterial entry.Evidence suggests that the initial breach of the vacuolar membrane introduces peptidoglycan fragments into the cytosol that activate the NODs.Indeed, NOD1 is recruited to Shigella entry sites and interacts with the adaptor RIP2 within minutes of bacterial invasion.Considering the TIFA-independent p-IκBα and IL8 induction within 60 min of Shigella infection, we hypothesized that NOD1 and TIFA are activated at different stages of infection: NOD1 responding first to vacuolar escape and TIFA responding later to bacteria free in the cytosol.Consistent with this hypothesis, nuclear translocation of p65 NF-κB was entirely NOD1-dependent and TIFA-independent within 60 
min of infection.Strikingly, this phenotype was reversed at later time points, with nuclear p65 being TIFA-dependent and NOD1-independent by 2 and 4 hr of infection.This temporal pattern was also reflected in p-IκBα levels during infection.Nuclear p65 was absent in cells deficient in both NOD1 and TIFA, confirming that both pathways are independent and are cumulatively responsible for bacterial sensing.In addition to NF-κB, invasive Shigella activates the stress response kinase SAPK/JNK.Interestingly, inactivation of both TIFA and NOD1 pathways was required to eliminate Shigella-induced p-SAPK/JNK.Given that both NF-κB and JNK control pro-inflammatory gene expression, we next analyzed the kinetics of Shigella-induced IL8 transcription.Whereas NOD1 mediated the initial wave of IL8 induction, the robust and sustained increase in IL8 beginning 90 min after infection was almost entirely driven by TIFA and is reflected by the total IL-8 protein level being decidedly TIFA-dependent by 4 hr of infection.If the NOD1 pathway is sensing the initial breach of the cytoplasm, then using a higher MOI would shift IL-8 production to be more NOD1-dependent, as a higher dose of NOD1-agonist would enter cells.Indeed, only at high doses of bacteria was there a significant defect in NOD1-dependent IL-8 production.In contrast, a reduction in TIFA-driven IL-8 was observed at all MOIs examined, with the phenotype being more apparent at lower MOIs.Shigella induces mitochondrial cell death in non-myeloid cells; NOD1-driven NF-κB signaling is important in counterbalancing this necrotic response.We found that TIFA activation is also important for survival, as cells deficient in both NOD1 and TIFA demonstrate increased cell death during infection.Together, these results suggest that the NOD1 and TIFA pathways are not redundant and instead represent separate defense mechanisms that are activated temporally during infection.We next analyzed TIFA expression in the human intestinal tract.TIFA was expressed at high levels in the gastrointestinal epithelium when considered relative to NOD1.TIFA has been identified as a microbial-inducible gene in other vertebrate species, and the TIFA mRNA sequence contains A+U rich elements in the 3′UTR common to short-lived cytokines and proto-oncogenes.Therefore, considering the temporal cascade of Shigella-induced NF-κB activation, we asked whether bacterial recognition by NOD1 may induce TIFA expression and thereby amplify the inflammatory response.Indeed, stimulation of NOD1, NOD2, TLR5, or TIFA itself leads to upregulation of TIFA mRNA in HCT cells.Moreover, overexpression of NOD1 increased TIFA expression in a dose-dependent manner in HEK293T cells and was amplified by stimulation with c12-iE-DAP.TIFA induction was driven by NF-κB, as depleting the subunit RelA abrogated the upregulation.During Shigella infection, TIFA expression was induced at high MOIs in a NOD1/2-dependent mechanism.Priming TIFA expression through previous stimulation of NOD1, TLR5, or TNFR increased Shigella-induced NF-κB activation in a TIFA-dependent manner.Importantly, the priming effect was absent in TIFA KO cells complemented with a non-inducible TIFA construct confirming that the increase was a direct result of endogenous TIFA upregulation.Moreover, preventing translation with cycloheximide decreased Shigella-induced IL8 in a manner that required both NOD1/2 and endogenous TIFA.We next screened microarray meta-data for cell lines that exhibit naturally low TIFA expression.Among the greatest perturbations, 
were human Caco-2 colonic epithelial cells and a trans-immortalized mouse intestinal epithelial cell-line mIC-CI2.TIFA expression in Caco-2 cells, a model of Shigella infection was significantly lower than other cell types examined.Strikingly, while Caco-2 cells were naturally unresponsive to HBP, transduction with a TIFA-expressing retrovirus conferred the ability to detect HBP and increased Shigella-induced IL-8.Similar results were apparent with mIC-CI2 cells, which exhibited low Tifa levels and were insensitive to HBP.Expression of TIFA in trans restored HBP sensitivity and significantly increased Shigella-induced keratinocyte-derived chemokine and macrophage inflammatory protein 2, the murine functional homologs of human IL-8.These data indicate that cellular sensitivities to both HBP and invasive Shigella are determined by endogenous TIFA expression levels.Moreover, activation of the NOD pathways during infection induces a feedback mechanism that ensures sufficient TIFA expression to robustly respond to the continued presence of freely cytosolic bacteria.We previously identified phosphorylation-dependent oligomerization as the mechanism, whereby TIFA is activated by soluble HBP.To characterize the molecular mechanism of Shigella-driven TIFA activation, we examined TIFA oligomerization by clear-native PAGE and observed that TIFA assembles into high molecular weight complexes upon infection with invasive Shigella.Unexpectedly, TIFA disappeared from the detergent soluble fraction as the infection progressed, eventually becoming completely insoluble 4 hr post-infection.TIFA activation and aggregation was dependent on phosphorylation of TIFA on threonine 9, because the non-phosphorylatable TIFA mutant did not migrate to the insoluble pellet during infection.This is in line with the recently solved structure of TIFA, which depicts TIFA oligomerization as being mediated by intermolecular binding of the central forkhead-associated domain with phospho-Thr9.Because TIFA assembly allows recruitment and activation of the E3 ubiquitin ligase TRAF6, and HBP induces a physical and functional interaction between TIFA and TRAF6, we considered whether TRAF6 also became detergent-insoluble.Indeed, invasive Shigella triggered the migration of TRAF6 from the soluble to insoluble fraction in a TIFA-dependent manner.Notably, the detergent-insoluble pellet from infected wild-type, but not TIFA KO cells, was enriched in lysine 63-linked ubiquitin, suggesting Shigella induced TIFA-dependent TRAF6 activation.Moreover, immunoprecipitation analysis revealed that invasive Shigella triggered the formation of a complex involving TIFA, TRAF6, and p-TAK1.The presence of TIFA-dependent K63-linked Ub in the Shigella-induced TIFA-TRAF6 complex is consistent with TRAF6 being activated by TIFA.Moreover, complementation of TIFA KO cells with TIFA variants unable to oligomerize or interact with TRAF6 prevented the Shigella-induced formation of the TIFA-TRAF6-TAK1-Ub complex and largely abolished the IL-8 response to Shigella.Considering the insoluble nature of the Shigella-induced TIFA complexes, we next examined their cellular localization by immunofluorescence.Large TIFA aggregates were apparent in infected IECs 3 hr post-infection, an effect that was independent of NOD1.Kinetic analysis revealed that while TIFA aggregates were occasionally visible by 45 min after infection, foci became larger and more apparent at later time points, seeming to coincide with intracellular bacterial proliferation.In contrast to what is 
observed for NOD1 and NOD2 that surround the bacterial entry foci, TIFA aggregates did not co-localize with bacteria.These data suggest that freely replicating bacteria in the cytosol may liberate HBP, which triggers distal TIFA phosphorylation-dependent oligomerization and activation of TRAF6.Supernatants from Shigella cultures do not contain sufficient HBP to activate IECs when supplied extracellularly.Therefore, we investigated the context through which intracellular Shigella presents HBP to the TIFA signaling axis.We hypothesized that either HBP is liberated during infrequent bacteriolysis in the cytosol, or it is shed from bacteria, but in quantities so low that accumulation and concentration within the host cell is required for its detection.Provoking intracellular bacteriolysis in infected 293T cells using the cell-permeable antibiotic imipenem had no appreciable effect on NF-κB activation.Therefore, to determine if HBP is released during growth, we cultured Shigella in undiluted cytoplasmic extracts prepared from IECs to simulate cytosolic conditions.Following 2 hr of growth, we collected the cell-free supernatant and presented it into the cytosol of HCT cells using reversible digitonin permeabilization.Strikingly, there was abundant TIFA stimulating activity in the CFS after only 2 hr of growth.The effect was not limited to Shigella, as intracellular presentation of the CFS from E. coli grown in LB broth also stimulated TIFA.Previous studies have attributed the NF-κB inducing activity of Gram-negative supernatants to LPS or peptidoglycan.Therefore, to prove that the TIFA agonist was HBP, we collected the CFS from HBP-proficient or HBP-deficient S. flexneri and E. coli.Whereas the CFS from the ΔwaaC mutants stimulated robust IL-8 production, the CFS from the ΔhldE mutants, grown to the same optical density, failed to induce IL-8.This provides genetic evidence that HBP is the TIFA activating agonist in the CFS.Importantly, there was no difference in NOD1 activation induced by the CFS from ΔhldE and ΔwaaC E. 
coli, confirming the specificity of the TIFA pathway for HBP.Considering the activation kinetics of TIFA during infection, we asked whether HBP release required bacterial replication.We inoculated Shigella into cytoplasmic extracts at different bacterial densities and temperatures to allow for a variety of bacterial growth rates.After 2 hr, we collected the CFS and presented it into the cytosol of digitonin permeabilized cells.Incubation at 12°C abrogated both CFS-induced TIFA oligomerization and IL-8 production even when derived from cultures grown to high bacterial densities.Moreover, when the CFS extracts from 37°C cultures were normalized to account for final bacterial densities, TIFA-dependent, but not NOD1-dependent, IL-8 production correlated with the bacterial growth rate from which the CFS was derived.In addition, starting cultures at log phase resulted in more TIFA-stimulating activity in the CFS than cultures started at lag or stationary phase even when normalized to bacterial density.We then employed primary human colonic epithelial cells to establish the importance of HBP recognition in a physiologically relevant setting.Strikingly, the ability of Shigella to generate HBP during growth was a requisite for colonocytes to respond to cytosol-presented bacterial supernatants.Together, these data suggest HBP is generated and released by actively replicating Gram-negative bacteria.Importantly, the failure of non-permeabilized cells to respond to HBP-containing supernatants implies that entry into the cytosol is the limiting factor that determines whether TIFA is activated by HBP, providing a mechanism whereby IECs can discriminate between extracellularly and intracellularly growing bacteria.To test whether bacterial replication was a requisite for TIFA activation in the context of an infection, we generated invasive Shigella mutants that display a reduced intracellular growth rate.Host-derived pyruvate is an important energy source for intracellular Shigella.Consequently, mutants lacking either phosphotransacetylase or acetate kinase, which are required for the metabolism of pyruvate, are less metabolically active and display longer intracellular generation times than wild-type Shigella.Consistent with our hypothesis, the Δpta and ΔackA mutants induced markedly less TIFA-driven IL-8 than wild-type bacteria 6 hr after infection.In contrast, NOD1-driven IL-8 produced within 1 hr of infection was intact, consistent with each strain being equally invasive.Notably, the robust and sustained TIFA-dependent wave of IL8 transcription was absent during infection with the ΔackA mutant, confirming that bacterial growth is essential for activating TIFA.Complementation of the Δpta mutant with acetyl-phosphate, the product of PTA, restored both the intracellular growth rate and the TIFA-driven IL-8 response.Strikingly, in TIFA KO cells overexpressing NOD1, the NF-κB response induced by the Δpta mutant was unaffected by complementation with Acetyl-P.These data demonstrate that TIFA, but not NOD1, mediates the innate immune discrimination between proliferating and stagnant intracellular Shigella.Our results support a model in which TIFA-mediated HBP detection constitutes an innate sensory system that both temporally and functionally complements the NOD1 pathway.Whereas NOD1 alerts the host to immediate invasion of the cytosol, TIFA is activated later, with freely replicating cytosolic bacteria inducing TIFA phosphorylation-dependent oligomerization and activation of the ubiquitin ligase 
TRAF6.Indeed, bacterial replication within the cytosolic compartment was essential for HBP release and concomitant TIFA activation.Thus, TIFA represents a previously unknown layer of cytosolic surveillance that is sensitive to both the rate and extent of intracellular bacterial growth, a key determinant of pathogenic potential at epithelial surfaces.NOD1 has long been attributed as the sole NF-κB-activating sentinel molecule for Gram- negative bacteria inside epithelial cells.We speculate there are several reasons why the TIFA response has previously escaped attention.First, we show that cellular reactivity to HBP and invasive Shigella is sensitive to endogenous TIFA expression, and cell lines differ considerably in TIFA mRNA levels.Second, we observed relatively lower Tifa mRNA levels in the murine intestinal tract compared with human.Moreover, murine embryonic fibroblasts were refractory to HBP.This could, in part, explain why the response to invasive Shigella in cells from NOD1 KO mice tend to display a greater defect than we observe here using human NOD1 KO cells.Interestingly, Shigella colonization of the murine colon does not display the intense pro-inflammatory response that typifies its colonization of the human colon.Third, a lack of human NOD1 and NOD2 KO cells necessitated studies performed in systems in which the NLRs were depleted with RNAi or with dominant negative constructs.In each case, the NOD-independent response to Shigella could be attributed to either lingering expression or signaling by the non-targeted NLR.Finally, it is likely that a combination of time points examined and MOIs used in various infection protocols shifts the balance between NOD1- and TIFA-driven responses.Our results point to a NOD1 requirement at early time points and high MOIs, with the TIFA requirement evident at later time points and lower MOIs.Thus, our use of human NOD1, NOD2, and double-knockout cells, coupled with careful examination of the response kinetics delineated the TIFA and NOD pathways and exposed HBP as a central player in the innate response to Shigella.When characterizing the context in which HBP is presented to the cytosol, we noticed that TIFA activation correlated with the bacterial growth rate.Moreover, infection with metabolically deficient Shigella failed to activate TIFA.In this respect, HBP may serve as an intracellular indicator of microbial metabolism and growth.Other microbial components have been proposed to fulfill a similar role, such as bacterial pyrophosphates, quorum sensing molecules, tracheal cytotoxin, isoprenoid metabolites, and cyclic-dinucleotides.Among these molecules, HBP represents a unique signal in that it is generated solely by replicating Gram-negative bacteria, is sensed in the cytosolic compartment of non-immune cells, and elicits a pro-inflammatory NF-κB response.Another mechanism whereby the inflammatory response to Shigella is amplified is through cell-cell propagation of NF-κB activation from infected to uninfected bystander cells.In line with this, we occasionally noticed TIFA aggregates in bystander cells that did not appear to be infected.Moreover, a recent study reported activation of TIFA in bystander cells during Gram-negative bacterial infection.While HBP would seem to satisfy the characteristics of the diffusing small molecule, additional studies are required to establish this link.Together, our results suggest TIFA detection of HBP may allow the epithelium to serve as a dynamic sensor for virulent Gram-negative bacteria, calibrating 
the amplitude of inflammatory response based upon the rate of intracellular bacterial replication.A finely tuned inflammatory response is especially critical during Shigella infection, as IL-8 driven neutrophil chemotaxis promotes bacterial killing but can also destabilize the epithelial barrier to promote bacterial translocation.The precise contribution of TIFA to this dual-edged inflammatory response is unclear.While the mechanisms by which the host evaluates microbial threats have been extensively studied in the context of immune cells, the relative contribution of these mechanisms in IECs is less well understood.As cytosol-invasion is a defining characteristic of virulence, activation of the inflammasomes likely play an important role.However, this fails to account for the robust transcriptional response to bacterial invasion evident in IECs.Our results identify TIFA as the mediator of an innate sensory system that functions non-redundantly with the NOD1/2 pathways to provide comprehensive immunosurveillance of the cytoplasmic compartment within IECs.Whereas activation of the NODs is indicative of cytosol invasion, stimulation of the TIFA pathway informs the host to the extent of intracellular bacterial proliferation, providing the contextual signal to dramatically amplify the inflammatory response to virulent pathogens.All cells were cultured continuously in antibiotic-free medium.HCT116 cells were maintained in McCoy’s 5A medium, Caco-2 cells and HEK293T cells were maintained in DMEM, HeLa cells were maintained in RPMI, and T84 cells and mIC-CI2 were maintained in DMEM/F12.All media was supplemented with 10% FBS and 1% GlutaMAX.Primary normal human colonic epithelial cells were grown and maintained as per manufacturer’s protocol.All experiments using HCoEpiC were performed using passage 2 cells.Generation of KO cell lines with CRISPR/Cas9 is described in the Supplemental Experimental Procedures.HCT116 and HEK293T cells expressing FLAG-TIFA were generated by transduction with an MSCV-1XFLAG-TIFA retrovirus and selection with 10 μg/mL blasticidin.Where indicated, cell lines were transfected with pUNO-hNOD1 for 72 hr prior to treatment.See the Supplemental Experimental Procedures for complete protocols for immunoprecipitations, immunoblots, and fluorescence microscopy.Flagellin, TNFα, C12-iE-DAP, LPS, and muramyl di-peptide were from Invivogen.HBP was synthesized enzymatically as described previously.Unless otherwise indicated, ligands were administered for 6 hr.The following bacterial strains were used: Shigella flexneri M90T and its noninvasive variant BS176, E. coli DH5α ΔhldE and ΔwaaC have been described previously.S. Typhimurium strain 14028S and ΔsifA were a generous gift from W.W. Navarre, and Listeria monocytogenes EGDe were from K. 
Ireton.Generation of Shigella mutants are described in the Supplemental Experimental Procedures.For Shigella infections, single colonies from tryptic soy agar with 0.1% Congo-red were grown overnight in tryptic soy broth, diluted 1/100 and grown to an OD600 = 0.5, washed twice, and used for infection by spinoculation for 15 min at 2,000 × g.The end of the spin was considered as T = 0 for time course experiments.Following incubation for 30 min at 37°C, samples were washed three times and replaced with fresh growth medium containing 80 μg/mL gentamicin.Where indicated, 100 mM acetyl-phosphate was included in the infection media and during all subsequent washes and incubations.Unless indicated, HCT116 and HEK293T cells were infected with MOIs of 50 and 10, respectively.These MOIs resulted in 80%–90% infection efficiency 3 hr after infection.L. monocytogenes and S. Typhimurium infections were at an MOI of 10 and performed as described for Shigella.The cytoplasm was extracted from HCT116 cells as previously described.To collect the CFS, overnight cultures of Shigella or E. coli were subcultured and grown to a final OD600 of 0.1, 0.5, or 2.0, then used to inoculate cytoplasm extracts or LB broth at the desired bacterial density.Following 2 hr shaking at 12°C or 37°C, a 10-μL aliquot was removed for CFU enumeration, bacteria were pelleted, and the supernatant filtered through a 0.22 μm SpinX filter.Where indicated, the CFS was normalized to account for final CFU/mL before use.Reversible digitonin assays were performed as previously described.Briefly, cells were stimulated with CFS extracts for 20 min in permeabilization buffer in the presence or absence of 5 μg/mL digitonin, washed three times, then incubated for 4–6 hr in complete growth medium before luciferase assay or ELISA measurements.See the Supplemental Experimental Procedures for detailed buffer recipes.Quantitative measurements of cytokines were performed using ELISA kits from R&D Systems or BD Biosciences.NF-κB p65 activation was determined using the TransAM Transcription Factor ELISA.Luciferase assays were done using the Dual-Glo Luciferase Assay System as described previously.For analysis of gene expression, RNA was isolated using an RNeasy kit per manufacturer’s protocol and treated with TURBO DNase.cDNA was synthesized using the iScript cDNA synthesis kit and amplified using SsoAdvanced SYBR Green.See Supplemental Materials for oligonucleotide sequences.Cells were lysed for 30 min on ice in non-denaturing lysis buffer: 20 mM Tris-HCl pH 8 137 mM NaCl, 10% glycerol, 1% Nonidet P-40, 0.1% Triton X-100, and 2 mM EDTA.The soluble fraction was isolated by centrifugation and separated on 12.5% tris-glycine polyacrylamide gels without SDS.The insoluble pellet was solubilized in 1× Laemmli sample buffer and separated on 10% SDS-containing tris-glycine polyacrylamide gels.Data were analyzed using Prism 6 software.Two-way ANOVA with Tukey’s multiple comparison or unpaired Student’s t tests were used to determined significance unless otherwise noted.R.G.G. performed and analyzed experiments.C.X.G. performed experiments with primary cells and Shigella mutants.R.M. and S.E.G. constructed NOD1 and NOD2 KO cell lines and provided valuable input.H.K., J.R.R., C.A., and A.-S.D. constructed the Shigella mutants.R.G.G. and S.D.G.-O.designed experiments and wrote the paper with input from all authors.
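The infection protocol above specifies cultures grown to an OD600 of 0.5 and MOIs of 10–50.As a rough illustration of the arithmetic involved in preparing such an inoculum, the Python helper below converts a target MOI into a culture volume; the OD600-to-CFU conversion factor (here roughly 5 × 10^8 CFU/mL per OD unit) is a generic, strain-dependent assumption and is not taken from this study, which relied on plating for CFU enumeration.

def inoculum_volume_ul(target_moi, cells_per_well, od600,
                       cfu_per_ml_per_od=5e8, dilution=1.0):
    """Volume (in microlitres) of washed culture needed to reach a target MOI.

    cfu_per_ml_per_od is an assumed calibration constant; in practice the actual
    CFU/mL would be confirmed by plating serial dilutions of the inoculum.
    """
    cfu_needed = target_moi * cells_per_well
    cfu_per_ml = od600 * cfu_per_ml_per_od / dilution
    return 1e3 * cfu_needed / cfu_per_ml

# Example: MOI 50 on an assumed 2e5 HCT116 cells per well, culture at OD600 = 0.5.
vol = inoculum_volume_ul(target_moi=50, cells_per_well=2e5, od600=0.5)
print(f"Add ~{vol:.0f} uL of culture per well")   # ~40 uL with these assumptions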
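The statistics described above (two-way ANOVA with Tukey’s multiple comparisons, or unpaired Student’s t tests, run in Prism) can be approximated in Python; the sketch below is one possible equivalent using statsmodels and SciPy, and assumes a tidy table with hypothetical column names ('il8', 'genotype', 'treatment') rather than the actual data layout used in the study.

import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def two_way_anova(df):
    """Two-way ANOVA of IL-8 on genotype (e.g. WT vs TIFA KO) and treatment."""
    model = smf.ols("il8 ~ C(genotype) * C(treatment)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)

def tukey_posthoc(df):
    # Tukey HSD across all genotype-by-treatment groups.
    groups = df["genotype"] + "_" + df["treatment"]
    return pairwise_tukeyhsd(endog=df["il8"], groups=groups, alpha=0.05)

def unpaired_t(df, group_col="genotype", a="WT", b="TIFA_KO"):
    return stats.ttest_ind(df.loc[df[group_col] == a, "il8"],
                           df.loc[df[group_col] == b, "il8"])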
Intestinal epithelial cells (IECs) act as sentinels for incoming pathogens. Cytosol-invasive bacteria, such as Shigella flexneri, trigger a robust pro-inflammatory nuclear factor κB (NF-κB) response from IECs that is believed to depend entirely on the peptidoglycan sensor NOD1. We found that, during Shigella infection, the TRAF-interacting forkhead-associated protein A (TIFA)-dependent cytosolic surveillance pathway, which senses the bacterial metabolite heptose-1,7-bisphosphate (HBP), functions after NOD1 to detect bacteria replicating free in the host cytosol. Whereas NOD1 mediated a transient burst of NF-κB activation during bacterial entry, TIFA sensed HBP released during bacterial replication, assembling into large signaling complexes to drive a dynamic inflammatory response that reflected the rate of intracellular bacterial proliferation. Strikingly, IECs lacking TIFA were unable to discriminate between proliferating and stagnant intracellular bacteria, despite the NOD1/2 pathways being intact. Our results define TIFA as a rheostat for intracellular bacterial replication, escalating the immune response to invasive Gram-negative bacteria that exploit the host cytosol for growth.
115
Development of a single droplet freezing apparatus for studying crystallisation in cocoa butter droplets
Cocoa butter is a naturally occurring, edible fat produced from the cacao bean, and is one of the major ingredients of chocolate.It consists of a mixture of triacylglycerides and exhibits polymorphic phase behaviour, existing in six distinct crystalline forms.These are labelled Forms I to VI following the Wille & Lutton nomenclature or γ, α, β′ and β using the Larsson nomenclature.Form I is the least thermodynamically stable polymorph and Form VI is the most stable.The different polymorphs can be identified via their sub-cell structure using X-ray diffraction (Clarkson et al., 1934).They can also be distinguished using differential scanning calorimetry.XRD offers unambiguous identification of the crystals present and is thus the preferred method of characterisation.Much of the chocolate literature employs the Roman classification, originally based on melting point data, and for this reason it is used here.Real time XRD studies using laboratory X-ray diffractometers have been carried out to observe the evolving crystalline structure in cocoa butter in both static and sheared crystallisation.Synchrotron XRD offers improved resolution, rapid data collection and simultaneous small and wide angle measurements, corresponding to the long and short range spacing in the crystal lattice.In most of these studies, the cocoa butter sample is held in a temperature-controlled sample holder, a thin capillary tube, or a bespoke cell.The cocoa butter samples are usually heated to 50–70 °C for 15–30 min to avoid memory effects, and then cooled, at a rate similar to that used in DSC, or crash cooled at 50 K/min, to the required temperature.Droplet samples of cocoa butter had not been studied by XRD until the development of the single droplet freezing apparatus reported by Pore et al.The use of a droplet suspended in a flowing medium is common in heat and mass transfer studies.The single droplet freezing apparatus was developed to study temperature transitions in freezing droplets of aqueous food solutions.The SDFA allows the thermal history of a droplet to be monitored by suspending it on a fine thermocouple and subjecting it to a cold, dry air stream.This apparatus was originally used with video imaging to study the freezing behaviour of dairy solutions.SDFA devices have also been constructed for use in MRI machines for mapping emulsion creaming and solidification within droplets.Gwie et al.
used an SDFA to study spray-freezing of tripalmitin and cocoa butter.Their SDFA was later modified to fit into a laboratory X-ray system by Pore et al., allowing in-situ real-time monitoring of the freezing process.It was possible to observe the transformation from Form I to the higher melting polymorphs, which were consistent with the temperature-phase transitions reported for static bulk samples crystallised under isothermal conditions by van Malssen et al.Form V was first observed after 2 h at 24 °C, with complete transformation into Form V after 4 h; this is considerably shorter than the 1 week reported by van Malssen et al.The droplets were 2 mm in diameter, small enough to be suspended on a thermocouple junction, but large enough to interact with the X-ray beam.This paper reports a substantial redesign of the X-ray SDFA to improve the consistency and accuracy of in situ observation of crystallisation with well-controlled temperature steps.The apparatus is able to maintain the droplet temperature at 70 °C, followed by cooling at 2–5 K/s to temperatures as low as −10 °C.A short study of the crystallisation of a Côte d’Ivoire cocoa butter demonstrates the versatility of the device and further explores the low Biot number hypothesis proposed by Pore et al. for the rapid phase transformation in droplets.The experiments are accompanied by numerical evaluations of heat transfer within the drop to determine whether the temperature distribution within the droplets is uniform.Fig. 1 shows a schematic and photograph of the SDFA.The chamber is a cylindrical Perspex duct of length 130 mm and diameter 33 mm, with air entering at the base and discharging to atmosphere at the top via 6 equally spaced 5 mm diameter holes.Chilled, dry air enters the bottom of the chamber, marked e, via 8 mm i.d. reinforced PVC tubing and passes through a flow straightener, marked d, consisting of a cylindrical duct of diameter 16.5 mm containing a honeycomb network of 3 mm diameter plastic straws.The inlet air temperature, Ta, was measured at the entrance to the main chamber by a K-type thermocouple.The cocoa butter droplet was suspended from the junction of a thin K-type thermocouple, 50 μm in diameter, located on the central axis 35 mm from the air inlet.The thermocouple junction has a drop of solder attached, coated with nail varnish, to help droplets adhere without promoting nucleation.The thermocouple wires upstream of the droplet are encased in insulating reinforced PVC tubing of outer diameter 2 mm.The vertical position of the droplet can be adjusted.Both the inlet air temperature, Ta, and the droplet temperature, Td, were monitored and recorded on a PC using a PicoLog data logger at 0.25 s intervals.The X-ray beam enters through an opening in the side wall of diameter 3 mm and exits through an opening of diameter 10 mm opposite; both openings are taped over with 1 mm thick Kapton© film.The chamber is mounted between the collimator and detector of the X-ray diffractometer.The spatial configuration of the X-ray system prevented the use of long entry ducts required to obtain fully developed flow patterns.Laboratory compressed air was passed at 50 L/min from a regulator with an oil trap through a rotameter, a dehumidifier and into an 8.26 m long coil of 8 mm i.d.
copper tubing immersed in the refrigerant bath of a chiller unit located underneath the X-ray chamber.The air temperature at the inlet and outlet of the chiller were monitored using K-type thermocouples.The cold, dry air passed along insulated tubing to a T-junction, where the flow was split: 30 L/min passed to the SDFA via an electronic flow controller and the remainder was discharged to atmosphere.The air flow into the SDFA was kept below 30 L/min to prevent the droplet oscillating or being blown off.An air flow rate of 30 L/min corresponded to a superficial velocity in the SDFA chamber, u∗, of 0.58 m/s.At an inlet temperature of 0 °C, this corresponded to a mean chamber Reynolds number, Rech, of 3720.This value indicates that the flow lies in the transition region, but this is not discussed further here as the flow is unlikely to be well-developed.Hot wire anemometry tests, detailed in Section 2.2, determined the velocity distribution around the droplet.HWA was performed at 20 °C using flow rates ranging from 20–50 L/min, as summarised in Table 1.Droplets with diameter, d, approximately 2 mm were produced using a 1 ml syringe.The droplet thermocouple assembly was brought to a temperature of 70 °C by holding it in a channel within a heated aluminium cylinder.The droplet was held in the block at 70 °C for at least 15 min before the start of each experiment.The holding temperature and time were chosen to eliminate residual nuclei whilst minimising heat damage to the cocoa butter.After each test the thermocouple was washed with n-hexane to dissolve any remaining cocoa butter, rinsed with acetone, recoated with clear nail varnish and left to dry.The range of cooling rates that can be studied is determined by the air flow rate and droplet size: droplet oscillation set a limit on the maximum flow rate, while practical aspects associated with loading the drop and providing sufficient X-ray scattering set a minimum size for the drop.The rate of heat transfer to and from the droplet is dominated by convective heat transfer and thus the air flow rate in the vicinity of the droplet.It is not practicable to employ long ducts upstream of the droplet in order to generate well developed velocity profiles owing to the confined access within the X-ray unit employed here, and heat losses to ambient.Velocity distributions were therefore measured in order to provide accurate values for the simulations reported in Section 3 and also to establish the uncertainty associated with assuming well developed flow profiles.The velocity distribution of the air as it passed through the main chamber was determined using a hot-wire anemometer connected via a DrDAQ data logger to a PC.The HWA element was located at the end of a 3 mm diameter rod.The probe was held vertical and measurements were made along transverse planes at two heights: the droplet level, and at the top of the chamber.The droplet holder was not present during these measurements.Measurements were made at regular increments along each axis at both levels.The HWA probe was calibrated using a smooth, cylindrical Perspex duct of length 500 mm and diameter 33 mm with flow rates ranging from 10 to 70 L/min.All HWA tests were performed at room temperature.The absolute velocity measurements, u, are presented in dimensionless form, i.e. 
u/u∗.Similarly, the Cartesian x and y coordinates are presented as the fractional radial position, x/R or y/R, where R is the chamber radius.The dimensionless velocity profiles are compared with the analytical result for fully developed laminar flow in a pipe and the 1/7th power law expression for turbulent flow.XRD measurements were performed using a Bruker GADDS system employing a Cu Kα source operating at 45 kV and 45 mA.A silver behenate sample was used for calibration and to determine the distance from the sample to the detector.The sample was sealed in a circular disk, diameter 10 mm, with 1 mm Kapton© films on both faces, located on the centreline of the droplet freezing cell.The Bruker HISTAR 2D-detector was positioned 0.18 m from the sample at an angle of 15° off the beam axis, giving a diffraction range of approximately 0.8–30° 2θ.Exposures lasted 150 s.Crystallite dimensions, s, were estimated from Scherrer analysis of the peak broadening; here wobs is the observed full-width at half maximum and ws is the instrumental broadening, estimated using an AgBe standard having an average crystal size of at least 1000 Å.Strictly, when FWHM is used as the measure of broadening, s is, for a distribution of crystalline sizes, the ratio of the root mean fourth power to the root mean square of the crystalline dimensions.The true crystalline dimension may be smaller than the particle dimension, if the particle is polycrystalline.Droplet freezing was recorded using a 12-bit colour QICAM video camera fitted with a Leica Monozoom 7 lens.Video sequences were recorded using Streampix v3.16.5.The images were recorded as sequence files and then exported as both individual images and a movie clip.Each image recorded was 640 × 480 pixels.The video was recorded at 2.3 frames per second.For visualisation of the solidification process, the droplet was lit by a red laser pen aligned normal to the camera, to help enhance contrast between the crystallised mass and liquid fraction.The external surface of the SDFA unit was covered with black cardboard to ensure that the laser was the only source of illumination.Image processing was performed using ImageJ; a custom Montage plugin was used to create the figure montages.Cold dry air was passed through the apparatus at 50 L/min until it reached steady state at the required temperature.The air was briefly diverted while the droplet was loaded and was then readmitted into the chamber at 30 L/min.The droplet started to cool during the transfer but the cooling rate increased markedly when it was exposed to air flow.The alignment of the droplet in the X-ray path was adjusted using a pre-aligned laser pen.This process took around 10 s.The laser pen was then replaced by the X-ray collimator and the first pattern recorded 2 min after the readmission of air into the unit.XRD patterns were recorded at predetermined time intervals.The cocoa butter droplet was frozen and held at −2 °C for 20 min and then warmed to 28 °C and held at this temperature for up to 2409 min.The recorded Td and Ta values were smoothed using the Savitzky–Golay method before further processing.Cocoa butter was supplied by Nestlé PTC York in the solid form without any additives.The free fatty acid content was determined by dissolving the cocoa butter in 95% ethanol and titrating it with aqueous sodium hydroxide as set out in AOCS Official Method Ca 5a-40; this gave 1.60 ± 0.02% as oleic acid.The triglyceride contents were analysed using high-resolution gas chromatography at Nestlé PTC York.This gave 14.6 wt.% POP, 43.4 wt.% POS and 25.3 wt.% SOS.High-resolution gas chromatography was
carried out on an Agilent Technologies 6890N gas chromatograph equipped with a J&W DB17HT poly column and flame ionisation detection.Injection was by programmed temperature vapouriser at 60 °C for 2 min, using helium at 14.8 ml/min and 124 + 10 kPa, followed by a temperature ramp of 300 K/min to 345 °C and holding for 4 min.High purity hydrogen was used as the carrier gas for the column, the column pressure being 124 kPa, flow-rate 1.8 ml/min and velocity 38 cm/s.The oven settings were: initial temperature, 40 °C; heating rate 50 K/min to 300 °C, 10 K/min to 340 °C and 0.2 K/min to 345 °C.The total run time was 62 min.The FID was set to a minimum peak width of 0.01 min, giving a data sampling rate of 20 Hz.The response factors of the individual triglycerides were previously determined to lie in the range 0.7–1.4 according to analysis performed at Nestlé Research Centre Lausanne, using a HR-GC previously calibrated with individual triglyceride standards.Samples were prepared as solutions of 10 mg of melted cocoa butter in 10 ml of high-purity hexane.For minor peaks, higher concentrations were used in order to overcome baseline noise.However, in these cases, the response factors, as well as the retention times, may differ from those determined at lower concentrations.The signal intensity varied linearly with concentration in most cases, which suggests that, at least for the minor peaks, the response factors did not change significantly at the higher concentrations employed here.DSC thermograms were obtained using cooling and heating rates of 20, 10 and 5 K/min using a PerkinElmer Pyris 1 power-compensated DSC fitted with a refrigerated intercooler.The machine was calibrated with an indium standard.Samples were sealed in aluminium pans and loaded at room temperature.These were then heated to 70 °C and held at this temperature for 15 min, before being cooled at the selected cooling rate to −20 °C.The sample was then held at −20 °C for 1 min before being reheated to 70 °C.The onset temperature, Tonset, peak temperature, Tpeak, and enthalpy change accompanying each peak, ΔH, was then estimated from each thermogram using the device software.Two approaches to calculating the temperature within a cooling droplet are presented.The first is the lumped parameter model reported by Macleod et al., which is a useful tool for inspecting laboratory data.One of its assumptions is that the droplet temperature is uniform and the validity of the low Biot number condition is assessed for the droplets used in the experiments and smaller droplets which may be generated in an industrial spray freezing system.This prompted the detailed investigation of cooling within the drops described in Section 3.3.Eq. can be solved numerically once the droplet size and physical properties are specified.Fig. 2 shows the values of Nu and Bi calculated for cocoa butter droplets with density 890 kg m−3 in air at 0 °C for diameters ranging from 1 μm to 2 mm.Bi exceeds 0.1 when d > 100 μm and 0.2 when d > 250 μm, indicating that cocoa butter droplets generated from fine sprays would be isothermal.For the 2 mm droplets used in the experiments reported here the terminal velocity is 6.4 m s−1, which is substantially greater than the local air velocity measured in the vicinity of the droplets.Fig. 
2 shows that under these conditions Bi would be close to unity, so that the temperature conditions in the droplet would differ noticeably from those in a typical industrial spray.The value of Bi in the experiments, based on the overall heat transfer coefficient for a solder droplet U, was around 0.24, indicating that the experiments give a reasonable simulation of larger spray droplets.Lower air flow rates could be used in order to reduce h0 and Bi, thus giving rise to temperature conditions closer to isothermal.It should be noted that the above analysis assumes that the film heat transfer coefficient is uniform across the droplet surface.This is unlikely to be the case for a falling droplet, and a spatially dependent coefficient is considered in Section 3.3.Smaller droplets will, however, give rise to fast internal conduction, which reduces the impact of non-uniformity in the external heat transfer coefficient.The visualisation studies of the droplet undergoing freezing reported in Section 4.2 show that nucleation occurred at random in the lower part of the droplet.This indicated that the temperature in this region was lower than that around the thermocouple junction.Moreover, the heat transfer studies reported by Turton and Levenspiel and the results presented in Pore et al. indicated that the Biot number in the droplet was less than 1, but not so small that the temperature in the droplet can be considered uniform.Numerical calculations were therefore undertaken to estimate the temperature difference across the droplet as it cooled.The droplet and solder were modelled as two stationary spheres of different diameter located coaxially and suspended from a common point, as shown in Fig. 3.Also shown in the figure are the boundaries of three regions in the cocoa butter, encompassing 5%, 10% and 50% of the liquid volume.The surface-averaged temperature of these regions and that of the overall cocoa butter–solder assembly were calculated from the simulations as representative temperatures for comparison with the temperature measured by the droplet thermocouple, Td, and DSC results.The dimensionless air velocity profiles measured by the HWA probe for air flow rates of 20–50 L/min at the droplet plane and near the duct exit are plotted in Fig. 5.There is a noticeable region of high velocity in the centre of the duct at the droplet plane, which is less pronounced at the duct exit.Both profiles differ noticeably from the well-developed profiles for laminar or turbulent flow, although the duct exit data approach the 1/7th power law trend.The presence of a fast-moving central jet is expected, as the air is in the turbulent regime as it passes through the flow straightener before expanding, and it will take several duct diameters to establish a fully developed flow profile.A long duct was not practicable for the SDFA owing to the restricted space available and the need to avoid heat losses.The velocity profile at the droplet plane in Fig.
5 shows that the velocity in the region of the droplet is locally uniform.The four different flow rates all give roughly similar profiles, with u/u∗ ∼ 3.The local velocity for the 30 L/min case is therefore u/u∗ ≈ 2.5 ± 0.1.At 20 °C, the particle Reynolds number of the droplet, Red, is estimated at 560.At this Red value the local air flow past the droplet is expected to be unsteady with shedding of hairpin vortices in the near wake.Heat transfer to the droplet will be dominated by forced convection in the absence of significant radiative effects and correlations such as that by Ranz and Marshall can be used to estimate the average film heat transfer coefficient.The local heat transfer coefficient is likely to vary over the droplet surface, as discussed in Section 3.Fig. 6 shows a series of selected still images from a video of a 2 mm droplet undergoing freezing.The different stages observed during solidification, described in Table 3, are marked B–E.The droplet is initially transparent and becomes opaque as crystals, which scatter light, are formed.The red light illuminates the solder on the thermocouple junction at the top of the droplet.Reflections of this red spot in the cocoa butter/air interface can be seen at the sides of the droplet at t < 14 s.The droplet was initially suspended at 74 °C and the first image, t = 0 s, was recorded 12 s after suspension when cooling air was first admitted to the chamber.By t = 1 s, the air flow in the cell had stabilised.Nucleation was first observed at 4 s, with a cloud appearing at the bottom of the droplet.The solidification front slowly spread upwards through the droplet, reaching the mid-plane at 7 s and the top of the droplet at 12 s.The solder became obscured between 12 and 20 s, as the slurry of solid cocoa butter thickened.Between 20 and 40 s, there was increased scattering of red light around the droplet, due to increased condensation on the external surface of the SDFA unit.Fig. 
7 shows a droplet frozen under similar conditions to that in Fig. 6 but illuminated by a white diffuse backlight.At t = 0 s the droplet was not spherical, with its shape determined by surface tension and gravity.Once cooling air was admitted into the chamber, the droplet rose.By 1 s, the air flow in the chamber had stabilised, the droplet is more spherical and the bottom of the droplet has risen.The dark ring around the droplet is due to total internal reflection of light.The droplet is not perfectly spherical in image C but these images suggest that a spherical geometry is a reasonable estimate for the heat transfer simulations.The solidification process is characterised by the formation of small granular particles, crystallising directly from the melt.The particle sizes lie in the range of several hundred nm.Due to the limited resolution of the camera, it was not possible to distinguish individual particles.However the changing intensity of scattered light from different layers of the droplet over time indicated that the solidification front moved both vertically upwards and radially inwards, similar to the inward freezing shell model proposed by Hindmarsh et al.Repeated runs showed that nucleation always started at the base of the droplet, confirming that solidification did not occur via heterogeneous nucleation on the thermocouple.This indicates that the base of the droplet was always slightly colder than its centre and the region around the thermocouple junction.The assumption of a uniform temperature distribution in the droplet was not, therefore, accurate and the validity of the uniform temperature distribution estimate is examined further in Section 4.4 by comparing the experimental data with numerical simulations.Fig. 8 is an XRD pattern for a droplet frozen under these conditions 120 s after the admission of air into the unit.There is a large peak at 21.3° and a smaller peak at 23.2° 2θ, indicative of a mixture of Forms I and II.It is not possible to unambiguously determine the fraction of Forms I and II present, since the principal peak of both Forms occurs at the same angle.However Form I has a small subsidiary peak at 23.2° 2θ, so it is clear that some Form I is present in the early stages of crystallisation and that this decreases in favour of Form II with time.No XRD patterns giving definitive evidence of Form I crystallising alone were recorded in these experiments so these two polymorphs are not separated in Figs. 14 and 15.Scherrer analysis indicates the crystal dimensions were 130 ± 10 Å perpendicular to the planes responsible for the 21.3° reflection.Fig. 9 shows the evolution of temperature of a cocoa butter droplet, initially at 74 °C, exposed to cold air at 0 °C.There was a 20 K reduction in temperature as the droplet was transferred to the SDFA and the X-ray beam aligned.The cold air flow was restarted at B, accompanied by a noticeable change in the droplet cooling rate and air temperature.The latter reached its steady-state value at C, after about 15 s.There was a discontinuity in the Td profile, marked D, at Td = 19.7 °C, marking the detection of solidification.This discontinuity is more evident in Fig. 9.The cooling rate in the droplet between B and D was 3.0 K/s.Although the thermocouple first detected the phase change at D in Fig. 9, the corresponding still image in Fig.
6 indicates that a proportion of the lower half of the droplet had already solidified by this point.This indicates that the assumption of a uniform internal temperature distribution was not accurate.Crystallisation in the droplet was accompanied by an almost linear decrease in Td between D and D∗, with a cooling rate of 0.62 K/s.The reduction in cooling rate relative to that during the period B–D was due to the release of latent heat of crystallisation.After D∗ the temperature decays in an exponential fashion and approaches Ta asymptotically.This region corresponds to the cooling of the solidified droplet and gives a roughly linear trend in Fig. 9.The data from Fig. 9 are plotted in the form suggested by Eq. in Fig. 9.Linear regions are evident at: AB – corresponding to heat loss by natural convection during droplet transfer; BD – air cooling of the liquid droplet; and D∗F – air cooling of solidified droplet.The region DD∗ − corresponding to CB solidification – is not linear in Fig. 8, as expected due to the release of latent heat.During AB, the inlet air is diverted away from the SDFA unit, which leads to the unit warming up due to natural convection within the X-ray cabinet.During BD, Ta decays exponentially from 4.2 to 0 °C, as the SDFA unit is cooled along with the liquid CB droplet.The value of U obtained for the droplet in region BD in Fig. 9 is 60.7 ± 1.2 W/m2 K. Parameters used in calculating this value are summarised in Table 2.The Biot number is now estimated.An average heat transfer coefficient, h, of 89 W/m2 K was obtained from similar tests using a thermocouple with a 2 mm drop of solder attached.The thermal conductivity of solder is 40 times higher than CB so these measurements are effectively free of internal heat transfer resistance and thus should give a good approximation of the true external heat transfer coefficient in the CB droplet experiments.Comparing this value to the overall heat transfer coefficient above indicates that external heat transfer presents the largest resistance.Bi is then ∼0.24 for the tests reported here, indicating that the internal temperature distribution is expected to be close to, but not completely, uniform.The validity of this near-uniform temperature approximation is examined by simulation.Fig. 10 presents results from simulations compared with the experimentally measured droplet temperature, Td.In the simulations the droplet is initially isothermal at 74 °C.The droplet then undergoes heat loss via natural convection until t = 12 s, which corresponds to the droplet transfer stage in Fig. 9.At B, cooling air is readmitted to the chamber, which leads to rapid cooling of the droplet.N marks when nucleation is first observed and D when the thermocouple first detects the phase change.Fig. 10 shows good agreement between the measured droplet temperature profile, Td, and the simulated temperature at the thermocouple.There is good agreement until around t = 23 s, after which time Td deviates from the simulated thermocouple temperature due to the evolution of latent heat of crystallisation.Fig. 10 shows the temperature profiles for the different volumes of the droplet identified in Fig. 3, alongside the results from the simulation.The material at the bottom of the droplet cools more quickly than that near the thermocouple, which explains why nucleation is observed in this region in Fig. 
6 before it is detected in the temperature measurement.The simulations show that there is a significant temperature difference between the droplet temperature measured at the thermocouple junction and the lower regions of the cocoa butter droplet.It is clear that the internal temperature distribution of the droplet cannot be assumed to be uniform and care must then be taken in attributing changes to measured temperatures.The images in Fig. 6 indicate that nucleation occurs at the surface of the droplet, where the CB experiences low temperatures first.The surface average temperatures were evaluated for the regions marked in Fig. 3 and are plotted in Fig. 10.The 5% and 10% values are similar and are noticeably colder than the bulk average and measured values.The difference in predicted temperature between the solder and the different surface averaged sections is plotted in Fig. 10, with the largest temperature difference, ΔT, arising at the base of the droplet.ΔT increases rapidly for 3 s after time B, when cooling air is readmitted to the chamber, and after 4 s slowly decreases.At B, there is a difference of 2.5 K between the thermocouple and the bottom of the droplet.This increases to 16 K at t = 15–16 s, when nucleation is first observed in Fig. 6, corresponding to an average surface temperature in the 5% region of 25 °C.The simulation continues to give a good description of Td for another 7 s, indicating the amount of latent heat evolved is small.When solidification is first manifested in the Td profile ΔT is 8.8 K and the droplet thermocouple measures 19.2 °C.The results obtained for the data set in Fig. 9 are presented in Fig. 11.Two regions are evident in the temperature sweep:The region between 53 and 74 °C corresponds to the region AB in Fig. 9.The high QL values are an artefact resulting from the droplet undergoing slow cooling before the readmission of the air into the unit.This is caused by natural convection and gives rise to a lower U value; inspection of Eq. shows that this will result in an erroneous, positive, estimate of QL.This region is not considered further.There is a broad peak centred at Td = 19 °C corresponding to latent heat released from crystallisation which corresponds to the region DD∗ in Fig. 9.The peak is not symmetrical, increasing sharply at first, which matches the appearance of crystals in Fig. 6.Integrating across this region gives an estimate of the latent heat of crystallisation of 18.1 ± 0.01 J/g.DSC thermograms for the batch of Ivory Coast cocoa butter used in the droplet freezing experiments are reported in Fig. 12.The cocoa butter was initially heated to 70 °C to eliminate any residual Form V/VI nuclei and then cooled to −20 °C at a set cooling rate.Different samples were used for each cooling rate.Fig. 12 shows that cooling gave a large exothermic peak centred at 5.1, 4.0 and 2.6 °C for cooling rates of 5, 10 and 20 K/min, respectively, followed by a smaller exothermic shoulder peak at around −5 °C.The observation of two exothermic peaks at these cooling rates is consistent with the report by Van Malssen et al. that at cooling rates greater than 2 K/min Form II crystallises first, followed by Form I.The decrease in the onset temperature of solidification with increasing cooling rate is also consistent with that reported in the literature.The samples were then maintained at −20 °C for 1 min before being reheated to 70 °C at the same rate.Fig.
12 shows a small endothermic peak corresponding to melting of Form I crystals around 0 °C followed by a large endothermic peak centred at 9.5, 10.0 and 11.8 °C for 5, 10 and 20 K/min, respectively.The shoulder on this large endothermic peak, around 17 °C, corresponds to the melting of Form III crystals.The enthalpy of crystallisation associated with Form II crystal formation was evaluated at 42.4, 38.7 and 37.0 J/g at cooling rates of 5, 10 and 20 K/min respectively.These values are smaller than the 86.2 J/g reported by Fessas et al. for pure Form II for cooling at 2 K/min and 88.7 J/g by Schlichter Aronhime et al. for cooling at 0.3 K/min.A possible explanation is that only approximately half of the cocoa butter crystallises as Form II.These values are all considerably larger than those obtained by the temperature transient analysis in Section 4.3.A discrepancy is expected as the transient analysis involves several estimated values, e.g. Cp, ρ, and U.A significant source of error arises from the assumption that the droplet internal temperature distribution is uniform.An alternative method for estimating the latent heat of crystallisation in the droplet would be to extend the simulation to include crystallisation and cooling of the solidified droplet.Tanner reported simulations of this nature.An inward freezing shell model was used to describe crystallisation and gave a good description of the Td–t profiles reported by Gwie et al.The video microscopy results here, however, suggest that an inward freezing shell model is not an accurate description of the process.An alternative approach would be to use a phase field model of the solidification which could account for the evolving liquid–solid interface as well as development of crystal microstructure in the final solid.Before such complex simulations are performed, however, the length scale of interest needs to be considered.Industrial spray drying and freezing devices use considerably smaller droplets than those employed in this study, which can result in smaller Biot numbers, and droplet rotation is likely to occur.There are noticeable differences between the temperatures at which the onset of crystallisation is observed in the DSC tests, and the temperatures where nucleation is observed in the droplet tests.Surface nucleation occurred at an estimated surface temperature of 25 °C, while nucleation in the bulk, as indicated by the thermocouple transient, corresponded to Td = 19.3 °C.Neither of these values corresponds to the DSC value, although the 5% surface average temperature had reached 10.5 °C by point D.This discrepancy is further confounded by the difference in cooling rates: there is a noticeable shift in the DSC cooling profiles to the left with increasing cooling rate, shifting the onset of nucleation to lower temperatures.The cooling rate in the droplets is significantly greater than in the DSC tests, at 180 K/min, so one would expect the onset of nucleation to be shifted to colder temperatures.The temperatures at which nucleation is observed in the droplet are higher than the melting temperature of Form I and close to the melting temperature of Forms II and III.The XRD spectra indicate that the initial crystals formed consist of Forms I and II.Fig.
11 indicates that a significant fraction of the latent heat is released when the surface temperature is below 5 °C.Thus it is plausible that regions of the droplet are below the melting point of Form I, even though the bulk temperature is too high to allow the formation of Form I.This would explain the X-ray observation of initial formation of a mixture of Forms I and II, followed by gradual transformation into pure Form II.Fig. 13 shows a series of XRD spectra for a cocoa butter droplet, frozen and held at −2 °C for 20 min before being gradually warmed to 28 °C, and held near this temperature until 2309 min.The air temperature dropped to around 25 °C after 1000 min due to overnight changes in ambient temperature.Fig. 8 shows that the droplet temperature reaches Ta within 90 s so the latter value is quoted.After 2 min, the wide angle region exhibited two peaks at 21.1° and 23.0° 2θ, which are characteristic of Form I.The small angle region shows three peaks at 1.7°, 2.5° and 3.1° 2θ which also indicate Form I.The droplet was maintained at −2.0 °C for a further 20 min, over which time the intensity of the shoulder peak at 23.0° decreased whilst the intensity of the peak at 21.1° increased.Over this time the triplet in the SAXS region transforms into a doublet with peaks at 1.9° and 2.0° 2θ by 22 min.These changes can be explained by the droplet containing a mixture of Forms I and II.Form II exhibits a single peak in the WAXS region, also at 21.0°, which overlaps the peak for Form I and makes it difficult to identify whether the sample is purely Form I or a mixture of Forms I and II from WAXS data only.Thus growth of the 21.1° peak whilst the 23.0° peak decreases in intensity suggests transformation of Forms I to II; the shift of the 1.7° peak to 1.9° is possible if there is an increase in Form II, which will exhibit a peak at 1.8° 2θ as shown at 32 min.After 32 min the air had warmed to −0.8 °C; this was accompanied by the complete loss of the shoulder peak at 23.0° 2θ and the migration of the principal peak from 21.1 to 21.0° 2θ.This suggests that the droplet has completely transformed into Form II.As the inlet air was warmed further, the principal peak shifted to 20.9° 2θ, and broad shoulders were formed either side of the peak at around 19° and 22.5°.The single sharp SAXS peak at 1.8° 2θ indicates that the droplet is mostly Form II, and the shoulder peaks at 19° and 22.5° indicate transformation into Form III/IV.The appearance of the melt peak at 19° means that the Forms II to III/IV transformation proceeds via partial melting of Form II, in contrast to the rapid and complete transformation of Forms I to II, which is a solid-state transition.By 52 min the air had warmed to 22.8 °C, and the WAXS region shows peaks at 20.9° and 20.2° 2θ characteristic of Form IV, and a broad peak indicative of the melt centred at 19°.The SAXS region showed a single peak at 1.9° 2θ.This indicates that the droplet has transformed into Form IV, with some melt also present.By the time the air reached 25.6 °C, the broad peak characteristic of the melt remained in the WAXS region.A small peak, attributed to Form IV, is evident at 20.2° 2θ.The 1.9° peak in the SAXS region was also present at much reduced intensity.Between 62 and 662 min, the droplet was warmed to 28.5 °C and the 1.9° 2θ peak of Form IV disappeared.This is consistent with completion of droplet melting.Crystallisation from the melt is evident after 707 min with the emergence of a peak at 19.2° 2θ.No peaks were observed in the SAXS
region at this point.By 1340 min, the air temperature had decreased to 24.9 °C.The WAXS peak at 19.2° 2θ became more intense and the SAXS region showed 3 peaks at 1.3°, 1.9° and 2.7°, with 1.9° being the most intense.The 1.3° and 2.7° doublet indicated that Form V was present, and the 1.9° peak Form IV.The air had warmed up by 1409 min to 26.3 °C, at which point the 1.9° 2θ peak was reduced in intensity, suggesting partial Form IV melting in the range 24.9 °C to 26.3 °C.The quantity and average particle size of the polymorphs present in each diffraction pattern were estimated from the integrated intensity and full-width at half maximum of the principal peak.The percentage of each Form present is plotted against time and droplet temperature in Fig. 13.Forms I and II were both formed initially and the fraction increased with time, reaching 97% after 20 min.This was accompanied by transformation into Form II at 100% after 32 min.By 42 min, the droplet was largely Form II, followed by a partial transformation into Form IV after 52 min.The melting of Form IV was accompanied by an increase in the amount of melt present.Form V was subsequently formed directly from the melt at 707 min.However, crystallisation of Form V was slow, reaching 36% at 1409 min and 87% at 2309 min.At 1390 min, both Forms IV and V are observed, estimated at 25% and 46% respectively.The 46% value is likely an overestimate owing to the overlap of the Forms IV and V peaks in the WAXS region; this is supported by the apparent Form V percentage dropping to 36% by 1409 min.Overall, the transformation into Form V was quite rapid, similar to that reported by Pore et al. and significantly shorter than the 1 week reported for isothermal crystallisation of Form V in the literature.The spectra and percentage of each form present indicate that the transformation from Forms I to II is rapid and occurs via solid-state transitions, while those from Forms II to IV to V are much slower and involve melting.Inspection of Figs. 13 and 14 indicates that Form V was formed first from the melt, and that the occurrence of Form IV at 1390 min was due to the air temperature dropping to 25 °C.The particle sizes were estimated from Scherrer analysis of the XRD peaks.The sizes of the Forms IV and V crystals could not be estimated using this method as the full-width at half maximum of the Form V peak was similar to that of the AgBe standard: it can be inferred that these crystals were at least a few hundred Å.Crystals responsible for the peak at 21.1° between 2 and 22 min were on average 300 Å in size, with the 23.0° peak associated with Form I crystals corresponding to a size of 120 Å.The peak at 21.0° at 32 min corresponded to an average size of 110 Å.The results are now presented in the form of the isothermal phase transition scheme which Van Malssen et al. used to describe the evolution of polymorphic forms for a Cameroon cocoa butter.Their samples were heated to 60 °C for 60 s, cooled to the required temperature over 120 s, and then held at that temperature until crystallisation was observed.The samples consisted of 150 mg CB in holders, 10 × 15 × 1 mm in dimension, and were subjected to XRD analysis.The polymorphs present were plotted in T–t space and approximate boundaries to each polymorphic region constructed.Fig. 15 shows the results from the droplet in Fig. 13 presented in this format alongside the van Malssen et al. boundaries.The transformations observed up to Form IV are reasonably consistent with the van Malssen et al.
scheme.However, Form V is observed at a higher temperature, and earlier, than in the van Malssen et al. scheme.This difference could be due to differences in composition of the cocoa butters and the thermal history of the sample.The Côte d’Ivoire droplets experienced significantly faster initial cooling rates and were subjected to varying temperatures and phase transitions.This difference may be due to the low Biot number in the droplet.A low Biot number indicates a nearly uniform temperature distribution within the droplet, which together with rapid cooling at 130 K/min will result in the formation of a large quantity of small sized Forms I and II crystals.With a larger bulk sample, the Biot number will be larger and nucleation is more likely to occur at the cold interface, giving a small number of large crystals.On warming, any Form I rapidly transforms into Form II.As the droplet is warmed further, Form II quickly transforms into Form IV.This rapid transformation is attributed to the collapse of the vertical chain Form II configuration to the herringbone Forms III/IV structure with alternating parallel rows, which occurs unhindered, as reported by Kellens et al.In terms of the droplet, this will mean that there will be a large number of Forms III/IV crystals formed which are all small in size.As the Forms IV to V transition is diffusion controlled and involves a complete reordering of every row of the Form IV structure, having small Form IV crystals will help speed up this process through rapid melting of Form IV and an increased number of Form V nucleation sites.It is also possible that the initial rapid cooling results in the nucleation of all Forms.On warming, the large number of small sized lower melting polymorph crystals melts and allows the higher melting polymorphs to grow via a process of Ostwald ripening.The effect of temperature history and composition is the subject of ongoing work with the SDFA, exploiting its capacity to control the temperature history directly and to characterise the polymorphs present in situ and in real time.The single droplet freezing apparatus has been optimised for studying crystallisation in droplets.It is able to offer very fast cooling rates, up to 200 K/min for the droplet sizes studied here, and allows real time observation of the evolving crystalline structure in cocoa butter droplets.Video microscopy has been used to monitor the different stages of freezing.A droplet freezing study using an Ivory Coast CB showed that phase change at the droplet thermocouple, Td, lags behind nucleation at the base of the droplet.Numerical simulation of the cooling of the droplet showed that this lag is due to the non-uniform internal temperature distribution of the droplet, which is a result of the non-uniform local heat transfer coefficient across the surface of the droplet.These results showed that the freezing process cannot be described reliably by a 1D inward freezing shell model.A second droplet freezing experiment, in which the droplet was subjected to temperature changes over extended time, showed that phase transformations in the droplet were similar to those in the van Malssen et al. isothermal phase transition scheme.However, Form V was observed earlier in the droplet, which could be due to the difference in thermal history or composition between their material and the one studied here.The heat transfer conditions at the boundaries marked i–iv on Fig.
3 are:The system is assumed to be axially symmetric about the common central axis of the spheres, so that the temperature has no dependence on the azimuthal angle.The system is then two dimensional.The local heat transfer coefficient, h, was set to be a function of time in order to describe the two cooling phases experienced in the experiments.The droplet is initially exposed to almost quiescent ambient air for a period while being transferred and loaded and is then subjected to forced convection of cooled air in the SDFA.In the first stage the heat transfer coefficient is estimated using the result that in the limit of Red → 0, heat transfer occurs chiefly by conduction and the average Nusselt number is 2.This gives an estimated h value of 27 W/m2 K for the first stage.In the second stage, where forced convection is active, the heat transfer coefficient varies with position.The flow pattern around a single sphere in a steady air flow was computed by Hirasawa et al. and their results show a strong variation of h with the polar angle φ.Their calculations were performed for a 10 mm sphere with a particle Reynolds number of 634, which is larger than that in this work.The same dependence of h on φ was utilised for this work, as shown in Fig. 4, applying a linear correction factor so that the surface averaged heat transfer coefficient matched that measured experimentally at 61 W/m2 K.This assumes that the flow regime is the same as that studied by Hirasawa et al.The air temperature, Ta, changes with time.In the first stage, the air temperature is set at the lab temperature, 21.5 °C.When the cold air is admitted there is a transient in Ta and this is modelled to match the recorded values of Ta.Conduction applies at the internal boundary between the solder and the cocoa butter.The temperature in both domains is initially uniform at T0 = 74 °C.The material properties are summarised in Table 2.Calculations were performed using COMSOL Multiphysics which employs the finite element method to solve the above equations.Simulations were run on a PC and typically took 10 min to converge.The mesh reported in Fig. 3 was identified by a series of trials on mesh refinement and convergence.
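For readers who wish to reproduce the order-of-magnitude arguments of Section 3 and the transient analysis of Section 4.3, the short Python sketch below implements the lumped-parameter cooling model together with a Ranz–Marshall estimate of the film heat transfer coefficient. It is illustrative only: the air and cocoa butter property values (k_air, cp, k_cb) are assumed rather than taken from Table 2, and the Biot number is formed with the length scale V/A = d/6, so the numbers it returns will differ somewhat from those quoted in the text (e.g. Bi ≈ 0.24).

```python
# Minimal sketch (not the authors' code) of the lumped-parameter droplet cooling
# analysis. Values marked "assumed" are illustrative; only the density (890 kg/m3)
# and the overall coefficient U (~61 W/m2 K) are quoted in the text above.
import numpy as np

# Air properties at ~0 degC (assumed)
k_air, rho_air, mu_air, Pr = 0.024, 1.29, 1.7e-5, 0.72

# Droplet / cocoa butter
d = 2.0e-3            # m, droplet diameter
rho = 890.0           # kg/m3 (Section 3)
cp = 2000.0           # J/kg K, assumed liquid heat capacity
k_cb = 0.17           # W/m K, assumed thermal conductivity of cocoa butter
U = 61.0              # W/m2 K, overall coefficient from region BD
u_loc = 2.5 * 0.58    # m/s, local velocity of about 2.5 u* (Section 4.1)

# Order-of-magnitude check of the external film coefficient (Ranz-Marshall)
Re = rho_air * u_loc * d / mu_air
Nu = 2.0 + 0.6 * Re**0.5 * Pr**(1.0 / 3.0)
h_est = Nu * k_air / d
print(f"Re ~ {Re:.0f}, Nu ~ {Nu:.1f}, h ~ {h_est:.0f} W/m2 K")

# Biot number with the sphere length scale V/A = d/6
# (the text quotes Bi ~ 0.24; the value depends on the length scale and h used)
Bi = U * (d / 6.0) / k_cb
print(f"Bi ~ {Bi:.2f}")

# Lumped-capacitance transient with no latent heat:
#   rho V cp dTd/dt = -U A (Td - Ta)  =>  Td(t) = Ta + (T0 - Ta) exp(-t / tau)
tau = rho * (d / 6.0) * cp / U
t = np.linspace(0.0, 30.0, 7)
Td = 0.0 + (70.0 - 0.0) * np.exp(-t / tau)
print(f"tau ~ {tau:.1f} s; Td(t) / degC = {np.round(Td, 1)}")
```

Plotting ln((Td − Ta)/(T0 − Ta)) against time from such a model gives the straight-line regions used in Fig. 9 to extract U.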
The single droplet freezing apparatus described by Pore et al. (2009), which allows crystallisation to be monitored in situ by X-ray diffraction, was modified to allow rapid switching of coolant gas and monitoring by video microscopy. The apparatus was used to study drops of cocoa butter undergoing simulated spray freezing at high cooling rates, e.g. 130 K/min. The transformation of an Ivory Coast cocoa butter to the Form V polymorph was significantly faster in drops (∼40 h) than in static bulk samples (10 days) crystallised under isothermal conditions. Phase transformation was observed from Forms I/II → III → IV → melt → V, with Form V crystallising directly from the melt at 28.6 °C. Numerical simulations of the temperature evolution within the droplet established that the drops are not isothermal, explaining why nucleation was initially observed in the lower (upstream) part of the droplet.
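The Scherrer estimates quoted in the preceding sections (crystallite dimensions of roughly 110–300 Å) can be outlined in a few lines of Python. This is a hedged sketch only: the quadrature correction for instrumental broadening, the shape factor K = 0.9 and the example peak widths are assumptions, since the exact expression and raw peak widths are not reproduced here.

```python
# Hedged sketch of a Scherrer crystallite-size estimate from XRD peak broadening.
# The quadrature correction and K = 0.9 are common choices, assumed here rather
# than taken from the paper; the example widths are invented for illustration.
import numpy as np

def scherrer_size(two_theta_deg, fwhm_obs_deg, fwhm_instr_deg,
                  wavelength_A=1.5406, K=0.9):
    """Crystallite dimension in Angstrom from the corrected peak FWHM."""
    beta = np.radians(np.sqrt(fwhm_obs_deg**2 - fwhm_instr_deg**2))  # rad
    theta = np.radians(two_theta_deg / 2.0)
    return K * wavelength_A / (beta * np.cos(theta))

# Example: a WAXS peak at 21.3 deg 2-theta with an observed FWHM of 0.75 deg and
# an instrumental broadening of 0.25 deg (Cu K-alpha, 1.5406 Angstrom)
print(f"{scherrer_size(21.3, 0.75, 0.25):.0f} Angstrom")   # ~110 Angstrom
```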
116
Increased attenuation but decreased immunogenicity by deletion of multiple vaccinia virus immunomodulators
The poxvirus family is a diverse group of dsDNA viruses that replicate in the cytoplasm of infected cells.The potential use of this family, particularly the well-characterised vaccinia virus strains, as vaccine vectors has been under investigation since the 1980s.VACV has many features that make it attractive for this use: a large capacity for foreign gene expression, relative ease of manipulation, and, importantly, induction of strong innate and memory immune responses, including both cellular and humoral arms.Much research has focused on modified vaccinia virus Ankara and NY-VAC because of their severe attenuation, and, at least for MVA, the inability to replicate in most mammalian cells.These vectors are safe even in immunocompromised hosts, which is of particular importance for vaccination against diseases such as HIV-1, malaria and tuberculosis.There is still a need to improve the immunogenicity of these vectors, particularly because large viral doses, repeated vaccination or prime-boost regimes are required to achieve adequate correlates of protection.The host antiviral innate immune response provides a strong selective pressure and, consequently, viruses have evolved a plethora of mechanisms to counteract its effects.Engagement of viral components with innate immune receptors activates transcription factors including nuclear factor kappa-light-chain-enhancer of activated B cells and interferon regulatory factors-3/7 that coordinate the production of pro-inflammatory cytokines, chemokines and type I IFNs.Importantly, this pro-inflammatory milieu attracts professional antigen presenting cells to the site of infection, providing an important link to the adaptive immune response and the subsequent establishment of immune memory.VACV dedicates between one third and one half of its genome to dampening host innate responses.For example, nine intracellular NF-κB inhibitors have been identified and there is evidence that more remain to be discovered.In addition, VACV encodes numerous IRF-3/7 inhibitors and multiple mechanisms to counteract the actions of IFNs.The mechanisms by which innate immunity impacts successful vaccine design remain incompletely understood.Importantly, recent data have demonstrated that the deletion of several VACV immunomodulatory genes individually enhances the immunogenicity of these vectors.Examples of such genes include the NF-κB inhibitor N1, the IRF-3/7 inhibitor C6 and the dual NF-κB and IRF-3/7 inhibitor K7.Proteins N1 and K7 have Bcl-2 folds and C6 is predicted to do so, and all are virulence factors.Some studies have investigated the effect of deleting multiple immunomodulatory genes from VACV vectors but so far have not included an in-depth comparison of single gene deletions in isolation versus deletion of genes in combination.These comparisons have also not always included challenge experiments, instead measuring aspects of immunological memory that may correlate with immune protection.This study therefore tested whether the immunogenicity of VACV could be improved further by deleting three intracellular innate immunomodulators in combination from VACV strain Western Reserve.These immunomodulators were selected because their function in innate immunity was known, and because deletion of each of these genes in isolation from VACV WR increased immunogenicity, but decreased virulence.This study shows that deletion of these three genes did not affect VACV growth in vitro, but led to sequential attenuation of the virus in two in vivo models.In challenge models,
despite each individual gene deletion enhancing immunogenicity, a virus lacking all three genes was a poorer vaccine, accompanied by inferior memory T cell responses and lower neutralising antibody titres.This illustrates how the design of vaccines for optimal immunogenicity must consider how the degree of attenuation impacts on the induction of immunological memory.BSC-1 and CV-1 cells were grown in Dulbecco’s Modified Eagle’s Medium supplemented with 10% foetal bovine serum and penicillin/streptomycin.EL-4 cells were grown in Roswell Park Memorial Institute medium supplemented with 10% FBS and 50 μg/ml P/S.The deletion and revertant VACVs for N1L, C6L and K7R were described previously.Female BALB/c and C57BL/6 mice were purchased from Harlan.Viruses were constructed using the transient dominant selection method as described, using vΔN1 as a starting point and plasmids Z11ΔC6 and pSJH7-ΔK7 to delete C6L and K7R respectively.Z11 is a pCI-derived plasmid containing the Escherichia coli guanylphosphoribosyl transferase gene fused in-frame with the enhanced green fluorescent protein gene under the control of the VACV 7.5 K promoter.Revertant viruses were constructed by replacing the deleted genes in their natural loci using plasmids Z11C6Rev and pSJH7-K7.The genotype of resolved viruses was analysed by PCR following proteinase K-treatment of infected BSC-1 cells using primers that anneal to the flanking regions of N1L, C6L and K7R.Infectious virus titres were determined by plaque assay on BSC-1 cells.Infected BSC-1 cells were lysed in a cell lysis buffer containing 50 mM Tris pH 8, 150 mM NaCl, 1 mM EDTA, 10% glycerol, 1% Triton X100, 0.05% NP40 supplemented with protease inhibitors.Samples were boiled for 5 min and then subjected to SDS-PAGE.Primary antibodies were from the following sources: mouse anti-α-tubulin, mouse anti-D8 mAb AB1.1, rabbit-anti-N1 polyclonal antiserum, rabbit-anti-C6 polyclonal antiserum and rabbit-anti-K7 polyclonal antiserum.Primary antibodies were detected with goat-anti-mouse/rabbit IRdye 800CW infrared dye secondary antibodies and membranes were imaged using an Odyssey Infrared Imager.BSC-1 cells were inoculated at approximately 50 plaque forming units per well of a 6-well plate and stained with crystal violet 3 days later.The sizes of 20 plaques per well were measured using Axiovision acquisition software and a Zeiss AxioVert.A1 inverted microscope as described.Female BALB/c mice were infected intranasally with 5 × 10³ p.f.u. of purified VACV strains.VACV was purified from cytoplasmic extracts of infected cells by two rounds of sedimentation through 36% sucrose at 32,900g for 80 min.Virus was resuspended in 10 mM Tris-HCl pH 9.Virus used for infections was diluted in phosphate-buffered saline containing 1% bovine serum albumin and the titre of the diluted virus that was used to infect mice was determined by plaque assay on the day of infection.Mice were monitored daily to record body weight and signs of illness as described.Female C57BL/6 mice were inoculated intradermally in both ear pinnae with 10⁴ p.f.u. and the resulting lesions were measured daily as described.For the challenge experiments, mice that had been inoculated i.n. were challenged 6 weeks later and mice that had been inoculated i.d. were challenged 4 weeks later, i.n., with 5 × 10⁶ p.f.u.
of wild-type VACV WR.Splenocytes were prepared as described and incubated for 4 h with a C57BL/6-specific CD8+ VACV peptide, B8(20–27), or a negative control CD8+ VACV peptide specific for BALB/c mice, E3(140–148), at a final concentration of 0.1 μg/ml.After 1 h Golgi stop was added and the cells were incubated for a further 3 h. Cells were then stained for CD8 and either IFNγ or TNFα and analysed by flow cytometry.Cytotoxic T lymphocyte activity was assayed with a standard ⁵¹Cr-release assay using VACV-infected EL-4 cells as targets, as described.The percentage of specific ⁵¹Cr-release was calculated as specific lysis = [(experimental release − spontaneous release)/(total release − spontaneous release)] × 100.The spontaneous release values were always <5% of total lysis.The neutralising titre of anti-VACV antibodies was calculated by plaque assay on BSC-1 cells as described.Neutralisation dose 50 values represent the reciprocal of the serum dilution giving 50% reduction in plaque number compared with virus incubated without serum.The binding of serum antibodies to VACV-specific epitopes was measured by enzyme-linked immunosorbent assay using plates coated with lysates of VACV strain WR-infected cells that had been treated with ultraviolet light and psoralen to inactivate VACV infectivity as described.Plates coated with bovine serum albumin were used as a negative control.IgG end-point titres were defined as the reciprocal serum dilutions giving twice the average optical density values obtained with bovine serum albumin.Data were analysed using an unpaired Student’s t-test, with Welch’s correction where appropriate, or a Mann–Whitney test as indicated.Statistical significance is expressed as follows: *P < 0.05, **P < 0.01, ***P < 0.001.This work was carried out in accordance with regulations of the Animals (Scientific Procedures) Act 1986.All procedures were approved by the United Kingdom Home Office and carried out under the Home Office project licence PPL 70/7116.To construct a virus lacking N1L, C6L and K7R, C6L was deleted from a virus already lacking N1L by transient dominant selection yielding vΔΔ, followed by the removal of K7R yielding vΔΔΔ.As controls, a revertant virus where the deleted gene was re-inserted back into its natural locus was constructed at each stage.Deletion of C6L and K7R was confirmed by PCR analysis of proteinase K-treated lysates of infected BSC-1 cells using primers specific for these genes in addition to N1L and A49R as a control.The phenotype of the resulting recombinant viruses was confirmed at the protein level by immunoblotting of lysates from infected BSC-1 cells using antisera against N1, C6 and K7, as well as a monoclonal antibody against VACV protein D8 as an infection control.Proteins N1, K7 and C6 are each non-essential for virus replication or spread in cell culture and measurement of the plaque size of mutants lacking 2 or 3 of these genes confirmed that deletion of these genes in combination did not affect viral spread in BSC-1 cells.The N1L, C6L and K7R single deletion viruses are attenuated in both the i.d. and i.n. models of murine infection, highlighting their importance as virulence factors.When compared side-by-side in the i.d.
model, the level of attenuation was found to be similar amongst the three single deletion viruses.In contrast, a virus lacking both N1 and C6 was significantly more attenuated than vΔN1, and a virus lacking all three immunomodulators was attenuated further still, indicating that the roles of these innate inhibitors in vivo are non-redundant.The double deletion revertant and the triple gene deletion revertant control viruses behaved as expected, demonstrating that the observed attenuation phenotypes were due to the specific gene deletions and not mutations elsewhere in the viral genome.Similar results were obtained in the i.n. model of infection where deletion of C6 and K7 again led to sequential attenuation of the virus, as indicated by reduced weight loss and fewer signs of illness.To determine whether the double and triple gene deletion viruses have improved vaccine potency compared with the N1 single deletion virus, mice were vaccinated i.d. and challenged i.n. one month later with a lethal dose of wild-type VACV WR.As reported, single deletion of N1 provided the mice with better protection against challenge, indicated by significantly less weight loss over a period of 10 days.However, vΔΔ provided less protection than both vΔN1 and vWT, and vΔΔΔ provided significantly less protection still.Similar results were observed following challenge of mice that were vaccinated i.n.When compared head-to-head, each of the single deletion viruses enhanced vaccine potency to a similar degree and the revertant control viruses behaved as expected in challenge experiments.To understand why vaccination with vΔΔ or vΔΔΔ afforded less protection than vΔN1, CD8+ T cell responses one month post-i.d. vaccination were analysed.To measure the cytolytic activity of VACV-specific T cells a chromium release cytotoxicity assay was performed.The specific cytolytic activity of T cells from vΔN1-vaccinated mice was significantly higher than that from vWT-vaccinated animals, corroborating recently published findings.Conversely, significantly lower VACV-specific cytolytic activity of T cells was measured in vΔΔ- or vΔΔΔ-vaccinated mice.Interestingly, no significant difference in cytolytic activity was observed between vΔΔ and vΔΔΔ.The release of cytokines by splenic CD8+ T cells that were stimulated ex vivo with VACV peptides was also analysed by intracellular cytokine staining.In agreement with data published recently, splenic CD8+ T cells from vΔN1-vaccinated mice secreted higher levels of IFNγ and TNFα following stimulation than cells from vWT-vaccinated mice.In contrast, splenic CD8+ T cells from both vΔΔ- and vΔΔΔ-vaccinated mice secreted significantly lower levels of these cytokines.Again, no significant difference was observed between the double and triple gene deletion viruses.The poorer immunogenicity of the double and triple gene deletion viruses might also be due to altered antibody responses.To investigate this possibility, sera were collected from mice one month post-i.d.
vaccination and the VACV-specific antibody titres and VACV-specific neutralising titres were measured by ELISA and plaque reduction neutralisation respectively.The titre of VACV-specific antibodies measured by ELISA was not different between the groups of vaccinated mice and good antibody levels were induced in all cases.The same was not true, however, when the titre of neutralising serum antibodies was measured by plaque reduction neutralisation assay.In this case the ND50 of sera from both the vΔΔ- and vΔΔΔ-vaccinated mice was significantly lower than that from both vWT and vΔN1-vaccinated mice, although there was no significant difference between the double and triple gene deletion viruses.Inducing a robust innate immune response is an important step to designing an immunogenic vaccine and this is often achieved by the addition of a vaccine adjuvant.However, a full understanding of how the innate immune system impacts immune memory is lacking and would greatly enhance our ability to rationally design vaccines with enhanced immunogenicity profiles.VACV-based vectors are popular candidates, however their genomes still encode proteins with a known role in dampening the host innate immune response, which may negatively impact their potential use as vaccine vectors.Indeed, the deletion of numerous VACV immunomodulatory genes has been shown to enhance immunogenicity including the chemokine-binding protein A41 , the IL-1β-binding protein B15 , the inhibitor of MHC class II antigen presentation A35 , the IL-18-binding protein C12 , the type I and type II IFN-binding proteins , the IRF3/7 inhibitor C6 , the NF-kB inhibitor and anti-apoptotic protein N1 , the dual NF-kB and IRF3/7 inhibitor K7 and the TLR signalling inhibitor A46 .Data presented here demonstrate that deletion of three of these genes in combination from VACV WR did not further enhance the immunogenicity and in fact provided poorer protection than the single gene deletion viruses.These data highlight that in the context of a replicating vaccine vector there is a fine balance between viral attenuation and immunogenic potential.Given that vΔΔ and vΔΔΔ are sequentially more attenuated than vΔN1 they may have generated lower antigen levels and/or been cleared more quickly by the host immune system and hence induced a weaker adaptive response.The importance of antigen availability for the formation of CD8+ T cell-dendritic cell interaction kinetics and the ensuing memory response was demonstrated recently .Of further interest, the inferior CD8+ T cell responses and neutralising antibody titres observed with vΔΔ and vΔΔΔ were not significantly different between these two viruses, however vΔΔΔ provided worse protection than vΔΔ in the challenge studies.These data may indicate that other aspects of immune memory such as CD4+ T cells, or the recently identified memory NK cells may play an important role in determining vaccine efficacy of VACV.Whether equivalent protection to WT or vΔN1 vaccination could be achieved by increasing the vaccination dose of vΔΔ and vΔΔΔ warrants further investigation.A recent study using an MVA vector expressing HIV-1 antigens in the context of a DNA prime/MVA boost regime found that the deletion of C6L and K7R in combination enhanced the magnitude and quality of HIV-1-specific CD4+ and CD8+ T cell responses in mice, as well as Env antibody levels compared to the parental MVA-B vector and MVA-B lacking C6L alone .These data demonstrate that in the context of a non-replicating strain of VACV, the deletion of more 
than one immunomodulatory gene may be beneficial.What is not clear from this study, however, is whether these memory responses would have been achieved with the deletion of K7R alone, because an MVA-B K7R single deletion was not included.Furthermore, whether these correlates of protection will translate to protection in an in vivo challenge model remains to be determined.Another recent study, also based on an MVA vector expressing HIV-1 antigens, found enhanced HIV-specific CD4+ and CD8+ T cell responses as well as Env-specific antibody responses in rhesus macaques with a vector lacking 4 immunomodulatory proteins .However again this vector was not compared to single gene deletions and protection was not determined by a challenge experiment.Mounting evidence indicates an important role for type I IFN in vaccine immunogenicity and each of the proteins selected for this study inhibits type I IFN production.Furthermore, the importance of NF-κB in generating robust memory immune responses was demonstrated recently for N1, an inhibitor of NF-κB and apoptosis.By vaccinating mice with viruses encoding N1 mutants that were competent for only one of these functions it was demonstrated that the anti-NF-κB activity of N1 was important for the enhanced protection observed with vΔN1, with no apparent contribution of its anti-apoptotic function .These types of studies demonstrate how viruses lacking innate immunomodulators can be utilised as tools to further our understanding of the relationship between innate immunity and immune memory, which will be important for future vaccine design.The authors declare no conflicts of interest.
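As an illustration of the assay read-outs defined in the Methods above, the following Python sketch computes the percentage specific ⁵¹Cr release and estimates a neutralisation dose 50 (ND50) by interpolating a plaque-reduction curve. The counts, plaque numbers and dilution series are invented for demonstration, and the log-linear interpolation is an assumption; the paper does not state how the 50% point was interpolated.

```python
# Illustrative sketch (not from the paper) of the 51Cr-release and ND50 calculations.
# All numerical values below are invented example data.
import numpy as np

def specific_lysis(experimental, spontaneous, total):
    """Percentage specific 51Cr release for one effector:target ratio."""
    return (experimental - spontaneous) / (total - spontaneous) * 100.0

print(specific_lysis(experimental=1500.0, spontaneous=300.0, total=6300.0))  # 20.0

def nd50(dilutions, plaque_counts, no_serum_count):
    """Reciprocal serum dilution giving a 50% reduction in plaque number,
    estimated by linear interpolation on log10(dilution)."""
    reduction = 100.0 * (1.0 - np.asarray(plaque_counts, float) / no_serum_count)
    logd = np.log10(np.asarray(dilutions, float))
    # np.interp needs increasing x-values, so reverse the (decreasing) reductions
    log_nd50 = np.interp(50.0, reduction[::-1], logd[::-1])
    return 10.0 ** log_nd50

dilutions = [100, 400, 1600, 6400]        # reciprocal serum dilutions
plaques   = [10, 22, 38, 47]              # plaques per well (example)
print(f"ND50 ~ {nd50(dilutions, plaques, no_serum_count=50):.0f}")  # ~500
```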
Vaccinia virus (VACV)-derived vectors are popular candidates for vaccination against diseases such as HIV-1, malaria and tuberculosis. However, their genomes encode a multitude of proteins with immunomodulatory functions, several of which reduce the immunogenicity of these vectors. Hitherto only limited studies have investigated whether the removal of these immunomodulatory genes in combination can increase vaccine efficacy further. To this end we constructed viruses based on VACV strain Western Reserve (WR) lacking up to three intracellular innate immunomodulators (N1, C6 and K7). These genes were selected because the encoded proteins had known functions in innate immunity and the deletion of each gene individually had caused a decrease in virus virulence in the murine intranasal and intradermal models of infection and an increase in immunogenicity. Data presented here demonstrate that deletion of two, or three of these genes in combination attenuated the virus further in an incremental manner. However, when vaccinated mice were challenged with VACV WR the double and triple gene deletion viruses provided weaker protection against challenge. This was accompanied by inferior memory CD8+ T cell responses and lower neutralising antibody titres. This study indicates that, at least for the three genes studied in the context of VACV WR, the single gene deletion viruses are the best vaccine vectors, and that increased attenuation induced by deletion of additional genes decreased immunogenicity. These data highlight the fine balance and complex relationship between viral attenuation and immunogenicity. Given that the proteins encoded by the genes examined in this study are known to affect specific aspects of innate immunity, the set of viruses constructed here are interesting tools to probe the role of the innate immune response in influencing immune memory and vaccine efficacy.
117
Sediment sampling with a core sampler equipped with aluminum tubes and an onboard processing protocol to avoid plastic contamination
We describe methods for collecting sediments with a push corer and a multiple corer equipped with aluminum sampling tubes to minimize the possibility of plastic contamination. Typically, these coring devices are equipped with sampling tubes made of plastic such as polycarbonate, acrylic, or polyvinyl chloride. However, these plastic tubes, which can be easily scratched by sediment particles and melted at hydrothermal vent fields, in particular during collection of sandy sediments, are a potential source of plastic contamination of sediment samples. The use of an aluminum tube can reduce these potential contamination risks. The advantages of aluminum over other metals are that it is easy to shape, light, relatively robust, and inexpensive. All these tubes were designed to the same dimensions as conventional polycarbonate or acrylic tubes for a multiple corer or a push corer, except for a small modification to the inner diameter of the push core. We introduce some examples of sediment sample collections using these sampling gears on the deep-sea floor and discuss some advantages and disadvantages of the method. To clarify the spatio-temporal distribution of microplastics, it is necessary to accurately analyze their vertical distribution in sediments, and this requires the collection of undisturbed sediment samples. A major problem arises during sediment collection and processing: contamination from the surrounding environment, particularly from the inner wall of the plastic core tube. We therefore designed and tested methods for addressing these problems. We designed and tested core tubes made of aluminum for both a multiple corer and a push corer with the aim of avoiding contamination by microplastics during deep-sea sediment sampling, particularly in areas with coarse sediments. Sampling tubes made of polycarbonate or acrylic have the advantage of being transparent, allowing the sampled sediments to be observed from the outside. Contamination can also occur when sediment samples are processed. For example, a core extruder is typically used when a sediment core needs to be sliced at certain intervals without disturbing the sediment layers. If the core extruder is made of PVC, then the inner wall of the sampling tube can be damaged by sand grains, just as it can be when the sediment core is collected, and microplastic contamination may occur. To reduce the possibility of plastic contamination, we used a core extruder with an aluminum head. Similarly, a metal cutting plate can be used to slice the core sediments into layers. Sediment samplings using the aluminum tubes were performed during the KS18-J02 cruise in March 2018, the YK19-11 cruise in September 2019, and the KM19-07 cruise in September 2019 in Japanese waters. The water depth ranged from 855 to 9232 m, and the retrieved cores ranged from fine sand to silt sediments. During the KS18-J02 cruise, the push coring using aluminum tubes was carried out with the ROV Hyper-Dolphin. During the YK19-11 cruise, the push coring was carried out by the HOV Shinkai 6500. Push cores were installed in a push core holder made of PVC, but the bottom of the holder was covered by a 5 mm-thick aluminum plate, preventing the bottom of the sediments from coming into contact with plastic materials. Push core samplings were conducted using a corer with an aluminum sampling tube, and we found that the major problem was the non-transparency of the tube, which made it difficult to judge how deep the corer had penetrated and how much sediment was retained within the coring tube during its
retrieval.Therefore, we used a push corer with a conventional polycarbonate tube for the first trial.Thus, we were able to determine from the polycarbonate core sampling results the hardness of the sediment, how deep we needed to insert the aluminum sampling tube, and the necessary recovery manipulation.We were able to refer to the results of the polycarbonate core samplings with respect to how deep the corer needed to be inserted and how soft or hard the sediment was, assuming that friction between the sediments and aluminum or polycarbonate are similar.Another problem is sediment disturbance.As described above, it is important not only to collect microplastics but also to analyze their vertical distribution within the sediment layers collected, without disturbing the sediment structure.When samples are collected with push corers or multiple corers with aluminum tubes, the collected sediment cannot be seen from the outside.In particular, when a submersible is used for core collection, the operation must be conducted with a manipulator and the depth of insertion into the sediment depends on sediment hardness and particle size.Thus, it is desirable to be able to see the inside of the tube in order to gauge the insertion depth.If push coring is being conducted by a submersible, a corer with a polycarbonate sampling tube can be inserted into the sediment at the same time as one with an aluminum tube and the insertion depth of the latter can be adjusted by comparison with that of the former.Similarly, when a multiple corer is used, it is possible to estimate the approximate insertion depth by installing tubes made of polycarbonate as well as ones made of aluminum.We next conducted tests with a multiple sampler equipped with both aluminum and polycarbonate sampling tubes.During the KM19-07 cruise, two out of eight tubes were replaced with aluminum-made tubes.When we deployed the aluminum cores on multiple corer, the major problem was an imbalance between the heavier aluminum tubes and the lighter polycarbonate tubes.This imbalance could cause the multiple corer to tilt and lead to uneven penetration of the sediment by the tubes.Another problem encountered was premature triggering of the device before recovery due to the load on the central axis of the multiple corer.We thus placed the two aluminum tubes to be diagonally opposite each other.After retrieving the cores onboard, they were placed on the core extruder with the aluminum-plate on the bottom.Sediment samples were sliced horizontally into desired vertical depths.Overlying water was removed with a siphon tube.However, again, due to the non-transparency of the core, it was difficult to confirm the position of the sediment-water interface through the tube.We therefore needed to illuminate the top of the core with a flashlight to confirm the position of the sediment-water interface so as not to suck off the fluffy surface sediments.Previous studies using plastic tubes for microplastic collections in the sediments trimmed the periphery of the sediment core attached to the tube as quantitatively as possible to avoid contamination .However the trimming of the peripheral part of the sliced sediments of widely-used cores can result in 43 % loss of the sediments.For deep-sea sediment where there is a limited chance for sampling and a low number of retrieved cores during scientific cruises and /or submersible dives, we recommend the use of a corer with an aluminum tube for sampling of microplastics.The sliced sediment samples were put into glass 
bottles, which were pre-combusted at 450 °C for 3 h, and then sealed with aluminum foil on top. All these samplings were carried out with blank bottles that were placed in the sampling area. To avoid microplastic contamination from the ambient laboratory environment, we used a clean bench and wore lab coats made of cotton with a static protection cover during experiments on the ship. If a clean bench is not available onboard, it is necessary to be extremely careful about contamination and to run several blanks as appropriate. To avoid plastic contamination from the core, we designed core samplers with aluminum sampling tubes and tested them to collect undisturbed sediment samples from the deep-sea floor. In addition, by attaching an aluminum head to the core extruder, we reduced the risk of microplastic contamination of the sediment samples. Although aluminum tubes have some disadvantages, such as heavier weight and non-transparency, this approach reduces contamination risks from plastics during sample collection and processing, and has advantages over trimming the core periphery after retrieving the sediment samples when using a plastic tube. Plastic products that end up in the environment eventually fragment into microplastics, particles 5 mm or less in size, by photo- and thermal degradation, as well as through physical abrasion due to wave action, collisions with sand particles, etc. Microplastics are ubiquitous across marine and freshwater environments, in the surface water, the water column and the sediments, with sediments thought to be possibly the biggest sink of plastics. Of the plastic that enters the ocean, some may be floating on the sea surface or dispersed elsewhere in the ocean, but where most of it has gone is still unknown. One possible destination of marine microplastics is bottom sediments. Seafloor sediments are sampled by inserting sampling tubes directly into the sediments. This is usually done by a scuba diver in shallow waters and, for the deep-sea floor, various types of sediment samplers can also be deployed from research vessels, including multiple corers, box corers, piston corers, gravity corers, Phleger corers, Ekman-Birge sediment samplers, and Smith-McIntyre bottom samplers. A multiple corer, or a push corer inserted by the manipulator arm of a remotely operated vehicle or a human occupied vehicle, can retrieve undisturbed sediment samples. Microplastic pollution has been reported from almost all marine environments. Even in pelagic waters, considerable amounts of microplastics have been detected, particularly in the subtropical gyres. There is less information on the distribution of microplastics in sediments, particularly in deep-sea sediments. Although the deep-sea floor is remote from human activities and has been thought to be less contaminated by microplastics, some studies have reported considerable amounts of microplastics in deep-sea sediments. Furthermore, deep-sea sediments are less disturbed than shallow waters by dredging, storm water discharge, landslides, and so on, and are thus expected to have the potential to be used for reconstructing the deposition histories of microplastics through time, based on precise age reconstructions using radionuclides. Sampling of deep-sea sediments requires sampling gears that are specifically designed for deep-sea areas, such as a multiple corer, a box corer, a grab corer, and/or a push corer operated by a HOV or a ROV. Among those sampling gears, both the multiple corer and the push corer are known to minimize the
disturbance of surface sediments during sediment sampling. Therefore, these two sampling methods are useful for studies on microplastic distribution in sediments. The above sampling gears use tubes made of plastic to collect sediments – typically either acrylic or polycarbonate. During both sampling at the deep-sea floor and sediment extrusion from the tube on board, these plastic tubes are scratched by hard sediment particles, particularly at sandy sediment sites. The scratching produces small plastic particles originating from the sampling tubes, leading to contamination of sediment samples. The plastic corers can still be used for other types of sampling. To collect an undisturbed sediment core, the use of a transparent plastic tube is advantageous. Undisturbed samples can also be used to clarify material cycling in the sediment surface and near-bottom layer. So that past and present environments recorded in the sediments can be inferred and sediment age can be correctly estimated, sediment samples need to be collected without disturbing the sedimentary structure. For example, analyses of the meiobenthos and microbes, and of their vertical distribution in seafloor sediments, can reveal seasonal fluctuations in the benthic environment. Furthermore, analyses of sediment physical properties allow estimation of the environment at the time of deposition and the sedimentation age. Although we know the material the plastic sampling tube is made of and can thus speculate that particular plastic materials in sediment samples may originate from the core tube, it is better to use non-plastic sampling gear.
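The trimming alternative mentioned above is costly: removing the periphery of each sliced layer to avoid tube-derived contamination can discard on the order of 43 % of the sediment. As a minimal illustration of how a loss of that magnitude arises, the sketch below computes the discarded fraction from the tube geometry; the inner diameter (82 mm) and trim width (10 mm) used here are illustrative assumptions, not values reported in this study.

def trimming_loss(inner_diameter_mm, trim_width_mm):
    """Fraction of a sliced core layer discarded when a ring of width
    trim_width_mm is trimmed from the periphery of a core with the given
    inner diameter (cross-sectional areas scale with diameter squared)."""
    kept_diameter = inner_diameter_mm - 2.0 * trim_width_mm
    return 1.0 - (kept_diameter / inner_diameter_mm) ** 2

# Illustrative values (assumed, not from this study): an 82 mm inner-diameter
# tube with a 10 mm peripheral trim on every slice.
loss = trimming_loss(inner_diameter_mm=82.0, trim_width_mm=10.0)
print(f"Sediment discarded by trimming: {loss:.0%}")  # approximately 43 %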
Microplastics are abundant even on the deep-sea floor, far from land and the ocean surface where human activities take place. To obtain samples of microplastics from the deep-sea floor, a research vessel and suitable sampling equipment, such as a multiple corer, a box corer, or a push corer manipulated by a remotely operated vehicle (ROV) or human occupied vehicle (HOV), are needed. Most such corers use sampling tubes made of plastic, such as polycarbonate, acrylic, or polyvinyl chloride. These plastic tubes are easily scratched by sediment particles, in particular during collection of coarse sandy sediments, and, consequently, the samples may become contaminated with plastic from the tube. Here, we report on the use of aluminum tubes with both a multiple corer and a push corer to prevent such plastic contamination. When compared with plastic tubes, aluminum tubes have the disadvantages of heavier weight and non-transparency. We suggest ways to overcome these problems, and we also present an onboard processing protocol to prevent plastic contamination during sediment core sampling when plastic tubes are used. Use of a sediment corer with aluminum tubes reduces the risk of plastic contamination in the sediment samples. The proposed method allows undisturbed sediment cores to be retrieved with comparable efficiency to conventional transparent core tubes.
118
Molecular computers for molecular robots as hybrid systems
Many of the biomolecules whose behaviors have been understood so far act as molecular devices with modular functionalities, including sensing and actuation.Typical examples of molecular devices are membrane receptors and protein motors.In addition to those derived from living cells, various artificial molecular devices have been developed to date.Many of them are made of nucleic acid strands: DNA and RNA.The next research step is to develop an integrated system from such molecular devices .We are interested in autonomous molecular systems that respond to their environment by observing the environment, making a decision based on the observation, and performing an appropriate action without explicit external control.We call such a molecular system a molecular robot.To repeat, a molecular robot reacts autonomously to its environment by observing the environment with its sensors, making decisions with its computers, and performing actions to the environment.Therefore, such molecular robots need intelligent controllers which can be implemented with general-purpose molecular computers.In addition, if a molecular robot consists of multiple molecular devices, it needs to have a structure that integrates them and separates them from their environment.According to current molecular technologies, we can imagine molecular robots enclosed by a vesicle or a gel.We are currently organizing the research project “Molecular Robotics” funded by MEXT, Japan, and have proposed to develop two kinds of molecular robot prototypes .One has been called the “amoeba robot” and the other, the “slime mold robot”.An amoeba robot, which we aim to create, is a vesicle enclosed by an artificial membrane made of lipids, called a liposome, and contains molecular devices, including chemical reaction circuits on the surface of, and inside, the membrane.Actuators of the robot are expected to change the shape of the liposome and eventually lead to locomotive actions.The robot may contain internal liposomes, which can ‘explode’ and emit molecules into the body.The robot itself can eventually ‘explode’ too.While the body of an amoeba robot is made of a liposome, that of a slime mold robot is made of a gel, which can also work as an actuator of the robot because crosslinks in a gel can be made of DNA and such a gel can shrink or swell.Molecular devices are immobilized within the gel and DNA circuits can work inside the gel.The gel is expected to shrink or swell according to signals emitted from the devices.In relation to slime mold robots, Hagiya et al. 
formulated a computational model, called “gellular automata”, in which cellular space is created by walls of gels .Each cell is surrounded by gel walls and contains a solution in which chemical reactions take place.They may produce molecules called decomposers and composers, which dissolve orconstruct a gel wall of the corresponding type.One can also imagine combinations of amoeba and slime mold robots.For example, one can introduce internal liposomes into a solution surrounded by gel walls.If an internal liposome explodes, the molecules emitted from the liposome may dissolve a gel wall, or shrink or swell it.Beyond amoeba and slime mold robots and their combinations, we can foresee future generations of molecular robots including multi-cellular robots and robots that are hybrids of molecular and electronic devices .Applications of such molecular robots are expected to include intelligent drug delivery, artificial internal organs, intelligent stents, contaminated soil cleaners, brain–machine interfaces, and eventually an artificial brain.For both the amoeba and slime mold robots, we are implementing molecular computers based on chemical reactions that process information from sensors and control actuators.In this paper, we examine possible approaches to making such intelligent molecular controllers for the amoeba and slime mold robots.In particular, we regard such controllers as hybrid systems because the environment, the robot, and the controller are all state transition systems, having both discrete and continuous aspects.For modeling and designing hybrid systems, formal frameworks such as hybrid automata are commonly used .In this paper, we examine how molecular controllers can be modeled as hybrid automata and how they can be realized in molecular robots.As an example, we examine how a timed automaton can be realized as a molecular controller.In this section, we examine the requirements for a molecular computer that controls a molecular robot.Such a computer receives inputs from the sensors of the molecular robot and is expected to send orders to the actuators of the robot.It controls the system, consisting of the robot and its environment, to preserve certain conditions.Reactiveness Just like an electronic computer that controls a mechanical robot, a molecular computer that controls a molecular robot should respond to external signals or changes in the environment by sending orders to the actuators of the robot.In short, a molecular computer should be a real-time reactive system.In particular, a molecular computer for a molecular robot should handle changes of inputs from the environment.We say that a molecular computer is time-responsive if, when inputs to the computer change after its initial computation, outputs are re-computed to reflect the new inputs .Molecular computers for molecular robots should thus be time-responsive.Statefulness Generally, outputs from a molecular computer may depend on the history of its inputs.If so, it should have states that store part of the history that is necessary for computing outputs, and change them in response to new inputs.As discussed below, there may be both discrete and continuous states.Hybridness Inputs from the sensors may be instantaneous signals from the external environment or may be continuous measurements of the environment.The former kind of input is called a discrete event.Photo-irradiation for a short period of time is a typical example.Another example is pouring a solution into the environment that leads to a discrete change in 
concentrations of some molecular species.A molecular robot may sometimes be required to invoke a discrete event.For example, to make a fast conformational change of an amoeba robot by protein motors, the concentration of ATP should change instantaneously to start the motors in a coordinated fashion.To invoke such discrete events, the computer should also be able to make discrete state transitions.The explosion of an internal liposome is a possible discrete state transition because it is a type of phase transition and occurs instantaneously when certain conditions of the membrane are satisfied .In short, the system consisting of the robot and the environment, including the computer controlling the robot, is a typical hybrid system in the sense that it may have both discrete and continuous states and make both discrete and continuous state transitions.Discrete transitions may change the differential equations governing the continuous temporal evolution of the system, and continuous changes of the system may accumulate to cause discrete transitions.The computer controls the combination of the environment and the robot body in terms of discrete and continuous inputs and outputs.Persistency Persistency is a prerequisite for reactiveness and statefulness.However, most of the existing frameworks of DNA computing do not satisfy reusability: i.e., the computational devices can be used only once.Thus, persistency requires molecular computers to have a continuous supply of new devices and other necessary resources, such as energy.If such a supply is not easy to provide, entire molecular computers should be refreshed from time to time so that used devices are replaced with new ones.In this case, states of molecular computers should be implemented as persistent memories in the sense of persistent objects that are kept even after they are rebooted.DNA logic circuits such as enzyme-free circuits, by Seelig et al., and DNAzyme-based circuits, by Stojanovic et al., are all stateless simply because they are combinatory circuits .They consist of non-reusable gates and, therefore, they are not time-responsive.Their inputs and outputs are supposed to be only discrete, i.e., 0 or 1.Logic gates implemented as seesaw gates, by Qian et al., are also stateless, not time-responsive, and only discrete .Hopfield networks implemented with seesaw gates are similarly not time-responsive because inputs cannot be changed after outputs have been computed for old inputs .The seesaw gates themselves are reversible, giving them the potential for time-responsiveness, but are used as irreversible components in logic gates or neural networks.Regarding circuits with states, Benenson et al. implemented finite-state automata using class IIS restriction enzymes .The states of the automata are only discrete and the number of states is restricted.They are not reactive in the strict sense because inputs to the automata are not in the form of external signals.The whiplash machines proposed by Hagiya et al. 
use hairpin conformations as states .They can reactively make state transitions if they receive displacing strands as inputs .However, state transitions are stochastic and next states are not unique in general.These machines are also implemented as single molecules, so they can only work in a very small compartment as a state machine.Although reusable gates and time-responsive circuits have been proposed in some previous studies, they had problems in implementation, such as scalability, time efficiency, and the need for enzymes .Actual implementation has only been reported by Genot et al. .Their circuits are time-responsive but are supposed to be only discrete.To change an input from 1 to 0, it is necessary to add a strand complementary to the old input.This will form a stable duplex that will accumulate over time.At some point, the concentration of waste will get so high that it might affect the system, requiring to be removed.If this action can be performed efficiently, the system will be rebooted."Another interesting aspect of Genot et al.'s reversible gate is that the implementation of different outputs will change the configuration of the gate, which can be considered as different states.However, the number of potential states is limited, and might be difficult to be extended.In principle, arbitrary chemical reaction networks can be implemented by strand displacement reactions of DNA .It is therefore possible to implement time-responsive circuits using DNA even though some strands are consumed in the course of computation and should be supplied continuously.One can also use the PEN toolbox to implement time-responsive circuits, although they are always dissipative .As a result, while PEN-toolbox systems can run for a long time, they will eventually run out of fuel, the dNTPs used by the polymerase to generate new strands.However, rebooting such system is simple, since it only involves providing a fresh supply of dNTPs.Alternatively, the system can be sustained if a steady stream of dNTP can be provided.Another issue is that with large molecular counts, concentrations of molecular species in a chemical reaction network evolve continuously.Thus, discrete transitions should be implemented by relatively fast reactions, and discrete values could be realized in terms of appropriate representations.In the following sections, we examine how these frameworks can be combined to implement hybrid molecular controllers.In particular, statefulness of the PEN toolbox and reactiveness of the seesaw gates are combined to implement discrete states and state-dependent manipulation of variables.As seen in the sections, implementing jump transitions is a big challenge.Based on this framework, a controller and its target are both modeled as hybrid automata.The entire system is then modeled as the parallel combination of those automata, which is also a hybrid automaton.If one automaton makes a jump with a shared event label, then the other should also make a jump with the same label.One automaton can therefore send a message, which the other receives synchronously.One can also make an internal discrete transition as a jump with a non-shared event label while the other makes no transition.Implicit communication between the target and the controller is also possible via a shared real-valued variable.One changes the shared variable and the other observes it.Let us give a concrete example of a molecular controller modeled as a hybrid automaton, motivated by the work of Azuma et al., who modeled chemotaxis 
controllers of bacteria .It is essentially a timed automaton except that it has a global variable that denotes the input from a sensor.In this section, we examine each component needed to design a timed automaton describing a molecular controller.We also propose one or more implementation strategies for those components.In particular, we primarily present implementations that are fully based on reaction networks, but also explore alternative strategies based on liposomes or gellular automata.The use of liposomes is elaborated in Appendix A.Chemical reactions in the environment and the controller define how real-valued inputs and local variables evolve continuously over time.In particular, chemical reactions in the controller may, for example, increase the value of a clock or copy an input to a local variable.In a hybrid system, flow transitions are defined for each mode of the system.Thus, chemical reactions in the controller should be constrained by the current mode of the controller.In addition to concentrations of molecular species, it must be noted that physical parameters of macroscopic structures also evolve continuously.Such flow transitions may also be constrained by the current mode of the controller.Workflow Starting with a given hybrid automaton, molecular implementation of its various components is done in the following fashion:Identify the inputs from the environment.Since those are often fixed, they may force specific implementation strategies for the other elements.Design the modes of the controller.As with inputs, the way states are implemented will impact the implementation of transitions.Design the transitions.This has three aspects.First, it requires the implementation of local variables, such as a clock.Next, a way to check the jump condition needs to be implemented.Finally, an update mechanism has to modify the mode of the controller when the condition is verified.Various implementation strategies for each of those elements are presented in the subsequent paragraphs of this section.Real-valued input from the environment The concentration of a molecular species shared by the environment and the molecular robot is considered a real-valued input from the environment, and is observed by a sensor of the robot and referred to by its controller.For example, a membrane channel of a liposome selectively passes the molecular species, or a gel wall allows it to diffuse freely.Compared with the molecular robot, the environment can be regarded as a reservoir of that molecular species, the concentration of which can be changed in the environment but is practically clamped at a fixed level for some duration of time.Note that this molecular species might not be usable in its initial form.We assume, however, that the molecular robot contains a transducer, such as an aptamer-based sensor, that can convert it to a usable signal.The full description of this mechanism is beyond the scope of the present article.Boolean local variable representing a mode of the controller Some local variables are Boolean in the sense that they have only two values: a non-zero value and zero, or high and low values.Such a variable typically corresponds to each mode.If the controller is in a mode, its corresponding variable is non-zero and variables corresponding to other modes are zero.Those variables are also naturally represented by a concentration of a molecular species in the controller.Such Boolean variables have been realized in bistable or multi-stable circuits implemented with the PEN toolbox .This 
framework uses a polymerase to generate the output species from a template that hybridizes with the input species. The output is then released into the system due to the combined action of a nicking enzyme and the strand displacement activity of the polymerase. This mechanism has been shown to perform isothermal amplification and can even be networked by using outputs as triggers for other templates. The amplification can be inhibited by a molecular species that hybridizes with the middle of the template but does not allow polymerization due to a mismatch at its 3′-end. Thus, it is possible to construct a bistable circuit as in Fig. 4. With the PEN toolbox, all molecular species except those for templates are degraded by an exonuclease so that concentrations of those species that are not amplified eventually converge to zero. Thus, in Fig. 4 the concentrations of A and B can be regarded as Boolean variables, each representing a state of the circuit. The existence or non-existence of a macroscopic structure can also be used as a Boolean variable. Examples are the open/closed state of a liposome and the existence/non-existence of a gel wall. When a liposome explodes, the controller usually enters a new mode. The conformation of a single molecule can also be considered as a Boolean or discrete variable. In a whiplash machine, for example, a single-stranded DNA molecule can take one of several hairpin forms. Each form is considered as a discrete state of the molecule and can emit different DNA molecules by combination of a polymerase and a nicking enzyme. Real-valued variable local to the controller In addition to the molecular species shared by the environment, the molecular controller can have its own molecular species, the concentrations of which evolve continuously according to chemical reactions that are local to the controller. Thus, the concentration of such a molecular species is considered as a real-valued local variable of the controller. The total concentration of different species can also be used. Such a local variable is typically used as a clock or as a record of a past input from the environment to the robot. Another type of local variable evolution can be realized by using a PEN-toolbox activation module and exonuclease to increase and decrease continuously the concentration of a chemical species, respectively. Additionally, a circuit called a delay gate can be used specifically to implement clocks. Finally, a physical parameter of a macroscopic structure in the robot is also considered a real-valued local variable. Examples are the stability of a lipid membrane, which is usually determined by the concentration of a certain molecule on the membrane, and the width of a gel wall. Jump condition Referring to inputs and local variables, the controller makes decisions for triggering jump transitions and changing its current mode. There are various kinds of conditions for the controller to make decisions. Typical examples are comparisons on inputs, such as comparisons between an input and a threshold, between two inputs, and between an input and a local variable. Other examples are comparisons of local variables, such as comparisons between a local variable and a threshold. More concretely, the controller should be able to check when a clock reaches its timer. All such comparisons may be constrained by the current mode of the controller. Generally, more complex conditions combining comparisons with arithmetic and Boolean operators may be necessary. They require analog computation. Jump by an external
signal In a parallel composition of hybrid automata, a shared transition label allows a synchronous jump transition in the component automata.In the case of molecular systems, such transitions can be made by external signals.For example, photo-irradiation may change the conformation of a specific molecular group, such as azobenzene, instantaneously.If the molecular group is shared by the environment and the robot, photo-irradiation can cause a synchronous jump transition.Addition of a solution, such as NaOH, to the entire system is another external signal.It may instantaneously change the concentration of a molecular species and result in a discrete change of a certain solution condition.Some external signals can also be given in terms of construction or destruction of a macroscopic structure.For example, if a pore is made in a gel wall, some molecules may flow into the robot.Jump by the controller The controller should also be able to trigger jump transitions autonomously.First, it should be able to change values of its local variables.Examples of such jump transitions are resetting a local variable to zero, setting a local variable to a constant value, and copying an input to a local variable.The controller should also be able to start new flows by a jump transition.Finally, the controller should be able to perform actions and, as discussed in a previous section, some actions can or should be performed by jump transitions.Examples are emitting molecules outside, importing molecules from the environment, changing a macroscopic shape, and supplying energy and reactants to refresh or reboot the controller.Realization of a jump by the controller How to trigger a jump transition according to a jump condition seems to be one of the most difficult challenges in making a molecular controller.The following three methods can be considered.One is to make use of ultrafast chemical reactions in the controller, which lead to a steady state quickly.Sharp transitions can be achieved by modifying the PEN toolbox through reactions with a high Hill coefficient.While this functionality is not part of the regular PEN toolbox, we can add it by using modular primers .Modular primers are very short successive DNA strands that form, through base-stacking, a structure stable enough to trigger polymerization.Kotler et al. showed that this method worked well with the Bst polymerase, the enzyme used by Padirac et al. to implement their bistable circuit .By extension, this is an encouraging sign with respect to the feasibility of high-Hill-coefficient inhibition.Although this implementation is only an approximation of a timed automaton because the transitions are never instantaneous, we can consider them to be fast enough, compared with the other time constants of the system, for the transition to have little impact on the overall behavior.Fig. 
6, right, shows that for a Hill coefficient of 3 or 4, there is negligible input for low concentration of trigger and almost maximum activity for medium concentration of trigger.Combined with some threshold mechanism, this approach allows us an almost “all or nothing” transition based on trigger concentration.The third is to use a single-molecule conformational change.This is a stochastic, but discrete, event.An example is a conformational change in a hairpin DNA molecule, which then takes another hairpin form.In this section, we present a theoretical implementation of the hybrid controller from Section 4, based strictly on DNA and enzyme reaction networks.Both the PEN toolbox and the seesaw gate can be used to design more complex systems.They have different advantages and drawbacks, based on their respective implementation.Thus, it makes sense to use those two paradigms together, getting the most out of each .This approach, of course, requires the two paradigms to be compatible.In fact, compatibility is readily achieved, so long as the following points are kept in mind during sequence design.DNA strands used in the seesaw gate should be protected against the action of enzymes.Strands should have a phosphorothioate modification on the three bases at their 5′-end or a hairpin to prevent digestion by the exonuclease.The hairpin solution, if applicable, is preferable, because it does not form a complex with the enzyme, so its activity is not decreased artificially.DNA sequences should not contain the nickase recognition site.DNA strands should be prevented from acting as a trigger for the polymerase.This can achieved by adding a phosphor modification at the 3′-end.Conversely, PEN-toolbox strands should be designed carefully so as not to interfere with the working of the seesaw gate.Specifically, strands should not contain the toehold sequence used by the seesaw gate.Unwanted invasion would create additional delay in the seesaw gate, which should be avoided.Input signal The input signal is considered to be generated from the sensor part of the molecular robot.As such, we assume that it is a DNA species with a sequence that we can choose freely.The actual implementation of the sensor is beyond the scope of this paper.Implementing the states of the controller Following the discussion of the previous section, the two states of the controller are implemented by a PEN-toolbox bistable circuit .Species A represents the forward state while species B represents the tumbling state."Based on Padirac et al.'s analysis of the switch, we have a short incoherent time when both species are present while the switch transitions from one state to another.However, this incoherent time is short compared to the other time scales of the system, and can be considered a sort of leak.Moreover, even if the state was implemented by the conformation of a single molecule, there would still be a similar lag in the concentration of downstream molecules.As such, we can consider that this bistable circuit is a valid implementation of the two states.Alternatively, the clock resetting the system to the forward state can be built into the circuit by making the B state weaker and thus unstable.This instability means that the circuit, if it switches to the B state, will eventually go back to the A state.This circuit is actually monostable and cannot be used to implement multiple clocks.Furthermore, the timer value is dependent on other variable elements from the system, such as the degradation rate of DNA signal, making the clock 
a logical clock, but not a wall clock.For the sake of generality, we therefore choose to use an explicit species, C, to implement the clock as above.Additionally, explicit ways to implement logical clocks exist in DNA computing by using, for example, a dedicated circuit to capture and degrade a clock species .Implementing non-clock transitions The implementation of the seesawing between X and R, mediated by B, is fairly straightforward.When the concentration of R is higher than that of X, the A state starts to become inhibited.When there is an excess of R high enough, the bistable circuit will switch.The 3′-end of R is used as input for an activation template, generating actR, a modular input that causes a sharp transition.Since the 3′-end of X is almost the same as that of R by definition, it will act naturally as a competitive inhibitor.X has a similar transduction step, generating actX, which has the same sequence as that of actR, but with a 2-nucleotide rotation, so that the last two nucleotides in actX are the first two nucleotides in actR.Using this strategy, actX can hybridize in a modular way to the inhibition template, however its last two bases will be mismatched, preventing it from activating the template.Additionally, the shift prevents any cooperative binding between actR and actX."The threshold α for the transition is thus encoded in the respective stabilities of actR and actX on the inhibiting template, as well as in the template's concentration.A derivation of α based on those parameters is described in Supplementary Information S2.Domain-level implementation A domain-level implementation of our controller is shown in Fig. 8.The A state and the clock transition are implemented as pure PEN-toolbox elements.The seesaw gate is also similar to Qian and Winfree , with the distinction that R has no extra domain, and X only has a two-nucleotide mismatch, preventing its extension on the RinhAA template.Finally, B is a hybrid species between the seesaw gate and the PEN toolbox.Its 3′-domain hybridizes with a PEN-toolbox template, and has the same length as that of a usual PEN-toolbox species.Its 5′-end is, however, used for the seesawing operation of the seesaw gate."Since both sides are limited to have no interactions, it is expected that they will not impact each other's efficiency.The major issue is on inhibiting the template producing B.Due to the length and stability of B, the output domain is expected to be double-stranded, B being only freed by the strand-displacement activity of the polymerase.In this case, a traditional PEN-toolbox inhibitor will be unable to efficiently invade the template, leaving it mostly unaffected.We solve this problem by adding an inhibitor toehold at the 3′-end of the template, giving the inhibitor the same stability as in the design of Padirac et al. .The way the species interact together is summarized in Figs. 9 and 10.Results for the case when the input X is given by the sum of a sinusoidal and affine function, are shown in Fig. 
11.This corresponds to a virtual decrease of nutrient in the environment over time.Such function is a toy model of the sensing measured when tumbling and consuming nutrient.A more accurate function would require a complete modeling of the molecular robot, which is beyond the scope of this article.As mentioned before, the robot has two states: A and B.We start here in the B state, which allows the system to set R to the current value of X.The timer then goes off, switching back to the A state.Eventually, the concentration of X reaches 0 since concentrations cannot be negative.After this point, no transition can happen anymore, since R is below the transition threshold.We can thus see that the controller is behaving as expected.In this perspective paper, after defining molecular robots as autonomous systems consisting of molecular devices and introducing the molecular robots being developed in the research project “Molecular Robotics” we examined how controllers for molecular robots can be modeled and implemented as molecular computers.We first summarized the requirements for molecular computers for those molecular robots by pointing out that molecular computers are hybrid controllers, and examined existing frameworks of DNA computing from the point of implementing hybrid controllers.Notice that hybrid or timed automata have been studied in computer science for a long time and have nice properties that make their analysis and verification efficient.We then described how to implement a molecular controller by combining existing frameworks of DNA computing.We first proposed a general workflow for implementing various components of a given hybrid automaton that defines behaviors of a target system and combining them.We then demonstrated the workflow by implementing the chemotaxis controller introduced by Azuma et al. .A next step would be to combine such a controller with an accurate model of the actuators.Through this modeling, we could check that the circuit design functions correctly under the given assumptions.More detailed models, including leaks, could be generated automatically, following an appropriate approach .Implementation of the chemotaxis controller revealed some problems of existing frameworks of DNA computing.For example, state transitions are not so sharp as expected for hybrid automata, and jump conditions require high Hill coefficients.In Appendix A we propose the use of internal liposomes that can explode to solve these problems, and also discuss how to implement timed automata.We recognize that even with internal liposomes, formulating how to build a general-case hybrid controller is still beyond the scope of this paper.Let us finally emphasize that, as with other kinds of machines, the framework of hybrid systems is appropriate for designing and verifying molecular robots and their controllers.It is therefore important to develop general methods for implementing hybrid systems in terms of chemical reactions and physical phenomena that can be implemented in molecular robots.To achieve this goal, it may be beneficial to combine multiple approaches to develop molecular robots, such as those of whiplash machines, dynamic systems, and strand displacement reactions .
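As a behavioral summary of the controller described above, the following sketch simulates the two-mode timed automaton in the abstract, i.e., the hybrid-automaton view rather than the DNA-level implementation. Mode A is the forward (run) state and mode B the tumbling state; a clock with timer tau governs the return from B to A, a reference R records the input sensed while tumbling, and the jump from A to B fires when the input X falls below a fraction alpha of R. The input X(t), a decreasing affine trend plus a sinusoid, and all parameter values (alpha, tau, the simulation horizon) are illustrative assumptions.

import math

# Minimal timed-automaton abstraction of the run-and-tumble controller
# (behavioral sketch only; the DNA-level design is described in the text).
ALPHA = 0.8     # jump threshold: tumble when X < ALPHA * R (assumed value)
TAU   = 5.0     # timer length of the tumbling mode (arbitrary time units)
DT    = 0.01    # integration step

def sensed_input(t):
    """Toy input: decreasing affine trend plus a sinusoid, clamped at 0."""
    return max(0.0, 10.0 - 0.05 * t + 1.5 * math.sin(0.3 * t))

mode, clock, R = "B", 0.0, 0.0   # start in the tumbling mode, as in the text
events = []
t = 0.0
while t < 200.0:
    x = sensed_input(t)
    if mode == "B":
        R = x                       # flow: copy the current input to R
        clock += DT                 # flow: the clock increases at rate 1
        if clock >= TAU:            # jump condition: timer expired
            mode, clock = "A", 0.0  # jump: switch to forward mode, reset clock
            events.append((round(t, 2), "B -> A"))
    else:  # mode "A": run forward with R frozen
        if x < ALPHA * R:           # jump condition: input fell below threshold
            mode = "B"              # jump: start tumbling again
            events.append((round(t, 2), "A -> B"))
    t += DT

print(events[:10])

Consistent with the behavior described for the DNA circuit, once X is clamped at zero and R has been copied from a near-zero input, the jump condition can no longer be satisfied and no further transitions occur.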
Various artificial molecular devices, including some made of DNA or RNA, have been developed to date. The next step in this area of research is to develop an integrated system from such molecular devices. A molecular robot consists of sensors, computers, and actuators, all made of molecular devices, and reacts autonomously to its environment by observing the environment, making decisions with its computers, and performing actions upon the environment. Molecular computers should thus be the intelligent controllers of such molecular robots. Such controllers can naturally be regarded as hybrid systems because the environment, the robot, and the controller are all state transition systems having discrete and continuous states and transitions. For modeling and designing hybrid systems, formal frameworks, such as hybrid automata, are commonly used. In this perspective paper, we examine how molecular controllers can be modeled as hybrid automata and how they can be realized in a molecular robot. We first summarize the requirements for such molecular controllers and examine existing frameworks of DNA computing with respect to these requirements. We then show the possibility of combining existing frameworks of DNA computing to implement a sample hybrid controller for a molecular robot.
119
Life cycle environmental impacts of vacuum cleaners and the effects of European regulation
Several studies have assessed life cycle environmental impacts of different electrical appliances and electronic products.The former include refrigerators, dishwashers, ovens and washing machines.The impacts of electronic products studied in the literature include plasma TVs, computers and monitors, mobile phones and e-books.However, the analysis of the life cycle environmental performance of vacuum cleaners has received little attention in literature with few studies available.For example, Lenau and Bey and Hur et al. proposed and tested different simplified life cycle assessment methodologies using semi-quantitative inventory data of unspecified models of vacuum cleaners as examples.In both studies, the main objective was to compare the proposed methodologies and, therefore, these studies did not provide specific conclusions related to the environmental performance of these devices.A screening LCA on vacuum cleaners was also performed as part of preparatory documents for the development of the European Union eco-design regulation for vacuum cleaners.The main objective of this study was to assess the environmental performance of different types of vacuum cleaner and to identify improvement opportunities.The study was performed with aggregated inventory data provided by manufacturers and specific data obtained by disassembly of certain elements.Only three environmental impacts were considered, in addition to some air emissions and heavy metals which were estimated at the inventory level only.The findings indicated that use of vacuum cleaners was the main hotspot and identified various alternatives to improve their energy efficiency and cleaning performance.However, as far as we are aware, a comprehensive LCA study of vacuum cleaners has not been carried out yet so that it is not known how the use stage affects other impacts and how much the other life cycle stages, such as raw materials and waste management, contribute to these.This is particularly pertinent in light of the European strategy on circular economy which promotes resource efficiency and waste minimisation.The power rating of vacuum cleaners has increased markedly since the 1960s, from 500 W to over 2500 W, persuading consumers that a “powerful” cleaner will perform better.However, higher power does not necessarily lead to a better cleaning performance but does mean a lower energy efficiency, which dropped from 30–35% in the 1970s to below 25% in recent years without noticeable improvements in the cleaning performance.More than 200 million of domestic vacuum cleaners are currently in use in the European Union, with around 45 million sold annually and a market growth of 9% per year.The average annual electricity consumption by these devices in the EU was estimated at 18.5 TWh in 2010, representing 0.6% of the total EU consumption and equivalent to the annual electricity generation by five gas power plants.Vacuum cleaners, therefore, represent an important area of action to help reduce the environmental impacts from households.For that reason, the European Commission has recently developed an eco-design regulation to encourage manufacturers to produce more energy efficient vacuum cleaners without compromising product performance and economic feasibility.The regulation considers only most widely used domestic vacuum cleaners and excludes others, such as dry and/or wet, robot, battery-operated and floor-polishing devices.Another important aspect considered by the EC to improve the environmental performance of household appliances 
is the recycling of waste electrical and electronic equipment.The generation and treatment of WEEE is currently a rapidly growing environmental problem in many parts of the world.As the market continues to expand and product innovation cycles become shorter, the replacement of electric and electronic equipment accelerates, making electronic devices a fast-growing source of waste.To help address this problem, the WEEE directive aims to prevent, re-use, recycle and/or recover these types of waste.It also seeks to improve the environmental performance of all players involved in the life cycle of electrical and electronic equipment and, in particular, those involved directly in the collection and treatment of WEEE.Waste prevention and minimisation also contribute directly to improving resource efficiency which is at the core of the European 2020 strategy for creating a smart, sustainable and inclusive economy.Therefore, considering these policy drivers, the main objectives of this study are:to evaluate life cycle environmental impacts of vacuum cleaners and identify opportunities for improvements; and,to assess the effects at the EU level of the implementation of the eco-design and WEEE regulations related to vacuum cleaners and provide recommendations for future research and policy.As far as we are aware, this is the first study of its kind for vacuum cleaners internationally.The LCA study has been conducted according to the guidelines in ISO 14040/44, following the attributional approach.The assumptions and data are detailed in the following sections, first for the reference vacuum cleaner considered here and then for the study at the EU level.Vacuum cleaners can be broadly categorised as upright and canister/cylinder type.In the upright design, the cleaning head is permanently connected to the cleaning housing, whereas in the cylinder type, the cleaning head is separated from the vacuum cleaner body, usually by means of a flexible hose.Cylinder vacuum cleaners represent 85% of the European market.Vacuum cleaners can have a disposable bag or they can be bagless with a reusable dust container.The European market shows an upward consumer trend towards low-cost bagless vacuum cleaners with high power rating, most of which are produced in China.Therefore, this study focuses on a conventional 1400 W bagless vacuum cleaner, which is representative of the European market for these products.The scope of the study is from ‘cradle to grave’, with the following stages considered:metals: galvanized and stainless steel, aluminium, brass and copper;,plastics: polyvinyl chloride, polypropylene, acrylonitrile butadiene styrene, polyoxymethylene, high density polyethylene and ethylene vinyl acetate; and,production: metal stamping and plastic moulding, production of internal cables, power cord and plug, screen printing, product assembly and packaging;,use: consumption of electricity and replacement of filters;,end of life: disposal of post-consumer waste; and,transport: raw materials and packaging to the production factory, vacuum cleaner to retailer, end-of-life waste to waste management facility.Consumer transport to and from retailer is not considered because of a large uncertainty related to consumer behaviour and allocation of impacts to a vacuum cleaner relative to other items purchased at the same time.A sensitivity analysis showed that this assumption is robust as the contribution of consumer transport to the total impacts of a vacuum cleaner is negligible.The functional unit is defined as the ‘use of 
the vacuum cleaner for 50 h/year over a period of eight years to clean a typical European household’.This definition is based on the specifications provided by the eco-design regulation to analyse the energy consumption of vacuum cleaners.In order to assess the effects of the application of the eco-design and WEEE regulations, the environmental impacts are calculated by considering the total number of vacuum cleaners used in the EU28 over a year.The inventory data for the vacuum cleaner are detailed in Table 1 with an overview of the dismantled product displayed in Fig. 2.The contribution of the different components to the total weight correspond well with the average values provided by the AEA for vacuum cleaners sold and used in the EU.Primary production data, including the amount and type of raw and packaging materials and a detailed description of the vacuum cleaner production, have been obtained from a major vacuum cleaner producer.Background data have been sourced from the Ecoinvent v2.2 database.Open literature sources and GaBi database have been used to fill data gaps if data were not available in Ecoinvent.Further details on the inventory data are provided in the next sections.The dismantled components of the main body and accessories show in Fig. 2 have been individually weighed to estimate the material composition of the vacuum cleaner.The total weight of the main body is 3.5 kg.The metallic components are made of aluminium, stainless and galvanized steel, brass and copper.The materials used for the plastic components of the main body are PP, ABS, HDPE and PVC.Accessories are made of EVA, PP and HDPE.Ecoinvent life cycle data have been used for these components, except for the production of POM, for which the data were not available and have been sourced from Plastics Europe.As the materials are produced in China, Ecoinvent data for the electricity grid in China have been used for all production processes.Injection moulding is assumed for the production of different plastic parts.Steel stamping and aluminium cold impact extrusion are assumed for the production of steel and aluminium components, respectively.As no specific processes were available for copper shaping, Ecoinvent data for generic metal shaping have been considered.The primary packaging of the vacuum cleaner consists of a folding cardboard box, two cardboard trays, one interior protective cardboard and several plastic bags to protect the accessories.Inventory data for the board folding and production of the box with offset printing have been used for the folding box.The trays and cardboard have been modelled using data for corrugated board and PE film has been used for the plastic bags.Data from Ecoinvent have been used for all packaging.The following assumptions have been made to fill data gaps and adapt some datasets:For power cord and plug, Ecoinvent data for production of the computer cable have been adapted.The power cord H05VVH2-F 2 × 0.75 mm2 has been assumed, with a weight of 14 g/m of copper instead of the 19.5 g/m in a regular computer cable, based on own measurements.For the plug, the Ecoinvent dataset has been modified to reflect the actual mass of different materials.Data for electricity and water consumption for vacuum cleaner assembly and packaging have been obtained from AEA.Based on an average use of 50 h/year and an expected life of eight years, the estimated electricity consumption during the lifetime of the vacuum cleaner with the power rating of 1400 W is equal to 560 kWh.The inventory data for 
electricity generation for the EU28 have been obtained from the European Network of Transmission System Operators for Electricity to model the EU electricity mix; see Table S1 in Supporting information.The data correspond to the year 2013, which is considered here as the base year.For filter replacements, it has been assumed that air filter is changed once every two years, as recommended by manufacturers.The following assumptions have been made for the end-of-life stage for the main body of the vacuum cleaner:For metal waste, 95% recycling rate has been assumed and Ecoinvent data have been used to model the recycling process.The system has been credited for recycling by subtracting the equivalent environmental impacts of virgin metals, while also including the impacts from the recycling process.Based on the ‘net scrap’ approach, the credits have only been considered for the percentage of recycled metals that exceeds the recycled content in the original raw material.For example, copper is made up of 44% recycled and 56% virgin metal so that the system has been credited for recycling 51% of copper at the end of life.A similar approach has been applied for steel and aluminium.In the case of plastics, plastic disposal data for Europe in 2012 have been assumed: 26% recycling, 36% incineration with energy recovery and 38% landfilling.The system has been credited for recycled materials using the approach described above.For recycling, data from Schmidt have been modelled, but using the EU28 electricity mix in 2013.The same electricity mix has been used to credit the avoided impacts for the recovered electricity from incineration of plastic waste.The GaBi database has been used for these purposes since these data are not available in Ecoinvent.All polyethylene bags are assumed to be landfilled, using life cycle inventory data from Ecoinvent.For the cardboard packaging, the latest available packaging disposal data have been assumed as follows: 84% recycling, 7% incineration with energy recovery and 9% landfilling.Since the cardboard packaging is mostly made from recycled material, no credits for the avoided material have been considered.The WEEE directive established that for small household appliances, including vacuum cleaners, the rate of recovery should be 70% and at least 50% of the weight of appliance shall be recycled.Considering the assumptions made in this study, 73% of the weight of the vacuum cleaner is recovered at the end of life, which is in compliance with the directive.The transport details can be found in Table 1.If not specified in databases, the raw materials and the packaging are assumed to travel for a distance of 150 km to the factory in 16–32 t Euro 3 truck.After the production and packaging in China, the vacuum cleaner is shipped to Europe.The transport distances have been estimated considering shipping by a transoceanic tanker between major container ports in China and Europe and then road transport by a 16–32 t Euro 5 truck to a distribution centre in Munich, representing geographically a central point of the EU.Generic distances of 150 km have been used for transport from production factory to port of Shanghai and from distribution centre to retailer.For the transportation of waste to final disposal, a distance of 50 km and 100 km in a 16–32 t Euro 5 truck have been assumed.The Ecoinvent database has been used for the background transport data.This section gives an overview of the assumptions for the study at the EU28 level, carried out to evaluate possible implications 
of the implementation of the eco-design regulation and the WEEE directive.Two timelines are considered for these purposes: i) current situation and ii) a future scenario for the year 2020.In both cases, the impacts have been estimated for all vacuum cleaners in use in the EU28 countries over a year.As shown in Table 2, a number of different parameters have been considered, including power ratings and end-of-life options for vacuum cleaners as they are affected by both regulations.Furthermore, since electricity consumption is the main hotspot in the life cycle of vacuum cleaners, different electricity mixes are considered, assuming a lower carbon intensity in 2020 than at present.Finally, a different number of vacuum cleaners in use have also been considered.These assumptions are explained in more detail in the next two sections.The following assumptions have been made for the key parameters for the current situation:Average power rating: according to the European Commission, the average power rating of vacuum cleaners in the EU was 1739 W in 2010, with an average annual increase of 2.5%.Thus, the average power rating of 1873 W has been assumed for 2013.Electricity mix: 2013 electricity mix for the EU28 countries as described in Section 2.1.2.2 has been considered.Number of vacuum cleaners in use: based on the number of households in the EU28 countries in 2013 and assuming one vacuum cleaner per household, it is estimated that 213.84 million cleaners were in use that year.End-of-life disposal: waste options described in Section 2.1.2.3 have been used.By 2020, both the eco-design and the WEEE regulations will be fully implemented, therefore, the future scenario has been modelled for the year 2020.The key parameters for this scenario are estimated as below:Average power rating: The eco-design regulation establishes that new vacuum cleaners marketed after 1 September 2014 should have a power rating below 1600 W and those marketed after 1 September of 2017 should be below 900 W. 
Given that the mix of new and old vacuum cleaners will be in use in 2020, an average power rating of 1060 W has been assumed, based on the following: i) an average power value of 1400 W for vacuum cleaners produced between 2014 and 2016 and 800 W for 2017–2020; ii) an expected lifespan of eight years, equivalent to the annual substitution rate of 12.5% and iii) 9% increase in sales each year.Electricity mix: the most feasible scenario for decarbonising the EU28 electricity mix by 2020 projected by ENTSO-E has been considered.Number of vacuum cleaners in use: to estimate the total amount of vacuum cleaners in use in 2020, the same approach has been used as for the current situation.Considering the number of households in the EU28 countries in 2013 and 1% annual growth, it is estimated that 229.26 million of vacuum cleaners will be in use in 2020.End-of-life disposal: If the WEEE directive is fully implemented by 2020, small appliances such as vacuum cleaners should have a rate of recovery of 80% and, at least, 70% shall be recycled.To achieve these rates, and considering that the actual recycling ratio of metals is already high, future changes in recovery and recycling will be focused mainly on plastic components.Thus, to achieve the above rates, a minimum of 57% of plastics must be recycled and 15% incinerated with energy recovery, with the rest being landfilled.These rates have been applied to the plastic materials making up the main body of the vacuum cleaner.Note that the WEEE directive affects only the electronic section of products, hence the consideration of the plastics in the main body.For the other parts, no changes in waste treatment have been considered because of the lack of specific data or regulation on recycling.To evaluate the effects of the WEEE directive, a scenario with 100% landfilling of the main body and accessories has been used as a reference case for comparison.The other assumptions for the current situation and the future scenario are as follows:The same inventory data for the raw materials, packaging, production and transport presented in Section 2.1.2 have been used for both the current situation and the future scenario.The vacuum cleaner is representative of the actual European vacuum cleaners and, therefore, its inventory data are considered to be accurate for the current situation.However, development of new technologies to achieve the same cleaning efficiencies with less power can have environmental implications, especially in the manufacturing of future vacuum cleaner models.These developments do not necessarily imply an increase of environmental impacts, because, for example, a tendency to use less material has been observed in recently developed vacuum cleaners according to new regulations.Taking this tendency into account, the assumption made here for the same amount of materials used in the future as today can be considered as a worst case scenario.For both the current situation and the future scenario, the lifetime of eight years has been assumed for the vacuum cleaners.However, the European Commission stated that it is possible that the life expectancy of the average vacuum cleaner in 2020 would be reduced to five years.The effect of this assumption has been tested through a sensitivity analysis.For end of life, the same assumptions have been made for all vacuum cleaners.However, it can be argued that the recyclability can vary from one model to another.To explore the significance of this assumption, a sensitivity analysis considers an extreme 
situation with all the waste landfilled by 2020.The EU average electricity mix has been considered in the analysis; however, the electricity profile can vary highly from one country to another.Therefore, a sensitivity analysis has been conducted considering country-specific electricity mixes in 2020 for different countries.GaBi 6.5 software has been used to model the system and the CML 2001 mid-point impact assessment method has been applied to calculate the environmental impacts of the vacuum cleaner.The following impacts are considered: abiotic depletion potential of elements, abiotic depletion potential of fossil resources, acidification potential, eutrophication potential, global warming potential, human toxicity potential, marine aquatic ecotoxicity potential, freshwater aquatic ecotoxicity potential, ozone depletion potential, photochemical oxidants creation potential and terrestrial ecotoxicity potential.In addition to the CML impact categories, primary energy demand has also been calculated.The results are first discussed for the reference vacuum cleaner, followed by the impacts at the EU28 level.The environmental impacts of the reference vacuum cleaner and the contribution of different life cycle stages are shown in Fig. 4 and Table S2, respectively.For example, over its lifetime the vacuum cleaner will use 7.5 GJ of primary energy and emit 312 kg CO2 eq. The use stage is the major contributor to most impact categories, except for the ADPelements and HTP to which it contributes 62% and 68%, respectively.The impacts are almost entirely due to the consumption of electricity by the vacuum cleaner, partly because of the high energy usage over its lifetime and partly because of the dominance of fossil fuels in the electricity mix, particularly coal which is the main contributor to the ADPfossil, AP, GWP, EP, FAETP, HTP, MAETP and POCP.As can also be seen in Fig. 
4, the raw materials used to manufacture the vacuum cleaner are important contributors to the ADPelements, HTP and TETP. Most of these impacts are associated with the use of copper and stainless steel. The manufacturing process for vacuum cleaners makes a significant contribution only to the ODP, mainly owing to the emissions of halogenated organic compounds from the solvents used in the moulding of plastic components. The contribution of manufacturing to some other categories, such as the AP, POCP and GWP, is relatively small and even lower for the rest of the impacts. Transportation has no significant influence on any of the categories considered, with contributions below 1.5% for all indicators. The end-of-life disposal of the vacuum cleaner has an overall positive effect on the impacts because of the recycling credits, particularly for copper and stainless steel, which affect the ADPelements, HTP and TETP most significantly. The potential environmental impacts of vacuum cleaners in 2013 and 2020 across the EU28 are compared in Table 3. The results for the future scenario also show the variations related to the implementation of the eco-design and WEEE regulations, the assumed changes in the European electricity mix and the projected increase in the number of units in use. As can be seen from Table 3, all the impacts would decrease by 2020 under the assumptions made here. The only exception to this is the ADPelements, which is expected to increase by around 4%. Even though the implementation of the eco-design regulation will result in a significant reduction of this impact, this is offset by the increase associated with the change in the electricity mix and the expected rise in the number of vacuum cleaners in 2020. The expected improvements to the ADPelements related to the WEEE directive are negligible. The increase in the impact associated with the electricity mix change is mainly due to the assumed increase in the number of solar photovoltaic panels, which are manufactured using scarce elements such as tellurium and silver. As also indicated in Table 3, the other environmental impacts would be reduced by 20%–57% in 2020 as compared to 2013. For example, the GWP is expected to be 44% lower than at present, saving 4824 kt CO2 eq./year. To put these results in context, based on the data from the EU Joint Research Centre, this reduction in CO2 eq.
is equivalent to the annual GHG emissions of Bahamas and 2.5 times the emissions of Malta.This is mainly due to the improvements in the energy efficiency related to the implementation of the eco-design regulation and to a lesser extent due to the decarbonisation of the electricity mix.Compared to these reductions, the decrease in the GWP associated with the implementation of the WEEE directive is small.On the other hand, the expected rise in the number of vacuum cleaners would increase the GWP by 784 kt CO2 eq./year.Therefore, the implementation of the eco-design regulation and decarbonisation of the EU electricity mix could compensate the annual GHG emissions of a whole country like Bahamas.Similar to the GWP, the most substantial reduction in the other impacts is due to the improvements in the energy efficiency of vacuum cleaners.Further notable improvements would be achieved through the decrease in the share of coal and increase of renewable electricity in the European electricity mix by 2020, particularly for the AP, EP, FAETP and MAETP.However, as shown in Table S3 in Supporting information, the change in the electricity mix would increase the ODP of electricity generation by 24%.This is due to the higher use of natural gas.Therefore, the expected reduction for this impact in 2020 is lower than for the others.The reduction in the TETP is also lower because the impact rise for the 2020 electricity mix associated with the increased share of wind power is offset by the lower contribution of lignite and hard coal and nuclear energy.This means that the TETP from electricity remains largely unchanged in 2020.Therefore, the improvements in this impact category are only affected by the increase in energy efficiency associated with the eco-design regulation.The following sections discuss in more detail the variation in the impacts associated with the eco-design regulation and the WEEE directive.As discussed in the introduction, a tendency to increase the power rating of vacuum cleaners has been observed in European countries in recent years.In the absence of the EU eco-design regulation, this trend would likely continue so that by 2020 the average power of vacuum cleaners would be 2337 W.Consequently, the environmental impacts of vacuum cleaners in the EU28 would be 82%–109% higher by 2020 as compared to the impacts with the implementation of this regulation.To put this in context, for the example of the GWP, the implementation of this regulation would lead to a reduction in GHG emissions of 6100 kt CO2 eq./year which is equivalent to the emissions of the whole country of Guyana in 2012.As mentioned in Section 3.2, the implementation of the WEEE directive would also reduce the environmental impacts of vacuum cleaners by 2020; however, the reductions are small in comparison to the reductions related to the eco-design regulation.Nevertheless, similar to the discussion on the absence of the eco-design regulation in the previous section, it is important to find out how the impacts would change if there was no WEEE directive.For these purposes, it is assumed that there is no recycling of vacuum cleaners or their incineration to recover energy, with the main body and the accessories being landfilled.These impacts are compared in Fig. 
6 to those with the WEEE directive being implemented for the year 2020. Although this is an extreme scenario, only four of the 12 impacts would increase by > 10% as compared to the expected situation in 2020 with the implementation of the directive. All other impacts would increase by 2%–9%. Although the corresponding savings appear minor, considering the large number of vacuum cleaners expected to be in operation in the EU in 2020, the overall environmental benefits would still be considerable. For example, the implementation of the WEEE directive across the EU28 countries would save 76 kt CO2 eq. and 220 kt CO2 eq. This is equivalent to the GHG emissions generated annually by around 67,500 and 195,500 light duty vehicles, respectively, assuming an average CO2 emission of 90 g/km and a distance of 12,500 km/year. Furthermore, possible future trends, such as lower life expectancy, reduced availability of some raw materials or even greater improvements in energy efficiency or the electricity mix, can increase the relative environmental importance of recycling the vacuum cleaners. The analysis so far has been based on the average EU28 electricity mix. To explore the effect this assumption may have on the impacts, four EU countries with very different electricity mixes are considered as part of the sensitivity analysis: i) Poland, ii) France, iii) Denmark and iv) Ireland; for details, see Table S1 in Supporting information. As previously, 2020 is taken as the reference year and the annual impacts are estimated considering the use of a vacuum cleaner for 50 h/year. Owing to a lack of data on the average power of vacuum cleaners at national levels, the EU28 average power rating has been assumed for all countries. All other assumptions are the same as for the reference vacuum cleaner. The results in Fig. 7 suggest that in countries where electricity is dominated by coal, 10 out of 12 impacts of the vacuum cleaner are higher compared to the EU28 average. The greatest increase is found for the AP, EP, FAETP, GWP, MAETP and POCP. On the other hand, a higher share of natural gas in the electricity mix results in six impacts being lower relative to the EU28 average; however, the other impacts are higher. Finally, the use of renewable or nuclear-dominated electricity leads to significantly lower impacts for 10 out of the 12 analysed categories. This is due to the reduced use of fossil fuels, with the largest reductions in the ADPfossil, EP, FAETP, GWP and MAETP. This demonstrates that the effect of energy efficiency measures, such as those in the eco-design regulation, can be greater if accompanied by electricity decarbonisation. On the other hand, for countries which already have a low-carbon grid, the environmental benefits of energy efficiency are lower, but nevertheless still significant. In the above analyses, an estimated lifetime of eight years has been considered for the vacuum cleaners. However, as mentioned earlier, the life expectancy of an average vacuum cleaner could be reduced to five years by 2020. Therefore, the effect of this reduction on the environmental impacts is considered here. The results suggest that a decrease in the service life of vacuum cleaners would cause a modest increase in impacts, 10%–20% for seven categories and < 10% in the other five. As shown in Fig.
8, the highest increase would occur for the HTP, which would rise by 19% and the smallest for the PED.These findings are perhaps not surprising as the impacts are dominated by the use stage.However, as the energy efficiency measures take hold and future electricity mix becomes less carbon intensive, the importance of the lifetime of vacuum cleaners as well as the other life cycle stages may increase.To illustrate the point, we consider the contribution of different life cycle stages to the impacts of vacuum cleaners with the life expectancy of five years.As can be observed in Table 4, the raw materials and vacuum cleaner production are jointly responsible for over 19% of the impact for each category considered.For the ADPelements and HTP, the combined contribution of these two stages is twice as high.Similarly, for these two categories, the contribution of waste management related to the improved end-of-life waste management is important, reducing the total impacts by over 20%.This suggests that the possible reduction in the lifespan of vacuum cleaners, along with the increased energy efficiency and future electricity decarbonisation, could increase the relative importance of the raw materials, production and waste management stages.Therefore, even though further energy efficiency improvements will be necessary, optimising the use of raw materials, production processes and disposal of waste should also be considered to help achieve greater environmental improvements in the life cycle of vacuum cleaners.This paper has presented for the first time a comprehensive study of life cycle environmental impacts of vacuum cleaners and discussed the implications in the EU context.Electricity consumption during the use stage is currently the main contributor to all impact categories in the life cycle of a vacuum cleaner.However, the raw materials and end-of-life disposal can be considered hotspots for the depletion of scarce elements, human toxicity and terrestrial ecotoxicity, and the production stage for the depletion of the ozone layer.These results demonstrate that environmental impact studies of vacuum cleaners and other energy-using products must consider all stages in the life cycle and not just their use, to avoid solving one environmental problem at the expense of others.This study has also considered the impacts of vacuum cleaners at the European level taking into account the effects of two EU laws as well as the expected decarbonisation of electricity and the projected increase in the number of units in use.By 2020, the combined effect of these parameters could amount to a 20%–57% reduction in the environmental impacts as compared to the current situation, with the global warming potential being 44% lower than today.The exception to this is depletion of elements, which will increase by around 4% on the present value.The implementation of the eco-design regulation alone would improve significantly the environmental performance of vacuum cleaners by 2020 compared with current situation, with all the impacts reduced from 37% for the depletion of elements to 44% for eutrophication and primary energy demand.On the other hand, if the regulation was not implemented and business as usual continued, the impacts would be 82%–109% higher by 2020 compared to the impacts with the implementation of the regulation.This would, for example, mean the additional emissions of GHG equivalent to 6100 kt CO2 eq./year.Improvements associated with the implementation of the WEEE directive are much smaller compared 
to the current situation than the improvements achievable through the eco-design regulation. However, if the WEEE directive did not exist, then the impacts would be 2%–21% higher. Although this increase may sound small, it is still significant given the number of vacuum cleaners in use; for instance, the global warming potential would increase by 220 kt CO2 eq./year. Electricity decarbonisation would help to reduce the impacts of vacuum cleaners for all of the categories considered, except for the depletion of elements and the ozone layer. The former would increase because of the use of scarce metals for solar photovoltaics and the latter because of a higher share of natural gas in the electricity mix. Related to this, the environmental impacts of vacuum cleaners at a country level depend on the national electricity mix, with those based on renewable and nuclear power having lower impacts than those with fossil fuels. Therefore, policy measures addressing energy efficiency must be accompanied by appropriate actions to reduce the impacts of electricity generation, otherwise their benefits could be limited. Because of possible future trends, such as shorter lifetimes of vacuum cleaners, reduced availability of some raw materials and further improvements in energy efficiency or electricity generation, it will be important to focus future efforts on understanding the impacts from parts of the life cycle other than the use stage, including raw materials, manufacture and waste treatment. As the results of this study demonstrate, any improvements in these stages will result in significant environmental benefits, as the number of vacuum cleaners in use in Europe is expected to increase in the future. Further studies of possible improvements in these life cycle stages based on eco-design principles are recommended for future research. Furthermore, reuse and refurbishing of vacuum cleaners could be evaluated to identify opportunities for environmental improvements. Finally, the impacts of other types of vacuum cleaner currently not covered by any specific EU regulation should be considered in future studies to help inform policy development.
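To make the fleet-level arithmetic behind these conclusions easier to follow, a minimal sketch of the use-stage calculation described above (average power rating extrapolation, hours of use, number of units in use and an electricity emission factor) is given below. It is illustrative only: the grid emission factors and the function names are assumptions introduced here and are not taken from the study's inventory.

# Illustrative sketch (not the study's LCA model) of the fleet-level use-stage arithmetic.
# The grid emission factors below are assumed values for demonstration only.

def average_power_w(p_2010=1739.0, annual_growth=0.025, years_after_2010=3):
    # Extrapolate the EU average power rating from the 2010 value;
    # three years at 2.5%/year gives roughly 1873 W for 2013.
    return p_2010 * (1.0 + annual_growth) ** years_after_2010

def fleet_use_stage_gwp_kt(avg_power_w, hours_per_year, units_in_use, kg_co2e_per_kwh):
    # Annual use-stage global warming potential of all units in use, in kt CO2 eq./year.
    kwh_per_unit = avg_power_w / 1000.0 * hours_per_year
    return kwh_per_unit * units_in_use * kg_co2e_per_kwh / 1e6

# 2013: ~1873 W average, 213.84 million units, assumed grid factor of 0.45 kg CO2 eq./kWh
gwp_2013 = fleet_use_stage_gwp_kt(average_power_w(), 50, 213.84e6, 0.45)
# 2020: 1060 W fleet average, 229.26 million units, assumed lower-carbon factor of 0.35
gwp_2020 = fleet_use_stage_gwp_kt(1060.0, 50, 229.26e6, 0.35)
print(f"Use-stage GWP: 2013 ~ {gwp_2013:.0f} kt CO2 eq./yr, 2020 ~ {gwp_2020:.0f} kt CO2 eq./yr")

The same structure can be reused for any other impact category by swapping in the corresponding characterisation factor per kWh of electricity.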
Energy efficiency of vacuum cleaners has been declining over the past decades while at the same time their number in Europe has been increasing. The European Commission has recently adopted an eco-design regulation to improve the environmental performance of vacuum cleaners. In addition to the existing directive on waste electrical and electronic equipment (WEEE), the regulation could potentially have significant effects on the environmental performance of vacuum cleaners. However, the scale of the effects is currently unknown, beyond scant information on greenhouse gas emissions. Thus, this paper considers for the first time life cycle environmental impacts of vacuum cleaners and the effects of the implementation of these regulations at the European level. The effects of electricity decarbonisation, product lifetime and end-of-life disposal options are also considered. The results suggest that the implementation of the eco-design regulation alone will reduce significantly the impacts from vacuum cleaners (37%-44%) by 2020 compared with current situation. If business as usual continued and the regulation was not implemented, the impacts would be 82%-109% higher by 2020 compared to the impacts with the implementation of the regulation. Improvements associated with the implementation of the WEEE directive will be much smaller (<. 1% in 2020). However, if the WEEE directive did not exist, then the impacts would be 2%-21% higher by 2020 relative to the impacts with the implementation of the directive. Further improvements in most impacts (6%-20%) could be achieved by decarbonising the electricity mix. Therefore, energy efficiency measures must be accompanied by appropriate actions to reduce the environmental impacts of electricity generation; otherwise, the benefits of improved energy efficiency could be limited. Moreover, because of expected lower life expectancy of vacuum cleaners and limited availability of some raw materials, the eco-design regulation should be broadened to reduce the impacts from raw materials, production and end-of-life management.
120
Real-time 3D face tracking based on active appearance model constrained by depth data
Many computer vision applications require accurate and real-time tracking of human faces.This includes gaming, teleconferencing, surveillance, facial recognition, emotion analysis, etc.A good face tracking system should track most human faces in a variety of lighting conditions, head poses, environments, and occlusions.A typical algorithm takes RGB, infrared, or depth frames as input and computes facial landmarks, head pose, facial expression parameters in 2D or 3D.Some of these parameters are correlated with each other and therefore are difficult to compute reliably.For example, errors in mouth size estimation contribute to inaccurate lip alignment.Real-time face tracking is challenging due to the high dimensionality of the input data and non-linearity of facial deformations.Face tracking algorithms fall into two main classes.The first class consists of feature-based tracking algorithms, which track local interest points from frame to frame to compute head pose and facial expressions based on locations of these points .Global regularization constraints are used to guarantee valid facial alignments.Local feature matching makes these algorithms less prone to generalization, illumination, and occlusion problems.On the down side, errors in interest point tracking lead to jittery and inaccurate results.The second class consists of algorithms that use appearance-based generative face models, such as Active Appearance Models and 3D morphable models .These algorithms are more accurate and produce better global alignment since they compute over the input data space.However, they have difficulty generalizing to unseen faces and may suffer from illumination changes and occlusions.Both classes of face tracking algorithms use generative linear 3D models for 3D face alignment.They either fit a projected 3D model to video camera input or combine projected model fitting with 3D fitting to depth camera input .Xiao et al. introduced 2D + 3D AAM, which uses a projected linear 3D model as a regularization constraint in 2D AAM fitting.Constrained 2D + 3D AAM produces only valid facial alignments and estimates 3D tracking parameters.It still suffers from typical AAM problems with generalization, illumination, and occlusions.Zhou et al. reduced these problems by introducing a temporal matching constraint and a color-based face segmentation constraint in 2D + 3D AAM fitting.The temporal matching constraint improves generalization properties by enforcing inter-frame local appearance.The color-based face segmentation reduces AAM divergence over complex backgrounds.They also initialize AAM close to the final solution to improve its convergence.The resulting system is stable, tracks faces accurately in 2D, and has good generalization properties.The 3D alignment is still based on fitting a projected 3D model to input images.Model inaccuracies lead to significant errors in 3D tracking.In general, all monocular solutions suffer from the same problem.Several teams tried to overcome it by introducing depth-based tracking.Fua et al. 
used stereo data, silhouette edges, and 2D feature points combined with least-squares adjustment of a set of progressively finer triangulations to compute head shapes.This required many parameters and therefore is computationally expensive.Others used depth data from commodity RGBD cameras to enable 3D tracking.These researchers used visual feature-based trackers to compute 2D alignment and combined it with the Iterative Closest Point algorithm to fit 3D face models to input depth data.The resulting systems track faces in 3D with high precision, but still may suffer from the shortcomings of feature-based trackers.Errors in interest point tracking may lead to jittery or unstable alignment.These systems may also require complex initialization of their 3D models, including manual steps to collect various face shapes .In this work, we introduce a new depth-based constraint into 2D + 3D AAM energy function to increase 3D tracking accuracy.Our work extends the energy function described in the work of Zhou et al. .We use the same set of terms, including: 2D AAM, 2D + 3D constraint, temporal matching constraint, face segmentation constraint.We also use optical feature tracking to initialize AAM fitting each frame close to the target, improving its convergence.These terms, combined with good initialization, improve AAM generalization properties, its robustness, and convergence speed.We improve 3D accuracy by introducing depth fitting to the energy function.We add a new constraint based on depth data from commodity RGBD camera.This constraint is formulated similar to the energy function used in the Iterative Closest Point algorithm .In addition, we replace the color-based face segmentation with the depth-based face segmentation and add an L2-regularization term.The resulting system is more accurate in 3D facial feature tracking and in head pose estimation than 2D + 3D AAM with additional constraints described in Zhou et al. .We initialize our 3D face model by computing realistic face shapes from a set of input RGBD frames.This further improves tracking accuracy since the 3D model and its projection are closer to the input RGBD data.We use a linear 3D morphable model as a face shape model.It is computed by applying Principal Component Analysis to a set of 3D human faces collected with help of a high-resolution stereo color rig.Initially, the face tracker uses an average face as the 3D face model.Our system computes a personalized face model once it collects enough tracking data.We use 2D facial landmarks along with corresponding depth frames for face shape computation.The shape modelling algorithm deforms the 3D model so that it fits to all the collected data.After computing a personalized 3D face shape model, we subsequently improve the face tracker accuracy by using that 3D face model in the tracking runtime.We observed that the face tracking algorithms based on 2D + 3D AAM described in Baker et al. and Zhou et al. are not accurate in 3D."The latter's system has good generalization properties and is robust in 2D video tracking, but is not accurate enough for real life applications in 3D tracking.Solutions are aligned with video data in 2D, but misaligned with range data in 3D.Fig. 
1 shows a typical example of misalignment. The 2D + 3D AAM energy minimization has many solutions when fitting the 3D mask to a face. When a face moves from left to right, the 3D mask may move closer to or further from the camera. These movements can be as large as the head size in 3D space. At the same time, the projection-based AAM energy terms have small residuals and the tracked face stays aligned in 2D. One can easily see that tracking an object in 3D with a monocular camera is an ill-defined optimization problem if the precise 3D shape of the object is not known. To resolve this, we introduced a new constraint into 2D + 3D AAM fitting that minimizes the distance between 3D face model vertices and depth data coming from an RGBD camera. The extended energy function provides more accurate solutions in 3D space. This addition also allows easier tracking of volumetric facial features such as cheek puffs, lip puckers, etc., which are not always visible on 2D video frames that lack depth. We also considered building a tracker based on 3D AAM instead of 2D + 3D AAM. We abandoned this option due to the expensive training process. The 3D AAM training requires a large set of 3D facial expressions annotated with good precision. To cover all expressions, this task would involve collecting nearly ten times more data with our stereo capture rig. We would then need to compute all 3D head models with a time-consuming stereo-processing system and manually annotate these models in 3D with complex tools. Instead, we decided to capture only 500 neutral faces in 3D with our stereo capture rig and annotate them. We used Principal Component Analysis to build a statistical linear 3D face shape model. Our 3D artist created a realistic set of facial expression deformations for this model. The 2D AAM was trained from 500 face images with various expressions, by applying PCA to a set of annotated 2D images. This training process was less expensive than full 3D AAM training for real-world applications that require tracking many facial types. Our goal is to improve the 3D fitting accuracy of 2D + 3D AAM by using depth data from an RGBD camera like Kinect. To accomplish this, we introduce a new depth-based term to the 2D + 3D AAM energy function. The new term minimizes 3D Euclidean distances between face model vertices and the closest depth points. We formulate this term similarly to the energy function used in the Iterative Closest Point algorithm. This addition reduces the solution space and finds better 3D alignments. In general, we may follow a Bayesian framework to derive formula and set weights to be the inverse covariance of the corresponding term's residuals. This approach is not always practical due to complexities in estimating residuals and the size of the residual vectors. So to simplify, we assume that all measurement channels are independent and that the noise is isotropic. We also set the weight of the 2D AAM term to 1.0 for computational convenience. Under these assumptions, the other weights are set to be the variances of their term residuals divided by the 2D AAM term's variance.
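The individual energy terms appear above only as 'formula' placeholders, so their exact expressions are not reproduced here. A schematic form of the weighted objective, consistent with the terms and weighting scheme just described (using the term names E2D, E2D3D, Etemp, Efseg, Edepth and Ereg defined in the next subsection), is sketched below; the λ symbols and the set notation are introduced here for illustration and do not appear in the original equations.

E(\Theta) = E_{2D}(\Theta) + \lambda_{2D3D} E_{2D3D}(\Theta) + \lambda_{temp} E_{temp}(\Theta)
          + \lambda_{fseg} E_{fseg}(\Theta) + \lambda_{depth} E_{depth}(\Theta) + \lambda_{reg} E_{reg}(\Theta),

E_{depth}(\Theta) = \sum_{i \in \mathcal{V}} \lVert S_{3D,i}(\Theta) - d_i \rVert^{2},

where the weight of the 2D AAM term is fixed to 1, each \lambda is set from the residual variances as described above, \mathcal{V} is the set of non-occluded model vertices and d_i is the depth point closest to vertex S_{3D,i}.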
Fig. 2 shows the processing flow in our face tracking system. We initialize our face tracking system by finding a face rectangle in a video frame using a face detector. Then we use a neural network to find five points inside the face area—eye centers, mouth corners, and the tip of the nose. We pre-compute the scale of the tracked face from these five points un-projected to 3D camera space and scale our 3D face model appropriately. When depth data is unavailable, we assume the user's head scale to match the mean face size. We also align a 2D AAM shape to these five feature points. This improves initial convergence when we minimize formula. We also follow Zhou et al. in terms of tracking initialization—we initialize the next frame's 2D face shape based on the correspondences found by a robust local feature matching between that frame and the previous frame. This improves the stability in tracking fast face motions and reduces the number of Gauss–Newton iterations required to minimize formula. To handle large angles of rotation, we use a view-based approach and train three 2D AAMs for different view ranges—frontal, left and right. We switch to the left and right models from the frontal model when the angle between the line connecting the bridge of the user's nose and the camera reaches ± 75°. Next we briefly describe the formulation of the video data-based terms E2D, E2D3D, Etemp, Efseg. They are described elsewhere in full detail. We subsequently formulate and derive our new depth data-based term Edepth, and provide information about the regularization term Ereg. In a real production system, we use a camera model that includes radial distortions and other camera intrinsic parameters and scale in formula at tracking time, so we achieve more accurate animation results. Face shape and scale parameters are computed by the face shape modelling algorithm when it collects enough tracking data for its batched computation. Computing D in formula is a relatively expensive operation. First, we need to compute a 3D point cloud from the input depth frame and then find the nearest points for all non-occluded vertices in S3D. Nearest point search is an expensive operation for real-time systems even when done with the help of KD-trees. So to make this operation faster, we utilize a property of the depth frames—they are organized as a grid of pixels since depth values are computed for each pixel projected to the IR camera. Therefore, even when a pixel is un-projected to 3D camera space, we know its index in the depth frame grid. We use this property for a fast nearest-point lookup based on uniform sampling. The algorithm works as follows: (1) the depth frame is organized as a 2D array with rows and columns; (2) for each element in this array, un-project the corresponding depth pixel to 3D camera space and then convert the resulting 3D point to the world space where 3D tracking occurs, so that the resulting 3D point cloud is also organized as a 2D array; (3) split the array into several uniformly distributed square cells with several array elements in each; (4) compute an average 3D point for each cell and save these as "sampling points"; (5) to find the 3D point nearest to a 3D model vertex, first find the nearest "sampling point" to the given vertex, and then find the nearest 3D depth point by searching only in the cell that corresponds to that "sampling point". This algorithm can also utilize a pyramid search to further improve its speed.
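A minimal sketch of this grid-based lookup is given below, assuming the depth frame has already been un-projected into an organized H × W × 3 array of world-space points; the function names, the cell size and the brute-force search within a cell are illustrative choices, not the production implementation.

import numpy as np

def build_sampling_points(points, cell=16):
    # points: organized point cloud of shape (H, W, 3) in world space;
    # invalid depth pixels are expected to hold NaN or inf.
    H, W, _ = points.shape
    samples = {}
    for r in range(0, H, cell):
        for c in range(0, W, cell):
            block = points[r:r + cell, c:c + cell].reshape(-1, 3)
            block = block[np.isfinite(block).all(axis=1)]
            if len(block):
                samples[(r, c)] = block.mean(axis=0)  # one average "sampling point" per cell
    return samples

def nearest_depth_point(vertex, points, samples, cell=16):
    # Step 1: the nearest "sampling point" selects a single cell.
    keys = list(samples.keys())
    centres = np.array([samples[k] for k in keys])
    r, c = keys[int(np.argmin(np.linalg.norm(centres - vertex, axis=1)))]
    # Step 2: brute-force search only inside that cell.
    block = points[r:r + cell, c:c + cell].reshape(-1, 3)
    block = block[np.isfinite(block).all(axis=1)]
    return block[int(np.argmin(np.linalg.norm(block - vertex, axis=1)))]

Because each query inspects only one cell rather than the whole frame, the cost per model vertex stays roughly constant regardless of the depth resolution, which is what makes the lookup practical in a real-time loop.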
To find the best position of the 2D features and 3D mask that fits the input RGBD data, we need to find the value of Θ that minimizes the non-linear function. We employ a Gauss–Newton method to find the optimum parameters. We perform a first-order Taylor expansion of each term in formula and then minimize the expanded energy function to find the parameter updates ΔΘ. The parameter updates are found by computing Jacobians and residuals for each term, deriving a set of normal equations, and solving them to find the parameter update. The energy function is non-linear, so we iterate this process until convergence, updating Θ with each iteration. The 2D parameter updates Δp, Δq defined in the mean shape coordinate frame are used to update p, q defined in the image frame by the warp composition as described in Matthews and Baker. Our face tracking algorithm fits the 2D AAM and the linear 3D face model to compute 2D facial landmarks, 3D head pose, and 3D animation coefficients. We assume the head scale in formula and the face shape parameters in formula to be known and constant for a given user during tracking time. In this section, we describe how to compute these shape parameters and initialize our 3D model. We start tracking a new face with the 3D model initialized to the mean face. The system then collects tracking results and RGBD input data for successfully tracked frames. We ask the user to exhibit only a neutral, emotionless face during this time. We do this to reduce the influence of expressions on the face shape computation. For example, expressions like big smiles may influence the geometry of the mouth region. We also ask the user to look straight at the camera, to rotate their head left and right by 30° and to look up and down by 15°. This process is needed to gather more diverse data and eliminate occlusions. Once our system accumulates enough frames, we run a batch-based algorithm to compute the 3D face model shapes. We found that 16 frames are enough to provide good photo-realistic quality with an Xbox One Kinect camera. We only collect frames in which the fitting error is less than a predefined threshold. Our shape modelling algorithm is similar to the model initialization procedure described by Cai et al. The main difference is in the 2D feature points—we use 2D face shapes aligned by our 2D + 3D AAM and Cai et al.
uses points computed by a different feature detector.We also use depth frames in addition to 2D facial points.When the face shape modelling finishes, we update the 3D model used in formula with the newly computed parameters to further improve face tracking accuracy.We use the Levenberg–Marquardt algorithm in our production system.It is more robust than Gauss-Newton for the face modelling task.We also exclude depth points that may be located in the occluded face areas.Occluded regions may lead to wrongly computed shapes.We use a skin detector to find skin pixels in an input RGB frame.The face shape computation uses only depth points that correspond to skin pixels.We tested our face tracking algorithm by using annotated videos of people exhibiting facial movements in front of Kinect camera at a distance of 1 to 2.5 m.The Kinect system has a pair of cameras: a color video camera with 1920 × 1080 resolution and an infrared/depth camera with 512 × 424 resolution.These cameras are calibrated to provide registration between video and depth frames.We recorded people of different genders, ages, and ethnicities.Each video sequence starts with the user showing no emotions while they move their heads left/right and up/down, so our face shape fitting algorithm can compute their personalized 3D face model.Then subjects show a wide variety of facial expressions and movements like smiles, frowning, eye brow movements, jaw movements, pucks, and kisses.The videos were annotated by marking 2D facial landmarks on RGB frames by hand.In addition to Kinect recordings, we used 12 high-definition DSLR color cameras to capture 3D ground truth for the test subjects exhibiting neutral faces."The DSLR cameras were positioned in a hemisphere formation in front of the user's face.We used commercially available software to stereoscopically build highly detailed 3D ground truth models from the captured photos.The resulting models were manually annotated with facial landmarks in 3D.Some ground truth data was used to compute our 3D face model with the help of PCA.Other ground truth models were used for testing our face shape capture algorithm.We compute 2D face tracking errors as Euclidian distances between projections of key points on aligned 3D face models and annotated 2D landmarks.The 3D face tracking errors are computed as Euclidian distances between these 3D key points and nearest depth frame points.The face shape capture errors are computed as Euclidian distances between 3D landmarks on ground truth models and corresponding 3D vertices on computed face models.We compute root-mean-square errors for each video and for each face shape capture.We use 20–33 subjects to test our face tracking and 100 subjects to test our face shape capture.The two charts in Figs. 
3 and 4 compare 2D/3D accuracy of our face tracking system with the new depth constraint enabled and with this constraint disabled.All other constraints and conditions are the same in both tests and defined as in formula.When depth data is available, the face tracker pre-computes a face scale from five facial points found during initialization.We run tests against 33 videos.Both cases are similar in terms of 2D alignment errors, but the experiment with the enabled depth constraint shows significantly better 3D results.Big 3D errors in the disabled depth constraint case happen when tracking children—our mean face 3D model is closer to an adult head and so video-only projective-based fitting provides very poor results.Our experiments show that depth data corrects these errors well.The 3D accuracy improvements are in 7.02–686.46 mm range.If we exclude test cases where the 3D model shape is far from the test subject faces, then the average gain in 3D accuracy is 23.95 mm.We can conclude that using depth data in our face alignment algorithm leads to significantly improved 3D accuracy.The charts in Figs. 5 and 6 compare two cases where we use the mean face versus a fitted personalized model as a 3D model in our face tracking system.We first compute personalized face models for all 20 test subjects and then run tracking tests.2D performance is almost the same.3D accuracy is slightly better with the personalized 3D model.The 3D accuracy improvements are in 0.01–2.35 mm range.Faces that are further away from the mean face show the greatest improvement.Fig. 7 shows samples of some facial expressions 3D alignments produced by our face tracking system.The 3D model is rendered as mirrored.Table 1 shows face tracking performance results.It lists the processing times and number of iterations required to compute face alignment for one frame.In this experiment, we used: a 2D AAM with 100 2D shape points and a 50x50-pixel template; a 3D model with 1347 vertices, 17 animation deformation parameters, and 110 shape deformation parameters.Processing time is listed with the measured range.Actual times depend on various factors, such as camera distance, head rotation, and speed of head motion.Fig. 8 shows the maximum and RMS errors for face models computed by our face shape modelling algorithm for 100 people.The RMS errors are in 1.95–5.60 mm range and the max errors are in 4.98–15.87 mm range.Fig. 
9 shows examples of the computed face models compared to the ground truth.Each face is shown in frontal, ¾, and profile view.In this paper, we proposed how to extend a 2D + 3D AAM based face tracker to use depth data from a RGBD camera like Kinect.This extension significantly improves 3D accuracy of AAM based tracker.The resulting real-time system tracks faces with 3–6 mm accuracy.In Section 2.1 we showed why video data based 2D + 3D AAM is not accurate enough for 3D face tracking.It computes face alignments by minimizing distances between projected 3D model vertices and aligned 2D AAM points.This method is error prone, because tracking an object of unknown shape with a camera lacking depth data is an ill-posed problem."In Section 2.3 we introduced a new depth-based term into the face tracker's energy function.The term is formulated similarly to ICP energy function.It minimizes distances between 3D model vertices and nearest input depth points.Test results in Section 4 show significant improvements in 3D tracking accuracy when we use our new constraint in AAM fitting.The biggest improvements in 3D accuracy occur when we track children—our mean face model is closer to an adult head and therefore video-only 2D + 3D AAM produces significant 3D errors when tracking children.We also show how to initialize our 3D face model by computing its shape from a batch of tracking data and corresponding input depth frames.We incorporate the updated 3D model back into the face tracker to improve its accuracy further.The accuracy improvements are not as great as in the case where we introduce depth data into 2D + 3D AAM."Yet, in spite of these improvements, our tracker isn't perfect.Currently it can tolerate some occlusions, but fails when more than a quarter of a face is occluded.The 2D AAM fitting is not stable enough in the presence of larger occlusions.Consequently, thick glasses, beards or big mustaches can cause alignment errors.This problem can be reduced by proactively removing outliers in 2D AAM fitting.Our tracking runtime uses three different AAM models to cover more head rotations.Switching between these models may yield wrong face alignments for a few frames.This manifests itself as incorrectly computed facial animations even when a steady face rotates left to right.We believe that adding statistical regularization for our animation parameters based on possible facial expressions can reduce this problem.The tracking system should produce only the most probable facial expressions in this case.Our face shape computation algorithm relies on depth data and 2D face points.Therefore if some areas of a face are occluded by hair or something else and it is visible in depth, then our shape computation produces deformed shapes for those areas.We can mitigate this by using color-based skin segmentation and then removing areas from the depth image that are not skin from the shape computation.RGBD cameras have temporal and systematic noise.Higher levels of noise can contribute to inaccurate results."For example, Kinect's temporal depth noise increases quadratically with distance.We found that we need to lower the weight of our depth constraint beyond 2-meter distances to reduce noise influence, thus our tracker transitions into a mostly video-based tracker at larger distances.Time-of-flight depth cameras have their own systematic bias when computing distances to human faces.We plan to build error models for different depth cameras to estimate probabilities of measurement errors for input pixels.We can use 
such probabilities to reduce weights of noisy input data.
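As an illustration of this kind of distance-dependent down-weighting, the sketch below scales the depth-constraint weight with an inverse-variance rule under an assumed noise model in which the standard deviation grows quadratically with distance; the constants are placeholders, not calibrated camera parameters.

def depth_term_weight(z_m, a=0.0012, b=0.0019):
    # Assumed noise model: sigma(z) = a + b * z^2 (metres), i.e. depth noise
    # grows roughly quadratically with distance from the camera.
    sigma = a + b * z_m ** 2
    sigma_ref = a + b * 1.0 ** 2      # reference noise at 1 m
    return (sigma_ref / sigma) ** 2   # inverse-variance weight, equal to 1.0 at 1 m

# depth_term_weight(1.0) -> 1.0, depth_term_weight(2.5) -> ~0.06, so distant,
# noisier depth points contribute far less to the fit and the tracker degrades
# gracefully towards video-only behaviour at larger distances.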
Active Appearance Model (AAM) is an algorithm for fitting a generative model of object shape and appearance to an input image. AAM allows accurate, real-time tracking of human faces in 2D and can be extended to track faces in 3D by constraining its fitting with a linear 3D morphable model. Unfortunately, this AAM-based 3D tracking does not provide adequate accuracy and robustness, as we show in this paper. We introduce a new constraint into AAM fitting that uses depth data from a commodity RGBD camera (Kinect). This addition significantly reduces 3D tracking errors. We also describe how to initialize the 3D morphable face model used in our tracking algorithm by computing its face shape parameters of the user from a batch of tracked frames. The described face tracking algorithm is used in Microsoft's Kinect system. © 2014 The Authors. Published by Elsevier B.V.
121
The contribution of TRPC1, TRPC3, TRPC5 and TRPC6 to touch and hearing
The mechanisms underlying mechanotransduction in mammals are incompletely understood.Piezo2 has been shown to be essential for light touch sensitivity, in mechanical allodynia in neuropathic conditions and produces a mechanically activated, rapidly adapting current .Transient receptor potential channels are a superfamily of structurally homologous cation channels which have diverse roles in sensory functions.We have previously discussed the extensive evidence implicating TRP channels in mechanosensory roles in many different species, including TRPA1 which has an important role in cutaneous mammalian mechanosensation .We also reported previously, a combinatorial role for TRPC3 and TRPC6 in mediating normal touch and hearing .The canonical subfamily of TRP channels have known roles in mechanosensory function in mammalian systems including the cardiovascular system and the kidneys and there is an increasing pool of evidence implicating members of the TRPC subfamily in cutaneous mechanosensory functions.In the DRG, TRPC1, TRPC3 and TRPC6 are the most abundantly expressed TRPC subunits and their expression has been observed in most sensory neurons in adult mice .In addition, TRPC5 has been found to be localised to small and medium diameter sensory neurons .A single cell RNA sequencing study also determined a non-peptidergic subset of neurons which express all four TRPC subunits meaning there is substantial potential for interaction between different combinations of these TRPC subunits.TRPC1 and TRPC6 are coexpressed with TRPV4 in dorsal root ganglia and it has been proposed that they may act in concert to mediate mechanical hypersensitivity in neuropathic and inflammatory pain states .TRPC1 null animals show a decrease in sensitivity to innocuous mechanical stimuli and show a reduction in down hair Aδ and slowly adapting Aβ fibre firing in response to innocuous mechanical stimulation .TRPC1 and TRPC5 confer sensitivity to osmotically induced membrane stretch in cultured DRG neurons and HEK293 cells, respectively .TRPC6 is also activated by membrane stretch while both TRPC5 and TRPC6 activity is blocked by a tarantula toxin known to inhibit mechanosensitive channels .In addition, TRPC channels are ubiquitously expressed in the inner ear in structures including the organ of Corti and the spiral and vestibular ganglia suggesting that, in addition to TRPC3 and TRPC6, there is potential for other TRPC subunits to play a mechanosensory role in hearing.In the current study we extended our analysis of TRPC channels and their role in mechanosensation.TRP channels are known to function in heteromeric complexes and are believed to show functional redundancy.In order to minimise the effects of compensation mechanisms which these qualities confer, we progressed from investigating sensory function in TRPC3 and TRPC6 double knockout animals to looking at animals with global knockouts of TRPC1, TRPC3, TRPC5 and TRPC6 channels.We previously provided evidence that TRPC3 channels contribute to mechanotransduction in some cell lines, but not others, consistent with some role for TRPC channels in mechanotransduction .Here we provide further evidence of a combinatorial role for TRP channels in mechanosensation.We found that QuadKO animals showed deficits in light touch sensitivity compared to WT animals, shown by an increase from 0.39 g to 0.69 g in the 50% withdrawal threshold to von Frey hairs and a 41% decrease in the percentage response to a dynamic cotton swab application to the paw.Interestingly, QuadKO 
animals did not show any difference in 50% withdrawal threshold compared to DKO animals but showed a decrease in the response to cotton swab stimulation compared to DKO, though this was not significant.Responses to high force mechanical stimuli, on the tail, were unimpaired in all groups.Unimpaired responses to noxious heat stimuli in knockout animals suggest that these TRPC channels are unlikely to be involved in transduction of noxious heat.We used a place preference paradigm to study QuadKO sensitivity to noxious cold temperatures.A baseline recording showed all groups spent ∼50% of the test session on the test plate; when the temperature was lowered to 4 °C, this dropped to ∼5% of the test session indicating all groups were aversive to the noxious cold temperature.Using the trunk curl test , we found that QuadKO animals show some vestibular deficits which are comparable to deficits in DKO animals but that TRPC multiple KO animals show latencies to fall from an accelerating rotarod that are comparable to those observed for WT and DKO mice suggesting unimpaired motor coordination.As we reported previously, the role for the rotarod test in assessing vestibular function has been disputed as other studies have found that it does not always correlate with vestibular deficits presented by other relevant tests .Also, the trunk curl test is a rudimentary measure of vestibular function therefore more in depth tests would likely provide more information about the nature of these deficits .Auditory brainstem response recordings were used to assess the auditory function of these animals where auditory pip tone stimuli are used to determine the threshold in decibels which is required to elicit a response at different frequencies.We found at frequencies of 8, 24, 32 and 40 kHz that QuadKO animals had a significantly higher response threshold than both WT and DKO animals.Mechano-electrical transducer currents evoked by sinusoidal force stimuli in both the basal and apical coil of the cochlea, show normal amplitudes in outer hair cells of QuadKO animals, of comparable size to currents recorded from OHCs of matching WT control mice .MET currents of the QuadKO OHCs were similar in all respects to those of the WT control OHCs: currents reversed near zero mV, a fraction of the MET channels were open at rest and this fraction increased for depolarized membrane potentials due to a reduction in Ca2+-dependent adaptation .These observations suggest that the process of mechanotransduction in the cochlea is unaltered in knockout animals.Earlier data , suggested that MET currents in basal-coil OHCs of TRPC3/TRPC6 DKO OHCs were on average substantially smaller than those of WT controls.Further experiments using the same methods for MET current recording showed that it is possible to record large MET currents from TRPC3/TRPC6 DKO OHCs in the basal coil.The current–voltage curves were similar between the five groups of OHCs being compared.For example, MET current size of OHCs at −104 mV was: WT control apical coil: −983 ± 47 pA, n = 5; WT control basal coil: −1185 ± 121 pA, n = 4; QuadKO apical coil: −993 ± 120 pA, n = 3; QuadKO basal coil: −997 ± 37 pA, n = 2; DKO basal coil: −1091 ± 187 pA, n = 3.There were no significant differences between any of these groups.This negates our earlier finding of on-average smaller currents in basal-coil DKO OHCs.The previously observed diminished inward currents in basal-coil DKO OHCs may be explained by sub-optimal organotypic cultures.The present results do not support a role 
for TRPC channels in primary mechanotransduction in the inner ear. TRPC1, TRPC3, TRPC5 and TRPC6 are all expressed in sensory ganglia and TRPC3 and TRPC6 have been shown to be expressed in cochlear hair cells. We previously reported that TRPC3 and TRPC6 DKO mice show selective deficits in sensitivity to innocuous mechanical stimuli and in hearing and vestibular function. TRPC3 and TRPC6 single KO animals, on the other hand, showed unimpaired responses to all sensory stimuli. It therefore seems that TRPC channels may have a combinatorial role in mediating specific sensory functions. As TRP channels are known to heteromultimerise and are believed to show functional redundancy, the development of TRPC1, TRPC3, TRPC5 and TRPC6 QuadKO animals, generated on a mixed C57BL/6J:129SvEv background at the Comparative Medicine Branch of the NIEHS in North Carolina, by combinatorial breeding of single KO alleles, has provided us with a novel way of investigating the combined roles of the TRPC channels where monogenic studies may have been unsatisfactory. Using this approach, we have been able to show that knocking out TRPC1 and TRPC5 in addition to TRPC3 and TRPC6 augments specific sensory deficits. Sensitivity to light touch is impaired in QuadKO mice. We found, however, that the impairment was only augmented compared to DKO animals in the cotton swab test, while the von Frey withdrawal threshold remained comparable. The cotton swab stimulus is an unequivocally light touch stimulus which is dynamic and thus has different qualities to stimulation with punctate von Frey fibres. Garrison et al. have previously found that in TRPC1 knockout animals the withdrawal threshold was unaltered but that the responses to subthreshold cotton swab stimuli were impaired. They suggest that this is indicative of TRPC1 involvement in subthreshold mechanical responses, which may also be reflected in our multiple KO animals. Responses to noxious mechanical stimuli were normal in these animals; this is consistent with other data showing TRPC channels do not appear to play a role in mediating noxious mechanosensation. This also highlights a modality-specific role for TRPC channels in mediating sensitivity to innocuous, and not noxious, mechanical stimuli. Similarly, responses to noxious heat and noxious cold stimuli were unimpaired in QuadKO animals. Although it has been suggested that cold-evoked currents can be produced following heterologous expression of TRPC5, Zimmermann et al. 
found behavioural responses in TRPC5 null mice were unaltered. This may be indicative of TRPC5 functioning cooperatively with other TRP channels which are linked to a role in cold sensitivity. Cochlear hair cells are arranged in a frequency gradient along the basilar membrane in the organ of Corti. They project stereocilia which are deflected by shearing movements between the tectorial and basilar membranes in the organ of Corti in the inner ear, leading to opening of mechanosensitive channels. A similar mechanism of mechanotransduction is found in the vestibular system. Previously, we reported that TRPC3 and TRPC6 were, together, important for normal hearing and vestibular function. These new data support this suggestion and also implicate TRPC1 and TRPC5 in normal hearing function, as ABR thresholds were higher in QuadKO animals than DKO animals. In order to determine whether the observed hearing deficits are the result of altered mechanotransduction in the cochlea, mechano-electrical transduction currents were recorded from cultured OHCs (a brief numerical sketch of this group comparison, based on the reported summary values, follows the main text). Since the recordings taken from QuadKO animals were normal, similar both to matching WT controls and previous recordings from OHCs of CD-1 mice, we are led to conclude that the loss of TRPC channel function affects the auditory process downstream of the MET channel, though it is possible that function is impaired elsewhere in the cochlea, and that TRPC channels therefore do not form part of a mechanotransduction complex in the inner ear. Our earlier electrophysiological work suggests that the role of TRPC channels in mechanosensation is context dependent. TRP channels are notoriously difficult to study in exogenous expression systems because of their function as heteromeric complexes and their interaction with other TRP proteins. Altogether, our data lead us to conclude that the function of TRPC channels involves combined activity of multiple TRPC proteins, something which has been elucidated as a result of the multiple knockout approach. The current work shows that by impairing the function of a further two members of the TRPC subfamily we can augment some of the sensory deficits we reported in DKO animals, reinforcing the concept that TRPC channels play a supporting role in mediating or coordinating mechanosensation. This supports the view that this interaction within the TRPC subfamily is functionally relevant in mechanosensation, as interfering with a single TRPC channel leaves behavioural responses unaltered while QuadKO animals show augmented deficits compared to DKO in specific sensory modalities. The current study substantiates our earlier conclusions that TRPC channels are critical for cutaneous touch sensation. We can now be confident that their role in the auditory system is likely to be indirect, as TRPC channels are clearly not primary mechanotransducers. The expression of mechanosensitive currents in neuronal but not non-neuronal cell lines transfected with TRPC3 is intriguing, and suggests that TRPCs may interact with other proteins to form a mechanotransduction complex. TRPC channels are known to interact with a large number of other proteins and signalling molecules, many of which have already been implicated in mechanosensory roles, including Orai1, which mediates stretch sensitivity in cardiomyocytes, and phospholipases, which are activated by stretch in a number of sensory systems. This serves to highlight the potentially complex roles these channels may be playing in mechanosensation but also provides an interesting route to identifying other constituents 
of mechanotransduction complexes. Mice were obtained from the Comparative Medicine Branch at the NIEHS, Research Triangle Park, North Carolina, USA. TRPC1, TRPC3, TRPC5 and TRPC6 QuadKO animals were generated on a mixed C57BL/6J:129SvEv background by combinatorial breeding of single KO alleles, TRPC1, TRPC3, TRPC5, TRPC6. QuadKO mice exhibited generally good health, and TRPC3/6 DKO mice were crossed with C57BL/6 mice to generate WT control animals. Both were used for comparison to a TRPC1/3/5/6 QuadKO test group, unless otherwise stated, and mice were age and sex matched. Behavioural tests, ABRs, MET current recordings from OHCs in organotypic cultures made at postnatal day 2 and maintained in vitro for 1–2 days, and statistical analyses were performed as previously reported. The authors declare no conflict of interest. All behavioural tests were approved under the United Kingdom Home Office Animals (Scientific Procedures) Act 1986. JNW designed experiments. JES and KQ performed animal behaviour and analysis. RT and AF performed ABRs and analysis. TD performed MET recordings and TD and CJK performed analysis. JA and LB generated KO mice. All authors contributed to manuscript preparation.
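As referenced above, the comparison of MET current amplitudes across genotype groups can be illustrated computationally. The text reports only summary values (mean ± SEM and n per group) and does not specify which statistical test was applied, so the following Python snippet is a minimal sketch under stated assumptions: it treats the reported summaries as inputs to Welch's t-tests computed from summary statistics, purely to show how such group data can be compared.

```python
# Illustrative only: compares the MET current amplitudes at -104 mV reported in
# the text (mean +/- SEM, n per group) using Welch's t-tests computed from
# summary statistics. The authors' actual statistical procedure is not stated.
from itertools import combinations
from math import sqrt

from scipy.stats import ttest_ind_from_stats

# (label, mean current in pA, SEM in pA, n cells) as reported in the text
groups = [
    ("WT apical",      -983,  47, 5),
    ("WT basal",      -1185, 121, 4),
    ("QuadKO apical",  -993, 120, 3),
    ("QuadKO basal",   -997,  37, 2),
    ("DKO basal",     -1091, 187, 3),
]

for (name1, m1, sem1, n1), (name2, m2, sem2, n2) in combinations(groups, 2):
    # convert each SEM back to an estimated sample standard deviation
    sd1, sd2 = sem1 * sqrt(n1), sem2 * sqrt(n2)
    t, p = ttest_ind_from_stats(m1, sd1, n1, m2, sd2, n2, equal_var=False)
    print(f"{name1:>14} vs {name2:<14} t = {t:6.2f}, p = {p:.2f}")
```

With group sizes this small (n = 2–5 cells), pairwise tests of this kind have very little power; the sketch is only meant to show that the reported values are consistent with the statement that no significant differences were found, not to reproduce the original analysis.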
Transient receptor potential channels have diverse roles in mechanosensation. Evidence is accumulating that members of the canonical subfamily of TRP channels (TRPC) are involved in touch and hearing. Characteristic features of TRP channels include their high structural homology and their propensity to form heteromeric complexes which suggests potential functional redundancy. We previously showed that TRPC3 and TRPC6 double knockout animals have deficits in light touch and hearing whilst single knockouts were apparently normal. We have extended these studies to analyse deficits in global quadruple TRPC1, 3, 5 and 6 null mutant mice. We examined both touch and hearing in behavioural and electrophysiological assays, and provide evidence that the quadruple knockout mice have larger deficits than the TRPC3 TRPC6 double knockouts. Mechano-electrical transducer currents of cochlear outer hair cells were however normal. This suggests that TRPC1, TRPC3, TRPC5 and TRPC6 channels contribute to cutaneous and auditory mechanosensation in a combinatorial manner, but have no direct role in cochlear mechanotransduction.
122
Expanding the Circuitry of Pluripotency by Selective Isolation of Chromatin-Associated Proteins
In ESCs, the three master transcription factors Oct4, Sox2, and Nanog (OSN) constitute the core transcriptional circuitry, which on the one hand promotes the expression of pluripotency genes and on the other hand suppresses lineage commitment and differentiation. In mouse ESCs, pluripotency can be further reinforced by replacing serum in conventional culture medium with two kinase inhibitors, PD0325901 and CHIR99021, driving the ESCs into a condition resembling the preimplantation epiblast. Hence, cells grown in 2i medium are considered an in vitro representation of the ground state of pluripotency. Transcriptome analysis indicated that most of the pluripotency-associated transcription factors did not change significantly in expression level between serum and 2i conditions, suggesting that additional proteins may sustain the functionality of core pluripotency factors in 2i. Since transcription factors, including pluripotency TFs, execute their function in chromatin, we aimed to identify proteins that associate with OSN in their DNA-bound state, as opposed to interactions that may occur in soluble form. Despite the large diversity of available methods to identify protein interactions, very few of them differentiate between interactions that depend on the subcellular location. This is a critical shortcoming, especially for proteins that dynamically change location, either between or within organelles. Indeed, transcription factors have been shown to form different complexes on and off chromatin, as demonstrated for several FOX proteins. To specifically identify proteins in their DNA-bound state, we therefore developed a method for the selective isolation of chromatin-associated proteins. SICAP captures an endogenous protein under ChIP conditions and then biotinylates DNA, allowing the specific isolation of DNA-bound proteins on streptavidin beads, followed by mass spectrometric protein identification. Thus, by design, ChIP-SICAP identifies chromatin-bound proteins in the direct vicinity of the bait protein on a short stretch of DNA. Here we introduce and evaluate ChIP-SICAP and apply it to characterize the chromatin-bound network around Oct4, Sox2, and Nanog in mouse ESCs. We demonstrate the power of ChIP-SICAP by the discovery of Trim24 as a component of the pluripotency network. Many studies have been devoted to defining interactomes of pluripotency factors, most of which are based on coimmunoprecipitation of Flag- or HA-tagged TFs, such as for Oct4, Sox2, and Nanog. The general limitation of these approaches is their need to introduce an affinity tag, often using an exogenous expression system. Studying protein interactions in the context of chromatin adds a number of other challenges, especially since chromatin is highly insoluble. To promote solubilization of chromatin, DNA can be fragmented, e.g., as carried out by sonication in ChIP protocols, combined with crosslinking to maintain protein-DNA interactions. Hence, different variations of ChIP protocols have been developed to study protein interactions on chromatin, including modified ChIP, ChIP-MS, and rapid immunoprecipitation mass spectrometry of endogenous proteins. ChIP-MS and RIME both apply mass spectrometric analysis to proteins immunoprecipitated from formaldehyde-crosslinked cells, but they differ in whether they digest proteins directly on the protein A beads or after elution. Yet, a number of issues limit the practical utility of these methods to specifically enrich for chromatin-bound proteins. First, they often suffer from the 
copurification of contaminating proteins that have been referred to as “hitchhikers” to indicate their avid binding to the highly charged backbone of DNA, and other contaminants that are commonly observed in affinity-purification experiments, as documented in the CRAPome.Another often-marginalized problem is that the antibody used for affinity purification represents a huge contamination in subsequent mass spectrometry, thereby masking lower abundance proteins.Finally, and maybe most importantly, none of the presented methods discriminate protein interactions occurring on and off chromatin.BioTAP-XL and a method coined as “chromatin-interaction protein MS,” confusingly also abbreviated as ChIP-MS, tag a given protein with protein A or a His tag along with a biotin-acceptor sequence.Although this allows for stringent washing after capture on streptavidin beads, introduction of the tag may alter the functionality or expression level of the protein, while requiring a cloning step that may not be suitable or desirable for all cell types.Because of these limitations in available approaches, we here introduce a method termed “selective isolation of chromatin associated proteins”, which we combine with ChIP to specifically purify, identify, and quantify the protein network around a chromatin-bound protein of interest.ChIP-SICAP combines the advantages of the aforementioned methods while bypassing their limitations, in that it targets endogenous proteins, does not require protein tagging or overexpression, uses formaldehyde for chromatin crosslinking, and allows very stringent washing, including removal of the antibody.Furthermore, ChIP-SICAP uniquely benefits from the double purification of protein-DNA complexes, accomplished by subsequent ChIP of the protein of interest, and an innovative step to biotinylate DNA allowing capture and stringent washing of the protein-DNA complex.ChIP-SICAP starts from crosslinked and sheared chromatin using established ChIP procedures, followed by addition of a suitable antibody and capture of the protein-DNA complex on protein A beads.The key step of ChIP-SICAP is then the end labeling of DNA fragments with biotin by terminal deoxynucleotidyl transferase in the presence of biotinylated nucleotides.TdT is a template-independent DNA-polymerase-extending DNA 3′ end regardless of the complementary strand, which is also used in the so-called TUNEL assay to detect double-stranded DNA breaks in apoptotic cells.Next, addition of ionic detergents and a reducing agent disassembles all protein interactions, denatures the antibody, and releases chromatin fragments.Biotinylated DNA-protein complexes are then captured on streptavidin beads, followed by a number of stringent washes to effectively remove contaminating proteins and the IP antibody.Finally, protein-DNA crosslinks are reversed by heating, and proteins are proteolytically digested for MS-based identification.As a result, ChIP-SICAP identifies the proteins that colocalize with the bait on a short fragment of chromatin.To evaluate the performance of ChIP-SICAP, we targeted Nanog as the bait protein in mouse ESCs and performed a comparative analysis with a no-antibody control using differential SILAC labeling.In two independent ChIP-SICAP assays, we reproducibly identified 634 proteins, of which 567 were enriched in comparison to the negative control.Reassuringly, ranking the enriched proteins by their estimated abundance revealed histones and Nanog itself as the most abundant proteins.This indicates the clear 
enrichment of chromatin and confirms the specificity of the used antibody.In addition, Oct4 and Sox2, two well-known Nanog interactants, were also among the top-enriched proteins.Proteins of lower intensity include many other known interaction partners of Nanog, as well as potential novel candidates.We then evaluated the benefit of DNA-biotinylation by repeating the same experiment, but omitting the TdT-mediated end labeling of DNA, in two slightly different procedures using protocols as described for RIME and ChIP-MS.Under ChIP-MS conditions, we identified 981 enriched proteins, i.e., twice the number obtained from ChIP-SICAP.Using RIME, i.e., digesting proteins on-bead rather than after reversal of crosslinking, we identified 1,232 enriched proteins.Apart from this even further increased number of proteins, ribonucleoproteins now outcompeted histones as the most abundant proteins.In ChIP-MS, Nanog was identified only in one replicate, while both in ChIP-MS and RIME Oct4 and Sox2 ranked much lower compared to ChIP-SICAP, possibly as the result of copurification of contaminant proteins.We next performed a rigorous analysis on these datasets to assess the performance and specificity of the three methods to enrich for chromatin-bound proteins.First, a Gene Ontology analysis revealed RNA processing and translation as the top-enriched biological processes in the ChIP-MS and RIME, reflecting the presence of many ribosomal proteins, hnRNPs, and splicing factors.These proteins are often observed to copurify nonspecifically in affinity-purification procedures, and indeed they feature prominently in the CRAPome database.This suggests that these are contaminant proteins not likely to be related to the Nanog network, although we cannot exclude that some individual RBPs can associate with chromatin.In contrast, processes related to chromatin and transcription are enriched in the ChIP-SICAP dataset, while RNA processing ranked only 17th.This indicates that ChIP-SICAP more specifically enriches for proteins that reflect the known function of Nanog in transcriptional regulation.We next evaluated the presence of 278 proteins previously reported to interact with Nanog in multiple coIP studies, collected in BioGrid.Of these, 109 were identified by ChIP-SICAP, compared to 132 proteins and 156 proteins in ChIP-MS and RIME, respectively.Although ChIP-SICAP recovers fewer known Nanog interactants, their proportion among the detected proteins is much higher, suggesting a higher precision of ChIP-SICAP over ChIP-MS and RIME.Although ideally both the absolute and relative number of returned true positives should be maximized in interactome analyses, specificity seems of greater practical utility.An extreme example is our total ESC proteome dataset containing, among 6,500 proteins, 232 Nanog interactants, i.e., with a specificity of 3.5%.To further compare the performance of each method, we included protein abundance as an additional parameter, allowing us to weigh proteins by relative enrichment within each dataset rather than treating all of them equally.Specifically, we summed the MS intensities of Nanog interactome and other chromatin/DNA-binding proteins as potential true positives.This was normalized for the total protein intensity of the same sample to estimate the relative abundance of PTPs for each method.Similarly, we calculated the ratios for ribosomal proteins and other components of RNA processing as well as cytoplasmic proteins as representatives of potential false positives.In doing so, 27% of 
protein intensity in ChIP-SICAP is represented by Nanog-interacting proteins, more than in any of the other datasets.In addition, other chromatin-binding proteins add another 57% of intensity, collectively accounting for 85% of the total amount of protein recovered by ChIP-SICAP, compared to 47% and 55% in ChIP-MS and RIME, respectively.Conversely, ChIP-SICAP better removes common contaminants and other cytoplasmic proteins, accounting for 7% of the protein intensity, compared to 29% and 33% in RIME and ChIP-MS, respectively.Taking the intensity ratio of PTPs and PFPs as a proxy for the specificity of each method, ChIP-SICAP scored significantly better than RIME and ChIP-MS or the total proteome as an example of a nonselective method.Furthermore, stringent washing procedures in ChIP-SICAP resulted in the detection of far fewer peptides originating from IgG and protein A, resulting in an overall reduction of these contaminating proteins between 10- and 10,000-fold in ChIP-SICAP compared to RIME and ChIP-MS.We next tested to what extent the various protein classes were enriched or depleted not only as a group but also as individual proteins.We therefore ranked all proteins in each of the four datasets by abundance, showing that, in ChIP-SICAP, known Nanog interactors, histones, and other chromatin binding proteins accumulate faster among the top-ranked proteins compared to all three other datasets.Conversely, common contaminants are largely depleted from the top 100 proteins and only appear among the less abundant proteins.This is in contrast to ChIP-MS, where copurifying ribosomal proteins rank as high as in a total proteome analysis, and to RIME, which seems particularly sensitive to contamination by ribonucleoproteins.Collectively, our data show that ChIP-SICAP surpasses ChIP-MS and RIME to more specifically enrich for chromatin-bound partners of a bait protein while more effectively removing common contaminants.To more systematically study the composition and dynamics of proteins associated with OSN, we separately carried out ChIP-SICAP for Oct4, Sox2, and Nanog in ESCs grown in serum and 2i plus LIF medium.In ChIP-SICAP against Nanog, we detected 666 proteins, of which 296 were significantly different between the 2iL and serum conditions.β-catenin was detected among the most enriched proteins in 2iL condition, which is expected because of the inhibition of Gsk3β by CHIR99021 resulting in activation of Wnt signaling and translocation of β-catenin to the nucleus.Other stem cell maintenance factors that preferentially associate with Nanog in 2iL-medium included Esrrb, Klf4, Prdm14, Rex1, Sall4, Tcf3, Tbx3, Stat3, Smarca4, Tfap2c, and Tfcp2l.Interestingly, all core-nucleosomal histones interacted less with Nanog in 2iL condition, suggesting that DNA is more accessible for Nanog in the ground state and suggesting that ChIP-SICAP may also inform on global chromatin structure.This is in line with a recent study showing that Nanog can remodel heterochromatin to an open architecture in a manner that is decoupled from its role in regulating the pluripotent state.Finally, Nanog-bound loci are co-occupied with proteins maintaining DNA methylation preferentially under serum conditions, fitting with the model of higher CpG methylation rate in this cellular state.Performing ChIP-SICAP for Oct4 and Sox2 produced results similar to that of Nanog, but with subtle yet important differences.In each experiment, all three master TFs—Oct4, Sox2, and Nanog—were identified, thus confirming their tight 
interconnection. Additionally, many stem cell maintenance factors, such as β-catenin, Esrrb, Klf5, Mybl2, Prdm14, Rex1, Sall4, Tcf3, Tbx3, Stat3, and Smarca4, were similarly enriched in 2iL conditions in all three ChIP-SICAP assays, whereas others, such as Uhrf1 and Dnmt3a, were enriched in the serum condition. In contrast to Nanog ChIP-SICAP, most of the nucleosome components did not show significant changes in Oct4 and Sox2 ChIP-SICAP, with the exception of macroH2A1 and macroH2A2, which preferentially associate with Oct4. The different pattern for these transcriptionally suppressive H2A variants suggests that in the 2iL condition some of the Oct4 targets may be transcriptionally repressed by recruiting macroH2A. We identified 407 proteins in the overlap among the three OSN ChIP-SICAP experiments, 365 of which are known to have a chromatin-related function, indicating that indeed we retrieved the desired class of proteins. To assess the specificity of ChIP-SICAP, and to rule out that the observed proteins were enriched irrespective of the used antibody, we used E-cadherin as an unrelated bait protein to perform ChIP-SICAP. Although Cdh1 is classically known as a plasma membrane protein, its cleavage by α-secretase, γ-secretase, or caspase-3 releases specific C-terminal fragments that translocate to the nucleus and bind to chromatin. In line with expectations, histones and Cdh1 were the most prominent proteins identified in Cdh1 ChIP-SICAP. In addition, and according to expectation, Cdh1 was identified exclusively by peptides originating from the most C-terminal CTF, along with known nuclear Cdh1 interaction partners β-catenin and δ-catenin. In contrast, the stem cell maintenance factors found in OSN ChIP-SICAP were not identified. Collectively, this demonstrates that ChIP-SICAP reveals target-specific protein-DNA interactions. To investigate whether changes observed in chromatin interactions around OSN were dependent on global protein expression level, we performed a total proteome comparison of ESCs grown in 2iL and serum conditions. Interestingly, protein ratios did not always correlate between ChIP-SICAP and the total proteome. For instance, β-catenin preferentially binds to OSN sites in 2iL versus serum, without a change in overall expression. We observed a similar trend for Esrrb, Kdm3a, Mybl2, Tcf7l1, Tle3, Sall4, Scml2, Smarcd2, Smarce1, Stat3, Trim24, and Zfp42. This suggests that alternative mechanisms are in place to induce interaction with chromatin in general, and with the OSN network in particular. Intrigued by the differential chromatin-binding proteins, we analyzed the OSN ChIP-SICAP data for the presence of proteins modified by phosphorylation, acetylation, methylation, and ubiquitination. Indeed, we identified 95 ChIP-SICAP proteins carrying one or more of these modifications. Phosphorylation was the most frequent modification, observed on 84 sites. Several PTMs differ in abundance between 2iL and serum, mostly following the trend of their cognate protein, with distinct exceptions suggesting a change in the stoichiometry of the modification in proteins associating with OSN in 2iL versus serum conditions. Although additional experiments will be required to confirm whether these modifications are causally involved in modulating protein interactions in chromatin, ChIP-SICAP may provide a starting point to investigate how PTMs shape chromatin-bound protein networks. The 407 proteins that were consistently enriched with OSN were subjected to hierarchical clustering based on their ChIP-SICAP protein ratios between 2iL and serum conditions, showing 
high similarity between Oct4, Sox2, and Nanog experiments while Cdh1 remained as a separate group (a minimal clustering sketch on illustrative data is provided after the main text). Interestingly, many established stem cell regulators were enriched in 2iL conditions by each of the three TFs, indicating strong association with the OSN network in the naive pluripotent state. These include Nanog, β-catenin, Prdm14, Zfp42, Tcf7l1, Tbx3, and Kdm3a. Notably, Cbfa2t2, a transcriptional corepressor not previously known to interact with OSN, was identified very recently as a protein that regulates pluripotency and germline specification in mice by providing a scaffold to stabilize PRDM14 and OCT4 on chromatin. This is not only fully consistent with our observation of Cbfa2t2 in the OSN network but also provides an independent functional validation of our data. Another candidate that we identified is Trim24, an E3-ubiquitin ligase that binds to combinatorially modified histones. We performed ChIP-seq for Trim24 to identify its genome-wide occupancy in ESCs grown both in 2iL and serum media and compared this to the genome occupancy of OSN. Overall, Trim24 colocalized with OSN in 813 enhancers, including 88 of the 142 previously reported superenhancers. Additionally, Trim24 preferentially binds to 237 enhancers in the 2iL condition compared to only 27 in the serum condition, which is in line with the high ChIP-SICAP ratio of Trim24 in 2iL/serum. Interestingly, some of these enhancers are in close proximity to genes involved in either negative regulation of cell differentiation or positive regulation of cell proliferation, thus suggesting a regulatory role for Trim24 in processes that are fundamental to pluripotency. To better understand how Trim24 functions mechanistically in mouse ESCs, we performed knockdown of Trim24 using short hairpin RNA for 24 hr, followed by mRNA sequencing. We observed dysregulation of 1,562 genes. Interestingly, developmental genes were upregulated, including genes involved in neural differentiation, the immune system, muscle differentiation, and spermatogenesis. On the other hand, numerous genes with central roles in cell cycle and proliferation were downregulated. Remarkably, Bmi1, Rnf2, Suz12, and Mtf2, well-known members of the PRC1 and PRC2 complexes, were downregulated. Altogether, this result indicates that Trim24 is required to suppress developmental genes and to maintain expression of genes involved in proliferation, cell cycle, and DNA replication. Previously, Allton et al. 
have shown that Trim24 knockdown in mouse ESCs leads to p53-mediated apoptosis.To test coregulation of genes by Trim24 and p53, we carried out p53 knockdown as well as double knockdown of Trim24 and p53.As a result of p53 knockdown, 1,801 genes were deregulated, of which 353 genes were overlapping with Trim24 knockdown.We compared these data to a Trim24-p53 double knockdown to distinguish synergistic and antagonistic effects, revealing that 73.4% of the Trim24 target genes are regulated independent of p53.However, the effect of p53 on 18.1% and 8.4% of the Trim24 targets is antagonistic and synergistic, respectively.For instance, p53 has an antagonistic effect on Myb expression, rescuing Trim24 knockdown-mediated downregulation of Myb.Conversely, p53 and Trim24 have synergistic positive effects on Myc expression.Among the 1,562 genes that are differentially expressed after Trim24 knockdown, 198 genes are located near the Trim24 binding sites on the genome.Moreover, 68 ESC enhancers with Trim24 occupancies are located near the differentially expressed genes.The comparison of the genome-wide occupancy of p53 in mouse ESCs with our Trim24 ChIP-seq data revealed that 17 ES superenhancers are cobound by p53 and Trim24.Remarkably, this includes the superenhancers of pluripotency genes such as Nanog, Prdm14, Sox2, and Tbx3.Although Trim24 binds preferentially to these loci in 2iL media, knockdown of Trim24 had no significant effect on the expression of these genes, at least under the used conditions.Altogether, these data indicate that Trim24 functions to activate expression of cell cycle, DNA replication, and polycomb components and to suppress expression of developmental genes largely independently of p53.Since our observations position Trim24 in the OSN network, regulating the expression of cell cycle and developmental genes, we tested if Trim24 can promote the generation of iPS cells.We coexpressed Trim24 with OSKM in a doxycycline-inducible reprogramming system to induce formation of iPS cells from secondary MEFs.As a result, we observed that expression of Trim24 together with OSKM increased the number of Oct4-EGFP-positive colonies from 39 to 468 per plate compared to OSKM alone, i.e., an increase of 12-fold.This suggests that Trim24 stabilizes the transcriptional program imposed by OSKM to more efficiently establish and maintain pluripotency.We next investigated the feasibility of retrieving both proteins and DNA after ChIP-SICAP, aiming to identify the proteins that colocalize with the bait as well as its genomic binding site from the same sample.We therefore verified the presence of DNA in the supernatant of samples treated with SP3, the last step in the ChIP-SICAP procedure used for peptide cleanup and removal of detergents.Indeed, qPCR on DNA purified after Nanog ChIP-SICAP recovered the Nanog promoter, but not flanking regions, consistent with the notion that Nanog binds to its own promoter.Next, although the recovered DNA was end-biotinylated, we successfully prepared the library for NGS without any change in Illumina sample prep protocol.Strikingly, when comparing the result of regular ChIP-seq and ChIP-SICAP-seq using the same Nanog antibody, we identified a very similar number of peaks with very large overlap and similar enrichment.Among the top 10,000 enriched ChIP-seq peaks, only 33 peaks were not enriched by ChIP-SICAP, indicating that recovery of DNA by biotin labeling and streptavidin purification is very efficient in SICAP.Moreover, the recovery of the major ChIP-seq peaks 
without the introduction of artifactual peaks suggests that TdT biotinylates chromatin fragments in an unbiased manner.As a result, ChIP-SICAP can be used for the simultaneous analysis of proteins and DNA in an integrative workflow, to obtain highly complementary information on the identity of colocalized proteins as well as genomic binding sites of the bait protein.We have designed ChIP-SICAP to characterize the proteins that converge on chromatin with a protein of interest in its DNA-bound state, aimed to gain insight in the composition and function of the protein network around transcription factors and transcriptional regulators.We applied ChIP-SICAP to Oct4, Sox2, and Nanog in mouse ESCs to better characterize the protein network operating in the core of pluripotency in a quantitative and context-dependent manner and demonstrated the power of this approach by identifying and validating Trim24 as a protein that physically colocalizes and functionally interacts with core pluripotency factors.Compared to other methods, ChIP-SICAP benefits from the sequential enrichment of the bait protein and the DNA it is crosslinked to.In particular, TdT-mediated biotinylation of DNA and subsequent capture by streptavidin critically contribute to the specificity of the approach by allowing stringent washing to efficiently remove common contaminants, including the IP antibody, while providing evidence that the bait and colocalizing proteins bind to chromatin.A distinct advantage of ChIP-SICAP over conventional coIP is its ability to identify proteins that colocalize within a short distance on DNA, revealing functional connections between proteins that are not necessarily mediated by direct physical interactions.This is highly relevant in the light of recent data showing that interactions between many cooperative TFs are mediated by DNA rather than direct protein-protein interactions.Abundance ranking of proteins identified by ChIP-SICAP provides a characteristic signature allowing for quality control of the obtained results.Following histones as the most abundant proteins, the bait protein itself typically ranks among the top candidates, thereby validating the specificity of the antibody and thus satisfying the recommendations that were recently proposed for the quality control of antibodies in affinity-purification strategies.This is followed by dozens to hundreds of proteins with lower abundance, which we interpret as proteins that colocalize with the bait at decreasing frequency along the genome.This overall pattern, in combination with the identification of bait-specific protein profiles and the underrepresentation of common contaminants, argues against the possibility of systematic calling of false interactions due to overcrosslinking.Yet we cannot exclude the possibility that some of the interactions reported here may be indirect.We combined ChIP-SICAP with SILAC labeling, demonstrating both tight interconnectivity between 400 proteins that colocalize around the core pluripotency factors Oct4, Sox2, and Nanog and that the composition of this network depends on the pluripotent state.We focused our attention to Trim24 as a protein not known to partake in the pluripotency network but that tightly clustered with well-established pluripotency factors, especially in 2iL conditions.Trim24, also known as transcriptional intermediary factor 1a, has been identified as a E3-ubiquitin ligase but also as a reader of histone modifications.Functionally, Trim24 has been shown to modulate transcription in mouse 
zygotes, by moving from the cytoplasm to the nucleus and to activate transcription of the embryonic genome.Although Trim24 has never been directly linked to pluripotency, large-scale studies suggest that its expression closely follows the trend of bona fide pluripotency factors showing increased expression during reprogramming both at the transcript and the protein level.Our data demonstrate not only that Trim24 colocalizes to many OSN binding sites in the genome but also that it activates transcription of cell cycle and DNA replication genes while suppressing differentiation genes.These characteristics likely contribute to its role in promoting OSKM-mediated generation of iPS cells.Intriguingly, recent studies have correlated elevated expression of Trim24 with poor patient prognosis in various tumor entities.Furthermore, ectopic expression of Trim24 induced malignant transformation in epithelial cells, while its knockdown in colon cancer cells induced apoptosis.Collectively, this suggests that the main function of Trim24 resides in enhancing cell proliferation, thereby contributing to critical hallmarks both of pluripotency and cancer.Altogether, we have demonstrated that ChIP-SICAP is a powerful tool to gain a better understanding of transcriptional networks in general, and in pluripotency in particular.Considering that this method can be generically applied to any other cell type or chromatin protein, ChIP-SICAP should prove a useful and versatile tool to identify proteins that associate with a variety of TFs, transcriptional regulators, and posttranslationally modified histones.We anticipate that future use of ChIP-SICAP will extend to the analysis of protein translocation to chromatin as a mechanism to determine cell fate, to investigate the correlation between chromatin-association of TFs and their local histone-PTM landscape, and to examine the role of PTMs in protein association to chromatin.Its utility is further enhanced by the ability to simultaneously obtain DNA for high-quality ChIP-seq, to obtain highly complementary data types in an integrated workflow.One of the limitations of ChIP-SICAP is the need for a ChIP-grade antibody.Thereby it suffers from the same restriction as ChIP-seq, but with the distinction that the verification of the antibody specificity is an inherent part of ChIP-SICAP data analysis.Therefore, even antibodies against nonclassical chromatin proteins may be tested and validated by ChIP-SICAP.The need for protein-specific antibodies may be bypassed by employing CRISPR/Cas9 technologies to insert an affinity tag in the coding sequence of the gene of interest.As yet another approach, computational methods such as DeepBind may predict the score of binding for a given protein on the binding sites of the bait, although this is limited to proteins for which a motif is known.The sensitivity of ChIP-SICAP may be limited by the low efficiency of IP and by limitations in mass spectrometry to detect very low-abundance peptides.Consequently, proteins that colocalize with the bait protein at many genomic locations will be preferentially identified.The power of ChIP-SICAP resides in its unbiased protein identification to thereby suggest novel chromatin factors; however, their frequency and the exact sites of colocalization need to be validated by ChIP-qPCR for individual sites, or by ChIP-seq for global profiling across the genome.Mouse ESCs were grown feeder free on 0.2% gelatinized cell culture plates in either traditional ES media with serum or 2iL-media.Chromatin was 
crosslinked by suspending cells in 1.5% formaldehyde for 15 min, quenched in 125 mM glycine, and stored at –80°C until use. Chromatin from 24 million fixed ESCs was sheared by sonication, followed by immunoprecipitation with a suitable antibody. After capture on protein A beads, DNA was biotinylated by TdT in the presence of biotin-11-ddUTP and eluted, and protein-DNA complexes were bound to streptavidin beads. Proteins were digested with trypsin, and the resulting peptides were fractionated by high pH reverse-phase chromatography and analyzed using LC-MS on an Orbitrap Velos Pro or Q-Exactive mass spectrometer. A detailed protocol and details for data analysis can be found in the Supplemental Information. After ChIP on crosslinked and sheared chromatin, protein was digested with Proteinase K, and DNA was purified using phenol/chloroform isoamyl alcohol and then precipitated. The libraries were prepared for Illumina sequencing, and sequencing was carried out on an Illumina HiSeq 2000 according to the manufacturer’s protocols. Knockdown of Trim24 and p53 was carried out using the lentiviral vectors shTrim24 and shTrp53, respectively, in three independent transductions. Forty-eight hours after infection, ESCs were lysed and RNA was extracted for mRNA-seq and RT-qPCR. M.-R.R. and J.K. designed the studies and analyzed the data. M.-R.R. performed all experiments. G.S. analyzed mass spectrometry data. C.G. analyzed sequencing data. M.-R.R. and J.K. wrote the manuscript with input from all authors.
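The hierarchical clustering referred to in the main text can be sketched as follows. This is a minimal illustration on invented data: the protein names and log2(2iL/serum) ratios below are placeholders, and Euclidean distance with average linkage is an assumed parameter choice rather than the one used in the study.

```python
# Minimal sketch: cluster chromatin-bound proteins by their ChIP-SICAP 2iL/serum
# ratios across the three bait experiments (Oct4, Sox2, Nanog).
# All numbers are invented placeholders; metric and linkage are illustrative.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

proteins = ["beta-catenin", "Esrrb", "Prdm14", "Tbx3", "Uhrf1", "Dnmt3a"]
# columns: log2(2iL/serum) ratio in the Oct4, Sox2 and Nanog ChIP-SICAP assays
ratios = np.array([
    [ 2.1,  1.9,  2.3],   # enriched in 2iL (hypothetical values)
    [ 1.5,  1.4,  1.6],
    [ 1.8,  1.7,  2.0],
    [ 1.2,  1.1,  1.3],
    [-1.4, -1.2, -1.5],   # enriched in serum (hypothetical values)
    [-1.1, -1.0, -1.2],
])

Z = linkage(ratios, method="average", metric="euclidean")
clusters = fcluster(Z, t=2, criterion="maxclust")   # cut the tree into two groups
for protein, cluster_id in zip(proteins, clusters):
    print(f"{protein:<12} cluster {cluster_id}")
```

On real data, the input would be the full matrix of ChIP-SICAP ratios for the 407 consistently enriched proteins, and the resulting tree would typically be inspected as a dendrogram or heatmap rather than cut into a fixed number of clusters.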
Maintenance of pluripotency is regulated by a network of transcription factors coordinated by Oct4, Sox2, and Nanog (OSN), yet a systematic investigation of the composition and dynamics of the OSN protein network specifically on chromatin is still missing. Here we have developed a method combining ChIP with selective isolation of chromatin-associated proteins (SICAP) followed by mass spectrometry to identify chromatin-bound partners of a protein of interest. ChIP-SICAP in mouse embryonic stem cells (ESCs) identified over 400 proteins associating with OSN, including several whose interaction depends on the pluripotent state. Trim24, a previously unrecognized protein in the network, converges with OSN on multiple enhancers and suppresses the expression of developmental genes while activating cell cycle genes. Consistently, Trim24 significantly improved efficiency of cellular reprogramming, demonstrating its direct functionality in establishing pluripotency. Collectively, ChIP-SICAP provides a powerful tool to decode chromatin protein composition, further enhanced by its integrative capacity to perform ChIP-seq.
123
Health related quality of life of people with non-epileptic seizures: The role of socio-demographic characteristics and stigma
There has been a marked shift in thinking about what health is and how it is measured, with traditional clinical outcomes increasingly giving way to, or used in conjunction with, patient-reported outcome measures (PROMs). Health-related Quality of Life (HRQoL) is a multidimensional PROM construct used to assess the perceived impact of health status on quality of life, comprising physical functioning, emotional status, and social well-being domains. People with non-epileptic seizures, often referred to as psychogenic non-epileptic seizures or non-epileptic attack disorder, consistently report poorer HRQoL than those with epilepsy. A recent systematic review of the literature identified 14 studies arising from ten separate research projects that have explored associations between independent factors and HRQoL in this patient group. The evidence available suggests a strong adverse association between psychological factors and the HRQoL of adults with NES. Several studies show depression to be a strong predictor of poorer HRQoL in this patient group. Other psychological factors associated with poorer HRQoL in people with NES include the number/severity of mood and emotional complaints, illness perceptions, dissociative experiences, somatic symptoms, and escape-avoidance coping strategies. Condition-related factors, such as older age of onset and experiencing the condition for a shorter period of time, have also been shown to adversely affect HRQoL. As with epilepsy patient groups, seizure freedom has been shown to be positively associated with HRQoL in patients with NES. However, whereas systematic reviews of the literature have found seizure frequency to be a predictor of HRQoL in adults with epilepsy, the same was not found to be true for adults with NES. Yet, as Mitchell and colleagues point out, studies that attempt to produce a model to explain the factors that are associated with HRQoL in adults with NES only account for 65% of the variance at best. Our limited understanding of how social factors affect the HRQoL of those living with NES probably contributes to this shortfall. There are significant knowledge gaps in relation to domains such as stigma, employment status, and social and family relations. HRQoL in this patient group has been negatively associated with family roles and affective family involvement subscales using the Family Assessment Device, suggesting the roles and influence of significant others to be a potentially important predictor of HRQoL for people with NES. There is also some evidence that concerns about relationships with the main caregiver seem to cause more distress in those with NES than in patients with epilepsy. We know that the stigma associated with epilepsy is considerable and that it has negative effects on HRQoL – in fact, it may account for more HRQoL variance than clinical outcomes. However, whereas there is a wealth of research to support the view that the social prognosis of epilepsy is often less good than the clinical one, comparatively little research has explored the social impact of NES, and none has explored the relationship between stigma and HRQoL in this patient group to date. The only study to have examined the role of socio-demographic variables found no significant correlation between employment status, marital status, having children, religious involvement, and proximity to family and HRQoL, but more research is needed to substantiate these findings. To add to the evidence base, this study seeks to explore the relationship between HRQoL and perceived 
stigma among adults with NES, and the role of participants’ socio-demographic characteristics.Findings will inform an upcoming qualitative study exploring the stigma perceptions of people with the condition, which will include exploring participants written texts about their family relations and the social impact of NES.Taken together, we hope to identify social dynamics that will contribute to larger studies aiming to produce a model to explain factors affecting the HRQoL of adults with NES.A link to an in-depth survey comprised of polar, frequency, Likert scales, and open questions was advertised to members of 20 patient and practitioner-led online support groups and websites for people with NES.The survey was piloted among 25 people living with the condition.Final survey data were organized around four key themes: 1) the diagnostic journey 2) access to and experience of treatment 3) interactions with healthcare professionals and 4) social support and social stigma.Advertising commenced May 2016 and final data collected from 1 July to 1 October 2016.To include as many people with NES as possible, the only inclusion criteria were that participants had to be over 18 years of age and had received a diagnosis of NES by a health professional.Participants were advised that we used the term NES throughout the survey to describe diagnoses of psychogenic non-epileptic seizures, non-epileptic attack disorder, and other diagnostic terms sometimes used to describe the condition and symptoms; such as, dissociative, conversion, functional, and pseudo seizures.They were also informed that we used the term seizure throughout the survey, whilst recognising that some people experience non-epileptic events in which they do not exhibit movements, only briefly lose consciousness, or experience an altered state of consciousness, or a mixture of these behaviours and sensations.Participants were advised that, unless otherwise stated, to consider the term “seizure” to include such “events”.Those with a dual-diagnosis of epilepsy and NES were asked to only comment on non-epileptic seizures and events wherever possible.Participants were able to save their answers and return to the survey via a secure and automated email link.Typically, open questions were optional and all others mandatory.The smart-logic survey format helped to protect against participants giving conflicting answers, and to ‘re-check’ and correct responses when they did so.This study uses a subset of the full survey data to explore associations between the socio-demographic and health-related characteristics of participants, their HRQoL, and levels of perceived stigmatisation; using the measures listed below.Participants were asked a range of socio-demographic and health-related questions, as indicated in Table 1.The 31-item Quality of Life in Epilepsy inventory was used to measure HRQoL.The inventory, designed for adults with epilepsy aged 18 years and older, is divided into seven subscales that explore various aspects of patients’ health and wellbeing: emotional well-being, social functioning, energy/fatigue, cognitive functioning, seizure worry, medication effects, and overall quality of life.A weighted average of the multi-item scale scores is used to obtain a total score.Although specifically designed for people with epilepsy, there are important clinical similarities and shared concerns between NES and epilepsy patient populations.A review of health status measures did not produce any better tools to assess the construct of HRQoL in this 
patient population; and a recent systematic review identified the QOLIE-31 as the most popular measure in studies exploring the HRQoL of this patient group. Stigma was measured using the Epilepsy Stigma Scale developed by Dilorio and colleagues. The ten-item scale assesses the degree to which a person believes that their seizure condition is perceived as negative and interferes with relationships with others, rated on a 7-point scale from strongly disagree to strongly agree. Item responses are summed to yield a total score. In this study, overall median scores were calculated. Higher scores are associated with greater perceptions of stigma. To our knowledge, the measure has not been validated in a NES patient population. We assessed the scale for internal consistency and found the α coefficient for the responses of our participants to be 0.89. Analysis of the data was performed using SPSS, version 24. To guard against violations of the assumptions of normality and homogeneity of variance, and because measures include ordinal data, non-parametric tests of significance and correlation were used. In some cases, mean scores are presented or discussed for comparative purposes. The primary outcome measure was the QOLIE-31 total score. The Mann-Whitney U Test was used to compare quantitative variables between two independent groups. Spearman's rank correlation coefficient was used to compare continuous and ordinal data variables. The strength of correlations was defined as: 0–0.39 weak, 0.4–0.69 moderate, and 0.7–1 strong. The coefficient of determination was calculated to establish the proportion of shared variance between HRQoL domains and total stigma score. Holm's Sequential Bonferroni Procedure was performed for multiple tests to protect against inflation of Type 1 error (a computational sketch of these correlation and correction steps is provided after the main text). Statistically significant results that were not rejected following Holm's Sequential Bonferroni method are shown in bold. A total of 289 people began the survey. Of these, 141 completed all mandatory questions and submitted their responses for inclusion in the study. Six people reported a diagnosis other than NES and their responses were excluded from further analysis. Of the remaining 135 participants, we report on 115 participants who described receiving "a formal diagnosis of NES" by a health professional. Twenty participants who reported receiving "a tentative diagnosis of NES" were excluded from further analysis. The socio-demographic and health characteristics of the 115 participants included in this study are shown in Table 1. As a group, participants demonstrated a total median QOLIE-31 score of 31.7. No significant differences in HRQoL were observed between those who self-reported a dual diagnosis of epilepsy and NES, and those with NES alone; nor were significant differences in individual HRQoL domain scores observed between the two groups. No significant associations were found between HRQoL total scores and time from onset of NES or time from onset to diagnosis. No significant differences in HRQoL were observed between those who had been erroneously diagnosed with epilepsy in the past and those who had not. Seizure frequency was shown to be significantly, but weakly, correlated with HRQoL. HRQoL was not significantly correlated with participants' age. Following the Holm–Bonferroni method, participants in work or education reported significantly better HRQoL than those who were not. As shown in Table 2, no other socio-demographic variables tested returned a significant result. The median stigma score across the whole group of participants was 5.2. No significant differences in perceived 
stigma were observed between those who self-reported a dual diagnosis of epilepsy and NES, and those with NES alone. No significant associations were found between total stigma scores and time from onset of NES or time from onset to diagnosis. No significant differences in stigma scores were observed between those who had been erroneously diagnosed with epilepsy in the past and those who had not. Seizure frequency was shown to be significantly, but weakly, correlated with perceived stigma scores. After Holm-Bonferroni correction, no significant differences were observed between the stigma perceptions of socio-demographic groups detailed in Table 2, nor was stigma significantly correlated with participant age. A significant and moderate inverse correlation was found between perceived stigma scale and HRQoL total scores. As detailed in Table 3, analysis of QOLIE-31 subscales shows that seizure worry, emotional wellbeing, social functioning and stigma scale scores were significantly and moderately correlated; the proportion of shared variance for these subscales was 23%, 18%, and 17%, respectively. Energy/fatigue and cognitive subscales were found to be significantly, but weakly, correlated with stigma. Medication effects and Overall QoL subscales were not found to be significantly correlated. This study sought to explore the relationship between social factors and the HRQoL of adults with NES. Participants were found to experience high levels of perceived stigma, which was inversely correlated with HRQoL. Stigma perceptions were most strongly associated with the HRQoL domains seizure worry, emotional wellbeing, and social functioning. HRQoL was better amongst those in employment or education than those who were not. The levels of perceived stigma reported by our participants are considerably higher than typically found in epilepsy patient populations. A study of 314 people with epilepsy using the same measure reports a mean score of 3.7. Similarly, a recent study using a single four-point Likert scale question taken from the NEWQOL-6D found that perceived stigma was significantly higher among individuals with NES compared to those with epilepsy. These findings fit with the wider literature, which suggests that people with functional somatic syndromes experience greater perceived stigmatisation than those with comparable organic disease. The stigma of epilepsy is widely reported, and is consistently linked to reduced HRQoL. To our knowledge, ours is the first study to explore associations between HRQoL and stigma among adults with NES. A significant and moderate inverse correlation was observed, suggesting that higher perceptions of stigma contribute to poorer HRQoL among those with the condition. Stigma perceptions were found to be most strongly associated with the seizure worry, emotional wellbeing, and social functioning HRQoL domains, with over one-half of the variability related to these features. There is a dearth of research exploring the social stigma of NES, but peripheral findings from previous studies broadly corroborate our findings. Studies show that people with NES can experience feelings of shame, blame and stigmatisation, and might conceal the condition and isolate themselves to avoid potential adverse social reactions to seizures and feelings of embarrassment. On-going support from family, friends and colleagues has been described as extremely important in counteracting the social isolation associated with NES. For people with NES, their stigma perceptions are probably not without foundation. In Western nations 
derogatory views of NES may be linked to the disparaging use of terms such as ‘psychosomatic’ in the media, which might be taken to mean an illness that is feigned, malingered or representative of a character flaw. Unfortunately, these pejorative opinions are also found in medical circles. For those with the condition, stigmatising interactions with health professionals are not uncommon. People with NES often report that their symptoms are met with disbelief and not taken seriously, and that the legitimacy of the illness is sometimes questioned by clinicians; research exploring health professionals' views supports these assessments. Contrary to previous HRQoL findings, we found participants who reported being in employment or education to have significantly better HRQoL than those who were not. This discrepancy might be explained by the classification of those in education as ‘employed’ in our analysis. Before applying the Holm–Bonferroni method, we also found receipt of disability benefits to be a differentiating factor. As previously observed, we did not find relationship status or participants' age to be discriminating factors. Nor were significant differences observed in relation to participants' country of residence, gender, or living arrangements and their HRQoL. To our knowledge, these are novel findings and require substantiation. Seizure frequency was shown to be significantly, but weakly, correlated with HRQoL. This finding is in contrast to that of a systematic review which concluded that seizure frequency is not a predictor of HRQoL in this patient group, but is consistent with a study of 96 patients with NES, which found seizure frequency to be significantly associated with lower HRQoL summary scores. Participants' socio-demographic characteristics were not found to determine stigma perceptions. However, significant differences in levels of felt stigma according to relationship and employment status were noted prior to applying the Holm–Bonferroni method, and seizure frequency was shown to be significantly correlated with perceived stigma scores. These findings are consistent with studies of epilepsy patient populations, but require verification in NES patient populations. While our findings offer a novel contribution to the literature, it is important that they are interpreted within the context of their limitations. Perhaps the greatest concern with the collection of internet-based patient information is the reliability and validity of the data obtained. Yet, recent reviews suggest health data can be collected with equal or even better reliability in Web-based questionnaires compared with traditional approaches. Participants were able to complete the survey over an extended period if they so wished, and survey metrics show that 97% of respondents completed the survey within 82 h. The added benefit of time for reflection, the ability to consider and correct information, and the use of validation checks has been shown to improve data quality. There are also strong indications that web-based questionnaires are less prone to social desirability bias. Studies show that perceived health status data and HRQoL measures can be reliably collected using online methods. However, it is important to note that the standardised measures used in this study have not been tested in internet-based studies, and research is needed to confirm their online reliability. Due to the design of our study, there is no way to assess response rate. Using the number of surveys started as a proxy denominator suggests a completion 
rate of 49%. Previous studies exploring the HRQoL of adults with NES have recruited participants from inpatient epilepsy monitoring units, outpatient neurology settings, psychotherapeutic centres, or a combination of these; and most report that diagnoses were established using video-electroencephalography (video-EEG) monitoring. Video-EEG is the best-practice diagnostic method; however, it is expensive and resource-limited and may not be feasible because of the low frequency of seizures. Of the participants in this study, only half described undergoing video-EEG monitoring, with the remainder reporting electroencephalography or ambulatory-electroencephalography testing. Our approach means that we cannot say to what extent participants met the diagnostic criteria for NES proposed in the International League Against Epilepsy Non-epileptic Seizures Task Force guidelines. We must also consider that the diagnosis of NES is notoriously complex and difficult, and some participants may have been misdiagnosed. In view of the uncertainties about the diagnosis inherent in our recruitment method, the inclusion of people with a dual diagnosis of epilepsy and NES could also be considered a limitation of this study. However, given that this study was intended to explore the sociological dimension of NES, we thought it was important not to exclude any subgroup of the whole NES patient population. Epilepsy is an important comorbidity of NES, and the 13% figure actually places our study well within the prevalence range of comorbid epilepsy which has previously been reported in HRQoL studies of NES patient populations. Participants with mixed seizure disorders were encouraged to think about their NES when responding to questions about their seizures, but we acknowledge that we cannot be certain that all respondents were able to distinguish accurately between their epileptic and nonepileptic seizures. Despite differences in recruitment methods, other socio-demographic and health-related characteristics of our participants are also within the range of those reported in previous studies exploring the HRQoL of people with NES. This applies to age and gender; relationship status; the proportion in education or employment; time from onset of NES; time to diagnosis of NES; and mean frequency of NES in the four weeks prior to testing. Data pertaining to physical and psychological comorbidities were not within the scope of our analysis, and our sample might differ from those previously described in these respects. There might also be important differences between people with NES who have access to the Internet and participate in patient support groups, and those who do not. It is also a weakness of the study that the recruitment method did not allow us to recruit a comparison group, and the study is cross-sectional and correlational, which means that results can be bidirectional and should be interpreted with caution. It is possible that changes in social circumstances or status are more relevant to HRQoL and/or stigma than current circumstances – something best explored longitudinally. The correlational nature of our findings means that we cannot say anything about causality. Despite these limitations, the research improves our understanding of how social factors and dynamics might influence the HRQoL of adults with NES. To our knowledge, the study is the largest HRQoL survey of people with NES to date, and the first to explore the relationship between HRQoL and stigma in this patient group. An important finding is that
participants experience high levels of perceived stigma, which negatively affects their HRQoL.Our data suggests that not being in employment or education is detrimental to the HRQoL of people with NES.These exploratory findings serve a heuristic function, in that they identify several issues for further research.Qualitative analysis can help achieve fuller and more complete descriptions of phenomena, help correct interpretation of quantitative results, and provide triangulation .Perceived stigma could be a treatment target, and research is needed to understand how – and why – those with the condition experience stigmatisation; which in our data was most strongly associated with seizure worry, emotional wellbeing, and social functioning HRQoL domains.More research is also needed to understand factors that impede and help facilitate the participation of people with NES in education and employment.These studies could be usefully followed-up by a project that looks specifically at the enacted stigma faced by this patient group.Ethical clearance was approved by Nelson Mandela University Human Research Ethics Committee on 22nd April 2016, where the first author was a visiting researcher at the time of data collection.The survey was hosted by a UK-EU data-protection compliant provider.Potential participants were guided through an online study protocol detailing the purpose of the study, and data protection, ethical compliance and complaint procedures.All participants gave informed electronic consent to participate, and did so again to confirm survey completion and authorise submission of their data for use in the research project.No competing interests are disclosed.The authors certify that they have NO affiliations with or involvement in any organization or entity with any financial interest, or non-financial interest in the subject matter or materials discussed in this manuscript.Funded by Wellcome Trust Grant REF: grant number: 200923.
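For readers unfamiliar with the multiple-comparison correction referred to in the discussion above, the Holm–Bonferroni procedure can be summarised as follows. This is a generic sketch of the standard method rather than a description of the exact analysis performed in this study: with m hypotheses ordered by ascending p-value, p_(1) ≤ p_(2) ≤ … ≤ p_(m), the k-th ordered hypothesis is rejected only if all preceding hypotheses have been rejected and

\[ p_{(k)} \le \frac{\alpha}{m - k + 1}, \]

where α is the desired family-wise error rate (for example 0.05); testing stops at the first k for which the inequality fails.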
Purpose: People with non-epileptic seizures (NES) consistently report poorer Health-Related Quality of Life (HRQoL) than people with epilepsy. Yet, unlike in epilepsy, knowledge of how social factors influence the HRQoL of adults with NES is limited. To add to the evidence base, this study explores the relationship between HRQoL and perceived stigma among adults with NES, and the role of socio-demographic characteristics. Methods: Data was gathered from a survey of 115 people living with the condition, recruited from online support groups. Participants provided socio-demographic and health-related data and completed a series of questions investigating their HRQoL (QOLIE-31) and stigma perceptions (10-item Epilepsy Stigma Scale). Results: Participants were found to experience high levels of perceived stigma (median 5.2, mean 4.9). A significant and moderate inverse correlation was observed between HRQoL and stigma (rs − 0.474, p = < 0.001); suggesting higher perceptions of stigma contribute to poorer HRQoL among adults with NES. Stigma perceptions were found to be most strongly associated with the seizure worry (rs = − 0.479), emotional wellbeing (rs = − 0.421), and social functioning (rs = 0.407) HRQoL domains. Participants who reported being in employment or education were found to have significantly better HRQoL than those who were not (p = < 0.001). Conclusion: More (qualitative and quantitative) research is justified to understand how – and why – those with the condition experience stigmatisation, and the factors that impede and help facilitate the participation of people with NES in education and employment.
124
Lyo code generator: A model-based code generator for the development of OSLC-compliant tool interfaces
OASIS OSLC is an emerging and promising open standard for the interoperability of software tools, allowing them to share data with one another. The standard is based on the Linked Data initiative and follows the REST architectural pattern. By adopting the architecture of the Internet–and web technologies in general–it supports a Federated Information-based Development Environment that need not depend on a centralized integration platform to achieve the desired interoperability. Being a relatively new standard, adequate technical support is a critical factor for its wide adoption within the industry, by lowering the threshold of learning as well as implementing OSLC-compliant tools. This is particularly important, since the standard not only targets software tool and platform vendors. OSLC also provides the developing organization with the capabilities to customize its development environment for the tool interaction that best suits its specific needs. It hence becomes important that the technical support reaches out to this latter type of stakeholder, for whom tool development is not necessarily a core business and/or competence. From a research perspective, many past and ongoing research projects focusing on tool interoperability are increasingly adopting OSLC as a basis. While part of the research effort is particularly focused on tool interoperability at its core, a larger part of the research is in need of the OSLC capabilities in order to research into the added value of integrating tools and processes. For example, in the iFEST research project, whose main goal was to develop an integration framework for establishing tool chains, only one of the seven work packages of the project was devoted to the development of the framework. The remaining work packages acted as stakeholders of the resulting framework, focusing instead on optimizing specific phases of the system development process, and their respective tools. Similarly, in the ongoing EMC2 project, one task is expected to work on an integration framework, upon which the remaining tasks are designed to provide additional services and implementations. In this context, and to generate maximum research impact, core interoperability research should emphasize the development of supporting tools, methods and guidelines in order to bridge the gaps from the standardizing body to the wider research community and the developing organizations that work with development tools and processes. The open source software presented in this paper aims to promote the adoption of the OSLC standard in the industry, and to act as research software with high reuse potential, which other research projects on interoperability can build upon to further their needs. This is achieved through a model-based code generator that allows for the graphical modeling of OSLC interfaces, and from which an almost complete OSLC-compliant interface can be generated. Being loyal to the OSLC standard, such models provide a high-level abstraction of the standard, while ensuring that the detailed interface implementations remain compliant with it. The software is released as part of the Eclipse Lyo open-source project. From our experiences in previous research projects, developers and researchers working on tool interoperability generally face the following challenges when trying to adopt the OSLC standard: Integration requirements elicitation—This entails the identification, formulation and prioritization of integration needs. This task is particularly challenging since integration needs–by
definition–may be distributed across many stakeholders within the organization. Moreover, during the elicitation process stakeholders need both to be able to detach themselves from their current state-of-practice and limitations, as well as to understand what is potentially technically possible to achieve with the new standard. Implementation—Implementing OSLC-compliant tools entails competence in a number of additional technologies such as RESTful web services and the family of RDF standards. However, researchers and developers from other domains may not necessarily have this expertise, leading to a higher starting threshold in adopting the standard. Validation—A new standard may well contain ambiguous and/or incomplete formulations, which leads to misinterpretations of the standard. Few examples and guidelines are initially available to help dispel such confusion. For example, in the definition of a Resource Shape in OSLC, the distinction between the oslc:representation and oslc:valueType properties is not well clarified, and misuse may lead to invalid combinations of these two properties. This may ultimately lead to non-compliant tools being developed. This is particularly problematic for an interoperability standard, since it will also lead to seemingly interoperable tools becoming non-interoperable. All in all, development support is needed across the complete development cycle of a tool or toolchain. Moreover, such support ought to abstract away enough of the OSLC standard to make it available for the wider research community and the developing organizations. Towards this end, Lyo is an open-source Eclipse project aimed at helping the community adopt OSLC specifications and build OSLC-compliant tools. The project consists of open source components, such as the OSLC4J Software Development Kit (SDK) with the necessary components to implement OSLC providers and clients in Java; a test suite which will test OSLC implementations against a specification; and a set of reference implementations that can be used to integrate and test against. The Lyo effort partially addresses the challenges discussed earlier, and the contribution of this paper aims to build upon and strengthen the Lyo project. Another relevant work is an open-source generator of domain-specific OSLC specifications from an Ecore meta-model. This solution focuses on the code generation of implementations, and does not attempt to provide broader support for a more complete development process. Moreover, it seems to no longer be actively developed, nor does it build upon the OSLC4J SDK. Fig.
1 illustrates the overall approach of our contribution in order to meet the challenges discussed above. We build upon the existing Lyo OSLC4J SDK, aiming to further strengthen it and build on its success. While OSLC4J targets the implementation phase of adaptor development, we aim to complement it with a model-based development approach, which allows one to work at a higher level of abstraction, with models used to specify the adaptor design, without needing to deal with all the technical details of the OSLC standard. A code generator can then be used to synthesize the specification model into a running implementation. Additional model-based components will be added at this level, such as the ongoing work on Test Suite generation. The software consists of the following components: Adaptor meta-model—defining a model of any tool interface according to the OSLC standard. Lyo code generator—generating the OSLC adaptor based on an instance of the EMF meta-model. The adaptor meta-model allows for the graphical and intuitive specification of an OSLC adaptor's functionality. It is designed to be loyal to the OSLC standard, and–compared to the current approach of text-based OSLC specifications–this alternative can better enforce guidelines on how to define a specification. It is built on the EMF Ecore framework. The Lyo code generator runs as an Eclipse project, and assumes a minimal set of plug-in dependencies. It is based on Acceleo, which implements the OMG MOF Model-to-Text Language standard. The code generator is designed to take an instance of the OSLC meta-model as input, and produces OSLC4J-compliant Java code. Once generated, the Java code has no dependencies on either the code generator or the input EMF model. The generated code can be further developed–as any OSLC4J adaptor–with no further connections to the generator. Moreover, it is possible to modify the adaptor model and re-generate its code, without any loss of code manually introduced into the adaptor between generations. This promotes the incremental development of adaptors, where the specification and implementation can be gradually developed. Fig.
2 illustrates the typical architecture of a tool adaptor, and its relation to the tool it is interfacing. As further detailed in the next section, the code generator can produce most–but not all–of the code necessary for a complete adaptor interface. The code to access the source tool's internal data needs to be manually implemented. However, for specific tool technologies, more and more of this manual code can be automatically generated. While such logic can be integrated into the code generator, we envisage instead that additional components can build upon our current general-purpose generator, in order to provide such additional functionality. For example, in the Crystal EU research project, a model generator that produces instances of our meta-model for EMF-based tools allows for the complete adaptor to be automatically generated. As further detailed in the documentation, the process of developing an OSLC4J adaptor using the Lyo code generator consists of the following steps: Setup OSLC4J project—manually create an empty Eclipse project that is configured for developing any OSLC4J adaptor. Model OSLC4J adaptor—specify the desired adaptor functionality by graphically instantiating the adaptor meta-model. Validate adaptor model—use the provided validation mechanism to ensure all required model properties are defined as expected, before code generation can be possible. Generate adaptor code—use the provided generation mechanism to produce almost all the necessary Java code for a running OSLC4J adaptor server. Complete adaptor implementation—manually implement the code necessary to access the source tool's internal data. Run adaptor—the adaptor is now ready to run. Upon generation, an adaptor is–almost–complete and ready-to-run, and need not be modified nor complemented with much additional manual code. Only a set of methods that communicate with the source tool to access its internal data needs to be implemented. This communication is reduced to a simple set of methods to get, create, search and query each serviced resource. This manual code is packaged into a single class, with skeletons of the required methods generated. However, it remains the task of the developer to manually code these methods. The following capabilities are generated based on the adaptor specification model (illustrative sketches of such generated code are given after the main text): Resources—Java classes representing each OSLC resource of relevance to the adaptor. The classes include the appropriate OSLC annotations, attributes, and getters/setters—as expected by the OSLC4J SDK. Services—JAX-RS classes corresponding to the resource services defined in the model, as well as those for the Service Provider and Service Provider Catalog. Each JAX-RS class contains the necessary methods that correspond to Query Capabilities, Creation Factories and Dialogs, as well as the CRUD methods to interact with resources—as defined in the adaptor model instance. Debugging code—besides supporting the expected RDF/XML representation, the JAX-RS classes also contain methods that respond to HTML requests, which can be valuable for debugging purposes. HTML requests are handled through customizable JSP templates that provide HTML pages representing resources, query results and dialogs. Web server—all necessary classes and configurations to run the adaptor as a Jetty web server. Moreover, the generated code contains many placeholders that allow the developer to manually insert additional Java code and hence customize the default functionality where desired. For example, the generated JSP templates provide a basic HTML representation for resources,
which may very well need to be customized to match the look-and-feel of the tool being interfaced. Similarly, a JAX-RS class method may need to perform some additional business logic before–or after–the default handling of the designated web service. To this end, the developer can safely add manual code in the area between "//Start of user code" and "//End of user code" in the Java and/or JSP files. The generator ensures that such manual code remains intact after subsequent generations (a sketch illustrating these protected regions is given after the main text). This allows for the incremental development of the adaptor model and its resulting code. The Lyo code generator has been used and validated by a number of partners in different research projects. In , adaptors for the EastADL and Autosar modeling tools were automatically generated, with services for exposing 261 and 834 resources respectively. In both tools, the model entities form a complex hierarchy of multiple inheritances, which need to be mapped to a valid Java implementation. Hence, the manual development of OSLC adaptors was deemed prohibitive, necessitating the ability to automate the adaptor development. Moreover, targeting EMF-based tools in general, an additional component–EMF4OSLC–was developed to complement the Lyo code generator with two additional features: (1) the automatic creation of the adaptor specification and (2) the automatic generation of the necessary code to access and manipulate the data in the backend tool. This led to the complete generation of the adaptors—as earlier described in Section 3.1—hence validating our layering approach of Fig. 1. Similarly, in the ongoing ASSUME project, an additional component–EMF4SQL–is being developed to handle SQL-based tools. It is anticipated that much code reuse can be gained between EMF4OSLC and EMF4SQL, while ensuring that both components build on top of the Lyo code generator. As part of the EMC2 project, the Lyo code generator was used in to research into an orchestration mechanism that coordinates the activities and notifications across a tool chain. Moreover, a number of commercial tool providers used the generator to assist in the development of adaptors for their tools, making them easily available for the remaining partners. In , an investigation of adopting OSLC's minimalistic approach to data integration in the domain of production engineering was performed. For the researchers from the production domain–with little OSLC expertise–modeling the adaptors was helpful in understanding OSLC, while the generator helped lower the threshold of adopting it for the rapid prototyping of the tool adaptors. Finally, to further motivate the need for the generator in research, an earlier version of the generator was extended in by proposing and linking a light-weight integration model to the tool adaptor meta-models. The main functionality of the code generator, as well as an overview of the steps required to model, generate and implement an OSLC adaptor, are demonstrated in . The demonstration does not detail the exact instructions for working with the code generator, which are instead maintained textually under . In this paper, we presented a model-based approach to tool development that tries to lower the threshold of adopting the OSLC standard for both industrial developers and the interoperability research community. Since its release, the open source software has been used–as well as extended–to support the needs of both kinds of stakeholders. While the former allowed us to verify the practical usability and validity of our contribution, the latter demonstrates the
software’s research potential.In order to cover more of the development tool life cycle, the software is currently being extended to also support the automatic generation of test suites that can validate any implemented adaptor against an OSLC specification.Additional components are also being developed to support the complete code generation of interfaces for tools using specific technologies.
To promote the newly emerging OSLC (Open Services for Lifecycle Collaboration) tool interoperability standard, an open source code generator is developed that allows for the specification of OSLC-compliant tool interfaces, and from which almost complete Java code of the interface can be generated. The software takes a model-based development approach to tool interoperability, with the aim of providing modeling support for the complete development cycle of a tool interface. The software targets both OSLC developers, as well as the interoperability research community, with proven capabilities to be extended to support their corresponding needs.
125
The growth and development of cress (Lepidium sativum) affected by blue and red light
Light is not only a necessary source of energy for plants, but also an important signal which plays a major role in plant growth, morphological characteristics, cell molecule biosynthesis, and gene expression during the entire growth period of plants. Each light spectrum has an exclusive effect on particular gene expressions in plants, resulting in a number of different impacts. Several processes such as photosynthesis, germination, flowering, and biomass accumulation can be controlled and optimized by adjusting light wavelengths. Since plants are stationary creatures, they are in constant competition over gaining light, space, water, and nutrients. Competition over the reception of light leads to morphological and growth-related changes in plants. Plants have the ability to detect small changes in light spectrum, intensity and direction. Light receptors sense these signals, which provide access to the information used for plant growth regulation. Known light receptors are classified into three major groups: phytochromes, sensitive to red and far-red light; cryptochromes, sensitive to UV-A and blue light; and phototropins. Accordingly, recent studies have shown that the most important wavelengths for photosynthesis are the blue and red wavelengths, and the highest rates of photosynthesis were observed at wavelengths of 440 nm and 620 nm. Red light plays an important role in developing the photosynthetic apparatus while controlling light-mediated changes through the phytochrome system. Numerous studies have demonstrated that the use of mixed light spectrums results in the activation of a set of complex light systems in plants, which ultimately leads to physiological and biochemical responses from the plants. Using LED technology, monochrome light and specific spectrums can be produced via combinations of different waves. Today, LEDs with high energy and the ability to produce several wave spectrums are accessible in the horticulture industry. Though the manufacturing of LEDs with complete spectrums is more costly and complex, certain LED manufacturers are currently engaged in their production for commercial purposes. The advantages of LEDs over regular HPS lights include their longer lifetime and emission of less direct heat towards plants. The low-heat feature of LEDs allows for the implementation of lighting close to the plants, facilitating exposure among and inside each culture row. LED lamps have always been considered low-consumption lamps; and although the energy consumption of LEDs differs only slightly from that of HPS lamps, the technology is rapidly growing in favor of the former. Given the research in this area, LEDs produce almost twice the hours of lighting compared to HPS lamps. For years, studies were focused on the red spectrum and later on the observable blue spectrums in LED lights. Projects such as and similar ones at the time were supported by NASA. NASA has also continued its participation in LED-related research from 1980 until the present. NASA has shown interest in finding a good-quality source of light that involves proper energy consumption and low heat production while being suitable for growing edible plants in irregular conditions. Studies on the effect of LED light, which has recently received attention in greenhouses and centers for growing and breeding plants, show the impact of wavelengths and light colors on the quantity and quality of plant production. Meanwhile, improper application of artificial lights could result in a set of
problems such as increased heat, damage to plant tissues in the form of burned leaves, accelerated or delayed flowering, and increased electricity consumption expenses due to high intensity and/or unsuitable presence period of light within the plant's growth atmosphere. "In general, precise controlling of light environment with respect to the plant's requirements could enhance the performance, quality and production efficiency of the plant. "As a result, greenhouses in which LED sources of lights are used are of high potential power to specifically adjust the environment's lighting.Small plants such as leaf vegetables, small roots, and a number of medicinal plants are the most ideal types of plants to culture at greenhouses where LEDs are used.Currently, plant species cultured at greenhouses equipped with LEDs mostly include leaf vegetables such as lettuce and spinach.LED-interlighting products most commonly consist of blue- and red- LED chip combinations, specifically targeted for excitation of the chlorophyll pigments and thus for enhancing photosynthetic activity.Nonetheless, additional spectral compositions, including different ratios of blue, red, far-red and white light have been tested for interlighting.The presence of blue and red lights in lighting combinations in studies on parsley, lettuce, eggplant, and daisies resulted in an increase in height.Blue light leads to chlorophyll biosynthesis, opening of the pores, and increase in the thickness of leaves.A better growth of the strawberry plant was reported under blue LED treatment compared to red, and the combination of red and blue lights.The effect of blue and red light on changing the height of petunia flower is caused by the effect of blue light and excitation of cryptochromes which result in the production of signals that stimulate gibberellin production and stem height; this, in turn, leads to changes in the extent of blue light presence in the environment and its increase or reduction results in alterations in gibberellin excretion and ultimately, changes in height.Blue light was solely responsible for the increase in the height of basil medicinal plant.Fukuda et al. and Gautam et al. 
suggested that reducing the amount of red light in the environment alone results in delay or prevention of flowering. The effects of red and blue light on flowering are due to their impacts on the performance of phytochrome B pigments and cryptochromes. The ratio of red light to blue is of substantial importance; a ratio of 3 to 1 in the red and blue light combination led to acceptable growth of strawberry plants compared to the sole application of red or blue. Different red and blue LED light spectrums proved effective in increasing the amount of chlorophyll present in the strawberry plant and enhanced the fruit's function. LED light also had a considerable effect in increasing the net photosynthesis of the dracocephalum plant. Cress is a small, annual herbaceous plant from the Cruciferae family; it has a height of 50 cm and is rich in minerals and vitamins A and C, which are very beneficial for anemia treatment and blood purification. This plant has extraordinary properties both as an edible leaf vegetable and as a medicinal plant. Given the effects of the blue and red light spectrum on plants, particularly their positive impacts on leaf vegetables, the main purpose of this study is to examine and compare the effects of different combinations of blue and red light on the morphological and biochemical features of cress, as an important leaf vegetable and medicinal plant, relative to the control treatment; this is done in order to identify the most ideal growing condition of cress in terms of the lighting ratios of blue and red spectrum combinations and to compare the use of this technology with the control treatment in which natural sunlight is used. Plants were illuminated by light emitting diodes with different percentages of red and blue light. Three spectral treatments were used in this study, namely 90%R+10%B, 60%R+40%B and control. The photoperiod was 12/12 h and the photosynthetic photon flux density was 168 ± 10 μmol m−2 s−1. The LED lights were prototypes from General Electric Lighting Solutions. These consisted of 0.26 m, 0.06 m, 0.05 m linear fixtures, on which were placed an array of 6 LEDs. Irradiance was measured routinely using a quantum sensor. Photosynthetic photon flux density intensities and light spectra were monitored using a light meter. The relative spectra of the light treatments are shown in Fig.
1. The distance between lamps and plants was adjustable during different stages of growth via metal clips. In the control experimental unit, natural sunlight was used. The plants' growing environment was completely covered using special plastic covers in order to avoid light interference while the lamps were active. Plant height was measured and recorded using a tape measure with 0.01 m precision during the growth season. Leaf area was measured using ImageJ software and a Leaf Area Meter device. Stem diameter was measured using a caliper with 0.01 mm precision. The mean data of a shrub during the growth season was examined in the statistical analysis. The number of leaves on each shrub was counted from the appearance of the first leaf up until the time of harvest. The mean data of a shrub during the growth season was examined in the statistical analysis. To measure chlorophyll and carotene, first 0.1 g of completely developed young leaves was separated. It was then ground in a porcelain mortar with 10 mL of 99% methanol in order to extract the pigments, followed by centrifugation for 5 minutes at a speed of 3000 rpm. The absorbance of the resulting extract was read at wavelengths of 470, 653, and 666 nm using a spectrophotometer. Finally, pigment contents were calculated via the following relations: CHLa = 15.65 × A666 − 7.340 × A653; CHLb = 27.05 × A653 − 11.21 × A666; CHLx+c = 1000 × A470 − 2.860 × CHLa − 129.2 × CHLb; CHLt = CHLa + CHLb + CHLx+c, where CHLa is the amount of chlorophyll a, CHLb the amount of chlorophyll b, CHLx+c the total carotenoid content and CHLt the total chlorophyll content (a worked example of these calculations is given after the main text). There were 15 cress shrubs in each pot; the mean data of each pot during the growth period was examined in the statistical analysis. The data were subjected to two-way analysis of variance and the LSD test was used as a post-test. P ≥ 0.01 was considered not significant. Charts were drawn using Excel 2013 software. Assessment of the different light treatments showed the effects of LED lights on the fresh and dry weights of cress as well as its biomass at the 1% probability level; a considerable increase in the fresh and dry weight of leaves and stems and in the biomass of cress was observed compared to plants grown under natural sunlight conditions. According to the LSD test at the 1% level, the plant's fresh weight under the 60R:40B treatment showed a 57.11% increase compared to the natural light treatment. Moreover, a 26.06% increase was observed in the plant's dry weight under the 90R:10B treatment compared to the control sample. The highest amount of biomass was observed under the 60R:40B treatment at 1.51 g/kg of dry weight. Given the examinations of the different light treatments compared to the control treatment, the cress plant had its maximum height under the 60R:40B light at 19.76 cm (significant at the 1% probability level), which was a 53.28% increase compared to the control light treatment. Under the 60R:40B light treatment, the cress leaf had the largest area at 56.78 cm2 (significant at the 1% probability level), i.e.
47.46% increase compared to the control light treatment. The stem diameter and the number of leaves under the 60R:40B treatment had their maximum values of 3.28 mm and 8.16, respectively (significant at the 1% probability level); these were in turn increases of 56.7 and 61.27% compared to the control treatment. According to the obtained results, the higher the percentage of blue light, the more the amount of chlorophyll increased (LSD at 1%). The highest amounts of chlorophyll a, b and total chlorophyll were observed under the 60R:40B treatment, with values of 9.4, 5.68, and 15.09 mg g−1 FW leaf, respectively. Given the comparison between the light treatments and the control treatment with natural sunlight, the highest amounts were observed under lights with higher percentages of blue light while the lowest were observed under the control treatment, with values of 6.54 mg g−1 FW leaf, 4.03 mg g−1 FW leaf, and 10.59 mg g−1 FW leaf for chlorophyll a, b, and total chlorophyll, respectively. Compared to the control treatment, there was a 30.42% increase in chlorophyll a, a 29.04% increase in chlorophyll b and a 29.82% increase in total chlorophyll (LSD at 1%). As for the amount of carotenoid, no significant difference was observed between the percentages of red and blue light and the control treatment. Based on the results, the total amount of phenol under the effect of the LED lamps during the growth period was significantly increased. Both the 60R:40B and 90R:10B treatments were able to raise the amount of phenol by up to 46.67% in cress, compared to the control treatment in which no artificial light was received and natural sunlight was used. When LED lamps with different light percentages, including 60R:40B and 90R:10B, were used, a considerable increase was observed in the amount of anthocyanin in cress during the growth period with respect to the control treatment. This amount in cress was increased by 32.55% during the use of LED lamps compared to the control treatment. The minimum amount of anthocyanin, under the natural sunlight treatment, was observed as 2.94 Mm g−1 FW leaf, with a significant difference compared to the artificial light treatments. Light is considered an important source for photosynthesis; however, photosynthesis also relies on a set of light regulators and sensors. Blue and red lights activate different light sensors and gene expressions that may positively or negatively affect the growth and development of plants. Consequently, it can be concluded that the presence of both wavelengths is necessary for plants; accordingly, the majority of research is now focused on achieving a suitable combination of these lights. Okamoto et al.
reported that the presence of both the blue and red light is necessary and essential for photosynthesis as chlorophyll absorbs both of these wavelengths.They further suggested that the presence of blue light is beneficial for the morphology and the general health of plants.Overall, different wavelengths have been able to produce various effects on morphological and physiological characteristics as well as flowering capabilities and plant photosynthesis.When LED lights are used, the ratio of blue light to red light is of substantial importance as the application of both wavelengths can increase the growth and function of plants by 20% compared to the use of each wavelength in isolation.Table 1 demonstrates the measured traits of morphological and growing characteristics of cress.According to the results of the present study, the application of LED lamps in blue and red spectra has a significant effect on the morphological traits of cress."Among the treatments applied, the 60R:40B treatment had the highest effect on traits including fresh weight, biomass, plant's height, area of leaf, number of leaves, and stem diameter of cress, with significant differences compared to treatments without blue and red spectra.Dry weights of cress did not have a significant difference under various treatments, though a considerable, significant difference was observed compared to the control treatment.Phytochromes along with cryptochromes result in photomorphogenesis in plants; therefore, the study of light waves becomes substantially important.Moreover, more suitable growth responses can be obtained by taking the maximum light absorption by receptors into account when choosing the spectral quality of light."Many growth parameters such as the plant's fresh and dry weights, stem length and area of leaves are affected through impacts on phytochrome receptors along with red light spectrum.For instance, the application of red light increase the area of leaves in cucumber plants.Moreover, in a plant such as Scots pine, increase in the extent of applied red light increased its biomass; similar results were observed in the study conducted on cress.Albeit, in other studies, the addition of blue light to red was apparently essential for photosynthesis system activity and enables the production of more biomass in plants.Accordingly, in this experiment, various percentages of blue light were applied alongside red light so that the plant does not suffer from deficiency in terms of growth conditions.As the results showed, the use of a combination of blue and red lights produced the best result in morphological characteristics of cress compared to the control treatment."The application of red and blue lights improved the plant's growth in terms of the fresh weight of the stem and leaves compared to natural sunlight which were also consistent with the results of a study by Randall and Lopez.The highest fresh weight caused by the application of red and blue light was reported in daisies.The highest fresh weight in lettuce was also reported under blue and red light treatment.The use of red light alongside blue light yielded better results in increasing the dry weight of needle leaves in Norwegian and Scots pine seedlings.Moreover, the application of red and blue lights led to increase in the dry weight of leaves and stems of common sage, lettuce, radish, pepper, and spinach.At the chloroplast level, blue light has a high photosynthetic capacity with the expression of features similar to sunlight.Subsequently, it can be 
concluded that photosynthesis is increased when affected by blue light, which may be due to the specific sensitivity of cryptochromes and phototropins to blue light; such an increase in photosynthesis, in turn, results in more vegetative growth in plants. With increased growth, increases in the stem's diameter and height as well as in the area and number of leaves can be expected. The effect of the 60R:40B treatment in increasing the morphological characteristics of cress, among the other light treatments, is easily witnessed. The effect of blue and red light on changing the height of the petunia plant is caused by the effect of blue light and the excitation of cryptochromes, which leads to the production of signals that stimulate gibberellin production and, as a result, increase the stem length; altering the extent of blue light in the environment, by increasing or decreasing it, thus results in changes in gibberellin excretion and, subsequently, changes in height. The use of blue light alone increased the height of the basil medicinal plant. As Fukuda et al. suggested, cytokinins are activated as the extent of photosynthesis rises in leaves under blue light treatment. Such an increase in height enhances the performance of products at the time of harvest; meanwhile, short heights in plants reduce the quality of the product in terms of market interest and place the harvest procedure at risk. Therefore, adjusting the lighting quality at the time of planting could introduce numerous morphological changes in plants. Similar to the results of this study with respect to the increase in the area of leaves, the same results were reported in leaves of lettuce, radish, soy, wheat and roses with the application of both blue and red light. Given the examinations conducted on lettuce, those plants that did not receive any blue light had a smaller area of leaves compared to plants that received even small amounts of blue light, with a 66% increase in the area of leaves. In general, if exposure to light results in biochemical and physiological changes in plants, then these changes can be associated with changes in the morphological and anatomic structure of leaves, particularly the anatomic components of leaves. Fig. 8 shows the increase in stem diameter during the growth period caused by exposure to LED lights in which the percentage of blue light was higher than in other treatments.
"Consistent with the results of this study, Glowacka reported increase in stem's diameter under blue light coverage.Researchers believe that blue light is involved in a vast spectrum of vegetative process such as photosynthesis performance of leaves and morphological structures of plants; this could also lead to an increase in the number of leaves in a vast number of plants.Results listed in Table 2 show the significant effect of blue and red light on biochemical characteristics measure in the cress plant.According to statistical results, the application of both blue and red light can increase the biochemical characteristics in cress compared to natural sunlight.Epidemiologic and experimental studies have shown that increased consumption of fruits and vegetables enhances human health and prevents cancer due to high amounts of fibers, vitamins, minerals and phytochemicals.Vitamins and green pigments are valuable compounds in this plant which play an important role in human health.Light adjustment can be pointed out as one of the strategic tools during cultivation for managing such valuable components in fruits and vegetables.Consequently, it can be predicted that the quality of light has an effective role in chlorophyll synthesis and accumulation of materials including phenols and anthocyanin.In this experiment, the highest amounts of chlorophyll, phenol, and anthocyanin were observed in treatments where blue and red light were applied during growth.Photosynthetic photon flux density and daily period duration are two major components in the adjustment of plant growth and development and nutritional value.For instance, the quality of light balanced photochemical characteristics of lettuce.Quality and intensity of light are also two effective factors on photosynthetic pigments; among monochrome spectra, the constant application of blue and red spectra have a positive effect on the growth of chloroplast and leaves in mesophyll.In this regard, studies conducted on plants such as lettuce and cucumber showed that the use of a suitable combination of blue and red lights would accelerate growth and photosynthesis.As with research on cucumber, the application of blue and red lights increased the amounts of chlorophyll a, b, and T compared to control treatment which was also consistent with the results of this study.In general, a proper combination of red and blue lights can increase the amounts of chlorophyll a an b which can be a suitable approach to confront tensions and damages caused by free radicals.According to available reports, blue light enables chlorophyll photosynthesis and opens pores.Furthermore, blue light increases the amount of chlorophyll and photosynthesis by 30%, based on the type of the plant.To this end, an examination by Yanagi et al. demonstrated the positive effect of blue light in cryptochromes system activity which increases the amount of chlorophyll.Wu et al. 
also showed that the blue light spectrum increases the amount of chlorophyll in green pea plant.The results of other studies are also in line with those obtained in the present research.The measurement results of phenol contents and the amount of anthocyanin showed that cress pots under red and blue light treatment produced higher amounts of phenol and anthocyanin compared to control treatment.Anthocyanin is a group of plant pigments which are capable of production and accumulation in response to light stimulants.The majority of studies on the effects of light quality on phenolic compounds, especially anthocyanin and other flavonoids were conducted prior to 2013.Considering the observations carried out on plants such as parsley, basil, and tomato leaves, it was stated that the application of blue light during growth could bring about considerable increase in the amounts of phenolic compounds.Light has a direct impact on growth, distinction, and synthesis of phytochemicals in the majority of plants.Consequently, changes in the density and quality of light results in a set of changes in certain biochemical and physiological process in plants which are observable through its reflection in morphological and anatomic parameters.Light is the main source for the absorption of photosynthetic carbon in plants which, accordingly, is an important factor in regulation and photosynthesis of phytochemicals.There are various studies that show the fact that different wavelengths of light result in an increase in the biosynthesis of different phytochemicals in plants.In general, the red light spectrum increases phenolic compounds in green vegetables.There are numerous studies suggesting the increase in total amounts of phenolic compounds as well as anthocyanin; overall, the use of LED lamps at the planting period can activate secondary metabolic paths in plants.In greenhouse cultivation of lettuce, it was observed that certain wavelengths are of considerable superiority over other wavelengths in activating phenol synthetic paths and its storage.In the red-leaf lettuce, the anthocyanin contents were considerably affected by red light.Moreover, it was shown that the red light spectrum has a substantial effect on the synthesis of anthocyanin along with regulating phytochromes.Accordingly, with respect to the effective impact of red light in anthocyanin synthesis in plants, an American company has mentioned the red light as the best stimulant for anthocyanin production compared to other light spectrums.The results obtained in this study are similar to those of Bantis et al.In this study, the entire morphological traits examined in cress were placed under red and blue spectrum LED lamps; each wavelengths had its own particular effect in their corresponding receptors in the plant, ultimately increasing the extent of growth and performance of the plant."As expected, the combination of blue and red lights as effective wavelengths on plant's growth had considerable effects on the vegetative traits compared to the control treatment.Therefore, it can be expressed that the presence of both wavelengths is necessary for a better and more complete growth of the plant; subsequently, proper percentages of the combination of these two wavelengths should be found."The results of this study showed that the biochemical characteristics of cress under LED light coverage was superior over natural conditions, which equally increases this plant's properties in terms of human health.Finally, it can be suggested that the use of these 
lamps is feasible for better economic production under controlled conditions. L. Ajdanian: Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. M. Babaei: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data. H. Arouei: Conceived and designed the experiments; Contributed reagents, materials, analysis tools or data. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The authors declare no conflict of interest. No additional information is available for this paper.
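As an illustration of the pigment calculations described in the methods above, the following sketch implements the quoted equations directly. It is an illustrative example only: the absorbance values in main() are invented, the class and variable names are arbitrary, and the coefficients are reproduced verbatim from the equations given in the text without independent verification.

```java
// Illustrative implementation of the pigment equations quoted in the methods
// section (coefficients reproduced verbatim, units as reported in the article).
public class PigmentCalculator {

    static double chlorophyllA(double a666, double a653) {
        return 15.65 * a666 - 7.340 * a653;
    }

    static double chlorophyllB(double a666, double a653) {
        return 27.05 * a653 - 11.21 * a666;
    }

    static double carotenoids(double a470, double chlA, double chlB) {
        return 1000 * a470 - 2.860 * chlA - 129.2 * chlB;
    }

    public static void main(String[] args) {
        // Example absorbance readings (invented values, for illustration only).
        double a666 = 0.52, a653 = 0.30, a470 = 0.21;

        double chlA = chlorophyllA(a666, a653);
        double chlB = chlorophyllB(a666, a653);
        double car  = carotenoids(a470, chlA, chlB);
        double chlTotal = chlA + chlB + car;   // CHLt = CHLa + CHLb + CHLx+c

        System.out.printf("Chl a = %.2f, Chl b = %.2f, carotenoids = %.2f, total = %.2f%n",
                chlA, chlB, car, chlTotal);
    }
}
```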
Today, the use of light emitting diodes (LEDs) are rapidly increasing in horticulture industry as a result of technological advancements. Lighting systems play an important role in the commercial greenhouse productions. As an artificial source of light, LED lamps can contribute to the better and faster growth of horticulture products such as vegetables grown in greenhouses. In this study, the effects of red and blue light spectrums were implemented and performed as a pot experiment under the cultivation-without-soil condition in greenhouse based on a completely random plan with 3 lighting treatments including natural light (control), 60% red light +40% blue light (60R:40B), and 90% red light +10% blue light (90R:10B), repeated 3 times. The results showed that the application of blue and red lights affected the fresh and dry weights of cress as well as its biomass, demonstrating a considerable increase compared to the plants grown under natural sunlight condition. In this regard, the fresh weight of the plant under the 60R:40B treatment had 57.11% increase compared to the natural light treatment. Compared to the control sample, the dry weight had 26.06% increase under 90R:10B treatment. The highest extent of biomass was observed under the 60R:40B lighting treatment with a value of 1.51 (g per kg dry weight of the plant), which was a 68.87% increase compared to the natural light treatment. Under the 60R:40B treatment, cress had its highest length at 19.76 cm. Under the similar treatment, the cress leaf had a total area of 56.78 cm2 which was the largest. The stem diameter and the number of leaves under the 60R:40B treatment had their highest values at 3.28 mm and 8.16, respectively. Accordingly, a growing trend was observed with 56.7 and 61.27% increase compared to the control treatment. Furthermore, the biochemical traits of cress, the amount of a, b and total chlorophyll, the amount of anthocyanin and phenolic contents under the application of red and blue light were at their highest values compared to the control treatment. The highest amount of chlorophyll was observed under 60R:40B treatment as 15.09 mg g−1 FW leaf. Moreover, the phenolic contents and the amount of anthocyanin were of significant difference at 1% level of likelihood compared to the control treatment. Therefore, the vegetative growth of cress was substantially affected by red and blue lights, resulting in the enhancement of the plant's biochemical features compared to control condition via adjusting the lighting quality and impacts of each red and blue light spectrum on their specific receptors. As a result, the presence of both lighting spectrums is essential for expanding and increasing the quality of the plant. At the large scale, this technology is capable of improving the commercial greenhouse production performance while helping farmers achieve maximum products. This particular combination of lights is one of the beneficial features of LED lighting systems intended for different types of commercial greenhouse productions, especially the valuable greenhouse products.
126
Towards motion insensitive EEG-fMRI: Correcting motion-induced voltages and gradient artefact instability in EEG using an fMRI prospective motion correction (PMC) system
Simultaneous electroencephalography and functional magnetic resonance imaging (EEG-fMRI) is a multimodal technique that aims to combine the temporal resolution of brain activity provided by the electroencephalography with the spatial resolution of fMRI. EEG-fMRI has been widely applied to study healthy and pathologic brain activity, such as for studying patients with epilepsy. More recently, EEG-fMRI has been proven to be capable of mapping BOLD signal changes associated with epileptic seizures using events detected in the EEG. However, this endeavour can be severely hindered by subject motion. Subject motion during the acquisition of fMRI can lead to severe data quality degradation. The impact of motion on fMRI is well documented: it causes large amplitude signal changes across consecutive fMRI volumes, increasing temporal variance and increasing type 1 and type 2 errors. A number of techniques, ranging from purely data-based methods to methods based on precise and independent measurement of motion, have been proposed. Recently, a prospective fMRI motion-related signal reduction system based on a camera-tracker system and MR sequence acquisition update has been implemented and commercialised. Briefly, an MRI compatible camera is used to image a Moiré-phase tracker attached to the head. The tracking information is converted into the head's position. This information is then used to update the radio frequency pulses and the gradients applied in the imaging process in real time, with promising results. EEG quality degradation due to motion is mostly the result of electromagnetic induction. Faraday's law states that magnetic flux changes through a conductor loop's area induce corresponding voltage fluctuations; in EEG circuits these voltages are superimposed onto the brain-generated signals, making the detection of activities of interest more difficult or even impossible. This is even more problematic for the application of EEG-fMRI in patients for whom motion might be unavoidable, such as children, or when studying patients with epilepsy during seizures. Currently, most artefact correction methods are based on post-hoc EEG data processing techniques that rely on the identification of artefact waveforms and their subtraction, with the aim of obtaining motion artefact-free signals. These processes do not take into account the measured motion. Systems to detect motion-induced voltages based on dedicated sensors, such as piezoelectric devices and carbon wire loops, have shown promise. While these are able to deal with small amplitude movements, neither performed well in removing voltages induced by large motion events. Furthermore, in the case of the wire loops, there is the requirement for additional technology related to the data acquisition, and this can pose safety risks in some circumstances. Finally, the additional requirement for fMRI data correction during subject motion is not addressed by these systems, thereby limiting their impact. A PMC camera system has only previously been used for correcting the ballistocardiogram artefact found in EEG data acquired inside MRI scanners. While promising, this previous work did not address large scale movements or the effects of applying PMC itself on EEG quality during fMRI data acquisition, which may suggest that correction of EEG and fMRI is problematic using this approach. In this study, we aimed to focus on improving EEG data quality while suppressing fMRI motion artefacts using a commercially available PMC camera system. We derived a model of voltage changes induced in the EEG from motion using the
accurate measurement of the head's motion recorded by the PMC system and used it to attenuate these artefactual signals.We tested this approach with and without large amplitude movements by modelling and removing motion induced voltages and assessed the EEG quality.Additionally, we determine the impact of PMC on the gradient artefact template temporal stability, which reflects both motion and gradient artefact instability.We also verify our experimental findings by determining the effect of PMC updating the magnetic field gradients on the EEG gradient artefact in the presence of motion.We acquired simultaneous EEG-fMRI data in three healthy subjects.During all recordings subjects were instructed to open and close their eyes every 60 or 30 s.Two different movement tasks were performed: In the ‘keeping still’ sessions subjects were instructed to keep as still as possible.In ‘motion’ sessions subjects were instructed to move, performing repetitions of the following movements: shaking their head side to side, nodding their head and rotating their head, followed by a short period without movement.The subjects were instructed to start the first block with their eyes open, allowing them to calibrate the motion's amplitude.In the second block the movements were repeated with eyes closed.Three repetitions of each block were made in each session.During all recordings subjects were instructed to alternate between eyes opened and eyes closed via a verbal cue.This was done in order to evaluate the practical contribution of EEG-motion artefact correction to measure physiological signal changes from the brain.Fig. 1 presents a schematic diagram summarising the acquisition set-up.All images were acquired at Great Ormond Street Hospital, London, United Kingdom, using a 1.5T Avanto.The functional images were acquired using a gradient-echo echo-planar imaging sequence with the following parameters: 30 ascending slices covering the whole brain with a slice gap of 0.5 mm, slice/volume TR = 74/2220 ms, TE = 30 ms, flip angle = 75°.Each fMRI run was composed of 100 volumes, obtained in 222 s.The posterior half of a 32 channel head coil was used for signal reception to allow for video recording of the subject motion and visual feedback of the motion to be provided to the subject.The EEG data were recorded using an MR-compatible amplifier.An MRI-compatible cap with 31 EEG electrodes placed according to the 10–20 international system, and an ECG channel, all made of Ag/AgCl with internal safety resistors and referenced to FCz, was used.The EEG data were acquired with a sampling rate of 5 kHz and band-pass filtered from 0.1 to 250 Hz.The EEG acquisition was synchronised with the MR-scanner by the SyncBox device, allowing the EEG acquisition to be time-locked with the fMRI acquisition gradient switching-related artefact, facilitating its correction.Before analysis, all the EEG data were down-sampled to 500 Hz.An MR-compatible camera was used for tracking a Moiré Phase Tracking marker attached to a ‘bite bar’ specifically developed for each subject based on a dental retainer.To make the bite bar a dental stone cast was produced from a dental alginate impression taken of each subject's maxillary dentition.A ‘dual-laminate’, thermoplastic, 3 mm thick bite guard was then thermoformed to the maxillary dentition using a Biostar machine.The ‘soft’ side of the ‘dual-laminate’ ensured a snug fit around the teeth whilst facilitating easy insertion and withdrawal of the guard, whilst the ‘hard’ side ensured that the bite bar didn't 
move when the subject bit down.A 6 cm × 2 cm × 3 mm strip of ‘dual-laminate’ was then formed into a bar with a 90 degree angle using localised heat.The bar also incorporated housing for the MPT marker.Finally, the bar was joined to the incisor region of the bite guard using medical grade cyanoacrylate glue.The MPT camera was used for recording the marker motion with six degrees of freedom, three translations and three rotations, with a sampling rate of 85 Hz, and recorded on a computer located outside the scanner room.The same computer was connected to the scanner to update the radio frequency and gradient pulses used for MRI signal acquisition before every fMRI slice is acquired.Motion tracking was used in the scanner to record the motion parameters throughout the experiments.Real-time updating of the MRI gradients was switched either on or off in the scanner's software in different sessions to test its effect.We analysed the motion characteristics for each subject by calculating the root-mean-square for each of the six velocities derived from motion measurements and detecting the minimum and maximum amplitude on the axis with the largest RMS.We also calculated the fast Fourier transform on the motion data from the same axis, to allow us to check the motion's spectral characteristics.For simplicity we reported the frequency of the highest motion-related peak.We were interested in large amplitude movements in the maximum range that can be reliably tracked with our motion tracking system, from −10 to 10 mm and −6 to 6°.Subjects were provided with a visual display of the marker position in real time via a screen mounted outside the bore and a mirror to allow them to monitor their movements during the task.We also recorded video of the subject with the EEG in BrainVision Recorder via a second MRI compatible camera in order to visually monitor movement and aid the identification of events in the EEG.We refer to the process of calculating the motion-corrected EEG (EEGphysiological) using Eq. 
6 and the tracking information as Retrospective EEG Motion Artefact Suppression (REEGMAS).We acquired baseline sessions of EEG data outside the scanner with subjects asked to keep still and to open and close their eyes every 60 s.Two sessions of EEG data were recorded inside the scanner without MRI scanning: the first while keeping still and the second while moving voluntarily.The EEG data were down-sampled to 500 Hz and imported into Matlab.Two processed versions of each EEG dataset were obtained for further analysis: EEG2S and EEG2M without REEGMAS correction but with BCG artefact correction by Average Artefact Subtraction (AAS), and EEG2S-C and EEG2M-C with REEGMAS correction followed by BCG artefact correction by AASBCG, as implemented in BrainVision Analyzer 2.0.EEG and fMRI data were acquired with the PMC system recording motion data but not updating the imaging gradients.We acquired two successive sessions: with the subjects keeping still and moving voluntarily, respectively.EEG and fMRI data were acquired with the PMC system recording motion and updating the scanning gradients.Two successive sessions were acquired: with the subjects keeping still and moving voluntarily, respectively.The EEG data from both Experiments 3a and 3b were down-sampled to 500 Hz, imported into Matlab and REEGMAS was applied as described in Section 2.7, resulting in motion-corrected EEG.In order to correct the GA we applied the Average Artefact Subtraction.Henceforward we refer to GA correction as AASGA and to ballistocardiogram artefact correction as AASBCG.In this study we built the templates in two different formats, volume–volume and slice–slice.Firstly, a volume-wise GA template was formed by averaging volumes (we chose a low number of volumes to reduce motion sensitivity).Secondly, a slice-wise GA template was formed by averaging slice epochs.This is possible because the gradients applied are identical for each slice with the only difference being the RF pulse frequency offset.Following this, AASBCG was applied to both the REEGMAS corrected EEG and the original EEG without REEGMAS correction.We define the GA template variability as the slice-wise or volume-wise epoch-to-epoch differences that may be caused by any factor.The GA itself comprises only the voltages resulting from the switching of the magnetic field gradients.After the processing described above, the EEG data from all experiments were then down-sampled to 100 Hz and band-pass filtered from 0.5–40 Hz in BrainVision Analyzer 2.0.This is a common filter width for clinical EEG visualisation.All further analysis was performed in Matlab 2013a.For the data acquired during the moving sessions of Experiment 2, we evaluated the importance of each parameter to the motion regression model by calculating an F-test score using the function ftest.m.The quality of the REEGMAS artefact correction was assessed by comparing the EEG obtained in the scanner to the EEG acquired outside the scanner room in the following ways:We visually assessed the EEG data comparing the presence of physiologic signals/markers, such as eye-blink artefacts and presence of signal in the alpha-rhythm frequency band.To assess the impact of the motion correction across channels, the power topography over the scalp in the alpha rhythm frequency band was visualised.The alpha topography was the power in the frequency band 8–12 Hz averaged over an epoch of 5 s following the first period of eyes closed.We compared EEG data acquired during both ‘keeping still’ and ‘motion’ (uncorrected and corrected) 
sessions.The EEG power spectral density was calculated by applying the Welch method using Hamming windows of 3 s with 1.5-second overlap during keeping still/motion sessions in eyes closed periods.The PSD was normalised at each frequency by the sampling rate and the number of samples inside each window.The mean PSD was calculated for the baseline EEG.Additionally we calculated two standard deviations of this baseline EEG-PSD across time and assumed that points lying outside this range were likely to be due to artefacts.The Root Mean Square Error, defined as the difference between the uncorrected EEG-PSD and the baseline EEG mean PSD, was calculated for each frequency from 0.5 to 40 Hz with a frequency resolution of 0.33 Hz.Then we calculated the average RMSE over the entire frequency range obtaining the Mean Root Mean Square Error.Finally, we applied a one-sample t-test in order to compare the MRMSE obtained for EEG data before and after REEGMAS correction.We then compared the GA template stability ρGAtemplate for each session of Exp.3a and 3b to the fMRI/PMC-off, keeping still session using a one-sample paired t-test.This required 37 tests, one for each time point in the template; therefore we applied a Bonferroni correction and considered significant differences at a p-value of < 0.05 corrected.In all subjects, in the moving session, the maximum RMS velocity was for translations along the X axis: Vx RMS = 22.5, 23 and 20 mm/s for subjects #1, #2 and #3, respectively.The peak frequency of motion was 0.43, 0.71 and 0.55 Hz, for subjects #1, #2 and #3, respectively.All motion-related model regressors explained a significant amount of variance.The motion-related quantity that explained the most variance in the EEG was, as expected, the velocity of motion, followed by position and squared velocity.EEG2S data were of high quality with a clearly visible alpha rhythm during epochs of eyes closed and the presence of eye-blink artefacts.The REEGMAS correction improved the visual appearance of EEG2S, attenuating BCG artefacts, even prior to applying AASBCG on EEG2S-C.REEGMAS correction did not change the visual appearance of the alpha rhythm or the eye-blink artefacts.In the moving session, large amplitude voltages were clearly visible during subject movement, which can be seen in the MPT tracking-derived velocity measurements.Following REEGMAS correction a substantial qualitative improvement in EEG2M was observed in EEG2M-C with strong attenuation of the large amplitude motion-related voltages.Application of REEGMAS improved the visual appearance of physiological signals such as the alpha rhythm in the keeping still and moving sessions.Similar results were obtained for the other two subjects.In the keeping still session of Experiment 2, the PSDs were similar for the EEG2S and EEG2S-C for all the subjects.In general the quantitative analysis did not show statistical differences between EEG2S and EEG2S-C.We observed an increase in EEG power, predominantly in the frequency range of 0.5–7 Hz during moving sessions for all subjects.REEGMAS decreased the power in this frequency range to baseline levels for all subjects.Similar results were obtained in other electrode locations.The motion correction decreased the MRMSE significantly, for all subjects, when compared to the uncorrected EEG data.We also note that residual gradient artefact can be observed in some subjects at the frequency of the slice acquisition and harmonics.In some subjects/conditions the power at these frequencies decreased towards the levels found 
without scanning, suggesting that AASGA provided a better correction of the GA following REEGMAS.As expected, the electrodes with the largest voltage values in the alpha band are the parietal/occipital electrodes when subjects were still.The alpha rhythm was contaminated by motion induced voltages, mainly in frontal electrodes and in temporo-occipital electrodes.After REEGMAS, the alpha power was distributed over the occipital channels and more closely corresponded to the topography seen in the still session for all three subjects.For moving sessions of Experiment 3a, the maximum RMS for the velocities was for translations along the X axis: Vx RMS = 16.5 and 18.7 mm/s for subjects #2 and #3, respectively, and along the Z axis for subject #1: Vz RMS = 30.2 mm/s.The peak frequency of motion was 0.45, 0.51 and 0.52 Hz, for subjects #1, #2 and #3, respectively.For Experiment 3b, the maximum RMS for the velocities was for the translations along the X axis: Vx RMS = 15, 15.2 and 20.80 mm/s for subjects #1, #2 and #3, respectively.The peak frequency of motion was 0.36, 0.38 and 0.49 Hz, for subjects #1, #2 and #3, respectively.The AASGA correction of EEG data acquired in the keeping still sessions resulted in EEG data of good quality for both volume- and slice-wise templates independent of the fMRI/PMC being off or on; the alpha-rhythm was clearly visible and its power distribution over frequency was comparable to that recorded in Exp.1 for all three subjects.We verified that during the keeping still sessions the GA template had small variability, meaning that it was temporally stable in fMRI/PMC-off and fMRI/PMC-on acquisitions.As expected, the AASGA correction of EEG data acquired during the moving sessions resulted in decreased EEG data quality compared to the still sessions when no REEGMAS was applied and standard volume-wise AASGA correction was used.However, the strong residuals in the EEG data corrected by volume-wise AASGA were attenuated when corrected by slice-wise AASGA.Following REEGMAS, AASGA and AASBCG, the EEG data quality was sufficiently improved to show physiological electrical activity, such as the alpha rhythm, in both volume- and slice-wise AASGA; the latter was less contaminated by GA.Furthermore eye-blink artefacts were clearly distinguishable from motion events following motion correction.The visual and quantitative improvement was observed for all subjects.The GA template variability was dramatically increased during motion.The GA template variability was substantially reduced by slice-wise AASGA compared to volume-wise AASGA.The GA template stability was then further improved by REEGMAS, where visually the stability approaches that seen when the subject was still.The PSD analyses of the data from Experiment 3, with fMRI acquisition, showed results comparable to the EEG acquired in Experiment 2 without fMRI acquisition.For all subjects there was a significant reduction of the MRMSE after applying REEGMAS prior to GA correction of the EEG data acquired in moving sessions of Experiment 3 (a and b).For the fMRI/PMC-off and fMRI/PMC-on scans, the RMS variance across the 3000 artefact epochs was higher for the EEG data acquired during moving sessions than the EEG data acquired during still sessions.When REEGMAS was applied the variance was strongly attenuated for both acquisitions (fMRI/PMC-off and fMRI/PMC-on).For all subjects and scan conditions, there was a significant reduction in ρGAtemplate due to REEGMAS motion correction.The GA template variance 
reduction after applying REEGMAS varied between 62.6% and 81.89%.The variance of motion-corrected EEG and still EEG data was not statistically different in all the subjects for fMRI/PMC-on acquisitions and for subjects #1 and #2 for fMRI/PMC-off acquisitions; in subject #3 there was a significant difference.In summary, we have proposed a method for reducing motion-related artefacts in EEG data acquired inside the MR-scanner.This method is based on recording the subject's head position using an MPT system and using this information to model and suppress the motion-induced voltages in the EEG data.Our main finding is that the proposed method is able to considerably reduce motion-induced voltages both during small and large amplitude movements in the absence and presence of fMRI data acquisition.The motion-induced voltage correction was shown to increase the visual quality of the EEG data and the alpha-rhythm power topography.This was confirmed by quantitative analysis in the frequency domain where the MRMSE showed a consistent improvement in EEG data quality in three subjects when they were moving.The PSD analyses showed that the motion correction is beneficial at frequencies between approximately 0.5 and 8 Hz.In this study, motion information at frequencies above 11 Hz was filtered based on the assumption that the camera provides more accurate tracking at lower frequencies and that head motion is predominantly in this range.While the motion task was predominantly low frequency, higher frequency events were also corrected such as those relating to cardiac pulsation-related artefacts.Previous studies have shown the potential for correcting motion-induced voltages induced by small movements by linearly modelling motion-induced voltages based on head motion detection.Here we demonstrated for the first time the potential correction of motion-induced voltages in the order of 10 mm, 6 degrees and velocities up to 54 mm/s.Previous studies presented approaches that may be able to capture non-linear effects that can contribute to pulsation-related artefact in EEG data.However, the dominant contribution is due to rigid body rotation, which REEGMAS corrects, and the non-linear terms in the REEGMAS model also explained significant variance.EEG in the MRI scanner is degraded by large amplitude voltages induced by magnetic field gradients.These can be effectively corrected when they are temporally stable using template subtraction methods.The GA template stability can be degraded by subject motion, scanner instabilities or magnetic field gradient instabilities.However on most modern MRI systems the latter contribution is small.Subject motion affects the GA template used for the AASGA method in two ways.Firstly, motion-induced voltages are added to those induced by the switching gradients, therefore modifying the GA template.The voltage induced by motion is of comparable magnitude to GA voltages.Secondly, if the subject is in a different spatial location the magnetic field gradient experienced within the EEG circuit is different and correspondingly so is the voltage induced.From previous theoretical work the expected alteration in GA is spatially varying but will be 75 μV for a 1 mm translation on the z-axis and 370 μV for a 5 mm translation at electrode T8, assuming an azimuthal angle of 6° between the head and B0.Our experimental data showed a maximum volume–volume GA template difference in voltage of 382 μV for a head translation on the x-axis of 2.44 mm.For a slice-wise template, both the head displacement and the corresponding 
maximum slice–slice GA template voltage change were reduced, to 0.89 mm and 120 μV respectively.For these experiments GA template epoch differences are due to the summation of motion induced voltages and GA artefact differences as explained above.After applying REEGMAS, where motion-induced voltages are largely removed, the residual variability is predominantly due to GA changes associated with shifts in spatial position.In this case the maximum volume–volume GA difference for the same conditions was 107.7 μV and the slice–slice GA difference 1.06 μV, broadly comparable to the expected values.Therefore the error in the GA template found experimentally due to GA variability related to changes in head position is modest provided a slice-wise template is used.For fMRI with the PMC ‘on’, the magnetic field gradients are updated based on head motion.In this case when the subject moves, the axes of the magnetic field gradients are maintained in the same plane relative to the EEG circuit.In some special cases the circuit should experience the same magnetic field gradients from epoch to epoch as long as the EEG circuit and head move in perfect harmony.In this scenario the PMC updates to the acquisition should contribute to increasing the GA stability.However, translations along each axis due to subject motion will cause a different magnetic field gradient to be experienced from epoch to epoch in either case of the fMRI/PMC being on or off.Overall this suggests there should not be a significant penalty in GA variability from PMC gradient updates and may even be an advantage.We therefore conducted simulations based on Yan et al., 2009.The simulations showed that the variance ρGAtemplate is slightly increased for some of the 37 points of a slice-wise epoch acquisition for fMRI/PMC-on during motion sessions.However, the mean variance considering the whole GA template epoch is not significantly increased for fMRI/PMC-on acquisitions when compared to fMRI/PMC-off acquisitions.This confirmed our experimental findings, which, despite possible confounding differences in voluntary movement between sessions, suggested that PMC-on and PMC-off sessions had similar GA stability.In practice not all of the EEG equipment will move with the head, i.e. 
there will be increased GA instability from the cables connecting the amplifier to the cap electrodes.This necessitates the use of an alternative approach for minimising GA variance during movement; we have demonstrated that a slice-based GA template approach is an effective solution requiring only equally temporally spaced slice acquisition during the volume acquisition period.Our data do confirm that there was not a significant penalty in EEG quality for using PMC, with the large potential benefits of increased fMRI image stability.This is crucial because it suggests that PMC can be used for EEG-fMRI data acquisition to improve fMRI and also EEG data quality during motion.Moreover, the amplitudes of the motion-related EEG artefacts reduced by our approach are of the same order as the range of motion-related artefacts reported to be corrected by the PMC camera-system when applied to fMRI acquisitions.Therefore, our results represent the first successful implementation of motion-related artefact correction for both EEG and fMRI by using the same MPT system to monitor subject motion.For motion correction methods a key requirement is that data quality is not reduced in the most compliant subjects.In our group, when the subjects were keeping still, our method did not change the alpha-rhythm wave shape or modify the eye-blink artefact.However, the greatest data quality benefit was obtained during large amplitude motion, where the motion-related artefact correction improved the representation of the alpha-rhythm signal, and preserved or recovered the eye-blinks.This improvement is clearly illustrated by the topographic distribution of the alpha-rhythm, which, after applying motion correction, is comparable to that expected.More advanced analysis such as source localisation of in-scanner EEG could be severely affected by these motion induced changes in topography.At the same time, the motion parameters provide a useful aid for visual analysis of EEG data that is frequently used for studies of epilepsy, where the motion parameters displayed alongside the EEG can be used to determine the likely origin of EEG changes and thereby facilitate interpretation.The ability to scan and obtain EEG of sufficient quality during ictal events, which are clinically important but typically suffer from motion, should improve the yield and quality of these studies, e.g. by allowing the ictal phases to be recorded with sufficient reliability while suppressing motion-related variance in the fMRI time series.EEG acquired in the scanner suffers from a characteristic cardiac-related artefact.The cardiac-related artefact was attenuated by our approach while subjects were keeping still.A recent study used similar methodology to ours but applied it exclusively to correct the cardiac-related artefact.In our results, we verified that the motion-correction attenuated the cardiac-related artefact during ‘keeping still’ sessions.Our method cannot correct for non-rigid components of the BCG artefact; however, realistic modelling of the artefact suggests that bulk head motion is its dominant cause.We have additionally shown how the system can be used during fMRI motion correction both in terms of effective motion-related voltage suppression and the maintenance of GA temporal stability.In contrast to our approach, LeVan et al. 
attached the marker to the subject's forehead.The forehead is a region susceptible to involuntary movements such as eye blinks, which would be recorded as head motion, increasing tracking noise.We developed a marker fixation device based on a dental retainer, thereby limiting the marker movement to movement of the skull.This may explain why in our study we were able to correct for a wide range of motion amplitudes and frequencies.Marker fixation has been extensively discussed in the literature due to the importance of accurate tracking information for PMC to be beneficial to MRI, and the teeth have been suggested as the best place to attach the marker.Previous motion tracking studies involved the use of additional circuitry, such as loops and wires in close proximity to the subject.In our acquisitions the camera is spatially distant and electrically isolated from the subject and is therefore unlikely to pose an increased safety risk.In our data, following motion correction, residual artefacts were visible in the corrected EEG data during very high amplitude movements.This residual artefact could be due to nonlinear effects when the cap and head do not move together, for example.The residual artefacts appear to occur across channels and so additional spatial constraint on the motion-related artefact removal may further improve the motion artefact correction.There are also limitations to the tracking, especially for faster and larger amplitude motion, which might be reduced by using multiple markers and outlier detection and removal in the tracked data.It must be emphasised that these remarks relate to extreme motion, which might be considered by some sufficient grounds to discard entire datasets.It is possible that these residual artefacts are due to GA instability, although motion-related voltages were in general the source of greater magnitude voltage changes in our data.In the context of the study of difficult-to-scan subjects, our method offers the possibility of a more robust scanning protocol, thereby optimising the use of patient and scanner time.The ability to record motion-insensitive EEG-fMRI data should enable the improved study of epilepsy patients during seizures and, more generally, of populations both healthy and with pathology, where combined recordings of EEG-fMRI can provide important information about brain function, especially considering that motion differences between populations can lead to bias.Additionally, studies related to different neuroscience areas, such as speech production and motor tasks, should also benefit from our approach.Motion-related EEG artefacts in simultaneous EEG-fMRI data were corrected using a Moiré phase tracking system and linear regression (REEGMAS).This was shown to considerably reduce motion-induced voltages both during small and large amplitude movements.Gradient artefact stability was comparable with or without prospective motion correction.The method was verified visually and quantitatively by comparison of EEG data during movement and when subjects were still inside the MRI with reference to data obtained outside the scanner.The motion correction allowed the recovery of physiological information, such as the alpha-rhythm and the eye-blink artefact, while suppressing cardiac- and bulk head motion-induced voltages.REEGMAS improves the quality of EEG data acquired simultaneously with fMRI for both fMRI/PMC-on and fMRI/PMC-off acquisitions.This is an important step forward because it allows simultaneous EEG-fMRI to be obtained in 
situations where both modalities' data are affected by motion, which would allow a wider and more robust application of EEG-fMRI and can allow the study of challenging but important subjects, such as children and epilepsy patients during ictal events.Supplementary data (including the simulation material) related to this article can be found online at http://dx.doi.org/10.1016/j.neuroimage.2016.05.003.All data used in this article are openly available from the Harvard Dataverse repository http://dx.doi.org/10.7910/DVN/LU58IU.
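The motion-regression step described above (modelling each EEG channel as a linear combination of tracked position, velocity and squared velocity, then subtracting the fit) can be sketched as follows. This is an illustrative Python sketch, not the authors' Matlab implementation; the function name reegmas_clean, the sampling rates and the 11 Hz cut-off are assumptions taken from the text.

```python
# Minimal sketch of a REEGMAS-style motion regression, assuming rigid-body
# tracking data (6 DOF) and multi-channel EEG are already loaded as arrays.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import butter, filtfilt

def reegmas_clean(eeg, fs_eeg, motion, fs_cam, lp_hz=11.0):
    """eeg: (n_channels, n_samples); motion: (n_frames, 6) tracked translations/rotations."""
    # Resample the camera tracking (e.g. 85 Hz) onto the EEG time base (e.g. 500 Hz).
    t_cam = np.arange(motion.shape[0]) / fs_cam
    t_eeg = np.arange(eeg.shape[1]) / fs_eeg
    pos = interp1d(t_cam, motion, axis=0, bounds_error=False,
                   fill_value="extrapolate")(t_eeg)            # (n_samples, 6)

    vel = np.gradient(pos, 1.0 / fs_eeg, axis=0)               # first derivatives (velocities)
    X = np.column_stack([pos, vel, vel ** 2])                  # positions, velocities, squared velocities

    # Low-pass the regressors, assuming tracking is most reliable below ~11 Hz.
    b, a = butter(4, lp_hz / (fs_eeg / 2.0), btype="low")
    X = filtfilt(b, a, X, axis=0)
    X = np.column_stack([np.ones(len(X)), X])                  # intercept term

    # Channel-wise least squares: EEG = X @ beta + residual; the residual is the
    # motion-suppressed (physiological) EEG.
    beta, *_ = np.linalg.lstsq(X, eeg.T, rcond=None)
    return (eeg.T - X @ beta).T
```

In practice the fitted artefact would be estimated per session (or per movement block), and the regressor set could be extended or reduced after inspecting which terms explain significant variance, as done with the F-tests described above.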
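The slice-wise average artefact subtraction (AASGA) discussed above amounts to epoching the EEG at slice onsets, averaging a local set of epochs into a gradient-artefact template, and subtracting it. The sketch below is a simplified, hypothetical illustration (not the BrainVision Analyzer implementation); it assumes slice onset samples are known from scanner triggers, that slices are equally spaced in time, and that the window length n_avg is a free choice.

```python
# Simplified slice-wise AAS sketch: a sliding average of slice epochs forms the
# gradient-artefact template, which is subtracted epoch by epoch.
import numpy as np

def slice_wise_aas(eeg, slice_onsets, epoch_len, n_avg=25):
    """eeg: (n_channels, n_samples); slice_onsets: sample indices of slice starts."""
    cleaned = eeg.astype(float).copy()
    # Assumes every epoch of epoch_len samples fits inside the recording.
    epochs = np.stack([eeg[:, s:s + epoch_len] for s in slice_onsets], axis=0)

    for i, s in enumerate(slice_onsets):
        lo = max(0, i - n_avg // 2)
        hi = min(len(slice_onsets), lo + n_avg)
        template = epochs[lo:hi].mean(axis=0)        # local GA template (n_channels, epoch_len)
        cleaned[:, s:s + epoch_len] -= template      # subtract from this slice epoch
    return cleaned
```

A volume-wise variant would simply epoch at volume onsets instead; as the results above indicate, the slice-wise template is far less sensitive to head displacement between epochs.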
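The PSD-based quality metric (Welch spectra in 3 s Hamming windows with 50% overlap, compared with a baseline out-of-scanner PSD, summarised as a mean RMSE over 0.5–40 Hz) could be approximated as below. This is one reading of the metric described in the text, not the authors' exact code; the function name and the use of scipy's spectrogram are assumptions.

```python
# Hedged sketch of the MRMSE quality metric: per-frequency RMSE of the in-scanner
# PSD against a baseline mean PSD, averaged over the 0.5-40 Hz band.
import numpy as np
from scipy.signal import spectrogram

def mrmse(eeg_chan, baseline_mean_psd, f_base, fs=500, fmin=0.5, fmax=40.0):
    nper = int(3 * fs)                                # 3 s Hamming windows, 50% overlap
    f, _, pxx = spectrogram(eeg_chan, fs=fs, window="hamming",
                            nperseg=nper, noverlap=nper // 2, mode="psd")
    band = (f >= fmin) & (f <= fmax)
    base = np.interp(f[band], f_base, baseline_mean_psd)        # align frequency grids
    # RMSE at each frequency across time windows, then averaged over the band.
    rmse = np.sqrt(np.mean((pxx[band, :] - base[:, None]) ** 2, axis=1))
    return rmse.mean()
```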
The simultaneous acquisition of electroencephalography and functional magnetic resonance imaging (EEG-fMRI) is a multimodal technique extensively applied for mapping the human brain. However, the quality of EEG data obtained within the MRI environment is strongly affected by subject motion due to the induction of voltages in addition to artefacts caused by the scanning gradients and the heartbeat. This has limited its application in populations such as paediatric patients or in the study of epileptic seizure onset. Recent work has used a Moiré-phase grating and an MR-compatible camera to prospectively update image acquisition and improve fMRI quality (prospective motion correction: PMC). In this study, we use this technology to retrospectively reduce the spurious voltages induced by motion in the EEG data acquired inside the MRI scanner, with and without fMRI acquisitions. This was achieved by modelling induced voltages from the tracking system motion parameters: position and angles, their first derivatives (velocities) and the velocities squared. This model was used to remove the voltages related to the detected motion via a linear regression. Since EEG quality during fMRI relies on a temporally stable gradient artefact (GA) template (calculated from averaging EEG epochs matched to scan volume or slice acquisition), this was evaluated in sessions both with and without motion contamination, and with and without PMC. We demonstrate that our approach is capable of significantly reducing motion-related artefact with a magnitude of up to 10 mm of translation, 6° of rotation and velocities of 50 mm/s, while preserving physiological information. We also demonstrate that the EEG-GA variance is not increased by the gradient direction changes associated with PMC. Provided a scan slice-based GA template is used (rather than a scan volume GA template) we demonstrate that EEG variance during motion can be suppressed towards levels found when subjects are still. In summary, we show that PMC can be used to dramatically improve EEG quality during large amplitude movements, while benefiting from previously reported improvements in fMRI quality, and does not affect EEG data quality in the absence of large amplitude movements.
127
Comparison of Cox Model Methods in a Low-dimensional Setting with Few Events
The applications of prognostic models, that is, models that predict the risk of a future event, include, among others: informing individuals about a disease course or the risk of developing a disease, guiding further treatment decisions, and selecting patients for therapeutic research.Prognostic models derived using time-to-event data often make use of the Cox proportional hazards model.Therneau and Grambsch describe this model as the “workhorse of regression analysis for censored data”.When the number of events is small relative to the number of variables, the development of a reliable Cox model can be difficult.This can be challenging even in a low-dimensional setting where the number of predictors is much smaller than the number of observations.Existing rules of thumb are based on the number of events per variable, which is recommended to be between 10 and 20.When performing variable selection, these EPV rules are applied to the number of candidate variables considered, not just those in the final model.Penalized regression methods that shrink the regression coefficients towards 0 are an option in a rare event setting, which may effectively increase the EPV, thus producing better results.Examples of these methods include ridge regression, the least absolute shrinkage and selection operator, and the elastic net, which is a combination of the former two.Backward elimination is another widely used method that seemingly reduces the number of predictors by applying P values and a significance level α to discard predictors.Our aim in this work was to compare, in a low EPV and low-dimensional setting, the performance of different approaches to computing the Cox proportional hazards model.We consider the following methods: full model, computed using all predictors considered via maximization of the partial log-likelihood, BE with significance levels α = 0.05 and α = 0.5, ridge, lasso, and elastic net.Simulations were used to compare different methods based on a prospective cohort study of patients with manifest coronary artery disease.Two main scenarios were considered: clinical variables relevant to CAD such as age, gender, body mass index, high density lipoprotein over low density lipoprotein cholesterol ratio, current smoking, diabetes, and hypertension, as well as blood-based biomarkers such as C-reactive protein and creatinine as predictors; and information on 55 genetic variants in addition to the variables used in scenario 1.These variants represented either loci that have been previously shown to be associated, at the genome-wide significance level, with CAD, or recently-identified CAD loci.Baseline characteristics are shown in Table S1.There were 1731 participants, with a median age of 63 years and 77.6% male.Table S2 provides information on the genetic variants used.The median follow-up was 5.7 years.In each scenario, a Weibull ridge model was fitted in the cohort.Each fitted model was considered the true model and was used to simulate the survival time.Censored Weibull quantile–quantile plots of the models’ exponentiated residuals are shown in Figure S1.Deviations from the Weibull distribution are observed in both scenarios.Cox proportional hazards models were calculated on the simulated datasets using the different methods considered for EPV equal to 2.5, 5, and 10, respectively.BE 0.05 selected no variable in 64% and 62% of the simulations performed with EPV = 2.5.For the same EPV, BE 0.5 selected no variable in 18% and 10% of the simulations for scenarios 1 and 2, 
respectively.This resulted in a model that predicted the same survival probability for all individuals in the dataset.The same occurred for BE with other EPV values and also for the lasso and the elastic net with EPV = 2.5.The ridge method also produced constant predictions as a consequence of shrinking the coefficients too strongly.Consequently, the computation of the calibration slope and the concordance becomes impossible.The calibration slope could not be calculated either, when a model assigned a predicted survival probability of 1 to at least one individual.This occurred for the full model in 72 and 3 simulations in scenario 1, and in 12 simulations in scenario 2.BE and the penalized models had 62 and 8 simulations, respectively, that predicted a survival probability of 1.The root mean square error could be computed in all these cases.However for consistency, the results shown below only reported the RMSE for the simulations where the concordance and calibration slope could be computed.Table 1 gives the number of simulations used to compute RMSE, calibration slope, and concordance on each scenario.For both scenarios we found a decrease of the RMSE as the EPV increases.The penalized methods have lower RMSE than the full model and the two BE variants considered.BE with a lower significance level showed a better RMSE than a higher significance level in our simulations.In both scenarios 1 and 2, the elastic net had the best RMSE, that is, the RMSE that was closer to zero.Looking at the average of the calibration slope across the simulations, the lasso method showed the best performance, being of all the methods the one with an average calibration slope closest to the ideal value of 1.Here, we observed that the average calibration slope for the ridge and the elastic net for scenario 1 and EPV = 2.5 was above 10.A similar but less extreme average calibration slope was observed in scenario 2.These extreme average calibration slopes for the ridge and elastic net were caused by excessive shrinkage of the regression coefficients.The extreme calibration slopes corresponded almost exclusively to models where the elastic net equalled or was comparable to the ridge model.Using a trimmed mean, 5% on each tail of the distribution, as a robust estimator of the mean, reduced the extreme calibration slopes in scenario 1 and EPV = 2.5 from approximately 15 to 9 for the ridge and from 12 to 6 for the elastic net.In scenario 2, the trimmed mean reduced the average calibration slope from approximately 4 to 2.26 for the ridge and from 2.4 to 1.12 for the elastic net.Examining the median calibration slope, we observed that the ridge has the best calibration slope in both scenarios with EPV = 2.5 and the elastic net with EPV = 5.The distribution of the calibration slope across simulations is shown as boxplots in Figures S3 and S4.On the boxplots we see how the interquartile range of the calibration slopes becomes narrower with increasing EPV, and that in both scenarios the ridge has the greatest calibration slope IQR for EPV = 2.5.For both the ridge and the elastic net, the increase in IQR with the decreasing EPV is proportionally larger on the 75th percentile-median difference, than in the median-25th percentile difference.A particular simulation in scenario 2 with EPV = 2.5 that produced extreme calibration slopes was examined.The calibration slopes for this simulation were 22 for the elastic net and 52.5 for the ridge.A scatterplot of the points used to compute the calibration slope is shown in Figure 
S5.Here we observed that the range of the estimated log odds of event is much shorter than that of the true log odds, indicating that too much shrinkage was applied.In both scenarios and all EPV values tested, the concordance was higher for the three penalized methods considered, except in scenario 1 with EPV = 2.5, for which BE 0.05 had the highest concordance.In those cases for which the penalized methods showed better discrimination, either lasso or ridge had the highest concordance.To further explore the methods considered, a hybrid method was examined, where BE was followed by an application of ridge regression, that is, the coefficients of the variables selected by BE were shrunk using ridge.Both BE 0.05 and BE 0.5 were examined.The results showed that the RMSE of both BE 0.05 and BE 0.5 was improved by the application of ridge, but it was still higher than that when using ridge, lasso, or elastic net alone.With the application of ridge, both the average and the median calibration slope of BE came closer to the ideal value of 1, whereas the concordance of BE improved only slightly.The three penalized methods considered have a tuning parameter, which gives the amount of shrinkage that is applied to the regression coefficients.The elastic net has an additional tuning parameter which determines how close the elastic net fit is to the lasso or ridge fit.These tuning parameters were selected in our simulations by 10-fold cross-validation.We next explored the sensitivity of the simulation results for the penalized methods to the number of folds used in the cross-validation during the selection of tuning parameters.In particular, we wanted to examine whether the extreme calibration slopes observed in some of the simulations were attributable to the method used to select the tuning parameters.To do this, the simulations were repeated using 5-fold cross-validation.RMSE, calibration slope, and concordance were overall similar to the previous results using 10-fold cross-validation, including the distribution of the calibration slopes, in particular, the extreme values observed in some simulations.Further simulations were run for the penalized methods using the predictor variables to balance the 10 folds used in the cross-validation.The observations were clustered into 10 groups via K-means, and then each of the 10 folds used was chosen randomly so that it would contain approximately one tenth of the individuals in each cluster.Here again, the results for the RMSE, calibration slope, and concordance were similar to those for the initial simulations using 10-fold cross-validation, including the extreme values for the calibration slopes observed in some simulations.The different methods considered for computing a Cox model were applied to the clinical data that were used as the basis of our simulations.We used the same scenarios as in the simulations.The regression coefficients for both scenarios considered are shown in Tables S3 and S4.In scenario 1, creatinine was selected by all models performing selection, representing the only predictor selected by BE 0.05.BE 0.5 additionally selected age and C-reactive protein.The lasso and elastic net selected, on top of these, LDL/HDL ratio, hypertension, and gender.In scenario 2, creatinine was the only predictor selected by BE 0.05, while BE 0.5 additionally selected age.None of the 55 variants considered was selected by these two methods.Lasso and the elastic net selected the same number of variables, of which 23 variables were selected by both 
methods.To quantify the discrimination of the different models we used the C-index, which estimates the probability that for a pair of individuals the one with the longer survival also has the higher predicted survival probability.The C-index is an extension of the area under the Receiver Operating Characteristic curve and has a similar interpretation.In scenario 1, the full model had a C-index of 0.599.The highest C-index was attained using ridge, followed by the elastic net and lasso.For scenario 2, the highest C-index was attained by the ridge, followed by the lasso and the full model, while the elastic net had a C-index of 0.600.Both BE regressions considered had C-indices ⩽ 0.577.The BE C-indices improved slightly after applying ridge regression.The full model had the calibration slope furthest from the ideal value of 1 in both scenarios considered.The best calibration slope was achieved in scenario 1 by the lasso, followed by the combinations of BE 0.05 and BE 0.5 with the ridge, the elastic net, and the ridge method.The fact that these calibration slopes for the penalized methods were higher than 1 indicates that slightly too much shrinkage was applied by these three methods.In scenario 2, the best calibration slope was produced by the elastic net, followed by the lasso and ridge.Both BE methods had a calibration slope less than 0.65, indicating overfitting.The BE calibration slope was improved after applying ridge regression.In this work we aimed to compare methods to compute a proportional hazards model in a rare-event, low-dimensional setting.Applying simulations based on a dataset of patients with manifest CAD, we compared the full model that used all predictors, BE with α = 0.05 or α = 0.5, ridge regression, lasso, and elastic net.The penalized methods, i.e., ridge, lasso, and elastic net, outperformed the full model and BE.Nonetheless, there is no single penalized method that performs best for all metrics and both scenarios considered.BE performance was improved by shrinking the selected variable coefficients with ridge regression; however, this hybrid method was not better than ridge regression, lasso, or elastic net alone.Ambler et al. observed that the lasso and the ridge for Cox proportional hazards models have not been compared often in a low-dimensional setting.Porzelius et al. investigated several methods that are usually applied in high-dimensional settings and produce sparse model fits, including the lasso and elastic net, in a low-dimensional setting, via simulations.They found the overall performance was similar in terms of sparseness, bias, and prediction performance, and that no method outperformed the others in all scenarios considered.Benner et al. found in their simulations that the lasso, ridge, and elastic net had an overall similar performance in low-dimensional settings.Ambler et al., whose approach we follow in this paper, compared the models considered here on two datasets.They also studied the non-negative garrotte and shrank the coefficients of the full model by a single factor, but they did not examine the elastic net.In their simulations, the ridge method performed better, except that lasso outperformed ridge for the calibration slope.The full model and BE performed the worst in low EPV settings.They recommend the ridge method, except when one is interested in variable selection, where lasso would be better.They also observed that in some cases the ridge shrunk the coefficients slightly too much.Lin et al. 
compared Cox models estimated by maximization of the partial likelihood, by Firth’s penalized likelihood, and by Bayesian approaches.They focused on the estimation of the regression coefficients and the coverage of their confidence intervals.They recommend using Firth’s penalized likelihood method when the predictor of interest is categorical and EPV < 6.Firth’s method was originally proposed as a solution to the problem of ‘monotone likelihood’ that may occur in datasets with low EPVs and that causes the standard partial likelihood estimates of the Cox model to break down.In our simulations, there was no clear-cut winner, but certainly the penalized methods performed better than the full model and BE.The elastic net showed the best predictive accuracy and all three penalized methods considered had comparable discrimination.In some of our simulations, the penalized methods shrunk the coefficients too much, even though the “true” model was being fitted.This behavior was observed both when using 10-fold and 5-fold cross-validation to select the tuning parameters of the penalized approaches and even after attempting to balance the folds based on the predictors.This suggests, as was also pointed out previously, that more work should be done in developing methods to select the tuning parameters of the penalized approaches.Van Houwelingen et al. describe a strategy involving penalized Cox regression, via the ridge, that can be used to obtain survival prognostic models for microarray data.In the first step of this approach, the global test of association is applied and ridge regression is used only if the test is significant.Even though this approach is suggested in a high-dimensional setting, applying this global test in a low-dimensional setting before applying a penalized approach may help identify situations where a penalized method may apply excessive shrinkage.In our clinical dataset application, in the scenario that included clinical variables, biomarkers, and genetic variants, the three penalized methods also had a comparable performance in terms of calibration and discrimination and showed better calibration than the full model and BE, in line with our simulation results.Some limitations apply to our study.First, the Cox models received as input all variables used in the true underlying models to simulate the data, that is, there were no noise predictors.This may have given an unfair advantage to ridge regression, which applies penalization but, unlike the lasso or elastic net, does not perform variable selection.Second, all simulations are based on a single clinical cohort, which may be representative of other cohorts, but we cannot assess the similarity or dissimilarity of the observed simulation results in other datasets.Third, we examined only the Cox proportional hazards model and did not consider alternative approaches to prognostic models for survival data, such as fully parametric or non-parametric approaches.Future work will address some of these limitations on other datasets and using non-parametric models.All three methods using penalization, i.e., ridge, lasso, and elastic net, provided comparable results in the setting considered and may be used interchangeably in a low-EPV, low-dimensional scenario if the goal is to obtain a reliable prognostic model and variable reduction is not required.If variable selection is desired, then the lasso or the elastic net can be used.Since too much shrinkage may be applied by a penalized method, it is important to inspect the fitted model to look for 
signs of excessive shrinkage.In a low EPV setting, the use of the full model and BE is discouraged, even when the coefficients of variables selected by BE are shrunk with ridge regression.This study adds new information to the few existing comparisons of penalized methods for Cox proportional hazards regression in low-dimensional datasets with a low EPV.AtheroGene is a prospective cohort study of consecutive patients with manifest CAD and at least one stenosis of 30% or more present in a major coronary artery.For the present study we focus on the combined outcome of non-fatal myocardial infarction and cardiovascular mortality.Time-to-event information was obtained by regular follow-up questionnaires and telephone interviews, and verified by death certificates and hospital or general practitioner charts.Genotyping was performed in individuals of European descent only, using the Genome-Wide Human SNP 6.0 Array.The Markov chain haplotyping algorithm was used to impute untyped markers.The 1000 Genomes Phase I Integrated Release Version 2 served as reference panel for the genotype imputation.For the present study we use 55 genetic variants.These variants are taken from the CAD genome-wide association meta-analysis performed by the CARDIoGRAMplusC4D Consortium.Using an additive genetic model, these variants represent the lead CARDIoGRAMplusC4D variants on 47 loci previously identified at genome-wide significance and 8 novel CAD loci found by this consortium.Out of the 48 loci examined, rs6903956 was not nominally significant and is not used in our analyses.All SNPs and indels are used as allele dosages, that is, the expected number of copies of the specified allele is used in the analyses.After exclusion of missing values, the dataset consists of 1731 individuals, with 209 incident events and a median follow-up time of 5.7 years.We adopted the simulation design used by Ambler and colleagues by considering two main scenarios.For scenario 1, we consider clinical variables and blood-based biomarkers as predictors.For scenario 2, we added information on 55 genetic variants to these variables.In each scenario, we fit a Weibull ridge model from which we simulate the survival time using the methods of Bender and colleagues.Since the fitted Weibull model is used to simulate the survival time, this model provides the data generating mechanism, and as such it plays the role of the true underlying model.The resulting values of the survival time are then right-censored with the help of a uniform random variable U on the interval (0, δ), that is, if the simulated time exceeds U, the time is set to U.The δs are chosen to achieve an EPV of 2.5, 5, or 10.We generate 1000 simulated datasets.For each scenario and EPV, and on each simulated dataset, we fit a standard Cox model via partial likelihood, two BE models (with α = 0.05 and α = 0.5), a lasso model, a ridge model and an elastic net model.For the lasso and the ridge, 10-fold cross-validation is used and the parameter that maximizes the cross-validated partial log-likelihood is used as the corresponding penalization parameter.For the elastic net, we consider a coarse grid from 0 to 1 in steps of length 0.05 for the mixing parameter α.As for the lasso and ridge, the cross-validated partial likelihood is maximized.Additional analyses were performed selecting the tuning parameters using 5-fold cross-validation and using 10-fold cross-validation with balanced folds.The folds for the latter were obtained as follows.The observations were clustered into 10 groups using the predictors and K-means.Then each fold was 
chosen randomly so that it would contain approximately one tenth of the individuals in each cluster.The methods considered were applied to the AtheroGene dataset.As measures of performance, we computed the C-index Cτ and the calibration slope.For the computation of the C-index, the first five years of the follow-up were used.Since estimating the performance of a model on the same dataset on which the model was developed may produce over-optimistic performance estimates, both the C-index and calibration slope were corrected for over-optimism with the help of the 0.632 bootstrap estimator.A total of 1000 bootstrap replications were used in the correction.All analyses were performed with R Version 3.2.1.The glmnet package was used to fit the penalized Cox regressions.BE was performed with the package rms.The survival package was used to fit the standard Cox model.The survC1 package was used to compute Cτ.FMO performed the simulations and data analyses.RBS provided the clinical perspective and information on the AtheroGene study.TZ performed genotyping and provided genetic information.CM and DB provided code and support for the data analyses.AS performed genotype calling and provided statistical advice.DAT performed genotype imputation.MH provided statistical advice.FMO drafted the manuscript.All authors critically revised and approved the final manuscript.The authors have declared no competing interests.
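The survival-time simulation described above (inverse-CDF sampling from a Weibull model with a linear predictor, followed by uniform right-censoring) can be sketched in Python as below. This is not the authors' R code; the parameter names lambda_, nu and c_max, and the function simulate_weibull_cohort, are hypothetical, and the censoring bound plays the role of the δ tuned to reach the target EPV.

```python
# Sketch of Bender-style simulation of Weibull survival times with uniform censoring.
import numpy as np

rng = np.random.default_rng(0)

def simulate_weibull_cohort(X, beta, lambda_, nu, c_max):
    """X: (n, p) predictor matrix; beta: (p,) coefficients from the 'true' model."""
    lp = X @ beta                                                  # linear predictor x'beta
    u = rng.uniform(size=X.shape[0])
    # Inverse of S(t|x) = exp(-lambda * t**nu * exp(x'beta)) applied to U ~ Uniform(0, 1).
    t = (-np.log(u) / (lambda_ * np.exp(lp))) ** (1.0 / nu)
    c = rng.uniform(0, c_max, size=X.shape[0])                     # uniform censoring times
    time = np.minimum(t, c)                                        # set time to c when t exceeds c
    event = (t <= c).astype(int)                                   # 1 = event observed, 0 = censored
    return time, event
```

Repeating this generator over many datasets while adjusting c_max changes the expected number of events and therefore the EPV, mirroring how the δs are chosen in the text.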
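The penalized Cox fits themselves were computed with glmnet in R; a hedged Python analogue using lifelines is sketched below, with l1_ratio = 0 giving ridge, 1 giving lasso and intermediate values the elastic net. Unlike glmnet, which maximizes the cross-validated partial likelihood, this sketch scores folds by the concordance index for simplicity; column names, the penalty grid and the function name are assumptions.

```python
# Illustrative k-fold cross-validated selection of the penalty for a penalized Cox model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index
from sklearn.model_selection import KFold

def cv_penalized_cox(df, duration_col="time", event_col="event",
                     l1_ratio=0.0, penalties=np.logspace(-3, 1, 20), k=10):
    """Return the refitted model and the penalty with the best cross-validated concordance."""
    best_pen, best_cidx = None, -np.inf
    for pen in penalties:
        scores = []
        for train_idx, test_idx in KFold(n_splits=k, shuffle=True, random_state=1).split(df):
            train, test = df.iloc[train_idx], df.iloc[test_idx]
            cph = CoxPHFitter(penalizer=pen, l1_ratio=l1_ratio)
            cph.fit(train, duration_col=duration_col, event_col=event_col)
            risk = cph.predict_partial_hazard(test)
            # Higher risk should correspond to shorter survival, hence the minus sign.
            scores.append(concordance_index(test[duration_col], -risk, test[event_col]))
        if np.mean(scores) > best_cidx:
            best_pen, best_cidx = pen, np.mean(scores)
    final = CoxPHFitter(penalizer=best_pen, l1_ratio=l1_ratio)
    final.fit(df, duration_col=duration_col, event_col=event_col)
    return final, best_pen
```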
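For the calibration slope, the simulations above compare estimated and true log odds of an event; outside a simulation, a common surrogate (not necessarily the exact computation used here) is to refit a Cox model on validation data with the candidate model's prognostic index as the only covariate, where a coefficient near 1 indicates good calibration, below 1 overfitting, and above 1 over-shrinkage. A minimal sketch, with hypothetical names, follows.

```python
# Sketch of a calibration-slope check via refitting on the prognostic index.
import pandas as pd
from lifelines import CoxPHFitter

def calibration_slope(fitted_cox, validation_df, duration_col="time", event_col="event"):
    pi = fitted_cox.predict_log_partial_hazard(validation_df)     # prognostic index
    calib_df = pd.DataFrame({duration_col: validation_df[duration_col].values,
                             event_col: validation_df[event_col].values,
                             "pi": pd.Series(pi).values.ravel()})
    recal = CoxPHFitter()
    recal.fit(calib_df, duration_col=duration_col, event_col=event_col)
    return float(recal.params_["pi"])                              # slope on the prognostic index
```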
Prognostic models based on survival data frequently make use of the Cox proportional hazards model. Developing reliable Cox models with few events relative to the number of predictors can be challenging, even in low-dimensional datasets, with a much larger number of observations than variables. In such a setting we examined the performance of methods used to estimate a Cox model, including (i) full model using all available predictors and estimated by standard techniques, (ii) backward elimination (BE), (iii) ridge regression, (iv) least absolute shrinkage and selection operator (lasso), and (v) elastic net. Based on a prospective cohort of patients with manifest coronary artery disease (CAD), we performed a simulation study to compare the predictive accuracy, calibration, and discrimination of these approaches. The candidate predictors we used for incident cardiovascular events included clinical variables, biomarkers, and a selection of genetic variants associated with CAD. The penalized methods, i.e., ridge, lasso, and elastic net, showed a comparable performance, in terms of predictive accuracy, calibration, and discrimination, and outperformed BE and the full model. Excessive shrinkage was observed in some cases for the penalized methods, mostly in the simulation scenarios with the lowest ratio of the number of events to the number of variables. We conclude that in similar settings, these three penalized methods can be used interchangeably. The full model and backward elimination are not recommended in rare-event scenarios.
128
Mass spectrometry identification of age-associated proteins from the malaria mosquitoes Anopheles gambiae s.s. and Anopheles stephensi
Two cohorts each of A. gambiae and A. stephensi were reared at different times.For each cohort, adults were collected 24 h post emergence and again at 9 and 17 d post emergence for A. gambiae and at 9, 17 and 34 d post emergence for A. stephensi.Heads and thoraces of five females at 1, 9 and 17 d of age for A. gambiae and 1, 9, 17 and 34 d for A. stephensi were dissected on a bed of dry ice and pooled into 2 ml plastic screw-cap vials containing 90 µl of 2D buffer, 1× dissolved PhosSTOP and two 3 mm silica glass beads.Proteins were extracted by homogenising samples using a Minibead-beater 96 for 1.45 min.The contents were collected by brief centrifugation at 12,000×g.The sample was transferred to new 1.5 ml micro-centrifuge tubes and clarified by centrifuging twice at 21,000×g for 10 min.Protein was purified using the 2D clean-up kit, according to the manufacturer's protocol.Samples were then resuspended in 2D buffer and total protein content was quantified using the 2D quantification kit following the manufacturer's protocol.Fifty micrograms of protein was prepared in 2D buffer for each age sample.An internal standard sample was prepared by combining 25 µg of protein from each age sample into a single pool.The pH of all the samples was adjusted to between 8 and 9.1.A total of four and three biological replicates of A. gambiae and A. stephensi, respectively, were included at each age and the entire experiment was replicated a second time using the second cohort of mosquitoes.Experiments with 2D-DIGE followed the procedures described by Hugo and colleagues.Stock solutions of each of the Cy2, Cy3 and Cy5 fluorescent cyanine dyes were prepared by reconstituting the appropriate dye in 5 µl Dimethylformamide, to a concentration of 1 nM.Four hundred picomoles of either Cy3 or Cy5 was added to each sample and 2400 pmol of Cy2 was added to the internal standard.Each sample pool was combined with 4.5 µl of 100× ampholytes, 5.6 µl Destreak reagent and made up to 450 µl in 2D buffer.Samples were loaded onto 24 cm, pH 3–10 immobilised pH gradient strips in rehydration trays and left at room temperature for 4 h for the strips to absorb the sample.Following absorption, the strips were transferred to a new focussing tray, overlaid with mineral oil and allowed to rehydrate overnight at room temperature.For first dimension separation, the strips were transferred to a dry protean iso-electric focusing tray.The strips were overlaid with fresh mineral oil and iso-electric focussing was performed using the following run conditions: 50 μA per strip, 250 V for 15 min, 1000 V for 5 h, 10,000 V for 4 h and 500 V to reach a total of 80,000 V h.Focused strips were equilibrated for 15 min in buffer I containing 6 M Urea, 0.375 M Tris–HCl pH 8.8, 2% SDS, 20% glycerol and 2% DTT and then for 15 min in buffer II containing 6 M Urea, 0.375 M Tris–HCl, 2% SDS, 20% glycerol and 2.5% Iodoacetamide.The strips were placed on 12% acrylamide gels cast in 24 cm optically clear plates.Electrophoresis was performed in a Protean® Plus Dodeca cell at 5 mA/gel for 15 min, 10 mA/gel for 15 min and 30 mA/gel until the dye front reached the bottom of the gel.A. stephensi gels were scanned using a Typhoon FLA-9400 fluorescent imager at 100 µm pixel resolution, using the 520 BP 40 filter for Cy2, the 580 BP 30 filter for Cy3 and the 670 BP 30 filter for Cy5 emission.A. 
gambiae gels were scanned using a FLA-9000 Starion fluorescent imager at 100 µm pixel resolution using the BPB1 filter for Cy2, DGR1 filter for Cy3 and LPR filter for Cy5.Fluorescent images were processed and analysed using Delta2D version 4.0 software.Spots on gel images scanned from the same gel were then connected by direct warping and the spots on all gels were aligned and normalised for intensity using the Cy2 channel on each gel with the match vectors tool.All aligned images were fused and thereafter spot validations were transferred to all other images from this fused image .All spots that were visible on 2D-DIGE images for both species are shown in Supplementary tables 1–4.Protein spots for identification were manually excised from 2D gels using the method of Hugo and colleagues .The gels were stained with colloidal Coomassie 0.12% brilliant blue G250 stain, 10% NH4SO4, 10% phosphoric acid and 20% methanol in distilled water.The differentially expressed spots were excised from these gels using a glass plug cutter and destained overnight in 300 µl of MS fix solution containing 40% ethanol and 10% acetic acid.The solution was discarded and 100 µl of 25 mM ammonium bicarbonate, pH 8, was added and the samples were placed on an orbital shaker at room temperature for 15 min.The solution was discarded and the ammonium bicarbonate wash was repeated twice more.Gel plugs were dried in a vacuum concentrator for 30 min.The gels were rehydrated in 4 µl of 1 µg/µl proteomics grade trypsin in 40 mM ammonium bicarbonate containing 10% acetonitrile and incubated for 1 h at room temperature.A further 35 µl of 40 mM ammonium bicarbonate containing 10% ACN was added and the plugs were incubated overnight at 37 °C.Tryptic extracts bathing the plugs from equivalent spot samples were pooled into a single tube.20 µl of 50% ACN containing 0.1% Triflouroacetic acid was added to the gel plugs and incubated at room temperature, shaking for 1 h.The solution bathing the gels was combined with the previous extracts.Pooled extracts were lyophilised, rehydrated in 30 μl of 0.1% TFA and concentrated using C18 Zip-Tips,according to the manufacturer׳s instructions.In-gel tryptic digests were analysed by either MALDI-TOF/TOF MS or separated by CapHPLC and sprayed directly into the ion source of LTQ-Orbitrap XL hybrid MS .All mass spectra from the in-gel tryptic digests were acquired in positive ion mode on an Ultraflex III MALDI-TOF/TOF MS.Reflectron mode was used to measure the spectra with typical resolution within the range of 15,000 and 20,000.Mass accuracies of within 50 ppm for MS measurements and between 60 and 250 ppm for tandem MS measurements were obtained.Saturated α-Cyano-4-hydroxy cinnamic acid matrix was prepared in 97% acetone containing 0.3 mM ammonium dihydrogen phosphate and 0.1% TFA, subsequently diluted 15 times in 6:3:1 ethanol:acetone:10 mM ammonium dihydrogen phosphate and 0.1% TFA.This was used as matrix for all MS and MS/MS analyses at a sample to matrix ratio of 1:2.MS/MS spectra were automatically acquired with 40% higher laser intensity than that used for MS analysis.Spectra were calibrated using a peptide calibration standard mixture of nine peptides in the mass range of m/z=1046 and m/z=3147, with 62.5 fmol of each peptide applied to calibration spots.High precision MS/MS calibration was achieved using fragment ions derived from all the nine peptides in the MS calibration kit and the associated calibration coefficients were applied to the method file used to acquire MS/MS data.The data 
were automatically acquired and processed in a batch mode by Bruker Daltonics Flex-series software, and searched using MASCOT search engine with a custom made database using Bio-tools version 3.1.Instrument settings for MS were: ion source 1 potential, 25.00 kV; ion source 2 potential, 21.70 kV; reflectron 1 potential, 26.30 kV; reflectron 2 potential, 13.85 kV and for MS/MS were: ion source 1 potential, 8.00 kV; ion source 2 potential, 7.20 kV; reflectron 1 potential, 29.50 kV; reflectron 2 potential, 13.75 kV; LIFT 1 voltage, 19.00 kV; LIFT 2 voltage, 3.00 kV.Ion selector resolution was set at 0.5% of the mass of the precursor ion .All protein spots identified by MALDI-TOF/TOF mass spectrometry are reported in our accompanying manuscript .In-gel tryptic digests were fractionated by CapHPLC using a Shimadzu Prominence HPLC system and were introduced directly into the LTQ-Orbitrap XL hybrid MS equipped with a dynamic nanoelectrospray ion source and distal coated silica emitters.Acidified samples were loaded onto a 120 Å, 3 μm particle size, 300 μm by10 mm C18-AQ Reprosil-Pur trap column at 30 μl/min in 98% solvent A and 2% solvent B for 3 min at 40 °C, and were subsequently gradient eluted onto a pre-equilibrated self-packed analytical column using a flow rate of 900 nl/min.The LTQ-Orbitrap was controlled using Xcalibur 2.0 SR1.Analyses were carried out in data-dependent acquisition mode, whereby the survey full scan mass spectra were acquired in the Orbitrap FT mass analyser at a resolution of 60,000 after accumulating ions to an automatic gain control target value of 5.0×105 charges in the LTQ mass analyser.MS/MS mass spectra were concurrently acquired on the eight most intense ions in the full scan mass spectra in the LTQ mass analyser to an automatic gain control target value of 3.0×104 charges.Charge state filtering, where unassigned precursor ions were not selected for fragmentation, and dynamic exclusion were used.Fragmentation conditions in the LTQ were: 35% normalised collision energy, activation q of 0.25, isolation width of 3.0 Da, 30 ms activation time, and minimum ion selection intensity 500 counts.Maximum ion injection times were 500 ms for survey full scans and 100 ms for MS/MS .All spots identified by LC-MS/MS are shown in Fig. 
1 and their identities are presented in Table 1.Protein identifications were performed by searching peptide peak lists detected by TOF/TOF MS and MS/MS against in-silico databases of theoretical trypsin digests of published proteins using an in-house MASCOT database search engine using Biotools version 3.1.The LC-MS/MS data were processed and searched against the same database using an in-house MASCOT database search engine integrated in Proteome Discoverer software.The database was compiled from UniProtKB database on 22/05/2012 and from VectorBase database on 26/05/14.A mass tolerance of 100 ppm for peptide precursor ions and 0.8 Da for fragment ions was applied in all searches except for Orbitrap dataset where a precursor ion tolerance of 20 ppm was applied.Database search parameters for both MALDI-TOF/TOF and LTQ-Orbitrap datasets were as follows: enzymatic cleavage, tryptic; fixed modifications, S-carboxamidomethylation of cysteine residues; variable modifications, methionine oxidation, deamidation of asparagine and glutamine; and missed cleavages.Peptides identified by TOF/TOF mass spectrometry were reported as significant scoring peptides at a MASCOT probability based score threshold of p<0.05.For the Orbitrap datasets, to estimate the false discovery rate at which the identification was considered correct, a Percolator algorithm implemented in Proteome Discoverer was used and proteins with peptides having a confidence threshold q-Value<0.01 were considered to be valid.A minimum of two confident peptide identifications were required to assign a protein identity.Putative function of hypothetical proteins was inferred using the VectorBase database.
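The protein-inference criteria described above (Percolator q-value < 0.01 and at least two confident peptides per protein) can be illustrated with a short, hedged sketch. This is an assumption for illustration only, not the authors' Proteome Discoverer workflow; the file name and the column names "protein", "peptide" and "q_value" are hypothetical placeholders.

```python
# Hedged sketch: filter peptide-spectrum matches by Percolator q-value and
# keep only proteins supported by at least two confident peptides, as
# described in the text. Input layout is an assumption.
import pandas as pd

psms = pd.read_csv("psms.csv")                 # one row per peptide-spectrum match
confident = psms[psms["q_value"] < 0.01]       # Percolator confidence threshold

# Count distinct confident peptide sequences per protein
peptides_per_protein = confident.groupby("protein")["peptide"].nunique()

# Keep proteins supported by two or more confident peptides
valid_proteins = peptides_per_protein[peptides_per_protein >= 2].index.tolist()
print(f"{len(valid_proteins)} proteins pass the 2-peptide, q<0.01 criterion")
```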
This study investigated proteomic changes occurring in Anopheles gambiae and Anopheles stephensi during adult mosquito aging. These changes were evaluated using two-dimensional difference gel electrophoresis (2D-DIGE) and the identities of aging related proteins were determined using capillary high-pressure liquid chromatography (capHPLC) coupled with a linear ion-trap (LTQ)-Orbitrap XL hybrid mass spectrometry (MS). Here, we have described the techniques used to determine age associated proteomic changes occurring in heads and thoraces across three age groups; 1, 9 and 17 d old A. gambiae and 4 age groups; 1, 9, 17 and 34 d old A. stephensi. We have provided normalised spot volume raw data for all protein spots that were visible on 2D-DIGE images for both species and processed Orbitrap mass spectrometry data. For public access, mass spectrometry raw data are available via ProteomeXchange with identifier PXD002153. A detailed description of this study has been described elsewhere [1].
129
Agrobacterium tumefaciens deploys a superfamily of type VI secretion DNase effectors as weapons for interbacterial competition in planta
Bacteria produce diverse toxic compounds, including diffusible small molecules such as antibiotics, that allow them to thrive in a competitive environment. They can also produce and secrete enzymatic toxins targeting nucleic acids, membrane lipids, or the peptidoglycan of competing bacterial cells. The type VI secretion system is a molecular machine found in most Proteobacteria and can deliver effectors to both eukaryotic and prokaryotic cells, which appear to be the major targets. Functional and structural studies have shown that the T6SS nanomachine shares remarkable similarities with the bacteriophage tail structure. The system contains a TssB-TssC contractile sheath, which is proposed to accommodate the Hcp-VgrG tail tube/puncturing device. The contraction of the sheath leads to the propelling of Hcp, VgrG, and T6SS effectors across bacterial membranes. Time-lapse fluorescence experiments highlighted the dynamics of this mechanism by revealing "T6SS dueling" between interacting cells. To date, only a few toxins have been biochemically characterized and shown to contribute to the bactericidal activity mediated by the T6SS. The most remarkable examples are the cell-wall-degrading effectors that include the type VI secretion amidase effector (Tae) and type VI secretion glycoside hydrolase effector (Tge) superfamilies. The Tae family includes Tse1 from Pseudomonas aeruginosa and Ssp1 or Ssp2 from Serratia marcescens. The Tge family includes the Tse3 muramidase from P. aeruginosa and Tge2 and Tge3 from Pseudomonas protegens. VgrG3 from Vibrio cholerae represents another effector family with a distinct muramidase fold unrelated to the Tge family. These enzymes are injected into the periplasm of target cells, where they hydrolyze the peptidoglycan, thereby inducing cell lysis. The phospholipase Tle superfamilies represent an additional set of T6SS toxins. By degrading phosphatidylethanolamine, a major constituent of bacterial membranes, these effectors challenge the membrane integrity of target cells. A recent study reported the nuclease activity of two proteins, RhsA and RhsB from Dickeya dadantii, containing NS_2 and HNH endonuclease domains, respectively, which cause the degradation of cellular DNA and confer an intraspecies competitive advantage. However, whether the D. dadantii antibacterial activity mostly relies on the DNase activity, and whether Rhs proteins are delivered by a dedicated T6SS machine, remains to be determined. Agrobacterium tumefaciens is a soil bacterium that triggers tumorigenesis in plants by delivering T-DNA from bacterial cells into host plant cells through a type IV secretion system. Although not essential for tumorigenesis, the A. tumefaciens T6SS is activated at both the transcriptional and posttranslational levels when sensing acidity, a signal enriched in the plant wound site and apoplast. Here, using A. tumefaciens as a model organism, we report the discovery of a type VI DNase effector family that exhibits potent antibacterial activity. The toxic activity of the Tde DNase is counteracted by a cognate immunity protein, here called Tdi. The T6SS increases the fitness of A. tumefaciens during in planta colonization, and the bacterium uses Tde to attack both intraspecies and interspecies bacterial competitors. The widespread conservation of the Tde toxin and Tdi immunity across bacterial genomes suggests that an appropriate combination of a functional T6SS and a broad toxin repertoire is key to niche colonization within a polymicrobial environment. A. tumefaciens strain C58 contains a T6SS gene cluster in which 14 of 23 genes are essential for the assembly of a functional type VI secretion machinery. The other genes are dispensable because the secretion of Hcp, a hallmark of T6SS activity, is not significantly affected in the corresponding mutants. The gene atu4347, which is located in the so-called hcp operon, encodes a T6SS-secreted protein predicted to act as a peptidoglycan amidase. The gene atu4347 and its neighboring gene atu4346 encode proteins orthologous to the S. marcescens T6SS antibacterial toxin secreted small protein, belonging to amidase family 4, and a cognate immunity, classified as a resistance-associated protein, respectively. Because several genes encoded in the hcp operon are dispensable for type VI secretion, additional T6SS toxin-immunity gene pairs may exist within this operon. Attempts to delete the atu4351 gene were unsuccessful, which suggests that it may encode a potential immunity protein protecting against the activity of a cognate toxin. This toxin is probably encoded by the adjacent gene, atu4350, and the secretion of Atu4350 is indeed readily detectable upon growth of A. tumefaciens on acidic AB-MES minimal medium, as was shown for the secretion of Hcp or Atu4347. The secretion of Atu4350 is T6SS dependent, since it was abolished in a T6SS mutant, ΔtssL. Atu4350 is annotated as a hypothetical protein, and no functional domains were identified by a BLASTP search of the NCBI database. A screening of the Pfam database linked the Atu4350 protein to a recently identified superfamily containing the putative domain toxin_43. This superfamily displays a conserved putative catalytic motif HxxD and exhibits an all-alpha helical fold feature. Furthermore, the members of this family are distinct from known polymorphic toxins and have been tentatively assigned a putative RNase activity. To investigate whether Atu4350 harbors a nuclease activity, we overexpressed a C-terminal His6-tagged fusion of the protein in Escherichia coli. Atu4350 was then purified in the presence of Atu4349, which resulted in increased Atu4350 yield and stability. Atu4350 did not display a detectable RNase activity in vitro. Instead, it showed a Mg2+-dependent DNase activity, as seen by the rapid degradation of supercoiled plasmid DNA. The conserved HxxD motif is required for this DNase activity, since an Atu4350 derivative bearing amino acid substitutions within this motif lost its ability to degrade the pTrc200 plasmid. To assess the DNase activity in vivo, the atu4350 gene and its derivatives were cloned under the control of an arabinose-inducible pBAD promoter in the plasmid pJN105. Induction of atu4350 expression resulted in rapid degradation of the pTrc200 and pJN105 plasmids. Cells producing the Atu4350 variant with substitutions in the HxxD motif showed no DNase activity. The Atu4350-dependent DNA fragmentation was also characterized by using terminal deoxynucleotidyl transferase dUTP nick-end labeling (TUNEL), with 3′-OH termini of DNA breaks labeled with FITC-dUTP. TUNEL-positive cells were observed only in E. coli cells producing wild-type Atu4350 and not in those producing the Atu4350 variant. More precisely, ∼50% of cells expressing Atu4350 but only ∼8% of cells producing the Atu4350 variant showed FITC labeling. Our results establish that Atu4350 is a bona fide DNase. The A. tumefaciens T6SS activity also relies on the expression of an operon encoding vgrG2, which is functionally redundant with vgrG1 for Hcp secretion. Standard bioinformatic tools showed that Atu3640 and Atu3639, encoded within the so-called vgrG2 operon, are homologous to Atu4350 and Atu4351, respectively. As observed with Atu4350, Atu3640 also possesses a C-terminal toxin_43 domain, and production of Atu3640 in E. coli cells caused rapid degradation of plasmid DNA. Collectively, our results suggest that Atu4350-Atu4351 and Atu3640-Atu3639, together with the Atu4347-Atu4346 proteins, are potential T6SS toxin-immunity pairs in A. tumefaciens. Atu4350 and Atu3640 have DNase activity, whereas Atu4347 is a putative peptidoglycan amidase. We used a strategy based on the coproduction of a given toxin-immunity pair to investigate the role of the putative immunity in protecting against the adverse effects of the toxin. The toxin gene was cloned under the control of an inducible promoter, whereas the putative cognate immunity gene was expressed from a compatible plasmid. The growth of A. tumefaciens cells harboring the vector controls increased steadily over time, whereas no growth was observed upon induction of atu4350 or atu3640 expression. The growth inhibition exerted by Atu4350 and Atu3640 was readily alleviated by coexpression of the cognate immunity genes atu4351 and atu3639, respectively. Atu4350 and Atu3640 exert a toxic effect via their DNase activity when produced within the cytoplasm, whereas the putative peptidoglycan amidase activity of Atu4347 is likely to occur within the periplasm. Indeed, the fusion of Atu4347 to a cleavable N-terminal Sec-dependent signal peptide led to a significant growth inhibition. The growth inhibitory effect of Atu4347 was neutralized by coexpression of the cognate immunity gene atu4346, predicted to encode a protein bearing a typical N-terminal signal peptide. In conclusion, we identified three toxin-immunity pairs. The Atu4347-Atu4346 pair belongs to the type VI secretion amidase effector and immunity family, and the toxin likely targets the peptidoglycan. Atu4350 and Atu3640 represent a family of T6SS toxins and are named Tde1 and Tde2, respectively (for type VI DNase effector, Tde). Their cognate immunity proteins Atu4351 and Atu3639 are named Tdi1 and Tdi2, respectively. The role of the three A. tumefaciens T6SS toxins Tae, Tde1, and Tde2 was assessed in bacterial competition, with T6SS-negative E. coli K12 cells used as prey cells. A. tumefaciens and E. coli strains carrying gentamicin resistance were cocultured on LB or acidic AB-MES agar, and E. coli survival was monitored by counting gentamicin-resistant colony-forming units. E. coli survival was greatly reduced when cocultured with wild-type A. tumefaciens strain C58, as compared to E. coli alone or the A. tumefaciens T6SS mutant, ΔtssL. Importantly, a strain presenting a functional T6SS, as shown by high levels of Hcp secretion, but lacking all toxin-immunity pairs was unable to kill E. coli. These results demonstrate the antibacterial activity of the A. tumefaciens T6SS, which relies on at least one of the three identified toxins, Tae, Tde1, or Tde2. Despite its usefulness in identifying T6SS antibacterial activity, the E. coli K12 model does not provide information on whether a specific set of toxins can be advantageous for A. tumefaciens. Thus, we investigated the function of the T6SS antibacterial activity during interbacterial competition between A. tumefaciens strains. The A. tumefaciens attacker strain was mixed with target strains carrying gentamicin resistance to allow the quantification of surviving cells. Although Tde1 and Tae were readily secreted when bacteria were grown on acidic AB-MES agar plates, the A. tumefaciens wild-type C58 strain had no significant growth advantage when cocultured with the strain Δ3TIs. However, the above-described phenotypes may result from the limitations of an in vitro setup, which prompted us to assess the T6SS antibacterial activity in an environment closer to the in vivo situation. We thus assessed whether a functional T6SS and the associated toxins may give A. tumefaciens an advantage for survival inside the host plant. We used combinations of A. tumefaciens strains, comprising attacker and target cells, in coinfection assays. These strains carried the plasmid pRL662 encoding gentamicin resistance or pTrc200 conferring spectinomycin resistance, which allowed for selecting surviving cells within what we define here as the target cell population. The assay involved coinfiltration of A. tumefaciens attacker and target strains into Nicotiana benthamiana leaves. Coinfection with the A. tumefaciens wild-type C58 attacker strain caused a ∼5-fold decrease in surviving cell numbers of the Δ3TIs target strain in comparison to the C58 target strain. In contrast, coincubation of the Δ3TIs target strain with an attacker strain lacking a functional T6SS (ΔtssL) or the three T6SS toxins (Δ3TIs) resulted in wild-type levels of fitness. These results strongly suggest that the A. tumefaciens T6SS and its associated toxins provide a competitive advantage to this bacterium during plant colonization. We monitored the contribution of each individual toxin-immunity pair in this experimental model. Target strains lacking the Tde1-Tdi1 or Tde2-Tdi2 toxin-immunity pairs lost their competitive advantage against the wild-type C58 attacker. Furthermore, the expression of a tdi immunity gene in the absence of the corresponding tde toxin gene was sufficient to protect the target strain against killing by the C58 attacker. In contrast, the Δtae-tai mutant showed wild-type levels of fitness, which suggests that both Tde1 and Tde2, but not Tae, are crucial for A. tumefaciens competition during colonization in planta. These observations are further supported by evidence showing that the presence of either of the tde-tdi toxin-immunity pairs is sufficient to attack the Δ3TIs target strain, but this ability is lost if the attacker is a double tde-tdi deletion mutant. Importantly, attacking strains producing any of the Tde1 variants were unable to inhibit the growth of target cells, which suggests that the Tde DNase activity is essential for providing the competitive advantage. Of note, mutations in the HxxD motif did not affect the secretion of Tde1, Hcp, or Tae. These observations highlight the decisive role played by the Tde DNase toxins and their cognate immunity proteins in the fitness of A. tumefaciens during the colonization of the plant host. Because multiple microbial taxa coexist as communities competing for resources, we further investigated the impact of the Agrobacterium T6SS activity in an interspecies context. P. aeruginosa is an opportunistic pathogen of humans and plants, but it also coexists with A. tumefaciens as a common resident in freshwater, bulk soil, and the rhizosphere. We examined A. tumefaciens-P. aeruginosa competition in both in vitro and in vivo assays. For the in vitro competition assay, we designed coculture conditions on LB agar under which type VI secretion is observed in both strains and measured the competition outcomes. Even though A. tumefaciens and P. aeruginosa cells were cocultured in equal amounts, P. aeruginosa outcompeted A. tumefaciens by at least 100-fold after 16 hr of coincubation. The H1-T6SS is constitutively active in the P. aeruginosa strain PAKΔretS, and this strain exerted a stronger inhibition of A. tumefaciens growth than the wild-type PAK strain. Strikingly, upon contact with P. aeruginosa, the number of viable A. tumefaciens wild-type C58 cells was ∼5-fold lower than that of the isogenic ΔT6SS strain, suggesting that A. tumefaciens T6SS activity can trigger a P. aeruginosa counterattack. The P. aeruginosa H1-T6SS is required for this counterattack, as a mutant lacking this cluster was unresponsive to A. tumefaciens. An A. tumefaciens mutant lacking all three pairs of toxin-immunity genes displayed a higher survival rate when cocultured with the P. aeruginosa wild-type strain. Because the A. tumefaciens strain Δ3TIs was still T6SS active, the presence of a functional T6SS may not be sufficient for A. tumefaciens to trigger a P. aeruginosa counterattack. Of note, the A. tumefaciens wild-type C58, as well as the isogenic Δtde1-tdi1Δtde2-tdi2 and Δtae-tai mutants, could still deliver at least one T6SS toxin and were all killed by P. aeruginosa. These data suggest that the injection of A. tumefaciens T6SS toxins was required to trigger a P. aeruginosa counterattack. The advantage provided by the Tde toxins to A. tumefaciens when grown in planta but not in vitro underlines the importance of a physiologically relevant environment for studying bacterial fitness. Thus, we investigated whether the relationship between A. tumefaciens and P. aeruginosa could differ in planta. Remarkably, the survival of P. aeruginosa wild-type PAK and its isogenic H1-T6SS mutant was reduced by ∼5-fold following 24 hr of coinfection with A. tumefaciens wild-type C58 in leaves of N. benthamiana. In contrast, we detected no significant growth difference for A. tumefaciens strains grown alone or coinfected with P. aeruginosa inside the host plant. The P. aeruginosa attack against A. tumefaciens observed in vitro may thus be totally inefficient or prevented in planta. Furthermore, the Δtae-tai strain retained the ability to attack P. aeruginosa, but ΔtssL or a strain lacking both tde-tdi pairs was unable to kill P. aeruginosa. During plant colonization, A. tumefaciens is therefore able to attack P. aeruginosa by using a functional T6SS and the Tde toxins, whereas the Tae toxin does not seem to act as a potent effector in this context. Altogether, the Tde DNase toxins may be pivotal antibacterial toxins that A. tumefaciens uses against competitors during in planta colonization, as shown by the different competition scenarios illustrated in Figure 6. The identification of the Tde toxins and the characterization of their role in plant colonization by A. tumefaciens prompted us to explore whether the Tde family is prevalent in plant-associated bacteria. The results obtained by a BLASTP sequence homology search and the information extracted from the Pfam database highlighted the conservation of Tde-like proteins harboring the putative toxin_43 domain across several bacterial phyla. The Tde-like superfamily can be divided into eight classes depending on the domain organization of the protein, ranging from single or tandem toxin_43 domains to fusions with other domains with known or yet-to-be-identified functions. Tde1 belongs to class 1, the most frequent, and contains only an identifiable C-terminal toxin_43 domain. Tde2 falls in class 3 and displays a domain of unknown function, DUF4150, within its N-terminal region. According to the Pfam database, this domain shows similarity to the recently characterized proline-alanine-alanine-arginine (PAAR) domain, which can also be found in class 7. A direct sequence alignment between DUF4150 and PAAR motif-containing proteins revealed significant conservation between the two domains and suggests that DUF4150 could act as a PAAR-like protein. The immunity proteins Tdi1 and Tdi2 contain uncharacterized GAD-like and DUF1851 domains, which are well-conserved features in other putative Tdi homologs. Notably, the tde-tdi gene pair is conserved in Gram-negative Proteobacteria harboring T6SS features and is highly prevalent in a wide range of plant pathogens, symbionts, and plant growth-promoting bacteria, which further suggests its potential role for colonization in planta. The tde-tdi gene pair is also found in T6SS-negative organisms, including Gram-positive Firmicutes and Actinobacteria as well as Gram-negative Bacteroidetes. This observation would imply the presence of alternative secretion mechanisms for Tde transport or other functions yet to be identified in this subset of microorganisms. In the form of bacterial warfare involving the T6SS nanomachine, peptidoglycan and membrane lipids were shown to be the main targets of T6SS toxins. Our discovery of a superfamily of DNases, together with the recently identified VgrG-dependent Rhs DNases and predicted polymorphic nuclease toxins, expands the repertoire of characterized T6SS-dependent antibacterial toxins. The Tde DNase toxins identified in the present study do not share homology with Rhs or any other characterized bacterial DNases, which suggests a unique biochemical activity for the Tde toxins. The widespread presence of tde-tdi couples in divergent bacterial phyla reveals the conservation of this family of toxin-immunity pairs. The genetic linkage between vgrG and tde-tdi genes in most analyzed Proteobacteria agrees with previous observations that vgrG genes are often linked to genes encoding toxins. Two recent reports further demonstrated the requirement of the cognate VgrG for specific toxin-mediated antibacterial activity. Considering the genetic linkage between vgrG1 and tde1-tdi1 or vgrG2 and tde2-tdi2 in A. tumefaciens, VgrG1 and VgrG2 may bind specifically to Tde1 and Tde2, respectively, either directly or indirectly, to facilitate their secretion and delivery into target cells. Interestingly, the domain modularity observable in the Tde superfamily further supports the use of distinct transport mechanisms for each Tde class, as has been generally suggested for the T6SS. For example, Tde1 contains only a recognizable C-terminal toxin_43 domain, whereas Tde2 contains an additional N-terminal DUF4150 domain that shares sequence similarity with PAAR motif-containing proteins. This PAAR superfamily of proteins was recently described to sharpen the VgrG spike and to act as an adaptor facilitating T6SS-mediated secretion of a broad range of toxins. Thus, the DUF4150 motif within the Tde2 toxin may be required to adapt or connect the protein to the tip of a VgrG spike to allow for delivery. The DUF4150 domain is also found in class 2 Tde toxins and may have a similar function in this subclass of proteins. Additional adaptor domains, including the known PAAR domain and other uncharacterized domains located in the N-terminal sequences of different Tde subclasses, may be candidates for this function. In contrast, independent adaptors could be involved, as would be the case for Tde1, which does not display any recognizable domain at its N terminus. Of note, the importance of the T6SS and its associated toxins varies substantially depending on which sets of bacteria are placed in competition and whether this occurs in vitro or in vivo. Our finding that A. tumefaciens was outcompeted by P. aeruginosa in vitro is consistent with previous observations of a significant competitive advantage of P. aeruginosa over A. tumefaciens in both planktonic and biofilm growth. The mechanisms underlying the domination of P. aeruginosa involve a faster growth rate, motility, and an unknown compound capable of dispersing and inhibiting A. tumefaciens biofilm. Interestingly, in addition to its obvious growth advantage over A. tumefaciens under laboratory growth conditions, P. aeruginosa further triggers a lethal counterattack against T6SS-active A. tumefaciens. This phenomenon is clearly reminiscent of the recently described T6SS-dueling behavior, with P. aeruginosa using a "tit-for-tat" strategy to counterattack threatening cells such as Vibrio cholerae or Acinetobacter baylyi. With regard to the A. tumefaciens-P. aeruginosa competition in vitro, the danger signal sensed by P. aeruginosa may be represented by the injected toxin and not the T6SS machinery itself. P. aeruginosa was recently found to induce a lethal T6SS counterattack in response to the T4SS mating system. In our study, the "T6SS counterattack" trigger was not restricted to Tde injection but was also effective with the injection of Tae, which alters the integrity of the bacterial cell envelope. Thus, the P. aeruginosa T6SS response may result from sensing a wide variety of cellular perturbations, including DNA damage or membrane/cell wall damage. The competition outcomes and the relationship between A. tumefaciens and P. aeruginosa appear to vary greatly when switching from an in vitro to an in vivo environmental context. Inside the host plant, A. tumefaciens exhibits the T6SS- and Tde-dependent competitive advantage over P. aeruginosa, which suggests that the plant environment is a crucial determinant for the selection of the fittest A. tumefaciens strains. It is also striking that this competitive advantage for A. tumefaciens during intraspecies interaction is only observed in planta and not during in vitro growth, even though both antibacterial activity and type VI secretion were readily detected in vitro. While the molecular mechanisms and biological significance underlying this observation await future investigation, we speculate that A. tumefaciens may be able to recognize Agrobacterium- or Rhizobiaceae-specific components that are absent in other, distantly related bacteria such as E. coli and P. aeruginosa and choose not to attack its own siblings in a free-living environment. Once A. tumefaciens successfully infects the host plant, it may adjust its antibacterial strategy to attack all other nonisogenic bacteria at both the intraspecies and interspecies levels, aiming to secure nutrients for its own replication in the apoplast. It is worth mentioning that the Agrobacterium T6SS may also be regulated by nutrients, as type VI secretion is active in neutral rich medium 523 or LB but not in minimal AB-MES medium. Thus, A. tumefaciens seems to regulate T6SS activity at multiple levels with complex mechanisms in response to different environmental cues. Therefore, beyond acidity, additional plant signals may be required to trigger the ability of A. tumefaciens to differentiate self from nonself in order to attack coexisting competitors in the same ecological niche. Recent findings of a role for the T6SS in the export of self-identity proteins that provide a competitive advantage and territoriality in the social bacterium Proteus mirabilis indeed support the importance of self-recognition in interbacterial interactions. The use of Tde as an antibacterial toxin to increase the fitness of A. tumefaciens during plant colonization lends support to its key role in a physiological and ecological context. This finding presents an unprecedented role of T6SS effector activity for bacterial competitive advantage at both the intraspecies and interspecies levels inside a plant host. The distribution of tandem tde-tdi genes in the genomes of plant-associated bacteria suggests the conservation of this mechanism among other phytobacteria. Similar benefits were observed in the human pathogen V. cholerae during colonization of the infant rabbit intestine. Whereas A. tumefaciens uses the Tde DNases as major weapons to attack both its own siblings and P. aeruginosa during in planta colonization, V. cholerae delivers VgrG3 to target the peptidoglycan of competing siblings for survival inside the animal host. In both cases, the cognate immunity is essential for this in vivo competitive advantage and is sufficient to protect the toxin-producing bacterium from killing. In conclusion, the in vivo fitness advantage conferred by the T6SS on both plant and animal pathogens offers a unique perspective in the evaluation of the T6SS in the host, particularly within a polymicrobial environment. Strains, plasmids, and primer sequences used in this study are listed in Tables S1 and S2. E. coli and P. aeruginosa strains were grown in LB, whereas 523 medium was routinely used for A. tumefaciens strains unless indicated. Growth conditions and mutant construction are as previously described. All sequences identified in this study were obtained from the NCBI database. Tde family proteins were identified by a BLASTP search with the amino acid sequences of the toxin_43 domains of Tde1 and Tde2 against the non-redundant protein database to identify Tde homologs with an E value < 10⁻⁴, and were extracted from the Pfam toxin_43 database. The Tde family was aligned using ClustalW on the EMBL-EBI website, and the secondary structure of the Tde1 toxin_43 domain was predicted using the PSIPRED server. Sequence logos were generated manually by examining the genome context of the neighboring genes. The presence of a signal peptide was predicted using SignalP. Plasmid DNA of pTrc200 was incubated with purified C-terminal His-tagged Tde1 or a Tde1 derivative in 15 μl of 10 mM Tris/HCl for 1 hr at 37°C in the presence or absence of 2 mM Mg2+. Plasmid DNA with sample buffer served as a control. The integrity of the DNA was visualized on a 1% agarose gel. Tde proteins were overexpressed and purified from E. coli by nickel chromatography, with details described in the Supplemental Experimental Procedures. Overnight cultures of the E. coli DH10B strain harboring the empty vectors or derivatives expressing Tde toxins were harvested, adjusted to an OD600 of 0.3 in medium containing 0.2% L-arabinose, and incubated for a further 2 hr to produce the Tde toxins. Equal cell mass was collected, and plasmid DNA was extracted in an equal volume for DNA gel analysis. The secretion assay from liquid culture was performed in LB or AB-MES for 4–6 hr at 25°C, as previously described. For detecting secretion on agar plates, A. tumefaciens cells were grown in liquid 523 for 16 hr at 28°C. The harvested cells were adjusted to OD600 1 with AB-MES, and 100 μl of cell suspension was spread and incubated on an AB-MES agar plate for 24 hr at 25°C. Cells were collected in 5 ml AB-MES and secreted protein was analyzed as described. Overnight cultures of the E. coli DH10B strain harboring vectors or their derivatives were adjusted to OD600 0.1. Expression of the tested immunity protein was induced with 1 mM IPTG for 1 hr before L-arabinose was added to induce expression of the toxin. For the growth inhibition assay with A. tumefaciens, overnight cultures of the A. tumefaciens C58 strain harboring empty vectors or their derivatives were adjusted to OD600 0.1. The tested immunity protein was constitutively expressed, and the toxin protein was induced with 1 mM IPTG. Growth was monitored by measuring OD600 at 1 hr intervals. The in planta competition assay was carried out by infiltration of bacterial cells into leaves of Nicotiana benthamiana, and the bacterial cell number was counted after 24 hr of incubation at room temperature. The interbacterial competition assay on agar plates was performed by coculture on LB or AB-MES agar at 25°C for 16 hr. The competition outcome was quantified by counting colony-forming units on selective LB agar. All assays were performed with at least three independent experiments or a minimum of three biological replicates from two independent experiments. Data represent mean ± SE of all biological replicates. Statistics were calculated by Student's t test, and the p value was denoted as ∗∗∗ = p < 0.0005, ∗∗ = p < 0.005, and ∗ = p < 0.05. Detailed methods and associated references are described in the Supplemental Experimental Procedures. Overnight cultures of E. coli DH10B strains harboring the pJN105 vector or derivatives expressing Tde toxins were harvested, fixed, and stained with the Apo-Direct Kit, and the intensity of fluorescence was determined using a MoFlo XDP cell sorter and Summit V5.2 software. Detailed methods and associated references are described in the Supplemental Experimental Procedures.
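The competition readout described above (counting surviving target-cell CFU and testing significance with Student's t test) can be illustrated with a short, hedged sketch. This is an assumption for illustration, not the authors' analysis scripts, and the CFU values are made-up placeholders.

```python
# Hedged sketch: quantify a competition outcome from colony-forming-unit (CFU)
# counts of surviving target cells and test significance with Student's t-test.
import numpy as np
from scipy import stats

# Surviving target-cell CFU per coinfiltration, three biological replicates each
cfu_vs_wt_attacker   = np.array([2.1e5, 1.8e5, 2.4e5])   # target mixed with wild-type C58
cfu_vs_tssL_attacker = np.array([1.0e6, 1.2e6, 0.9e6])   # target mixed with ΔtssL (no T6SS)

fold_change = cfu_vs_tssL_attacker.mean() / cfu_vs_wt_attacker.mean()
print(f"~{fold_change:.1f}-fold fewer survivors against the wild-type attacker")

# Two-sample Student's t-test on log-transformed CFU counts
t, p = stats.ttest_ind(np.log10(cfu_vs_wt_attacker), np.log10(cfu_vs_tssL_attacker))
stars = "***" if p < 0.0005 else "**" if p < 0.005 else "*" if p < 0.05 else "ns"
print(f"t = {t:.2f}, p = {p:.4f} ({stars})")
```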
The type VI secretion system (T6SS) is a widespread molecular weapon deployed by many Proteobacteria to target effectors/toxins into both eukaryotic and prokaryotic cells. We report that Agrobacterium tumefaciens, a soil bacterium that triggers tumorigenesis in plants, produces a family of type VI DNase effectors (Tde) that are distinct from previously known polymorphic toxins and nucleases. Tde exhibits an antibacterial DNase activity that relies on a conserved HxxD motif and can be counteracted by a cognate immunity protein, Tdi. In vitro, A. tumefaciens T6SS could kill Escherichia coli but triggered a lethal counterattack by Pseudomonas aeruginosa upon injection of the Tde toxins. However, in an in planta coinfection assay, A. tumefaciens used Tde effectors to attack both siblings cells and P. aeruginosa to ultimately gain a competitive advantage. Such acquired T6SS-dependent fitness in vivo and conservation of Tde-Tdi couples in bacteria highlights a widespread antibacterial weapon beneficial for niche colonization. © 2014 The Authors.
130
Bioremediation: Data on Pseudomonas aeruginosa effects on the bioremediation of crude oil polluted soil
The ever-increasing global energy demand means that the world remains highly dependent on petroleum products for meeting energy needs across many areas of livelihood, a condition that necessitates continuous extraction and production of petroleum from deep within the earth. The situations ensuing from this include crude petroleum oil spills, whether through uncontained excessive pressure at production installations/platforms (e.g. raw crude oil from well-heads, blowouts, etc.) or through transportation or improper handling (e.g. of treated crude oil in flow lines or storage tanks). The resulting oil spills, which may occur in marine or soil environments, are highly toxic and hazardous to the ecosystem; they can adversely affect the well-being of living organisms as well as air, water and soil processes, and they present a fire hazard. Onshore spills of crude oil affect healthy living in society, agricultural productivity, groundwater and other sources of potable water, and living biota in streams and rivers, among others. Avoiding or mitigating these adverse effects of crude oil spillage necessitates amending the soil via the procedure known as remediation. Among the known methods for remediating crude oil polluted soil, including physical separation, chemical degradation, photodegradation and bioremediation, bioremediation is attracting preference due to its comparative effectiveness, relatively low cost and eco-friendliness compared to the other techniques. Unlike bioremediation, the other methods that could be used for oil polluted soil remediation are recognized as potentially leaving daughter compounds, i.e. secondary residuals, after the parent/primary crude oil pollutant has been removed, and these can even exhibit higher toxicity than the parent crude oil pollutant. In contrast, bioremediation detoxifies contaminants in crude oil and effectively removes pollutants by destroying them instead of transferring them to another medium. Studies have employed plant species for bioremediation, in a process known as phytoremediation, but the use of microorganisms for biologically mediated remediation of crude oil polluted soil is still linked to the effectiveness of phytoremediation systems, because microorganisms are required in the rhizosphere of plants for efficient remediation of crude oil polluted soil via phytoremediation. This makes the use of microorganisms for crude oil polluted soil remediation of increasing interest to researchers and stakeholders involved in crude oil polluted soil amendment. Bacterial strains of microbes, including Pseudomonas aeruginosa, have been used in reported works for effective repair of crude oil polluted soil. However, there is a paucity of reported work employing Pseudomonas aeruginosa for the bioremediation of the Escravos Light crude oil blend obtainable in Nigeria, and no dataset of absorbance measurements exists in the literature on the effects of Pseudomonas aeruginosa on raw and treated Escravos Light crude oil polluted soil systems. This data article therefore presents an absorbance dataset, and its analyses, obtained from two different concentrations (simulating light and heavy onshore spills) of raw and treated Escravos Light crude oil polluting soil systems that were inoculated with Pseudomonas aeruginosa for bioremediation. Table 1 therefore presents absorbance data measurements obtained from raw and treated types of Escravos Light crude oil polluted soil that had been inoculated with the Pseudomonas aeruginosa strain of microorganism for bioremediation. Shown in the table are the absorbance data for a 5% w/w concentration of crude oil pollutant in soil, simulating a light oil spill, as well as the data for an 8% w/w soil-polluting crude oil concentration, simulating the spillage of heavy oil. These raw data measurements follow a duplicated experimental design, taken at five-day intervals for the first 20 days and at 10-day intervals thereafter, making up the 30-day period of absorbance measurements from the crude oil polluted soil. This later widening of the monitoring interval was intended to show whether or not the bioremediation effect of the Pseudomonas aeruginosa strain on the different types and concentrations of crude oil polluted soil systems would persist significantly. For these reasons, the table also includes the average of the periodic absorbance measurements taken over the intervals used to measure the Pseudomonas aeruginosa effects on the different types and concentrations of crude oil polluted soil systems. To aid further analyses of the data proceeding from Table 1, Fig. 1 presents plots of the descriptive statistics of the duplicated raw absorbance measurements by the Normal, Gumbel and Weibull probability distribution modeling functions. The use of these three distribution fitting models lends insight into whether the bioremediation data are best described by the random sampling distribution of the Normal and/or the Gumbel and/or the Weibull probability density modeling functions. For comparison of the probability fitting models, the Normal distribution is a general descriptive statistics model with the advantage of being the simplest probability distribution that can be applied to randomly distributed data; this simplicity follows from the fact that the mathematical relationships for estimating the important parameters of the distribution model are well known and easily computed. In contrast, the Gumbel and the Weibull distributions are extreme value distribution models useful for studying the existence of an asymptotic test response in the data that could indicate an underlying extreme value process in the Pseudomonas aeruginosa bioremediation effects on the different types and concentrations of crude oil polluted soil systems. Of these two models, the Gumbel distribution is the extreme value distribution of maxima, which indicates whether the maximum of the tested effect in a system is responsible for the reliability or the hazard encountered in the system, while the Weibull distribution is the extreme value distribution of minima, which indicates whether the minimum of the tested effect is responsible for the reliability or the hazard in the test system. However, all of these distribution modeling tools suffer the disadvantage that using them to describe data not distributed like the distribution could lead to grossly erroneous conclusions. Thus, Fig. 1 comprises the plots of the mean models of the absorbance data by these statistical distributions and the standard deviation models of the absorbance data by the same distributions. In the figure, RCOP refers to the raw crude oil polluted soil system and TCOP refers to the treated crude oil polluted soil system, while duplicate sampling is indicated by the tag "_Dup". It is also worth noting that the mean and standard deviation modeling in Fig. 1 employs maximum likelihood estimation for these measures of central tendency and dispersion under the Normal, the Gumbel and the Weibull distribution models. From similar considerations, Fig. 2 presents plots of these descriptive statistics applied to the averaged data evaluated from the duplicate periodic absorbance measurements; here also, both the mean models and the standard deviation models are presented in Fig. 2. In this second figure, the delineating tags now include "_5%" and "_8%", indicating the 5% w/w and the 8% w/w concentrations of crude oil pollutant in the soil sample systems, while "_ave" indicates the periodic average of the absorbance measurements. For the measured data in this article, loamy soil was collected from Covenant University Farm. This soil from the agricultural site was air dried before being polluted with two different pollution concentrations, i.e. 5% and 8% w/w, of raw and treated Escravos Light crude oil blend obtained from Chevron® Nigeria Limited, Delta State, Nigeria. This was followed by the inoculation of each crude oil polluted soil design with Pseudomonas aeruginosa, a bacterial strain of microorganism, which was collected from the Applied Biology and Biotechnology Unit of the Department of Biological Sciences, Covenant University, Ota, Ogun State, Nigeria. The Pseudomonas aeruginosa bacterial strain was used to inoculate each of the crude oil polluted soil systems in the study at a concentration of 0.05 v/v of the microbial strain. From each of the crude oil polluted soil systems detailed, a selected mass of sample was taken and dissolved in hexane by stirring with a magnetic stirrer. A portion of this dissolution was measured and made up with n-hexane for determination of absorbance at a wavelength of 420 nm using a Jenway 6405 UV/VIS spectrophotometer. These absorbance measurement experiments were executed in duplicate, starting from the 0th day, then at five-day intervals for the first 20 days and thereafter at 10-day intervals, making up the 30-day experimental design from which the data presented in Table 1 were obtained. The descriptive statistics of the absorbance data from the crude oil polluted soil systems inoculated with Pseudomonas aeruginosa, as presented in Fig. 1 and Fig. 2, employ the distribution fittings of the Normal, the Gumbel and the Weibull probability density models. These fittings of the absorbance data to each of the probability distribution functions are presented in Fig. 3, i.e. for the Normal, the Gumbel and the Weibull distributions, respectively. Assessing the compatibility of the absorbance data from the crude oil polluted soil systems with Pseudomonas aeruginosa inoculants with the fittings of each of the Normal, the Gumbel and the Weibull probability distributions requires the Kolmogorov–Smirnov goodness-of-fit test statistics at the α=0.05 significance level. The application of this Kolmogorov–Smirnov goodness-of-fit compatibility testing to the absorbance data in this article is presented graphically in Fig. 4, which also shows the linear plot of the α=0.05 level of significance. By this, plots of the Kolmogorov–Smirnov goodness-of-fit probability value that do not attain the α=0.05 line in Fig. 4 indicate data that are not distributed like the probability distribution being applied to describe the model, whereas plots of the Kolmogorov–Smirnov p-value that overshoot the α=0.05 line in Fig. 4 indicate data that are distributed like the probability distribution applied to the model. The duplicated design of absorbance measurements, as well as the different designs of crude oil pollutant systems in the soil samples, necessitates testing the significance of differences in the measured absorbance data. For these, the between-duplicate and the between-different crude oil/soil pollution system tests of significance, employing the Student's t-test statistics, were applied to the absorbance data using the homoscedastic and the heteroscedastic assumption models. Fig. 5 therefore shows plots of the Student's t-test statistics applied to the absorbance data from the crude oil polluted soil systems with Pseudomonas aeruginosa inoculants. The figure shows both the between-duplicate and the between-different crude oil/soil pollution system tests of significance. Also included in Fig. 5 are linear plots of α=0.05, for which a Student's t-test p-value not attaining the α=0.05 line indicates that the experimentally observed differences between the two datasets being compared are statistically significant. Otherwise, a Student's t-test p-value that overshoots the α=0.05 line indicates that the experimentally observed differences between the two datasets being compared are not statistically significant but are due to randomization ensuing from the experimental test measurements. It is worth noting that the tags of abbreviations employed in Fig. 5 can be detailed as follows: RCOP_5% compares the significance of differences between datasets from the duplicated sampling for the raw crude oil polluted soil system having a 5% w/w crude oil/soil pollution concentration; TCOP_5% compares the significance of differences between datasets from the duplicated sampling for the treated crude oil polluted soil system having a 5% w/w crude oil/soil pollution concentration; RCOP_8% compares the significance of differences between datasets from the duplicated sampling for the raw crude oil polluted soil system having an 8% w/w crude oil/soil pollution concentration; and TCOP_8% compares the significance of differences between datasets from the duplicated sampling for the treated crude oil polluted soil system having an 8% w/w crude oil/soil pollution concentration. Also, the tags of abbreviation used in Fig. 5 are as follows: R-T_COP_5% compares the significance of differences between the dataset from the raw and the dataset from the treated crude oil polluted soil systems having a 5% w/w crude oil/soil pollution concentration; R-T_COP_8% compares the significance of differences between the dataset from the raw and the dataset from the treated crude oil polluted soil systems having an 8% w/w crude oil/soil pollution concentration; RCOP_5%_8% compares the significance of differences between the dataset from the soil systems polluted with 5% w/w and the dataset from the soil systems polluted with 8% w/w raw crude oil pollutant; and TCOP_5–8% compares the significance of differences between the dataset from the soil systems polluted with 5% w/w and the dataset from the soil systems polluted with 8% w/w treated crude oil pollutant. Proceeding from the probability distributions employed in this data article is the measurement of the probability of obtaining the analyzed mean of the raw absorbance data measurements (Fig. 1) and of the periodically averaged absorbance (Fig. 2) from the crude oil polluted soil systems. This particular measure of probability indicates the reliability of either the raw or the periodically averaged data on the remediation effect of Pseudomonas aeruginosa in the different concentrations/types of crude oil polluted soil systems. Although it is worth noting that the reliability is monotonically equal to 0.5 for the Normal model and to 0.5704 for the Gumbel model, irrespective of the mean value, the value of this parameter varies with the mean value in the Weibull model. This variability of the reliability from the Weibull probability distribution modeling therefore aids comparisons with the reliability obtained from the other two distribution function models, the Normal and the Gumbel. Thus, Fig. 6 presents the plots of the reliability from the Weibull probability distribution modeling of the absorbance data, for the raw and for the periodically averaged measurements. In the figure, the monotonic reliability values of 0.5 from the Normal and 0.5704 from the Gumbel distributions are also shown as linear plots. These reliability values are indicative of the cumulative distribution function applications of the Normal, the Gumbel and the Weibull to the mean models of these distribution fitting functions. Their significance is that the estimated values, as indicated in Fig. 6, detail values that can be related to the degree of the bioremediation effect of Pseudomonas aeruginosa on the different crude oil polluted test systems. The implication of using such an estimated model of reliability follows from the fact that, for the types of experimental measurements in this study, it is desirable to obtain at least the mean value of the bioremediation effect estimated for each test system, if not more, rather than the desirable event in failure-causing data, which is that of obtaining a value lower than the estimated failure-inducing mean value.
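The distribution-fitting, goodness-of-fit, significance-testing and reliability steps described above can be illustrated with a short, hedged sketch. This is an assumption for illustration only, not the authors' analysis code: it uses made-up absorbance values (not the Table 1 data) and the SciPy distributions norm, gumbel_r and weibull_min as stand-ins for the Normal, Gumbel and Weibull models described in the text.

```python
# Hedged sketch: fit Normal/Gumbel/Weibull to absorbance readings, check
# compatibility with the Kolmogorov-Smirnov test at alpha = 0.05, compare two
# datasets with Student's (homoscedastic) and Welch's (heteroscedastic)
# t-tests, and estimate the Weibull reliability at the mean absorbance.
import numpy as np
from scipy import stats

rcop_5 = np.array([0.91, 0.78, 0.64, 0.55, 0.49, 0.45])   # placeholder "RCOP_5%" series
tcop_5 = np.array([0.83, 0.70, 0.57, 0.47, 0.41, 0.38])   # placeholder "TCOP_5%" series

models = {
    "Normal":  (stats.norm,        stats.norm.fit(rcop_5)),
    "Gumbel":  (stats.gumbel_r,    stats.gumbel_r.fit(rcop_5)),
    "Weibull": (stats.weibull_min, stats.weibull_min.fit(rcop_5, floc=0)),
}
for name, (dist, params) in models.items():
    ks = stats.kstest(rcop_5, dist.cdf, args=params)
    verdict = "compatible" if ks.pvalue > 0.05 else "not compatible"
    print(f"{name}: KS p = {ks.pvalue:.3f} -> {verdict} at alpha = 0.05")

# Homoscedastic (Student's) and heteroscedastic (Welch's) tests of significance
t_hom = stats.ttest_ind(rcop_5, tcop_5, equal_var=True)
t_het = stats.ttest_ind(rcop_5, tcop_5, equal_var=False)
print("Student's p:", round(t_hom.pvalue, 3), "| Welch's p:", round(t_het.pvalue, 3))

# Weibull reliability of obtaining at least the mean absorbance: P(X >= mean)
c, loc, scale = models["Weibull"][1]
rel = stats.weibull_min.sf(rcop_5.mean(), c, loc, scale)
print("Weibull reliability at the mean:", round(rel, 3))
```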
This data article details Pseudomonas aeruginosa effects on the bioremediation of soil that had been polluted by different concentrations, 5% w/w and 8% w/w, of raw (for simulating oil spills from well-heads) and treated (for simulating oil spills from flow lines/storage tanks) crude oil. UV/VIS spectrophotometry instrumentation was used for obtaining absorbance measurements from the Nigerian Escravos Light blend (sourced from Chevron® Nigeria) of crude oil polluting soil samples, which, thus, also simulates light and heavy onshore oil spillage scenarios, in a 30-day measurement design. Data on bioremediation effects of Pseudomonas aeruginosa added to the crude oil polluted soil samples, and which were monitored at intervals via the absorbance measurement techniques, are presented in tables with ensuing analyses for describing and validating the data presented in graphs. Information from the presented data in this article is useful to researchers, the oil industries, oil prospecting communities, governments and stakeholders involved in finding solution approach to the challenges of onshore oil spills. This information can also be used for furthering research on bioremediation kinetics such as biostimulant analyses, polluting hydrocarbon content/degradation detailing, by Pseudomonas aeruginosa strain of microorganism, on petroleum pollutant removal from soil that had been polluted by crude oil spillage.
131
Heteroatom doped high porosity carbon nanomaterials as electrodes for energy storage in electrochemical capacitors: A review
The energy landscape is expected to go through a significant transformation, attributed to the crisis instigated by the imbalance in the world's energy supply and demand.Environmental concerns and the expanding gap between the supply and demand of energy call for the implementation of renewable energy technologies such as solar, wind and tidal towards diversification of energy generation, in order to maintain an uninterrupted supply of energy at relatively lower cost combined with numerous environmental benefits.Due to the intermittent nature of these renewable sources of energy, appropriate electrical energy storage systems are required for ensuring security and continuity in the supply of energy from a more distributed and intermittent supply base to the consumer.Among different electrical energy storage systems, electrochemical batteries and electrochemical capacitors play a key role in this respect.ECs are devices that can fill the gaps between electrochemical batteries and electrostatic capacitors in terms of energy and power densities as shown in Fig. 1.Electrochemical capacitors, also known as supercapacitors or ultra-capacitors, are high power electrical energy storage devices retaining inimitable properties such as exceptionally high power densities, rapid charge discharge, excellent cycle-ability and high charge retention.Depending on their charge storage mechanism, ECs can be classified into two categories; electric double layer capacitors and pseudo-capacitors.In EDLCs, capacitance arises from a purely physical phenomenon involving separation of charge at the polarized electrode/electrolyte interface, whereas in PCs electrical energy is stored through a fast and fully reversible faradic reaction coupled with electronic transfer at the electrode/electrolyte interface.A schematic diagram of the charge storage mechanism of both the electric double layer capacitor and the pseudo-capacitor is shown in Fig. 2, followed by a detailed discussion on the charge storage mechanism in the electric double layer capacitors and pseudocapacitors.EDLCs maintain a specific capacitance six to nine orders of magnitude higher when compared with conventional capacitors, since the charge separation ‘d’ is much smaller during the formation of an electric double layer, and the specific surface area ‘A’ of an active material is much higher when compared with electrostatic capacitors.Charge storage in EDLCs is purely a physical phenomenon without any electronic transfer, which makes EDLCs an ideal candidate for high power applications since they can be fully charged or discharged in a very short span of time and retain an exceptionally long cycle life.Energy storage in pseudocapacitors is realized through fast and fully reversible Faradic charge transfer, which is an electrochemical phenomenon where an electronic transfer occurs at the electrode/electrolyte interface as shown in Fig.
4.Ruthenium oxide, manganese oxide, iron oxide and nickel oxide are the most commonly used metal oxides whereas polyacetylene, polypyrrole, poly and polyaniline are frequently used conducting polymers as electrode materials in pseudocapacitors.PCs have much higher energy densities as compared to EDLCs since the specific capacitances of pseudocapacitive devices are also much higher, which can have a positive impact on the energy density of the device according to Equation.However, pseudocapacitive devices have lower cycle life and cyclic efficiency in comparison to EDLCs since charge is stored within the bulk of the active material, where long term cycle-ability can have an adverse effect on the integrity of the active material.An alternative approach to enhance the energy density of an electrochemical capacitor cell is by increasing the specific capacitance of ECs.The improved specific capacitance is attainable by introducing pseudo-capacitive entities such as metal oxides/conducting polymers or heteroatoms on the surface or within the structure of a carbon based active material, where the total capacitance is the sum of both EDLC and PC.EDLC is exhibited by the carbon based active material and PC is due to the dopant such as metal oxides/conducting polymers or heteroatoms.However, the use of metal oxide based dopants in practical applications is limited due to their higher cost, lower conductivity and limited cycle stability.Heteroatom doped carbons have displayed an improved capacitive performance due to the pseudo-capacitive contribution through a fast and fully reversible Faradic reaction without forfeiting the excellent power density and long cycle life.Numerous research studies have been performed to evaluate the contribution made by nitrogen, boron, phosphorus and sulphur based functional groups in the field of energy storage, especially when incorporated in carbon based electrode active materials for supercapacitor applications.Nitrogen is by far the most extensively investigated heteroatom whereas other heteroatoms have been considered for investigation more recently.The specific capacitance of an electrochemical capacitor can be improved substantially by means of nitrogen doping; in one such study, Han et al. prepared a pueraria-based carbon followed by nitrogen doping, achieved by a simple thermal treatment of pueraria powder and melamine.It was observed that the nitrogen doped carbon exhibited a remarkably superior capacitance of 250 Fg-1 as compared to 44 Fg-1 for the un-doped carbon at a current density of 0.5 Ag-1 using 6M KOH as an electrolyte, with its capacitance retention over 92%.Another study by Mao et al.
showed that N-doping resulted in improved electrochemical performance, where the N-doped carbon displayed an excellent areal capacitance, with the attained specific capacitance more than twice that of an un-doped carbon (330 mF cm−2) after nitrogen doping, when used as an electrode in the supercapacitor cell, with an excellent long term cyclic stability of more than 96% after 10000 cycles.Inferior energy densities of supercapacitors limit their practical applications, and nitrogen doping can be adopted as a favourable technique to improve their energy densities for their wider adoption in practical use.An improved energy density of 6.7 Whkg−1 as compared to 5.9 Whkg−1 was attained after the introduction of nitrogen functionalities, which provides clear evidence that N-doping is an efficient way of improving the energy densities of supercapacitor cells, and the enhancement in energy densities will lead to their commercial applications.An exceptionally high energy density of 55 Wh kg−1 at a power density of 1800 W kg−1 with an excellent cycling efficiency of over 96% was achieved when Dai and co-workers used nitrogen doped porous graphene as an electrode and a BMIMBF4 electrolyte to benefit from the higher operating potential of around 3.5 V.Nitrogen doping also improves the wetting behaviour of the electrolyte, which improves the electrode/electrolyte contact at the interface along with a reduction in solution resistance.A study by Candelaria et al. showed that the wettability improved after nitrogen doping, with the drop in contact angle from 102.3° to zero as shown in Fig. 5.The nitrogen doped carbon attained a capacitive value twice that of an un-doped carbon.Further examples of nitrogen doped carbons used as active materials in supercapacitors, with a comprehensive evaluation of their physical and electrochemical properties presented in the literature, are shown in Table 1.Table 1 shows various physical and electrochemical properties of different types of nitrogen doped carbon based materials when used as electroactive materials.It can be established from the above discussion that nitrogen doping is one of the most favourable routes to synthesise functional electrode-active materials for supercapacitor applications.N-doping is advantageous for improving both physical and electrochemical properties such as wettability, capacitive performance and energy/power densities, which can have a positive impact on the overall performance of the system.Phosphorus displays chemical properties analogous to nitrogen since it has the same number of valence electrons; however, its higher electron-donating capability and larger atomic radius make it a preferred choice for adoption as a dopant in carbon materials.A commonly used method to produce phosphorus doped carbons is through thermal treatment of carbon with phosphorus containing reagents at both the carbonization and activation stages, which results in introducing phosphorus onto the carbon surface, whereas phosphorus species can be doped inside the carbon matrix when a phosphorus containing precursor is carbonized at elevated temperatures.It is more convenient to prepare P-doped carbons through the first procedure; however, by adopting the latter process P-doped carbon materials can be synthesised while precisely controlling the P content.Adoption of phosphorus-doped carbons for their application in the broad field of energy storage, such as electrochemistry generally and as an electrode material in electrochemical
capacitors particularly is a highly promising concept.However, the use of phosphorus doped carbon as an electrode in electrochemical capacitors has been limited, resulting in a limited understanding of its effect on physico-chemical properties, ultimately restricting its potential to be used as an active material and hence the overall performance of a supercapacitor cell.Phosphorus doping results in improved charge storage due to the additional pseudo-capacitive component alongside the electric double layer, since phosphorus also possesses electron-donor characteristics, and also an enhanced transport capability due to its exceptionally high electrical conductivity when used as an active material.Yi et al. synthesised cellulose-derived un-doped carbon and phosphorus doped carbon, the latter showing an excellent capacitive performance along with improved conductivity.A specific capacitance of 133 Fg-1 at a high current density of 10 Ag-1 and an excellent capacitance retention of nearly 98% after 10000 cycles were achieved.A marked drop from 128.1 to 0.6 Ω in charge transfer resistance alongside a drop in contact angle from 128.3° to 19.2° after phosphorus doping was witnessed as shown in Fig. 6, where Fig. 6a) shows the drop in contact angle with an improved wetting behaviour and Fig. 6b) represents the Nyquist plots characterizing the resistive behaviour of the various carbon samples.In another study, phosphorus doped graphene was synthesised by the activation of graphene with sulphuric acid, which resulted in P-doping of 1.30%.It was established that P-doping not only improved the capacitive performance but also widened the operating voltage window of the cell, which resulted in an enhanced energy density as given by Equation.An exceptionally high energy density of 1.64 Whkg−1 at a high power density of 831 Wkg-1 was realised due to the higher operating potential of 1.7 V rather than 1.2 V for an aqueous electrolyte.It has also been reported that oxygen surface functionalities such as chemisorbed oxygen and quinones of an active material are electrochemically active and can contribute towards the overall performance of the cell.However, these surface functional groups are unstable in nature and can cause deterioration in capacitive performance.Phosphorus can also be used as an oxidation protector when introduced within the carbon structure, preventing the combustion of oxygen species, which contributes towards the enhancement in the cell performance accompanied by the obstruction of the formation of electrophilic oxygen species.A recent study by Ma et al.
has shown that phosphorus doping not only enhances the capacitive performance due to the additional capacitance arising from the reversible redox reaction, but also prevents the formation of unstable quinone and carboxylic groups, resulting in a higher operating voltage of 3.0 V when used in conjunction with pure carbon, leading to the delivery of an exceptionally high energy density of 38.65 Wh kg−1 at a power density of 1500 W kg−1 when used with the organic electrolyte.A wide range of phosphorus doped carbon based electrode materials with their physical and electrochemical properties is given in Table 2.Phosphorus-doping can assist in achieving higher capacitive performance alongside other supplementary benefits such as improved conductivity and reduced charge transfer resistance.However, considerable research is still required in order to understand the underlying reasons for these improvements and to adopt phosphorus doped active materials commercially as electrodes for electrochemical capacitors.When compared with nitrogen, oxygen or boron, sulphur doping of carbon materials is still very rare, which signifies an excellent research opportunity in the field of carbon materials for energy storage applications in general and electrochemical capacitors in particular.Very little has been known until very recently about the effect of sulphur functional groups on the performance of these materials when adopted in applications related to the field of energy storage.The electronic reactivity of an active material can be improved by incorporating sulphur functional groups within the carbon scaffold or on the surface, since sulphur modifies the charge distribution within the carbon structure or on the surface respectively due to its electron donor properties, which results in an increased electrode polarization and specific capacitance via a fast and fully reversible faradaic process.Sulphur functionalized active carbon nanomaterials have been prepared using various methods, which include the direct thermal treatment of sulphur containing compounds or the co-carbonization of carbon with elemental sulphur.Improved conductive performance and electrode/electrolyte wettability can be achieved by doping the carbon based electrode material with both nitrogen and sulphur functional groups; however, recent work by X. Ma and co-workers has shown that sulphur functionalities result in superior conductive performance as compared to nitrogen doping.Since sulphur doping improves electronic conductivity, a higher specific capacitance is achieved due to the pseudo-capacitive contribution of the sulphur functionalities along with the electric double layer capacitance coming from the porous parameters of the active material.Sulphur functionalizing improves the energy density of the cell without any drop in its excellent power density due to its superior conductivity.A highly porous sulphur doped carbon with a specific surface area of 1592 m2g-1 and a pore structure ranging from micro to macro was synthesised by carbonizing sodium lignosulfonate.A sample with a high sulphur weight percentage of up to 5.2 wt% was prepared, which exhibited the highest specific capacitance of 320 Fg-1 with a high energy density of up to 8.2 Wh kg−1 at a power density of 50 W kg−1.In another study, a capacitive performance improvement from 145 Fg-1 to 160 Fg-1 was attained at a scan rate of 10 mVs−1 for un-doped and sulphur doped graphene respectively.A high energy density of 160 Whkg−1 at a power density of 5161 Wkg-1 was reached using 6M KOH electrolyte for the doped
carbon.Improved wetting behaviour and capacitive performance were realized when sulphur-decorated nano-mesh graphene was used as an electro-active material.Sulphur decorated nano-mesh graphene was synthesised by thermal treatment of elemental sulphur with nano-mesh at 155 °C.A specific capacitance of 257 Fg-1 was attained, which was 23.5% higher than that of un-doped graphene, for a doping level of 5 wt% sulphur, alongside a drop in contact angle from 88.2° to 69.8° after doping as shown in Fig. 7.Some further examples of sulphur doped active materials are provided in Table 3.Sulphur doping can be considered an efficient way to improve the active material performance, including enhanced specific capacitance, conductivity and wettability, whereas a drop in the charge transfer resistance and solution resistance of the active material can also be achieved.By improving these performance parameters, the energy density can be improved without sacrificing the superior power densities, which is the major hurdle towards the commercialisation of electrochemical capacitor technology.However, still very little research work has been performed to study the effect of sulphur doping and the underlying reasons for these improvements.The electronic structure of a carbon based active material can be modified by introducing boron into the carbon framework.It is easier to dope carbon based nanomaterials either with nitrogen or boron since nitrogen and boron possess electronic configurations and sizes analogous to the carbon atom.Charge transfer between neighbouring carbon atoms can be facilitated by introducing boron into the carbon lattice since it has three valence electrons and acts as an electron acceptor, which results in an uneven distribution of charges.This charge transfer results in an improved electrochemical performance due to the pseudo-capacitive contribution originating from this electronic transfer.Boron functionalizing can be accomplished using a diverse range of synthesis techniques such as laser ablation, the arc discharge method, by means of hydrothermal reaction, by substitutional reaction of boron oxide or by adopting the chemical vapour deposition technique.Hydrothermal reaction is the most commonly used technique to produce boron doped active materials, and an improved specific capacitance of 173 Fg-1 was achieved when boron doped graphene was synthesised through a thermal reaction.An atomic percentage of 4.7% of boron was found to be the optimum level of boron doping when introduced into the bulk of graphene, with the achieved capacitance nearly 80% higher than that of an un-doped active material.The electrochemical capacitor cell delivered a superior energy density of 3.86 Wh kg−1 at a power density of 125 W kg−1, and managed to retain an energy density of 2.92 W h kg−1 at a much higher power density of 5006 kW kg−1 with an excellent cycling stability of nearly 97% after 5000 charge/discharge cycles as shown in Fig.
9 .Among other synthesis techniques, the template or nanocasting method is also considered a useful procedure which assists in controlling the porous structure in a precise manner, resulting in a positive effect on the performance of the electrochemical cell.Boron doping not only improves capacitive performance but also enhances electrode/electrolyte wettability, resulting in a reduction in solution resistance.A study by Gao and co-workers, where boron doped controlled porosity meso-porous carbon was prepared using a hard template approach, showed that a specific capacitance of 268 Fg-1 was attained after boron doping, which is considerably higher than the 221 Fg-1 for an un-doped carbon at 5 mVs-1.An exceptionally low solution resistance RS of 1.05 Ω was also obtained due to the improved wettability after the incorporation of boron functional groups.Improving the surface chemistry of an electrode active material after boron doping can have other benefits such as superior conductivity.Boron doped graphene oxide was synthesised through a simple thermal annealing of GO/B2O3 as shown in Fig. 8.An exceptionally high specific capacitance of 448 Fg-1 was reached after boron doping without using any conductivity enhancer such as carbon black, since boron doping resulted in the improved conductivity of the active material.More examples of boron doped carbons used as active materials in supercapacitors are presented in Table 4.We have discussed various functional materials including nitrogen, sulphur, phosphorus and boron which have been widely used by researchers to improve the performance of electrochemical capacitors.However, there is still an enormous scope to enhance the capacitive ability of these electrochemical devices further, which is achievable through co-doping of these carbon based electrodes.Co-doping of an active material using different combinations such as nitrogen/boron, nitrogen/sulphur or, in some cases, introducing more than two functional groups on the surface or inside the carbon matrix has been adopted, and its impact on the physical and electrochemical properties will be discussed in detail in the following section.Efforts have been made recently to understand the impact of co-doping on the performance of energy storage materials.The overall performance of energy storage devices can be improved further due to the synergistic effect of co-doping.Introduction of more than a single heteroatom can enhance the capacitive performance of the carbon when used as an electrode material by tailoring its properties, such as by improving the wetting behaviour toward the electrolyte, by introducing pseudo-capacitive species and by decreasing its charge transfer resistance.Heteroatoms such as nitrogen, boron, phosphorus and sulphur are incorporated in various combinations to tune carbon materials in a desired manner for superior performance of energy storage devices when used as electrodes.A study by Wang et al.
showed that the capacitive performance of nitrogen and sulphur co-doped carbon samples outperformed the capacitive performance of carbons using either nitrogen or sulphur as the dopant, due to the synergetic pseudo-capacitive contribution made by the nitrogen and sulphur heteroatoms.Specific capacitance values of 371 Fg-1, 282 Fg-1 and 566 Fg-1 were achieved for nitrogen, sulphur and nitrogen/sulphur co-doped samples respectively when used in supercapacitor cells with 6M KOH as an electrolyte.Maximum specific capacitances of 240 Fg-1 and 149 Fg-1 were achieved for aqueous and ionic liquid electrolytes respectively at a high current density of 10 Ag-1 using nitrogen and sulphur co-doped hollow cellular carbon nano-capsules, which are among the highest capacitive values reported in the literature for this type of electrode material.Nitrogen and sulphur co-doped graphene aerogel offered a high energy density of 101 Wh kg−1 when used as an electrode, which is one of the highest values ever achieved for this type of material.The electrode material also offered a large specific capacitance of 203 F g−1 at a current density of 1 A g−1 when used alongside an ionic liquid as the electrolyte.Similarly, a recent study by Chen et al. showed that nitrogen and phosphorus co-doping results in a very high specific capacitance of 337 F g−1 at 0.5 A g−1, which can deliver energy densities of 23.1 W h kg−1 to 12.4 W h kg−1 at power densities of 720.4 W kg−1 to 13950 W kg−1, respectively.Boron and nitrogen are considered an excellent combination of heteroatoms which is used by researchers to elevate the performance of an electrode active material through the synergistic effects of more than a single dopant; nitrogen and boron co-doped materials have demonstrated an excellent electrochemical performance recently.Very recently, researchers have been trying to evaluate the impact of ternary doping, where more than two functional groups are introduced and the overall electrochemical performance is the sum of the electric double layer capacitance coming from the porous parameters of the active materials and the pseudo-capacitance of the heteroatoms.A very recent study by Zhao and co-workers has shown that excellent electrochemical performance can be attained when more than two functional groups are introduced in a highly porous carbon.A specific capacitance of 576 Fg-1, together with an extraordinary energy density of 107 Wh·kg−1 at a power density of 900 W·kg−1, was achieved when the active material was co-doped with oxygen, nitrogen and sulphur functional groups.The performance characteristics of various carbon based active materials have been summarised in Table 5.Nitrogen is the most explored functional material with promising results; however, other functional groups such as sulphur, phosphorus and boron have not yet been investigated in great detail.Recent attention has been focused towards co-doping, with encouraging outcomes as shown in Table 5.Nitrogen and sulphur are considered a natural combination for maximum cell output, whereas enormous research is still required to perfectly tune the combinations of various dopants to maximise the material productivity.There is still a vast scope of research investigation to analyse the effect of functional groups beyond nitrogen in various combinations while using them alongside non-aqueous electrolytes in order to achieve battery level energy densities.Even though nitrogen doped carbon materials have been investigated extensively for their application as electrodes in
electrochemical capacitors, it is evident from this review that there is another class of functional materials, which includes sulphur, phosphorus and boron beyond nitrogen, possessing physico-chemical properties suitable for superior cell outputs.By adopting these emerging functional materials as electrodes, the performance of an electrochemical cell can be improved substantially.Nitrogen doping results in an improved electrochemical performance while retaining the high power density of the cell, since the introduction of nitrogen on the surface of the electro-active material results in an improved wetting behaviour which helps to maintain the low equivalent series resistance of the cell.Doping carbon based electrode materials with phosphorus results in physico-chemical properties comparable to those achieved with nitrogen doping, and additional benefits of using phosphorus doped active materials include an increase in the operating potential of the supercapacitor cell, which can have a positive effect on its energy density.Sulphur doping, meanwhile, can be beneficial in improving the electronic reactivity of an active material, resulting in a higher pseudo-capacitive contribution when compared with the performance of an active material doped with other heteroatoms.Individual functional materials possess excellent properties which can have a positive impact on both the physical properties and the electrochemical performance of the supercapacitor cell when introduced into the matrix or on the surface of the active material independently.However, recent attention has been diverted towards using more than one dopant, where the synergistic effects of both dopants yield even superior performance.Although nitrogen has been explored extensively and has revealed encouraging results, an immense research drive is still needed to explore other functional materials since this field is still very young with very little deliberation.Already these functional materials have shown immense potential; however, it will be extremely fascinating for researchers in the field of energy storage to follow further improvements in advanced functionalized carbon materials, and to witness how such materials will start to transform the field of materials for energy applications in general and for their suitability in supercapacitors in particular.
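The capacitance, energy-density and power-density figures quoted throughout this review are tied together by the standard supercapacitor relations referred to above as "Equation". As a quick, hedged reference, the sketch below applies the textbook relations C = I·Δt/(m·ΔV) for a galvanostatic discharge and E = ½CV²; it is not taken from any of the cited studies, and every numerical input is an illustrative placeholder.

```python
# Minimal sketch of the textbook relations linking the capacitance and
# energy-density figures quoted in this review; every numerical input is an
# illustrative placeholder, not data from the cited studies.

def capacitance_from_discharge(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Specific capacitance from a galvanostatic discharge, C = I*dt/(m*dV), in F/g."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

def energy_density_wh_kg(cell_capacitance_f_per_g, voltage_v):
    """E = 0.5*C*V**2 with C per gram of active material, converted from J/g to Wh/kg."""
    joules_per_gram = 0.5 * cell_capacitance_f_per_g * voltage_v ** 2
    return joules_per_gram * 1000.0 / 3600.0

# A hypothetical 1 A discharge lasting 30 s over a 1.0 V window for 0.2 g of
# active material corresponds to 150 F/g.
print(capacitance_from_discharge(1.0, 30.0, 0.2, 1.0))  # 150.0

# Widening the voltage window at fixed capacitance, as reported for the P-doped
# graphene example (1.2 V -> 1.7 V), roughly doubles the energy: (1.7/1.2)**2 ~ 2.
for v in (1.2, 1.7):
    print(v, round(energy_density_wh_kg(30.0, v), 1))   # ~6.0 and ~12.0 Wh/kg
```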
At present it is indispensable to develop and implement new, state-of-the-art carbon nanomaterials as electrodes in electrochemical capacitors, since conventional activated carbon based supercapacitor cells cannot fulfil the growing demand for high energy and power densities of electronic devices of the present era, a consequence of the rapid developments in this field. Functionalized carbon nanomaterials represent a class of materials with huge potential for use in energy related applications in general and as electrode active materials for electrochemical capacitors in particular. Nitrogen doping of carbons has shown promising results in the field of energy storage in electrochemical capacitors, and has lately drawn the attention of researchers to evaluating the performance of materials functionalised with other heteroatoms such as sulphur, phosphorus and boron. Literature is widely available on nitrogen doped materials research for energy storage applications; however, there has been a limited number of review works on other functional materials beyond nitrogen. This review article thus aims to provide important insights and an up-to-date analysis of the most recent developments, the directions of future research, and the techniques used for the synthesis of these functional materials. A critical review of the electrochemical performance, including specific capacitance and energy/power densities, is made for cases where these singly doped or co-doped active materials are used as electrodes in electrochemical capacitors.
132
Correlates of attempting to quit smoking among adults in Bangladesh
The tobacco epidemic is bigger than most other public health disasters the world has ever confronted.Nearly 6 million people are killed annually due to tobacco use.Unless proper steps are taken, by 2030 tobacco will kill >8 million people per year globally.Although the smoking rate is decreasing in most developed countries, it is increasing in developing countries including Bangladesh.Because of the rapid rise of smoking in developing countries, by 2030, 7 million deaths will occur annually in these countries.Countries in Asia, especially the South East Asia region, are particularly susceptible to the smoking epidemic.Approximately 400 million tobacco users live in this region, which results in 1.2 million deaths annually.Therefore, increasing smoking cessation can have a substantial effect in reducing tobacco-attributable deaths.Bangladesh is one of the ten heaviest smoking countries in the world.Bangladesh has a high smoking prevalence, with 23.0% of adults smoking, which approximates to 21.9 million adults currently smoking tobacco.The general smoking prevalence increased from 20.9% in 2004–05 to 22.0% in 2010.Moreover, Bangladesh is one of the 15 countries in the world that have a greater burden of tobacco-associated illness.In 2004, the World Health Organization showed that tobacco use was responsible for nearly 57,000 deaths and 1.2 million tobacco-attributable illnesses annually in Bangladesh.Another study conducted in Bangladesh using 2010 data observed that smoking was responsible for 42,000 deaths of men.This study also showed that each smoker wastes on average 7 years of life due to smoking.Because of the high rate of tobacco-induced deaths, the health and economic burden is increasing rapidly.To tackle this epidemic, there is a crying need to reduce the use of tobacco, which will require preventing initiation of tobacco use and encouraging smoking cessation among smokers.Several previous studies determined the correlates of attempts to quit smoking and of smoking habit in Bangladesh.Flora, Kabir, and Moni identified the social correlates of intention to quit and of quitting attempts of smoking in Bangladesh by gender and place of residence.This study found that intention to quit smoking was influenced by education, age at starting smoking, type of smoker and number of smoker friends, and that attempts to quit smoking were associated with type of smoker and number of smoker friends.Driezen, Abdullah, Quah, Nargis, and Fong identified the determinants of intention to quit smoking among adults in Bangladesh.According to this study, intention to quit smoking was associated with area of residence, number of cigarettes/bidis smoked per day, attempting to quit in the past year, visiting a doctor in the last year, having children at home, perceiving health benefit from quitting, worrying about the health consequences of smoking, knowledge of second-hand smoke, enjoying smoking, and workplace smoking policy.Flora, Mascie-Taylor, Rahman, and Akter examined the effect of parental smoking on adult Bangladeshis' smoking habit.This study showed that non-smoker parents had a higher chance of having non-smoker offspring.However, the studies that examined the factors associated with quit attempts have been limited to specific populations such as the young/adolescents, health center and/or continually-ill patients, specific ethnic backgrounds, homeless populations, and prisoners.There are few studies that identified the correlates of making a quit attempt of smoking in general populations.For example, in a recent study in Bangladesh, making a quit attempt
was associated with residential areas outside Dhaka, being aged 40 or older, having a monthly income of above BDT 10,000 versus below BDT 5000, and intention to quit sometime in the future.In another study in Poland, smokers were more likely to attempt to stop if they were aged 60 years or older, had a high educational qualification, or were aware of the harmful effect of smoking.Moreover, in a study in a South African population, female gender, older age, having tertiary education, living in smoke-free homes, smoking >20 cigarettes per day, or having alcohol dependence in the past were significantly associated with making a quit attempt.Quitting smoking is a continuous process that may involve many failed quit attempts before ultimately succeeding.Therefore, quit attempts are very essential in population-based smoking cessation.In the U.S.A., about 37% of all smokers have attempted to quit one to two times, 19% have made three to five attempts and 8% have tried to quit six or more times in their lifetime.Seventy percent of former smokers in the U.S.A. reported that they made one to two attempts before stopping smoking.With a view to raising the odds of quitting smoking, it is critical to promote an attempt at cessation in smokers who might otherwise not try.Therefore, our objective was to identify the factors that are associated with making a quit attempt of smoking using a large representative sample from a cross-sectional national survey of Bangladesh.We used available nationally representative data from the 2009 Global Adult Tobacco Survey, Bangladesh.GATS is a standardized, cross-sectional, and nationally representative household survey of adults.In Bangladesh, the National Institute of Preventive and Social Medicine conducted GATS in 2009 in collaboration with the National Institute of Population Research and Training, and the Bangladesh Bureau of Statistics.Moreover, the Centers for Disease Control and Prevention, USA, and the World Health Organization provided technical support.The sample was drawn using a three-stage stratified cluster sampling design.In the first stage, 400 primary sampling units were selected using probability proportional to size sampling.In the second stage, a secondary sampling unit was selected from each PSU using simple random sampling.At the third stage, an average of 28 households was selected from each SSU.With this design, 11,200 households were selected.Among the selected households, 10,050 persons were found to be eligible for the single interview.Out of these 10,050, 9629 individuals completed the interview successfully, with a response rate of 93.6%.The sampling procedure and the study design are presented in Fig. 1.The detailed survey procedure, study method, and questionnaires are available elsewhere.We compared the current smokers who made a quit attempt with the current smokers who made no quit attempt in the past 12 months of the survey.Respondents were asked, "Do you currently smoke tobacco on a daily basis, less than daily, or not at all?,Those who responded "not at all" were excluded from analysis.Those who responded "daily" and "less than daily" were current smokers.Current smokers were asked, "During the past 12 months, have you tried to stop smoking?,Response options were "yes", "no", or "refused".Those who refused to answer were excluded from analysis.The screening process used to select who made a quit attempt and who made no quit attempt is illustrated in Fig.
2.Six socio-demographic characteristics, namely age, sex, place of residence, occupation, education and wealth index, were used in this study.The wealth index was created using principal component analysis.Behavioral characteristics included smoked tobacco use status, use of smokeless tobacco products, age at initiation of smoking, number of manufactured cigarettes smoked per day, and time to first cigarette after waking up.The motivational factor included current smokers' intention to quit.Knowledge and attitudes towards smoking included belief that smoking causes serious illness, belief that cigarettes are addictive, and opinion about increasing taxes on tobacco products.Environmental characteristics included smoking rules inside the home and smoking policy at the workplace.Quitting methods utilized included being advised to quit smoking.Use of social media to quit smoking indicated exposure to anti-smoking advertisements.We compared the percentage of potential factors between current smokers who have attempted to quit and current smokers who have not attempted to quit during the past 12 months of the survey using the Chi-square test.Binary logistic regression analysis was used to identify the factors that are associated with making a quit attempt of smoking.We evaluated co-linearity using the variance inflation factor with a cutoff of 4.0.We obtained all estimates and confidence intervals from weighted data, and the multistage stratified cluster sampling design was accounted for in variance estimation.The logistic regression model was formed using a forward selection procedure.First, we formed a null model with no predictors.Then, the first potential factor considered for entry into the model was the one that was most significant.After the first factor was entered, the potential factor not in the model that had the smallest p-value was considered next.The procedure was repeated until no effects met the 5% significance level for entry into the model.We calculated the Akaike Information Criterion at each step.To assess the overall fit of the final model, we used the Pearson Chi-square and Hosmer-Lemeshow goodness of fit statistics.To reflect the predictive accuracy of the final model, we used the area under the receiver operating characteristic curve.Statistical software SPSS and SAS version 9.4 were used for data management and analysis.Among the 9629 adults, 2233 were current smokers, and 7396 were nonsmokers.Therefore, these 7396 individuals were excluded from analysis.Among the current smokers, 1159 individuals made no quit attempt, and 1058 individuals attempted to quit smoking during the past 12 months of the survey.Thus, the 1058 current smokers who attempted to quit and the 1159 current smokers who made no attempt to quit were the final study subjects.The bivariate comparison of study variables by quit attempt status is illustrated in Table 1.Considering socio-demographic variables, among current smokers who had attempted to quit, about 50.6% were aged between 25 and 34 years old, 48.2% were male, 52.1% lived in urban areas, 62.5% were employed, 56.8% had a high level of education, and 55.2% had the highest wealth index.For behavioral variables, among current smokers who had attempted to quit, 49.6% used smokeless tobacco, 54.9% were occasional tobacco users, 56.7% started smoking after 25 years of age, 55.2% first smoked >60 min after waking up, and 57.1% smoked from 1 to 9 manufactured cigarettes.For the motivational characteristic, among current smokers who made a quit attempt, 68.7% were thinking to quit within the next 12
months."Considering the knowledge and attitudes towards smoking, among current smokers who had attempted to quit, about 63.6% did not believe that smoking cause's serious illness, 53.0% did not believe that cigarettes are addictive, and 50.3% were in favor of increasing taxes on tobacco products.Considering the environmental characteristics, among the current smokers who made a quit attempt, smoking was not allowed, but with exceptions inside their homes, and 70.6% of smokers workplace smoking was not allowed, but with exceptions.Considering the quitting methods utilized, 60.5% were advised to quit smoking in current smokers who had attempted to quit.For the use of social media to quit smoking, among current smokers who had attempted to quit, 50.2% were exposed to antismoking advertisements.The summary results of the logistic regression model for making a quit attempt are shown in Table 2.With respect to behavioral characteristics, respondents who smoked their first cigarette within 6 to 30 min of waking up were 1.44 times more likely = 1.44, 95% confidence interval = 0.87–2.36) to make an attempt to quit than who smoked their first cigarette within 5 min of waking, and who smoked 10–19 manufactured cigarettes per day were less likely to make a quit attempt than those who smoked nine or less manufactured cigarettes per day.With respect to motivational characteristic, smokers who will quit someday, but not in the next 12 months were 13.74 times more likely to make a quit attempt than who were not interested in quitting.With respect to environmental characteristics, among those smokers where smoking in the house was never allowed were 2.58 times more likely to make an attempt to quit smoking.With respect to the use of social media to quit smoking, smokers who were exposed to antismoking advertisements on media were 1.55 times more likely to make a quit attempt than who were not exposed to antismoking advertisements.We found that the correlates of making a quit attempt were time to first cigarette after waking up, number of manufactured cigarettes smoked per day, intention to quit smoking, smoking rules inside the home, and exposure to anti-smoking advertisements.Our analyses showed that gender, age, level of education, and other socio-demographic characteristics were not associated with making quit attempt, which is in line with a previous study by Vangeli, Stapleton, Smit, Borland, and West.Consistent with previous finding, we found time to first cigarette after waking up was significantly associated with making a quit attempt.In this study, smokers who do not smoke quickly after waking up were more likely to make a quit attempt.The time to first smoking after waking up is a specific measure of nicotine dependence.People who smoke quickly in the morning might have a greater addiction for nicotine.This addiction giving rise to thoughts for continuing smoking without making any quitting efforts.The findings of our study suggest that encouraging the smokers who smoke quickly after waking up to make frequent quit attempts by increasing consciousness of the urgency of quitting.In this study, individuals who smoked >20 manufactured cigarettes per day were more likely to make a quit attempt.An inline result was observed by the South African study in which the authors showed that smoking a higher number of cigarettes per day was associated with increased quit attempt.This could be fact that heavier smokers have a high intention to quit, as they have a higher likelihood of experiencing the harmful 
effect of smoking.On the other hand, those who smoked fewer cigarettes did not identify smoking as an instant danger to their health and were therefore not highly prompted to quit.Therefore, smoking cessation plans should be adapted to the smoker's level of cigarette consumption.This could be done by inspiring low-intensity smokers to make any quit attempt and by targeting interventions for heavy smokers to cut down cigarette consumption with a view to quitting successfully.Intention to quit smoking was significantly associated with making an attempt to quit, which is supported by Diemert, Bondy, Brown, and Manske.Our study found that smokers who have any intention to quit smoking had a higher chance of making a quit attempt than those who have no intention to quit.This suggests the necessity of motivating smokers to think about quitting by increasing awareness about the importance of quitting smoking through educational campaigns.This should also motivate smokers to make repeated quit attempts even if a quit attempt fails to succeed.A previous study found that smokers who lived in a smoke-free home were more likely to make a quit attempt.Similarly, this study found that smokers living in a house where smoking was not allowed had a higher chance of making a quit attempt.This may be due to the fact that smokers living in a smoke free household may encourage other smokers and members of the household to ban smoking in the home, which leads to changes in their smoking behavior.Our findings suggest that increased knowledge about the harmful effect of secondhand smoke and the benefits of quitting should be promoted among urban as well as rural residents.Similar to the findings by Farrelly et al., we found that the likelihood of making an attempt to quit was highest among those who were exposed to anti-smoking advertisements in the media.Anti-smoking advertisements are a vital part of tobacco control programs, designed not only to counter pro-tobacco influences but also to expand pro-health messages.Smokers who are subjected to anti-smoking ads can perceive the detrimental effects of smoking, which might have influenced their determination to quit smoking.Therefore, with a view to promoting smoking cessation and minimizing the likelihood of initiation, anti-smoking advertisements in the media should be continued.Our study has several strengths: first, we used a nationally representative cross-sectional sample, which is unique in its inclusion of a broad range of factors.Second, we constructed extensive statistical models and assessed them using several model diagnostic tools.Finally, our final statistical model satisfied all model assumptions and had good predictive power.There are some notable limitations of our study.First, this study is based on data that were collected about 9 years ago, and the field of tobacco control has changed dramatically this decade.Second, owing to the cross-sectional nature of the data, our study does not allow us to see changes in the characteristics over time.Third, many smokers may fail to remember or fail to report their quit attempts, which may influence our findings.Fourth, since the data were collected by self-reports of the respondents, smoking could be underreported due to the respondent's desire to provide a socially desirable answer.Fifth, the definition of smokers and quitters was based on the single question "Do you currently smoke tobacco on a daily basis, less than daily, or not at all?,This not only ignored the complexity of smoking
behaviors but also the complexity of smoking cessation behaviors.Finally, a number of psychological factors such as depressive disorders and anxiety disorders, physiological factors, and alcoholism, which may also be associated with quit attempts, were not included in this study as they were not available in the dataset.Our study identified several correlates of making a quit attempt among Bangladeshi adult smokers, including time to first cigarette after waking up, number of manufactured cigarettes smoked per day, intention to quit smoking, smoking rules inside the home, and exposure to anti-smoking advertisements.Policy makers should consider these factors when designing and implementing tobacco control strategies and programs.Our findings suggest a requirement to ensure targeted interventions for those smokers who have made no quit attempt, and for those who are not interested in quitting.Further research is needed in which adult smokers are followed repeatedly over a longer period, in order to allow each respondent multiple opportunities to make a quit attempt.This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.SH searched the literature, planned the study, prepared the analytic dataset, conducted data analysis and interpretation, drafted the manuscript, and corrected it after comments from all co-authors.MABC helped in the data management and analysis, provided important comments on the interpretation of the results and the draft manuscript, and reviewed the final version of the manuscript.MJU supervised the overall work, helped in preparing, analyzing, and interpreting the data, and provided very crucial scientific comments on the draft.The final manuscript was read and approved by all authors.
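The modelling procedure described above (a null model, forward entry of the most significant factor at each step with a 5% entry threshold, the AIC tracked per step, and the ROC AUC used for the final model) can be illustrated with a short sketch. The code below is a simplified, hypothetical illustration only: it is not the authors' SPSS/SAS code, it ignores the survey weights and cluster design, assumes predictors are already numerically coded, and the file path and variable names are placeholders.

```python
# Simplified, hypothetical sketch of the forward-selection logistic regression
# described in the text; not the authors' SPSS/SAS code. Survey weights and the
# cluster design are ignored, and all column names are placeholders.
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def forward_select(df: pd.DataFrame, outcome: str, candidates: list, alpha: float = 0.05):
    """Add, one at a time, the candidate with the smallest p-value until none meets alpha."""
    selected = []
    while True:
        best_p, best_var, best_fit = 1.0, None, None
        for var in candidates:
            if var in selected:
                continue
            X = sm.add_constant(df[selected + [var]])
            fit = sm.Logit(df[outcome], X).fit(disp=0)
            if fit.pvalues[var] < best_p:
                best_p, best_var, best_fit = fit.pvalues[var], var, fit
        if best_var is None or best_p >= alpha:   # 5% significance level for entry
            break
        selected.append(best_var)
        print(f"entered {best_var} (p={best_p:.3f}), AIC={best_fit.aic:.1f}")
    return selected

# Hypothetical usage with placeholder column names (categorical predictors would
# first need dummy coding):
# df = pd.read_csv("gats_bangladesh_2009.csv")            # placeholder path
# preds = forward_select(df, "quit_attempt",
#                        ["time_to_first_cig", "cigs_per_day", "intend_to_quit",
#                         "home_smoking_rule", "saw_antismoking_ads"])
# final = sm.Logit(df["quit_attempt"], sm.add_constant(df[preds])).fit(disp=0)
# print(roc_auc_score(df["quit_attempt"], final.predict()))  # predictive accuracy (AUC)
```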
Background: Quit attempts are very essential in population-based smoking cessation. Little is known about the correlates of making a quit attempt of smoking in Bangladesh. We aimed to examine correlates of making a quit attempt of smoking among adults in Bangladesh. Methods: We used data from the 2009 Global Adult Tobacco Survey, Bangladesh. A total of 2217 adult current smokers (2141 males and 76 females) aged 15 years and older who participated in the survey were included. We compared socio-demographic, behavioral, motivational, knowledge and attitudes towards smoking, quitting methods utilized, use of social media to quit smoking, and environmental characteristics of current smokers who made an attempt to quit with those who made no quit attempt during the previous 12 months of the survey. We applied multivariable logistic regression models for analyzing the data. Results: Among the 2217 current smokers, 1058 (47.72%) made an attempt to quit. We found that respondents who smoked their first cigarette within 6 to 30 min of waking up were more likely to make an attempt to quit than those who smoked their first cigarette within 5 min of waking. Moreover, daily current smokers who smoked 10–19 manufactured cigarettes per day were less likely to make a quit attempt. We also found intention to quit smoking, smoking rules inside the home, and exposure to anti-smoking advertisements to be significant correlates of making a quit attempt of smoking among adults in Bangladesh. Conclusions: Policymakers should consider our findings when implementing tobacco control programs in Bangladesh.
133
Data analysis of the U–Pb geochronology and Lu–Hf system in zircon and whole-rock Sr, Sm–Nd and Pb isotopic systems for the granitoids of Thailand
In this data article, we report isotopic data from Thai granitoids of the Southeast Asian Granitoid Belts.This data includes U–Pb geochronology and Lu–Hf analyses from over 480 zircons and Sm–Nd, Sr and Pb isotopic geochemistry from 14 whole-rock granitoid samples.U–Pb data were obtained during nine sessions along with common zircon reference materials.Lu–Hf data were obtained during three sessions along with common zircon reference materials.The zircon dataset contains the LA–ICP–MS raw and processed data.Each of the four whole-rock isotopes was measured for all samples in one TIMS analytic run.The mass fractionation for Sm–Nd, Sr, and Pb was controlled by the G-2 standard , with the BHVO-2 standard also used for the Sm and Nd analyses.Twenty-nine granitoid samples in total are used for this study.For detailed preparatory methodology of the KM, ST, and NT samples see .The individual sample locations, lithology, associated granitoid belt where applicable and analysis method for each sample are outlined in Table 1 of .Details of the petrography including mineralogy, textures and degree of deformation are outlined in Table 1.Hand specimen and thin section imagery of the analysed samples are displayed in Fig. 1.Some of the analysed granitoids are associated with named batholiths or specific plutons whose petrography and petrogenesis have been previously described; for further information see Table 1 of .The sampling strategy for this study was to collect granitoids from all three terranes and across major faults and sutures to better delineate tectonic boundaries.Representative samples of granitoids, and therefore also their underlying basement, were collected from widespread localities within Thailand.The three main tectonic domains in Thailand, Sibumasu, Sukhothai and Indochina, are associated with three large granite provinces: Western Thailand/Myanmar, North Thailand–West Malaya Main Range and East Malaya; for further information and locations see .The Western Thailand–Myanmar/Burma province, also known as the Mogok–Mandalay–Mergui Belt, extends from eastern Myanmar southwards to Phuket Island.The Mogok–Mandalay–Mergui Belt and the North Thailand–West Malaya Main Range broadly correlate with the Sibumasu Terrane and Inthanon Zone respectively, although this correlation depends on the delineation of the terrane boundaries, which have been variably defined in the past .The mineralogy consists of hornblende–biotite I-type granodiorite–granites and felsic biotite–K-feldspar S-type granites .These granitoids are associated with abundant tin mineralisation in greisen type veins .Recent U–Pb dating of Western Thailand granitoids in Phuket displays zircons with Triassic cores and Cretaceous rims .The North Thailand–West Malaya or Main Range province occurs northward from West Malaya towards the Doi Inthanon range of north Thailand.This province is characteristically composed of biotite–K-feldspar S-type granites although it also contains subordinate I-type granitoids .Searle et al. suggest that they are more likely to be evolved felsic I-types rather than the S-type granites proposed by Cobbing et al.
.The ages of this province range from early late Triassic to late early Jurassic .The sheer batholithic proportions of this province suggest crustal anatexis as the potential source .It is suggested that these granitoids are orogenic, forming as a result of the crustal thickening following the closure of the Paleo-Tethys and the collision of Sibumasu and Sukhothai-Indochina in the late Triassic .The boundary between the Western province and the West Malaya Main Range has been historically defined as the Paleogene Khlong Marui fault.However, Searle et al. suggest that the nature of the boundary is not as clear as stated by Cobbing et al. due to the presence of S-type granite on either side of the Khlong Marui fault, which also formed later than the granite emplacement.The Eastern or East Malaya granitoid province comprises mostly Permo-Triassic I-type granites, granodiorites and tonalites but with subordinate S-type plutons and A-type syenite–gabbros .The I-type granitoids of the East Malaya province are distributed throughout the Sukhothai and Indochina terranes .These granitoids are thought to originate from arc magmatism caused by the subduction of the Paleo-Tethys under Indochina .However, the genesis of the S-type granitoids associated with the Loei-Phetchabun Volcanic Belt is unlikely to be subduction-induced arc magmatism.Instead, it is suggested by Sone and Metcalfe to be associated with the crustal thickening of western Indochina that was induced by back-arc compression and later emphasised by the Sibumasu collision.The East Malaya province granitoids range primarily from the early Permian to the end of the Triassic with occasional Cretaceous magmatism, which is shared with the Western Thailand–Myanmar/Burma granitoid province .Eighteen granitoid rock samples were crushed and sieved to collect zircon grains through the conventional magnetic and heavy liquid separation procedures.About 50 randomly selected zircon grains were set in epoxy resin and polished for U–Pb zircon age analysis.In order to characterise the textural and chemical-zoning features within each zircon, cathodoluminescence images for zircon were obtained using the FEI Quanta600 Scanning Electron Microscope with the Mineral Liberation Analysis at Adelaide Microscopy, Adelaide, South Australia.Each mount was carbon coated prior to CL imaging to increase the conductivity of the sample and to retrieve higher quality images.For each sample, 20–30 magmatic grains were selected for laser ablation inductively coupled plasma mass spectrometry at Adelaide Microscopy, Adelaide, South Australia.All assumed inherited grains or distinct zircon domains were also ablated using LA–ICP–MS.
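The session-level quality checks reported below for the GJ-1, Plešovice and 91500 reference zircons rely on inverse-variance weighted mean ages and the MSWD. As a hedged illustration of how such values are typically computed (this is not the authors' data-reduction code, and the ages and uncertainties listed are hypothetical placeholders rather than measured values), a short sketch follows.

```python
# Illustrative sketch (not the authors' reduction code) of the inverse-variance
# weighted mean age and MSWD used to summarise repeat analyses of a zircon
# reference material; the input ages/uncertainties are hypothetical placeholders.
import numpy as np

def weighted_mean_age(ages_ma: np.ndarray, one_sigma_ma: np.ndarray):
    """Return (weighted mean, 1-sigma uncertainty of the mean, MSWD)."""
    w = 1.0 / one_sigma_ma ** 2
    mean = np.sum(w * ages_ma) / np.sum(w)
    sigma_mean = np.sqrt(1.0 / np.sum(w))
    mswd = np.sum(w * (ages_ma - mean) ** 2) / (len(ages_ma) - 1)
    return mean, sigma_mean, mswd

# Hypothetical repeat 206Pb/238U ages (Ma) for a standard analysed across a session.
ages = np.array([601.2, 602.0, 600.9, 601.8, 601.5])
errs = np.array([1.1, 1.3, 1.0, 1.2, 1.1])
mean, err, mswd = weighted_mean_age(ages, errs)
print(f"weighted mean = {mean:.2f} +/- {err:.2f} Ma, MSWD = {mswd:.2f}")
# An MSWD close to 1 indicates scatter consistent with the assigned uncertainties;
# a heterogeneous standard (e.g. one of the 91500 crystals) inflates the MSWD.
```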
The analyses were conducted on an Agilent 7900x with a New Wave NW213 laser ablation system with a TwoVol2 sample chamber.A 30 µm spot size was used where possible; however, one analysis session used a smaller 25 µm spot size due to the smaller size of the zircon grains.A 5 Hz pulse rate was used with a typical pit depth of 30–50 µm.For further details of the analytical methodology of this laboratory's technique see .The exact fluence settings across the two-year analysis period ranged from 5.6 to 7 J/cm2.The isotopes measured for all analyses were 204Pb, 206Pb, 207Pb, 208Pb, 232Th and 238U.The GEMOC zircon standard GJ-1 was run as the primary standard every 10–20 unknown analyses, to correct for isotopic drift and down-hole fractionation.The Plešovice zircon standard was analysed as a secondary standard to check the accuracy of the technique.Across all analytical sessions, analyses of GJ-1 yielded a 206Pb/238U weighted average age of 601.61 ± 0.47 Ma and Plešovice yielded a 206Pb/238U weighted average age of 339.55 ± 0.73 Ma.For the later batches analysed in 2017, 90Zr and 202Hg isotopes were measured where possible, see Table 3.For some LA–ICP–MS runs, 204Pb was not measured due to the unresolvable isobaric interference from 204Hg .For two of the analysis sessions, the rare earth element suite was also monitored.The 2017 batches were also run with the 91500 zircon as another secondary standard.During these analyses, 135 analyses of 91500 yielded a 206Pb/238U age of 1036.2 ± 3.3 Ma.Two different 91500 crystals were used for these analyses, where one crystal gives good reproducibility while the other is more heterogeneous.This heterogeneity accounts for the large MSWD across the three analytic runs monitoring this standard.However, in this study GJ-1 was used as the primary zircon standard, so the correction of isotopic drift across the analytic periods did not rely on the secondary standards.The number of Hf analyses for each sample was determined by the variability in the age data and the amount of interpreted inheritance.Nine samples used for Hf and Lu isotopic analyses were analysed using a Resonetics S-155-LR 193 nm excimer laser ablation system connected to a Nu Plasma II multi-collector ICP–MS in the GeoHistory Facility, John de Laeter Centre, Curtin University, Perth, Western Australia.Analyses were carried out using a laser beam diameter of 50 μm.After two cleaning pulses and 40 s of baseline acquisition, zircon grains were ablated for 35 s using a 10 Hz repetition rate and a laser beam energy of 2 J/cm2.All isotopes were counted on the Faraday collector array.Time resolved data were baseline subtracted and reduced using Iolite, where 176Yb and 176Lu were removed from the 176 mass signal using 176Yb/173Yb = 0.7962 with an exponential law mass bias correction assuming 172Yb/173Yb = 1.35274 as per .The interference corrected 176Hf/177Hf was normalised to 179Hf/177Hf = 0.7325 , for mass bias correction.Mud Tank was used as the primary standard for Hf isotopes and R33 as the primary standard for Lu–Hf analyses.Mud Tank analyses yielded 176Hf/177Hf weighted-means of 0.2825064 ± 0.0000097 and 0.2825071 ± 0.0000081.The R33 standard yielded a 176Hf/177Hf weighted-mean of 0.282718 ± 0.000010 and Lu/Hf ratios of 0.0019903 ± 0.0000097 for the first session, and the analyses for RDT16_053 yielded a 176Hf/177Hf weighted-mean of 0.282708 ± 0.000017 and Lu/Hf ratios of 0.001988 ± 0.000040.Secondary standards were 91500, GJ-1 and FC-1.Additionally, for the first session, Plešovice was used as a secondary standard.The 176Hf/177Hf
weighted averages for the analysis of the secondary standards are outlined in Table 5.The corrected 178Hf/177Hf ratio was calculated to monitor the accuracy of the mass bias correction and yielded average values of 1.467219 ± 0.000016 and 1.467117 ± 0.000011, which are both within the range of values reported by .Hafnium analysis for one sample was undertaken using a Neptune Plus multi-collector ICP–MS at the University of Wollongong, New South Wales.The dwell time was 50 s with 5 Hz repetition rate and an intensity of 4.4 J/cm2.Standards were Mudtank and Plešovice, yielding 176Hf/177Hf weighted-averages of 0.282473 ± 0.000023 and 0.282450 ± 0.000018 respectively.The corrected 178Hf/177Hf ratio was calculated to monitor the accuracy of the mass bias correction and yielded an average value of 1.467227 ± 0.000012, which is within the range of values reported by .All Lu–Hf data was reduced using Iolite software .Calculation of εHf values employed the decay constant of and the Chondritic Uniform Reservoir values of , depleted mantle Lu/Hf values of and Hf/Hf values of .Sm–Nd and Sr isotopic whole-rock analyses were conducted for 14 granitoid samples and one duplicate run for Sm–Nd analyses at the University of Adelaide’s Isotope Geochemistry Facility.Eight samples and one duplicate were used for whole-rock Pb isotope measurements.These samples were chosen due to their spatial distribution across the main tectonic terranes in Thailand and also containing Nd, Sr and Pb elemental concentrations above the detection limits of the X-ray fluorescence spectrometer at Franklin and Marshall College, U.S.A.A detailed methodology for the Sr and Sm–Nd whole-rock isotope techniques conducted in the same laboratory are outlined by .To minimise error magnification, the optimal spike amount was calculated for each sample.Six of the 14 samples used for Nd and Sr whole-rock geochemistry contained Pb ppm concentrations below the 1 ppm detection limit of the XRF, therefore, the subsequent whole-rock Pb TIMS analyses were not completed on these samples.The Pb isotopes were corrected for mass fractionation using the Southampton–Brest lead 207Pb–204Pb double spike as outlined by .This spike was formulated to minimise uncertainty propagation with sample 206Pb/204Pb isotope compositions in the range of 14–30 .SBL74 is calibrated relative to a conventional reference value 208Pb/206Pb = 1.00016 for NIST SRM 982 and has a composition of 204Pb/206Pb = 9.2317, 207Pb/206Pb = 36.6450 and 208Pb/206Pb = 1.8586 .Lead contamination was effectively negated by removing metals and silicates and minimising atmospheric and procedural contamination by conducting analyses in a clean lab setting of the University of Adelaide’s Isotope Geochemistry Facility.Lead was isolated from the sample matrix by twice passing each sample through an HBr solution using anion exchange chromatography.For TIMS analysis, each sample was loaded with silicic acid–phosphoric acid emitter onto two zone-refined Re filaments using the double spike procedure outlined by .The two filament loads consist of: a “natural” run with sample only and a sample–spike mixture run.The optimum mixture of sample and spike was calculated as 204PbSample/204PbSpike = 0.09, with a tolerance range of 0.03–0.65 within which negligible uncertainty magnification was observed.Sm–Nd, Sr and Pb whole-rock isotopes were measured on the Isotopix Phoenix thermal ionization mass spectrometer at the University of Adelaide, South Australia.The total procedural blanks were <475 pg for Sr, <221 
pg for Sm, <699 pg for Nd and <74.1 pg for Pb.If the procedural blank is <1/1000th sample then it can be considered negligible.The mass fractionation for Sm–Nd, Sr and Pb was controlled by the G-2 standard , which yielded average ratios of 143Nd/144Nd = 0.512261 ± 0.000002 and 87Sr/86Sr = 0.709770 ± 0.000003.For the Pb isotopes the G-2 standard , yielded corrected ratios of 18.3870, 15.6361 and 38.903 for 206Pb/204Pb, 207Pb/204Pb and 208Pb/204Pb, respectively.A secondary standard, BHVO-2 , was also measured for Sm and Nd analyses yielding average ratios of 143Nd/144Nd = 0.513034 ± 0.000002.Although none of the samples analysed are basaltic in composition, the Nd ppm value of the BHVO-2 basalt standard was similar to the expected Nd ppm concentration of the unknown granitoid samples.The Nd isotopic reference JNdi-1 , yielded average ratios of 143Nd/144Nd = 0.512100 ± 0.000002 and 0.512103 ± 0.000002.The University of Adelaide Isotope Geochemistry Facility’s laboratory average for the JNdi-1 Nd isotopic reference is 0.512106 ± 0.000009.During the period of Sr analysis, the two analyses of the Sr isotopic standard SRM987 yielded average ratios of 87Sr/86Sr = 0.710246 ± 0.000003 and 0.710243 ± 0.00002.SRM981, the Pb isotopic reference used in this study, yielded corrected ratios of 16.9436, 15.5013 and 36.729 for 206Pb/204Pb, 207Pb/204Pb and 208Pb/204Pb respectively.The whole-rock measurements from the TIMS analyses and calculated initial isotopic values are displayed in Table 7.The crystallisation ages have been interpreted individually for each sample, depending on the nature of the data.This is because different zircons behave differently in the U–Pb isotopic system.The data interpreted to represent the crystallisation age are stated in with further explanatory information detailed below for each sample.Inherited grains are defined as zircons that are older than the crystallisation age of the sample.The associated Concordia curves, regression lines and weighted average age plots are illustrated in Fig. 
3 from . While the interpreted crystallisation age for ST-16 was Cambrian, there were also concordant 206Pb/238U ages between 371.5 ± 5.81 Ma and 83.7 ± 1.37 Ma, which may have been due to later resetting post-crystallisation. These younger analyses from ST-16 had very low Th:U, suggesting that the large Th ion has diffused from the zircon during a subsequent thermal event . The observation that young 206Pb/238U age zircons have low Th/U ratios suggests that both Pb and Th have been lost from the zircon. This is supported by petrographic observations of igneous garnet breaking down to muscovite, chlorite and biotite. Older concordant ages were measured from a 207Pb/206Pb age of 3189.3 ± 17.83 Ma to a 206Pb/238U age of 710.5 ± 11.63 Ma. These older ages were interpreted to be inherited from events prior to the crystallisation of the granite. NT-17 contained concordant (±5%) zircon analyses with ages ranging from 196.3 ± 2.8 Ma to 209.7 ± 3.0 Ma. The measured dataset included several older analyses with ages of 2755 Ma, 2685 Ma, 1248 Ma, 947 Ma and 836 Ma. The Concordia plot of NT-17 shows that many of the zircons analysed sit off the Concordia line, indicating that the U–Th–Pb system was no longer behaving as a closed system. A weighted average was first taken from all analyses within ±5% concordance, yielding a 206Pb/238U age of 211.2 ± 4.0 Ma. This calculated age has a very large Mean Square Weighted Deviation (MSWD), indicating that the data are overdispersed, with the observed data scatter exceeding the predicted analytical uncertainties. In an attempt to better constrain the crystallisation age, another weighted average age was taken from the cluster of ten concordant analyses, which yielded an age within error of the first weighted average, with the data more closely dispersed within the range of the predicted analytical uncertainties. An Upper Triassic crystallisation age was interpreted for ST-08A. Although the weighted average age MSWD of 3.1 indicates that the data are overdispersed, there are no clear distinguishing factors to filter the data any further. Two interpreted concordant inherited ages were found, with a 207Pb/206Pb age of 2473 ± 34 Ma and a 206Pb/238U age of 393.1 ± 9.3 Ma. Similar to ST-08A, the crystallisation age of ST-13 was also calculated to be Upper Triassic. Cretaceous-aged analyses from ST-13 often had very low Th:U, suggesting that the large Th ion has diffused from the zircon during a subsequent thermal event. The observation that young 206Pb/238U age zircons have low Th/U ratios suggests that both Pb and Th have been lost from the zircon. Therefore, this younger cluster of Cretaceous data was interpreted to be the age of metamorphic resetting. There was a prominence of age inheritance for ST-13, ranging from 1736 Ma to 500 Ma. Many of the older ages were taken from cores of zircons with multiple domains. ST-13 zircons often had up to four CL domains in a single grain. Often at least one domain was Triassic in age; these inner domains were interpreted to record the age of crystallisation. The majority of zircon analyses for ST-49A were around 80 Ma, with 11 concordant zircon analyses yielding a 206Pb/238U age of 79.8 ± 1.6 Ma. This calculated age has a very large MSWD, indicating that the data are overdispersed, with the observed data scatter exceeding the predicted analytical uncertainties. In an attempt to better constrain the crystallisation age, another weighted average age was taken from the cluster of six concordant analyses, yielding a much smaller MSWD of 0.37. The
lower intercept of a common Pb regression trend gives an age of 78.9 ± 1.1 Ma. All three of these calculated ages are within error of each other; however, the interpreted crystallisation age was taken as the cluster of concordant analyses yielding a 206Pb/238U age of 81.4 ± 1.1 Ma. One zircon analysis within 10% concordance gave a 206Pb/238U age of 259.8 ± 4.1 Ma, which is interpreted as an inherited grain, possibly reflecting an earlier magmatic event in the region. Like the crystallisation age of ST-49A, the interpreted crystallisation age for ST-18 was also determined to be Upper Cretaceous. Zircons from this sample show interpreted age inheritance, with concordant ages spanning from 2729 Ma to 524 Ma. No crystallisation age could be constrained from the zircon analyses conducted on RDT15_076A. It was interpreted that the calculated ages all derive from inherited zircons. Interpreted concordant inherited ages range from a 207Pb/206Pb age of 2462 ± 21 Ma to a 206Pb/238U age of 448.1 ± 6.7 Ma. Twenty-two concordant magmatic zircon analyses from Th11/02 were used to calculate the weighted average, which yielded an Upper Triassic crystallisation age. A discordia line also gives a lower intercept within error of this weighted average at 206.7 ± 1.3 Ma. The upper intercept of this regression line gives an age on the boundary of the Eoarchean and the Paleoarchean at 3617 ± 200 Ma. One discordant grain was found at 258.8 Ma, with two older grains within 10% discordance with 206Pb/238U ages of 409.4 ± 6.53 Ma and 304.8 ± 4.99 Ma. There was also a concordant inherited grain with a 206Pb/238U age of 562.9 ± 8.82 Ma. The interpreted crystallisation age for RDT16_053 is the weighted average of all concordant analyses with Th:U > 0.1, which also yielded an Upper Triassic age like Th11/02. Analyses younger than 200 Ma were interpreted to have lost Th and Pb during a subsequent thermal event, which was supported by the Th:U values. Three older inherited cores are present, with Paleoproterozoic–Neoarchean 207Pb/206Pb ages ranging from 2746 ± 29 Ma to 2430 ± 27 Ma. The inherited zircon ages found in this sample are similar to those found in ST-18 from the Sibumasu Terrane. The interpreted crystallisation age for RDT16_044 is Upper Cretaceous and was determined from a single zircon analysis. The Th:U of this zircon is >0.1; however, the majority of the analyses from this sample have low Th:U, indicating that both Pb and Th have been lost from the zircon during subsequent thermal events. A discordia could not be calculated with the Isoplot software since the youngest cluster of data contained both positive and negative rho values. All five samples taken from the Sukhothai Terrane yielded magmatic ages within 13 Myr of one another, between 238.6 and 226.5 Ma. No concordant inherited zircons were found in any of these samples. The oldest crystallisation age from the Sukhothai Terrane was interpreted from NT-12. The lower Concordia intercept of a common Pb regression line for NT-12 is also within error of this weighted average age and yields a 206Pb/238U age of 238.6 ± 3.1 Ma. The interpreted crystallisation age for NT-10 was Upper Triassic. Analyses from NT-10 also form a linear trend with a lower Concordia intercept at 236.2 ± 2.7 Ma and an upper intercept of 5107 ± 750 Ma representing a common Pb component. The lower intercept of this trend gives the same age as the weighted average, although the weighted average incorporates 0.2 Myr more uncertainty. Nineteen concordant analyses from NT-11 with Th:U > 0.1 yield a 206Pb/238U weighted average of 228.7 ±
2.1 Ma.No concordant inherited zircons were found, although two very discordant analyses had 206Pb/238U ages of 338.8 ± 5.66 Ma and 121.4 ± 2.08 Ma.The magmatic age of Th11/01 is interpreted to be the weighted average of the ±5% concordant analyses with Th:U>0.1 yielding a 206Pb/238U age of 227.9 ± 1.9 Ma.Th11/01 contained two older very discordant analyses with 206Pb/238U ages of 455.6 ± 7.64 Ma and 316.1 ± 5.47 Ma.Concordant analyses with Th:U>0.1 from sample NT-09 yielded a weighted average 206Pb/238U age of 226.5 ± 1.8 Ma.No concordant inherited zircons were found, although one very discordant analysis had a 206Pb/238U age of 692.4 ± 10.58 Ma.KM-20 contains 206Pb/238U ages within ±5% discordance for the entire first half of the Triassic period, from 249.4 ± 5.9 Ma to 221.5 ± 5.7 Ma.We interpret that the Lower and Middle Triassic ages are inherited and that crystallisation age of this sample was calculated from the weighted average of all analyses from the youngest to oldest concordant zircon in the Upper Triassic.The NT-07 quartz diorite sample, was collected further south 60 km east of Nakhon Sawan.The crystallisation age of this sample is interpreted to be Permian.It was very difficult to determine domains from the CL imaging since the majority of zircons from this sample contained dark homogeneous zones, sometimes the whole zircon was dark and homogeneous.Therefore, it was difficult to determine the domain being ablated.This sample yielded a spread of analyses that may suggest limited post-crystallisation disturbance of the isotopic system that is supported by the sericitisation of feldspars seen in thin section.
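The crystallisation ages above are repeatedly reported as inverse-variance weighted mean 206Pb/238U ages, with the MSWD used to judge whether the observed scatter exceeds the analytical uncertainties, and with analyses screened on concordance and Th:U before averaging. The short Python sketch below illustrates that calculation; it is a minimal illustration only. The example ages and uncertainties are placeholder values, and the ±5% concordance and Th:U > 0.1 cut-offs simply mirror the screening described in the text (the study's exact discordance convention may differ).

import numpy as np

def weighted_mean_age(ages_ma, sigmas_ma):
    """Inverse-variance weighted mean age and its MSWD.

    ages_ma   : 206Pb/238U ages in Ma
    sigmas_ma : 1-sigma analytical uncertainties in Ma
    An MSWD well above 1 indicates overdispersion, i.e. scatter beyond
    what the analytical uncertainties alone predict.
    """
    ages = np.asarray(ages_ma, dtype=float)
    sig = np.asarray(sigmas_ma, dtype=float)
    w = 1.0 / sig**2
    mean = np.sum(w * ages) / np.sum(w)
    mean_err = np.sqrt(1.0 / np.sum(w))                    # 1-sigma error of the mean
    mswd = np.sum(w * (ages - mean) ** 2) / (len(ages) - 1)
    return mean, mean_err, mswd

def keep_analysis(age_206_238, age_207_235, th_u, tol=0.05, min_th_u=0.1):
    """Simple screen: within +/-5% concordance (one common convention,
    comparing the 206Pb/238U and 207Pb/235U ages) and Th:U above 0.1."""
    concordance = age_206_238 / age_207_235
    return abs(1.0 - concordance) <= tol and th_u > min_th_u

# Placeholder analyses (age, 1-sigma in Ma), purely illustrative
ages = [227.1, 228.4, 229.0, 227.8, 228.9]
errs = [1.8, 2.0, 1.9, 2.1, 1.7]
print(weighted_mean_age(ages, errs))

For a cluster such as the ten concordant NT-17 analyses, an MSWD near 1 indicates that the scatter is consistent with the quoted uncertainties, whereas values well above 1 signal the overdispersion discussed above.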
This data article provides zircon U–Pb and Lu–Hf isotopic information along with whole-rock Sm–Nd, Sr and Pb isotopic geochemistry from granitoids in Thailand. The U–Pb ages are described and the classification of crystallisation and inherited ages are explained. The petrography of the granitoid samples is detailed. The data presented in this article are interpreted and discussed in the research article entitled “Probing into Thailand's basement: New insights from U–Pb geochronology, Sr, Sm–Nd, Pb and Lu–Hf isotopic systems from granitoids” (Dew et al., 2018).
134
An exploration of the effectiveness of artificial mini-magnetospheres as a potential solar storm shelter for long term human space missions
The World׳s space agencies are actively planning for human space missions beyond Low Earth Orbit , and the scientific benefits resulting from human exploration of the Moon, Mars and asteroids are likely to be considerable .However, the risk posed by radiation beyond Earth׳s magnetosphere is one of the greatest obstacles to long term human space exploration .Thus careful consideration must be given to radiation protection.The US National Research Council Committee on the Evaluation of Radiation Shielding for Space Exploration recently stated:“Materials used as shielding serve no purpose except to provide their atomic and nuclear constituents as targets to interact with the incident radiation projectiles, and so either remove them from the radiation stream to which individuals are exposed or change the particles׳ characteristics – energy, charge, and mass – in ways that reduce their damaging effects.,This paper outlines one possible way to achieve this, by radically reducing the numbers of particles reaching the spacecraft.The technology concerns the use of “Active” or electromagnetic shielding – far from a new idea – but one which, until now, has been analysed without considering some crucial factors, leading to an expectation of excessive power requirements.The missing component is a self-consistent analysis of the role played by the plasma environment of interplanetary space.Presented here are the answers to three questions:What difference is made by the fact that the interplanetary space environment contains a low density plasma of positive and negative charges, to how a potential artificial electromagnetic radiation shield would work on a manned spacecraft?,How differently does a plasma behave at the small scales of a spacecraft compared to, say, the magnetosphere barrier of a planet?,How does this change the task of balancing the cost and benefits of countermeasures for engineers designing an interplanetary or long duration manned mission?,Initiatives such as Earth–Moon–Mars Radiation Environment Module aim to provide frameworks to overcome the mission safety challenges from Solar Energetic Particles.1,Accurate prediction of space “storms” is only valuable however if the means exist to protect the spacecraft and its crew.In this paper we discuss the principles and optimisation of miniature magnetospheres.The upper panel in Fig. 1 shows a photograph of a mini-magnetosphere formed in the laboratory in an experiment based on the theory outlined here.Furthermore, these principles have now been borne out by observation and analysis of naturally occurring mini-magnetospheres on the Moon .The theory provides a self-consistent explanation for the manifestation of “Lunar swirls” .The lower panel in Fig. 
1 illustrates a mini-magnetosphere around a conceptual manned interplanetary spacecraft.In space the charged particles mostly originate from the sun.A magnetosphere is a particular type of “diamagnetic cavity” formed within the plasma of the solar wind.Plasma is a state of matter in which the diffuse conglomeration of approximately equal numbers of positive and negative charges are sufficiently hot that they do not recombine significantly to become neutral particles.Rather the charges remain in a dynamic state of quasi-neutrality, interacting, and self-organising in a fashion which depends upon the interaction of internal and external electromagnetic forces.These are the attributes to be exploited here as a means to protect vulnerable manned spacecraft/planetary bases.In interplanetary space the high energy component of the solar particles forms the “hazard” itself, in particular because of the high penetrating capability of energetic ions.These are the Solar Cosmic Rays.A smaller percentage of super energetic particles at GeV energies have been accelerated by exotic events such as super-novas.These form the Galactic Cosmic Ray component.Both high fluxes of SCR during storms and the long term exposure to GCR are a threat to astronaut health .Space plasmas are very diffuse indeed, with about 10 particles occupying the volume of the end of the average human thumb, and are considered ultra high vacuum by terrestrial standards.The mean-free-path between physical collisions between the particles is far longer than the system.This means the particles “collide” through their electrostatic charges and collective movements which are guided by, or result in, magnetic or electric fields.Because of the large dimensions of space, even a very low density is important.The electrostatic forces between two charges are 1039 times more intense than their gravitational attraction .A plasma is a rapidly responding conducting medium due to the free moving charges.It creates a magnetic field in opposition to an externally applied magnetic field, making it diamagnetic, and can result in local cavities.Diamagnetic cavities are a general phenomenon in plasmas, not only in space plasmas, and can be formed with or without magnetic fields .Magnetospheres are more generally associated with planetary magnetic fields interacting with the solar wind plasma .Miniature magnetospheres are fully formed magnetospheres, with collisionless shocks and diamagnetic cavities, but the whole structure is very much smaller, of the order of 110-100 s of km across.Mini-magnetospheres have been observed associated with the anomalous patches of surface magnetic field which exist on the Moon , Mars and Mercury , and also with asteroids such as Gaspra and Ida .It has also been demonstrated that mini-magnetospheres can form without the presence of magnetic fields.Examples include natural comets and artificial comets such as AMPTE .In these cases the term “magneto” can still be used because the currents induced in the sheath region include magnetic fields.Mini-magnetospheres are determined by the plasma physics of the very small scale which in general has been neglected in the analysis of the electromagnetic deflection as a means of spacecraft protection.The entire structures are smaller than the bending radius of an energetic ion about the magnetic field in a vacuum.Therefore this is not a conventional “magnetic shield”.Presented here is a “block diagram” of the characteristics and parameters needed to implement a mini-magnetosphere deflector 
shield for a manned space craft.The actual physics of the interaction is immensely complex and largely non-deterministic analytically due to non-linearities.Thus these are “rules of thumb”, intended only as a guide.A fully detailed analysis will require the use of complex plasma physics and simulation codes.Due to the resources needed this would best be conducted on a specific case for which as much verifiable data as possible is available.At the radius of the Earth׳s orbit the level of ultra-violet radiation from the Sun is sufficiently high that photo-ionisation results in almost all matter in free space being ionised.The medium of space is therefore a plasma, albeit of very low density.Solar eruptions consist of electromagnetic waves but also protons and electrons with a small percentage of higher mass ions.The radiation encountered in space is a composite of a small percentage of extremely high-energy galactic particles and a higher density but, much lower energy continuous outflow of particles from the sun, interspersed with intermittent, high density eruptions of very energetic particles originating from a variety of violent events on the sun.Events on or near the sun which result in shockwaves can accelerate ions and electrons to extremely high energies .An example showing the temporal and energy spectra of large Solar Energetic Particle events is shown in Fig. 3 .For a spacecraft in interplanetary space, such an event produces intense bursts of radiation of deeply penetrating particles capable of passing through the hull to the crew within.The result is a significant and dangerous increase in dose-rates above 0.05 Gy/h .The variable shape of the energy spectrum for each SEP is an extremely important factor for the total exposure calculation and not just the total fluence.For instance, protons with energies above 30 MeV can pass through space suits, while those above 70–100 MeV can pass through aluminium spacecraft hull walls of 5–10 g cm−2, with the added consequences of secondary particle radiation.The energy spectrum of some of the largest events of the last 50 years is shown in Fig. 3 .The vulnerability of different organs and systems varies considerably .Thus it becomes difficult to quantify the potential mission disruption caused by solar events based purely on predicted severity of the event.For long term interplanetary manned missions, protection against extremely large SEP which occur sporadically and with very little warning is a mission critical issue .The characteristics of the instantaneous plasma, particle distributions impacting the spacecraft, define how the plasma shield will function at any one instant.As will be seen below, although variable, the background “solar wind” plasma is what is used initially to create the barrier, it can be artificially augmented to increase the deflection of the hazardous high energy component of the particle spectrum.The principle of “Active Shielding” requires electromagnetic forces to balance the incoming pressure.An on-board “Mini-Mag” system would most likely consist of a superconducting coil .In a non-conductive medium, the magnetic field intensity of a dipole magnetic field diminishes rapidly with range.Higher-order structures, such as quadrapoles and octopoles, have fields which decrease more rapidly with radius from the coils which create them.The presence of the plasma changes the profile.This can be seen illustrated in Fig. 
5. The prohibitively high power estimates for a magnetic shield are based on the vacuum profile. The vacuum field power estimates do not allow for the alteration in the profile and the additional force illustrated in Fig. 5. The effect of the plasma environment is not just to extend the range of the magnetic field intensity. The effect of the magnetic "pile-up" comes with cross-field currents in a narrow barrier region some distance from the spacecraft. These currents and accompanying electric fields alter the way in which the incoming plasma is deflected. The efficiency of the shielding is therefore found to be much greater than the initial vacuum calculation would have predicted. Evidence that this is the case will be shown in Section 7. Quantifying the level of enhancement and the effectiveness at deflecting higher energy particles is non-trivial. In the following section we shall provide estimates which can be used to determine the value of an artificial mini-magnetosphere shield for astronaut protection. Fig. 1 shows a two-dimensional sketch of the morphology of a mini-magnetosphere surrounding a spacecraft. The size of the mini-magnetosphere depends upon two parameters. Firstly, rs, the "stagnation" or "stand-off" distance of the magnetopause, is where the pressure of the incoming plasma, Pin, is balanced by the combined pressure of the mini-magnetosphere. The second parameter is L, the width of the magnetopause boundary. Clearly, to be within the safety of a mini-magnetosphere diamagnetic cavity, one must be further away than the thickness of the boundary. In kinetic studies of mini-magnetospheres we find that L ≈ the electron skin depth. This balance occurs at a distance rs from the source of the magnetic field. In planetary magnetospheres rs would be the Chapman–Ferraro distance. The same calculation for an artificial source provides an estimate of the relationship between on-board power requirements and shield effectiveness. Interestingly, Eq. reveals that the largest possible stand-off distance is achieved with the largest possible coil radius. This is intuitively reasonable because the long-range field strength scales as B0a3/r3, so a small change in the radius a of the coil has a large effect, far more so than the peak field B0 in the centre of the coil. The classical skin depth describes the rapid decay of electromagnetic fields with depth inside a conductor, caused by eddy currents in the conductor. High frequencies and high conductivity shorten the skin depth, as does an increase in the number of current carriers. We can now introduce a geometric parameter α as a quasi-linear attenuation factor. This is to provide an indication of relative effectiveness. The electric field component comes from the formation of currents which are induced to exclude the interplanetary magnetic field and create the cavity. The electric field created is responsible for changing the energy and trajectory of the energetic particles. The geometry of this for our case is illustrated in Fig.
6.As mentioned in Section 5.4, in 3D the physics is such that the electric field will always point outwards from the spacecraft.This results in a 3D safe zone effective against both directional and omni-directional threats.Thus we must determine the effectiveness of the high energy scattering process.Because the electric field is formed self-consistently by the plasma itself, and the high energy particles are scattered by a lower electric field, the problem of generating a secondary population of ions accelerated towards the spacecraft by the deflector shield itself does not arise.Quantifying the shield performance for specific spectra of high energy particles requires a full 3D recreation using a computer simulation, or an experiment, either in space or in the laboratory.Fig. 7 shows a simulation of high energy scattering from a dipole magnetic field.The 3000 “SEP” incoming ions are 100,000× the energy of the environmental plasma within the box.The simulations show that 100% of the “SEP” particles were excluded from the “safe zone”.For “SEP” particles approximately a million times the background energy, 95% of the particles were excluded.This indicates that a narrow electric field is responsible for the deflection rather than the gradual bending due to a magnetic field.Additionally it indicates the high scattering efficiency of the high energy “SEP” ions by the sheath electric field formed by the background plasma.The effects of the very largest storms could be mitigated by adding further plasma density around the spacecraft, similar to creating an artificial cometary halo cloud.Increasing the density within the mini-magnetosphere reduces the thickness of the skin depth).This could be done either by reducing the power required from the space craft to achieve the same deflection efficiency, or by boosting the shield effectiveness during the most severe stages of an SEP or CME event.Practically this could be done by releasing easily ionised material from the spacecraft.EUV ionisation, charge exchange, and collisional ionisation lead to the generation of ions and electrons which are incorporated in the mini-magnetosphere barrier.The mass loading leads to enhancement in the currents.Exactly how much Xe would be needed on a mission would depend upon the frequency of use.Allowing for approximately 3 SEP events to be encountered by the spacecraft in an 18 month period, this would require less than half a kilo of Xe.It would also be necessary to sustain the enhancement for 2–6 h.The resources required would then depend upon the rate of plasma loss from the mini-magnetosphere.This is discussed in the next section.To function as a shield, sufficient density must be retained for long enough within the cavity barrier to ensure the cavity is not overwhelmed by an intense storm for the duration of peak fluence).The plasma parameter, β, defined as the ratio of the plasma pressure to the magnetic pressure, does not provide a useful guide in this instance because the profiles of plasma density and temperature vary on spacial scales below the ion gyration radius.Furthermore the parameter β does not allow for electric fields which we know are fundamental to the mini-magnetosphere barrier.Since an analytical approach is not available as a guide, we can take an observational example from comets , and in particular the AMPTE artificial comet .The data recorded by the spacecraft monitoring the active magnetospheric particle tracer explorers mission, provide us with a lower limit of retention in the absence of a 
magnetic field. A ~1 kg mass of barium exhibits an ionisation time of ~20 min for a volume of ~100 km3 . In the cometary case the particle pick-up means that the confinement structure is essentially open-ended and the matter is rapidly lost. The addition of a magnetic field would undoubtedly extend the plasma retention, but precisely to what extent, particularly at the scale size of a mini-magnetosphere, could only be determined experimentally in space. Having outlined the principles behind the mini-magnetosphere shield operation, and assembled some performance parameters, we can now compute some figures of merit. A conceptual deep space vehicle for human exploration described in included a mini-magnetosphere radiation shield. The purpose was to present a candidate vehicle concept to accomplish a potential manned near-Earth object asteroid exploration mission. The power, physical dimensions, magnetic field intensities and density augmentation capabilities used here will be those presented in . The experimental and observational evidence for the formation of mini-magnetospheres has been established in the laboratory using Solar Wind Plasma Tunnels and through spacecraft observations of natural mini-magnetospheres on the Moon . A photograph of a laboratory-scale mini-magnetosphere is shown in Fig. 9 . A vacuum or an MHD description of the laboratory experiment would have predicted that the plasma stream would not be deflected and would hit the magnet. The equations provided above can only give approximate values, as the complexity of the interaction is highly variable, with multiple parameters interdependent in both time and orientation. This is a typical description of a non-linear system. We know that mini-magnetospheres work because of the example on the Moon . We know that the same principles used here occur for both natural and artificial comets . Injection of additional cold plasma from the spacecraft, such as xenon or krypton gas, which can easily be ionised by UV radiation from the Sun, will significantly enhance the effectiveness of the shield. The concept of placing a plasma around a spacecraft may at first sound familiar to those looking at active shield systems . These, amongst others, have proposed various "Plasma Shield" schemes using flowing currents in plasmas around the spacecraft as a means to extend the magnetic field or as a source of electrons to counter the incoming protons. The difficulty with these schemes has been the omission of the role played by the environmental plasma, whose effect is to short-circuit, screen or disperse the mini-magnetosphere plasma. The scheme suggested here does not attempt to control the plasma entirely but instead seeks to confine it sufficiently to allow its own nature to achieve the aim. Regardless of whether some of the details contained herein can be improved or adapted, this paper has aimed to emphasise the importance of including the plasma environment when considering any means of active or electromagnetic shielding to protect spacecraft from ionising radiation. This paper has also aimed to demonstrate the importance of using the appropriate plasma physics dominant on the "human" rather than "celestial" scale. To estimate a realistic prediction of effectiveness we have sought to provide approximate expressions which are credible and not to underplay the complexity of the research needed. The analysis shown here is for a modestly powered mini-magnetosphere system which may function as a permanent means to increase the safe operating time for crew and systems in interplanetary
space, functioning in much the same way as does the Earth׳s magnetosphere.Such a shield could also be enhanced to deal with extreme storms, against which it may be the only means of providing effective protection.Proposals for electromagnetic shields generally come with highly optimistic predictions of effectiveness, yet no such prototype system has been tried in space, perhaps because of the lack of credibility of such claims.This paper has presented an indication of the true complexity involved in active shielding.Calculations which have assumed a vacuum are incorrect because in fact a plasma exists.The role of the plasma environment has either been overlooked completely, or it has been analysed on an inappropriate scale size.Much detail has yet to be determined.An active shield system may not be practical without on-board power systems comparable to those envisioned in science fiction, but the concept should not be dismissed on the basis of an incorrect analysis.An active deflector shield system could never replace passive shielding or biological advances, but it can offer options, particularly for EVAs, extending the longevity of hardware and preventing secondary activation of the ship׳s hull and systems.It seems the only credible theory for deflection of GeV particles.The evidence that mini-magnetospheres actually work on the bulk plasma in space comes from magnetic anomalies on the moon , around asteroids and comets, both natural and artificial .This, combined with laboratory experiments and simulations, suggests that the high energy distribution can be sufficiently effected to justify optimism.The value of being able to predict the occurrence of potentially lethal storms can only be fully realised if we can develop the means to provide safe shelter.
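The sizing argument above rests on two quantities: the stand-off distance rs at which the dipole magnetic pressure balances the solar wind ram pressure, and the magnetopause width L, which is of order the electron skin depth. The Python sketch below reproduces those two order-of-magnitude estimates under stated assumptions: the solar wind density and speed are typical quiet-time values near 1 AU, the coil field and radius are illustrative placeholders rather than the parameters of the conceptual vehicle cited in the text, and the balance uses the vacuum dipole profile that the paper argues underestimates the true shield range once the plasma response is included.

import numpy as np

# Physical constants (SI)
mu0 = 4e-7 * np.pi          # vacuum permeability
m_p = 1.673e-27             # proton mass, kg
m_e = 9.109e-31             # electron mass, kg
e = 1.602e-19               # elementary charge, C
eps0 = 8.854e-12            # vacuum permittivity
c = 2.998e8                 # speed of light, m/s

# Assumed solar wind conditions near 1 AU (typical quiet-time values)
n_sw = 5e6                  # number density, m^-3 (~5 cm^-3)
v_sw = 4.0e5                # bulk speed, m/s (~400 km/s)

# Illustrative on-board coil (placeholder values, not those of the cited vehicle)
B0 = 1.0                    # field at the coil, T
a = 2.0                     # coil radius, m

# Ram pressure of the incoming plasma
P_ram = n_sw * m_p * v_sw**2

# Stand-off distance from vacuum-dipole pressure balance:
# B0^2 a^6 / (2 mu0 r^6) = P_ram  =>  r_s = a * (B0^2 / (2 mu0 P_ram))^(1/6)
r_s = a * (B0**2 / (2.0 * mu0 * P_ram)) ** (1.0 / 6.0)

# Electron skin depth, the kinetic-scale width of the magnetopause barrier
omega_pe = np.sqrt(n_sw * e**2 / (eps0 * m_e))
skin_depth = c / omega_pe

print(f"ram pressure        : {P_ram:.2e} Pa")
print(f"stand-off r_s       : {r_s:.1f} m")
print(f"electron skin depth : {skin_depth/1e3:.1f} km")

With these assumed numbers the stand-off distance is of order a few hundred metres while the electron skin depth is kilometre-scale, which is why the barrier physics must be treated kinetically on the "human" scale rather than with the MHD approximations appropriate to planetary magnetospheres.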
If mankind is to explore the solar system beyond the confines of our Earth and Moon the problem of radiation protection must be addressed. Galactic cosmic rays and highly variable energetic solar particles are an ever-present hazard in interplanetary space. Electric and/or magnetic fields have been suggested as deflection shields in the past, but these treated space as an empty vacuum. In fact it is not empty. Space contains a plasma known as the solar wind; a constant flow of protons and electrons coming from the Sun. In this paper we explore the effectiveness of a "mini-magnetosphere" acting as a radiation protection shield. We explicitly include the plasma physics necessary to account for the solar wind and its induced effects. We show that, by capturing/containing this plasma, we enhance the effectiveness of the shield. Further evidence to support our conclusions can be obtained from studying naturally occurring "mini-magnetospheres" on the Moon. These magnetic anomalies (related to "lunar swirls") exhibit many of the effects seen in laboratory experiments and computer simulations. If shown to be feasible, this technology could become the gateway to manned exploration of interplanetary space.
135
Fisheries management responses to climate change in the Baltic Sea
With the introduction of the multi-annual management plan for cod stocks in the Baltic Sea, the European community aims to achieve stock levels that ensure the full reproductive capacity and the highest long term yields of cod.At the same time, EU fisheries management aims to achieve efficient fishing activities within an economically viable and competitive fisheries industry.One of the measures used to obtain these goals is to gradually adjust the allowed fishing mortality rate towards a specified sustainable target level.The current long term management plan for Baltic cod was established against the background of prevailing environmental and climatic conditions, but on-going climate change may alter the predicted effects of such management plans.The implications of climate change for economic as well as biological sustainability are still uncertain for fisheries managers and climate change may have implications for decisions regarding how to regulate fisheries in the future.In the eastern Baltic Sea, climate change is expected to affect the recruitment of cod as a result of declining salinity and oxygen levels and may result in a long term decline in the cod stock biomass.This will have an impact on the economic performance of fishermen who fish in the Baltic Sea.Therefore, if the goal is to maintain the economically important cod stock biomass at the current level or to maximise economic performance indicators for the fishing fleet, a readjustment of the current management plan may be needed.This paper presents an age-structured bio-economic model aimed at assessing the impact of climate change on the long term management of Baltic cod.Although the effect of climate change is expected to decrease the economic fleet performance as a result of reduced reproduction opportunities for cod, the expected economic loss may be reduced as a consequence of lower cod predation on sprat and herring, leading to higher production potential for these species.The model, which is outlined in Section “Method”, has been extended to include the effects of species interactions as well as climate change and is applied to the Baltic Sea fleets which target cod, sprat and herring.The long term economic and biological effects of three management scenarios are presented by simulating the long term dynamics of the fishing fleets.Cod is an economically important species in the northern hemisphere and the impacts of climate change on cod have therefore been the subject of several studies.The economic effect of climate change on the cod fisheries in the Barents Sea was estimated by Eide and included both a cooling effect on water temperature due to the weakening of the Gulf Stream and a direct warming effect due to a warmer climate.Eide concludes that the economic effect of climate change is insignificant compared to the economic effect of normal environmental fluctuations in the Barents Sea and compared to the economic impact of different management regimes.Another study found that the effect of climate change on the Norwegian cod fisheries of the Barents Sea would be an increase in stock abundance of about 100,000 tonnes per year, corresponding to more than one billion Norwegian Kroner per year.The potential cooling effect resulting from a weakening of the Gulf Stream is excluded in this study.A number of studies have been conducted into the impact of climate change on the ecosystems of the Baltic Sea.However, few have studied the economic effects of climate change in the Baltic Sea.Brandt and Kronbak investigated 
the stability of fishery agreements under climate change in the Baltic Sea using an age-structured bio-economic model.The authors used a Beverton Holt recruitment function with three different recruitment parameter values for cod, corresponding to low, medium and high impacts of climate change, to estimate the net present value over 50 years and concluded that climate change will lead to reduced reproduction, thereby reducing the likelihood of stable cooperative agreements.While the latter study was applied to one species, Nieminen et al. include cod, sprat and herring in an age-structured multi-species bio-economic model for the Baltic Sea, including interactions between different species.They assess different management scenarios and either use current fishing mortality rates or fishing mortalities that maximise the net present values, under “good” and “bad” environmental conditions respectively.The study shows large differences in net present values and fishing mortalities in the four management scenarios.The present paper also assesses the economic impacts of management scenarios for cod, sprat and herring, and includes species interaction as well.However, the present study differs from the study by Nieminen et al. in that it includes salinity predictions from a recent climate model for the Baltic Sea in the applied bio-economic model.Furthermore, the present model differs from the one used by Nieminen et al. in that it includes multiple fleet segments with detailed information regarding the cost structure, by including investment and disinvestment opportunities for the fleets and by including age-disaggregated prices.The increasing number of models that attempt to measure the main dynamics of marine ecosystems has also led to studies of the effect of a changing climate on these ecosystems and the resulting economic consequences for the fisheries.The effects of climatic change range from increasing sea surface temperatures and reduced ocean acidification to rising sea levels and varying frequencies and amplitudes of rainfall, storms and cyclones.In Northern Europe, the observed volume and intensity of precipitation increased during the period 1946–1999, which has increased runoff to water bodies and the risk of flooding.Runoff is expected to increase by 9–22% by the 2070s, which will increase the discharged volume of brackish water into the Baltic Sea.One of the consequences of this is reduced salinity concentrations in the Baltic Sea, which is also a result of a reduction in major saline water inflows from the North Sea through the Danish straits and the Belt Sea.Moreover, regional ocean models show that increasing sea surface temperatures of 2–3 °C are expected for the Baltic Sea by the end of the 21st Century, which is directly caused by air–sea interaction.The present study focuses on how climate change will affect cod recruitment and the resulting economic consequences for the fishing fleets.Therefore, the climatic effects on salinity levels are of special concern since they are found to be positively related to oxygen, which again is positively related to the success of cod recruitment.Because Baltic cod eggs are positively buoyant in saline waters and negatively buoyant in the bottom layers in fresh waters, periods with low salinity levels will mean that the cod eggs will sink to the deeper more oxygen poor layers.Therefore, reproduction of cod depends on major Baltic inflows of saline and oxygenated water from the North Sea mixing with the bottom layers, causing the eggs 
equilibrate at more oxygen rich layers of the water column of approximately 14.5 practical salinity units.However, the frequency of the MBI has been reducing since the 1980s and was, together with high fishing mortality rates, deemed a major reason for the decline in the cod stock during the 1980s and 1990s.Meier et al. simulated the effect of climate change on salinity concentration based on a global climate model and two greenhouse gas emission scenarios, which are based on assumptions regarding demographic, economic and technological developments of the world).These estimates are used in the present paper to simulate the economic consequences of climate change in the Baltic Sea.The method used to evaluate the effect of proposed climate change on the multi-annual management plan for cod is as follows.First, the biological and economic effects of the multi-annual management plan for cod are simulated in the bio-economic model that allow the age-disaggregated cod stock to predate on the age-disaggregated sprat and herring stocks.This is the baseline model, which is compared with a climate change model that, besides the inclusion of species interaction, also includes the effects of climate change on cod recruitment, where the salinity level is used as a proxy for environmental forcing on cod recruitment, cf. Section “The effect of climate change on the Baltic Sea cod”.Secondly, this climate change model, which is based on the multi-annual management plan, are compared with two alternative management scenarios that both are focused on keeping the cod stock at a sustainable level given the induced climate change and species interaction.1,The three management scenarios are:Scenario 1: Multiannual management plan for Cod,It follows the multiannual management plan for cod.Target fishing mortalities for sprat and herring is set according to ICES.Scenario 2: Cod preservation,The cod stock is kept above the initial SSB of 2008.This implies that the target fishing mortality rate of cod is varied such that the stock level of cod in the final year of the simulation is the same as the value in the start year.The economic performance consequences of protecting the stock in this way are assessed.Target fishing mortalities for sprat and herring are kept constant, as in scenario 1.Scenario 3: NPV maximisation,The target fishing mortalities of cod, sprat and herring are varied such that the total net present value over an infinite time period2 for the Baltic fleets are maximised.The cod spawning stock biomass is kept above a limit reference point of 90.000 tons.The analyses include 10 fleet segments that all have their main fishing ground in the eastern Baltic Sea and covered 93% of the total landed value of Atlantic cod, European sprat and Atlantic herring in the eastern Baltic Sea in 2011.The Fleet segments are mainly selected based on the length groups, but also the gear type and the fishing region are used to define the segments.To keep the analyses to a relatively small number of segments, the 8 EU countries that surround the Baltic Sea are divided into 4 regions: DEU/POL, DNK/SWE, LTU/LVA and EST/LVA, where each region is expected to have a similar fleet cost structure, but where differences in cost structure between the four regions may be significant due to historical, regulatory or social reasons.An overview of the main technical and economic characteristics of the 10 fleet segments is provided in Table 1.The fleet segments with the most sea days per year is the passive gears 0–12 m and they amount 
for 64% of the total sea days, but only 10% of the catch value.In comparison does the trawlers 24–40 m amount to 17% of the sea days, but 55% of the landings value.The biological characteristics of the three species included in the model include the catchable stock biomass in the first period of the simulation and the recruitment coefficients.CSB is defined as the catchable stock biomass that can be caught by fishermen, in this case the age groups 1–8 for herring and sprat and the age groups 2–8 for cod.Other characteristics include the Total Allowable Catch, which is dependent on both the CSB, the target F, the natural mortality and management rules regarding the annual magnitude of change, see Salz et al.Furthermore, the average landings prices are presented in Table 2.Age-class specific characteristics of the model include weight at age, maturity at age, natural mortality rate and the initial stock abundance of the species all of which are presented in Table 3.Salinity concentration from 1966–2009 was obtained from the Swedish Oceanographic Data Centre at the Swedish Meteorological and Hydrological Institute.The salinity measurements from station BY31 in the Landsort Deep were used as a proxy for the environmental conditions in the Baltic Sea by Heikinheimo.However, this method is criticised by Margonski et al. who points out that there is no record of substantial cod spawning in the Landsort Deep.Thus, the average values of the salinity concentration from station BY5, which are the most important cod spawning area in the Baltic Sea, are used in the present context as a proxy for the environmental conditions in the Baltic.The average salinity concentrations from April to August at depth 10, 20, 30, 40, 50, 60, 70 and 80 m were used in the stock-recruitment estimation of cod.This period corresponds to the spawning peak of cod, which varied substantially between the end of April to the end July during the period 1969–1996 and in July/August in the period 1992–2005.The development in spawning stock biomass, salinity concentration and cod recruitment during the period 1966–2010 is shown in Fig. 1.Since the cod recruits are measured at an age of 2, the time series of cod recruitment is lagged with 2 years in order to show the direct correlation between SSB, salinity and cod at the egg stage.It is here assumed that the same proportion of cod eggs survive to the age of 2 during the period.Overall, the development in spawning stock biomass is following the same trend as the cod recruitment.A visual investigation of the salinity data in Fig. 1 shows a cyclical development, where at least three periods show a clear downward sloping trend.Before these periods similar trends could be identified but not as profound.The average PSU for 1966–81 was 10.26, while it was 10.09 for 1983–2010.Although the relationship between salinity and recruitment is not obvious it could be argued that the few years with high salinity is not enough to secure recovery of the cod stock taking high fishing mortality rates into account.Furthermore, there is a distinct correlation that periods with very low salinity concentration decreases cod recruitment significantly, indicating the impact of oxygen depletion on cod recruitment.In periods with higher amount of salinity, the correlation between salinity and cod recruitment is less distinct.Nevertheless, the expected future decrease in the salinity level is predicted by Meier et al. 
to be 3.2 PSU and 3.4 PSU respectively at the end of the 21st century, using the two greenhouse gas emission scenarios A2 and B2 as input to the global climate model ECHAM4. The average of these two climate change estimates is used in the present context to simulate the economic consequences of climate change in the Baltic Sea, where it is assumed that the salinity content of the Baltic Sea declines gradually over the period, corresponding to 0.04 PSU per year. The multi-annual management plan for cod demands that the target fishing mortality rate for cod is 0.3. Moreover, the maximum change in cod TAC from one year to the next is restricted to 15%. There exist no multi-annual management plans for sprat or herring in the Baltic Sea. Instead, the target fishing mortalities estimated by ICES are used in the present context. Thus the multi-annual management plan for cod, also denoted the "EU cod plan", is used as a starting point when assessing the economic as well as the biological implications of changing the management regimes, while acknowledging that climate change is expected to change the recruitment viability of cod and that cod predates on sprat and herring. The effect of including climate change on the long term economic fleet performance is shown in Table 4. The present values of the economic performance indicators revenue, variable costs, gross cash flow, capital costs and profit all show the same trend, i.e. that the climate change model results in lower performance than the baseline model: the climate change model obtains the lowest fleet performance, while the baseline model obtains the highest fleet performance. This result is caused by the negative effect of climate change on the long term cod stock biomass. The negative economic effect of climate change is reduced by the relative increase of the sprat and herring stocks that is caused by lower cod predation on herring and sprat. As described in Section "The effect of climate change on the Baltic Sea cod", climate change affects the salinity concentration. Moreover, salinity is a good indicator for cod recruitment. The relationship between the expected development in salinity concentration and cod recruitment is shown in Fig.
3. As the salinity concentration declines, cod recruitment declines with it. The recruitment of cod varies at the beginning of the period because it is based on the most recent recruitment estimates. This reflects well the stochasticity that is expected in both the salinity concentration and the cod recruitment over the entire period. However, since the purpose of this paper is to describe the overall trend of the recruitment, stochasticity is not dealt with in the long term simulation. Given the above analysis, it is clear that climate change will alter the expected economic and biological outcomes of the multi-annual management plan for eastern Baltic cod to some extent. Therefore, it is important to discuss alternative management strategies that can maintain the cod stock within sustainable limits while ensuring an economic outcome which is as high as possible for the fishery. Table 5 shows the present value of profit for the Baltic fleet for three management scenarios, all evaluated based on both the baseline model and the climate change model. Scenario 1 is assumed to follow the management plan for cod and corresponds to the NPV of Table 4. Scenario 2 is based on cod preservation, where the cod stock is maintained above the 2013 level, and Scenario 3 maximises the NPV by adjusting the fishing mortality with the constraint that all stocks are above the minimum viable stock level. Under the baseline model, the simulated NPV of the fishing fleets obtained in the cod preservation scenario is approximately the same as under the cod management plan, as a result of a fairly stable development in the cod stock, while the NPV is increased by 13% in the NPV maximisation scenario by reducing the fishing mortality of cod and increasing the fishing mortality of herring and sprat. Under the climate change model, the NPV will increase by 4% relative to scenario 1, because the conservation strategy will mitigate the pressure on the cod stock. This means that increasing the cod stock under climate change by lowering the target fishing mortality for cod will have a positive influence on the economic performance of the Baltic Sea fleets. In the NPV maximisation scenario, the NPV is increased by 21% by reducing the fishing mortality of cod even further and by increasing the fishing mortality of herring and sprat. The effects of the NPV maximisation scenario through reallocation of fishing mortality will thereby be positive regardless of climate change, but the need for such a management plan will be significantly larger with climate change. The development in the revenue over the entire simulation period is shown in Fig.
Following the EU management plan for cod in the climate change model, the revenue of the entire eastern Baltic fleet is expected to decrease to €145 million in 2036 as a result of climate change, whereas the cod preservation scenario will lead to slightly higher revenue. Scenario 3 increases the generated revenue until 2026, after which it declines, partly because of climate change and partly because revenues and profits in the beginning of the period contribute more to the NPV than later ones, due to discounting. This results in an NPV of €472 million. In the baseline model, the development in revenue is relatively stable for Scenarios 1 and 2, while the revenue in scenario 3 decreases initially, after which it increases to €205 million in 2036. The fishing mortalities that lead to the above dynamics are shown in Table 6. The target fishing mortality rates used in scenario 1 are the same as those used in the multi-annual management plan for cod and by the Baltic Fisheries Assessment Working Group. Through optimisation, scenario 2 finds the target fishing mortality rate that preserves the cod stock; this is estimated to be 0.31 for the baseline model and 0.28 for the climate change model. The target F of cod that maximises the NPV declines to 0.21 for both models, compared to the 0.3 used in the EU cod plan, while the target F for sprat and herring increases to 0.37 and 0.46 for the baseline model and 0.33 and 0.38 for the climate change model, compared to the 0.29 and 0.22 used by ICES. It is worth noting that in scenario 3, where the model is allowed to estimate all F's, the target fishing mortality for herring and sprat is higher than the single-stock MSY-F for these stocks. Furthermore, these target F's are lower in the climate change model than in the baseline model. The reason for this is that the increase in stock sizes reduces the recruitment rate, as a Ricker stock-recruitment relationship is assumed. Hence, lower target fishing mortality rates are required to maximise NPV in the climate case than in the baseline case. It is clear from Table 6 that the target fishing mortality rate for cod must be reduced significantly in order to maximise the net present value of the fishing fleets. If the purpose is to preserve the cod stock, as in scenario 2, the fishing mortality rate of cod must also be reduced if climate change is taken into account. The stock development for cod, sprat and herring is given in Fig. 5 for scenarios 1–3.
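The Ricker assumption mentioned above is why a larger spawning stock produces fewer recruits per unit of biomass. A minimal sketch with purely illustrative parameters follows; the paper's estimated parameters, and the way salinity enters the cod recruitment function, are not reproduced here.

```python
import numpy as np

def ricker_recruitment(ssb, a=2.0, b=0.002):
    """Ricker stock-recruitment: R = a * SSB * exp(-b * SSB).

    a and b are illustrative placeholders, not the paper's estimates.
    """
    return a * ssb * np.exp(-b * ssb)

for ssb in (100.0, 300.0, 500.0, 1000.0):  # spawning stock biomass, kt
    rec = ricker_recruitment(ssb)
    print(f"SSB {ssb:6.0f} kt -> recruits {rec:7.1f}, recruits per unit SSB {rec / ssb:.2f}")
```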
Following the EU cod plan, the model predicts that the cod stock will decline from 2019 due to climate change, but it will still be above the limit reference point for cod in 2036. The cod stock development in the cod preservation scenario is slightly higher and reaches 296,000 tonnes in 2036. If the purpose is to maximise the long term profits of the fishing fleet, the stock biomass should be allowed to increase to 485,000 tonnes until 2026 by initially lowering the catches of cod and then slowly increasing them. The cod stock is then reduced to 436,000 tonnes in 2036 due to a combination of high catches and climate change effects. The sprat stock decreases rapidly in the beginning of the simulation period for all management scenarios, in order to reach the MSY biomass level, in accordance with ICES. Compared to the multi-annual cod plan, the sprat stock decreases slightly in both scenario 2 and scenario 3, because the larger cod stock in these scenarios induces higher predation on the sprat stock. The herring stock biomass is expected to increase slightly for both the cod management plan and the cod preservation scenario, whereas under the NPV maximisation scenario the herring stock, like the sprat stock, is expected to decrease due to increased cod predation. In this paper, the expected economic consequences of climate change in the Eastern Baltic Sea have been investigated with a dynamic age-structured bio-economic model that takes species interactions into account. Through three management scenarios, representing different management objectives, the paper analysed how a manager could react to these changes. The effects of the management scenarios are also shown for the baseline model, which does not include the effects of climate change. The management scenario that follows the multi-annual management plan for cod in the Eastern Baltic Sea shows that the climate predictions have a negative effect on the net present values of the fishing fleets. The cod stock is affected negatively by the predicted climate change, leading to lower biomass levels at the end of the simulation period. Preserving the cod stock, as done in management scenario 2 by lowering the target fishing mortality rate of cod, leads to higher net present values of the fishing fleets as well as higher cod stock levels compared to the baseline scenario. The last management scenario maximises the net present value of the fishing fleets by optimising the target fishing mortality rates, which results in a reduced target fishing mortality rate for cod and increased target fishing mortality rates for sprat and herring, compared to the baseline scenario. This scenario shows that if the purpose is to maximise the net present values of the fishermen, the current target fishing mortality rate of cod should be reduced and the current fishing mortalities of herring and sprat should be increased. This point is true irrespective of climate change, but the economic gains from such a management scenario are significantly higher in the climate change model than in the baseline model. Compared to the multi-annual cod management plan, the implementation of target fishing mortalities that maximise the long term net present value for the fishing fleets must be expected to favour the fleet segments which target cod, since the relative cod stock increases compared to the multi-annual management plan, which affects the relative stocks of sprat and herring negatively. Economic policy analyses usually do not consider economic
distributional effects among fishermen.However, the use of a numerical dynamic simulation model makes it possible to present results disaggregated into regions, fleet segments and fish stocks, which serves to enlighten the decision making process.In the present context, salinity is only included as a proxy for the environmental effects on cod recruitment, while no climate effects are included for sprat and herring.Sea surface temperatures and/or salinity levels could have been included as environmental drivers for sprat and herring, but it was decided not to as too many opposing environmental factors make it difficult to interpret the results.Therefore, this paper is restricted to the effect of salinity concentration on cod recruitment, while the effect of including other environmental variables is the subject of future work.The results presented in this paper are subject to uncertainties on different scales.On the larger scale, the results are subject to uncertainties in the two climate change scenarios that form the basis for the global climate models, predicting the future salinity concentration.The climate change scenarios are based on expected demographical, economic and technological developments and are uncertain.However, a global climate model from a recently published paper did not predict significant disparities in the estimated concentration of salinity in the Baltic Sea when two different climate change scenarios were considered.The average salinity concentration of these two scenarios has therefore been used in the present context, thus catching the noise of the study by Meier et al.The relationship between salinity and cod recruitment is widely acknowledged and has been used in several studies, but the magnitude of this relationship is uncertain and depends on the chosen time series.In this paper, the entire time period from 1966–2008 for cod and 1974–2008 for sprat and herring is chosen to take account of both good and poor environmental periods.Furthermore, uncertainties will also exist in the biological data used for stock estimations and in economic data used in the bio-economic model.The results are therefore indicative and further empirical studies are recommended to support the results.This study suggests that long term management plans should include the effects of climate change, if the aim is to secure the future long term economic performance of the fleets, while maintaining sustainable stocks in the eastern Baltic Sea.Furthermore, the study indicates that target fishing mortality rates should be lowered for cod and increased for sprat and herring in order to optimise the economic performance of the fleets, regardless of climate change.As such the study yields valuable information for the future management of the eastern Baltic Sea fishery, given that it is today an acknowledged fact that climate will change in the future, and thus affect the recruitment of, among others, the eastern Baltic cod.The study suggests that the currently accepted long-term management plan for cod may not be optimal, given the proposed climate changes, and suggests alternative management scenarios, thus also proposing a valuable tool for management assessments given climate change.
The long term management plan for cod in the eastern Baltic Sea was introduced in 2007 to ensure the full reproductive capacity of cod and an economically viable fishing industry. If these goals are to be fulfilled under changing environmental conditions, a readjustment of the current management plan may be needed. Therefore, this paper investigates the economic impacts of managing the cod, sprat and herring stocks in the eastern Baltic Sea, given ongoing climate change, which is known to affect cod recruitment negatively. It is shown that climate change may have severe biological and economic consequences under the current cod management plan and that the negative effects on the economic performance of the fishermen as well as on the abundance of cod can be mitigated by reducing the target fishing mortality rate of cod. These results are obtained by simulating three management scenarios in which the economic consequences of different management objectives for the fishing fleets are assessed through a dynamic multi-species and multi-fleet bio-economic assessment model that includes both species interactions and climate change.
136
The role of rock strength heterogeneities in complex hydraulic fracture formation – Numerical simulation approach for the comparison to the effects of brittleness –
Hydraulic fracturing has been recognized as a key technology to develop unconventional hydrocarbon reservoirs and enhanced geothermal systems.Since the geometrical complexity such as the branching of induced fractures in rock mass significantly influences the permeability for production, it is necessary to evaluate the efficiency of hydraulic fracturing, especially in terms of the formation of fracture network.The formation of the fracture network depends on various factors such as in-situ stress field, brittleness of rock mass, injection fluid viscosity and existence of natural fractures.Although many researchers have studied the effects of the factors to the fracture formation, the characterization of hydraulically induced fracture and the accompanying network still remains as an active research theme."The evaluation of fracture network for shales have shown the behavior of hydraulic fractures could rely on two rock physics parameters, i.e., Young modulus and Poisson's ratio in the laboratory experiments.Rock specimens exhibit their own heterogeneous property of strength, which could be observed as the locality in the occurrence of acoustic emissions or in the micro seismicity in laboratory experiments.Although it is well known that the strength heterogeneities of rock mass have a great influence on the creation of complex fractures, few studies have focused on the role of the strength heterogeneities of rock mass in the process on the creation of fracture network in hydraulic fracturing.It is, therefore, necessary to investigate the role of strength heterogeneities for various types of rocks possibly stimulated by hydraulic fracturing.We would like to focus on the role of strength heterogeneities in the creation of fracture network.The characterization of fracture formation has been attempted by the combination of numerical simulation and laboratory tests, and it has been shown that numerical methods could successfully take account of the heterogeneous effects and reflect some aspects of rock failure.Among numerous numerical simulations conducted to deal with failure behavior of rock mass so far, we would like to employ the discrete element method because of its capability of capturing discontinuous behavior.Since it is necessary to give appropriate accounts for the heterogeneity of DEM model in a quantitative manner, we would like to modify the existing models to simulate the heterogeneous feature of rock mass using random arrangement of particles of many different radii and random setting of microscopic strength properties.The relationship between the heterogeneity in DEM models and real rock have to be carefully investigated.In the present study, we focus on the strength heterogeneities of rock as one of the key factors in the creation of complex fracture network for accurate performance evaluation of the hydraulic fracturing.To accomplish this, we conduct a series of numerical experiments of hydraulic fracturing using the DEM.Before simulating hydraulic fracturing, we first investigate appropriate introduction of heterogeneous property of rock in a quantitative manner based on the relationship between AE counts and loading stress obtained by uni-axial tensile tests.We then conduct numerical simulations of hydraulic fracturing using our DEM models with different heterogeneous properties.Finally, we compare the complication mechanism due to the brittleness, which is often considered as the index of the complication.To investigate complex fracture network formation, we made use of an 
original extension of the DEM algorithm proposed by Shimizu et al., which had been successfully applied to numerical simulations of hydraulic fracturing. In this section, we give only a brief explanation of our method due to limitations of space; details can be found in the reference. The criteria implicitly include the effect of pore pressure acting on the particle surface as a component of the total normal and shear stresses between each pair of adjoining particles. When a bond breaks at a contact point of adjoining particles, a micro crack is generated between the particles. The microscopic parameters σc and τc are also calibrated using the method mentioned in section 2.2. To create the numerical models for the rock tests and hydraulic fracturing, a packing procedure is conducted based on Potyondy and Cundall and Shimizu et al. The model is expressed as an assembly of particles connected by bonds. The particle radii are set in a random manner and follow a uniform distribution bounded by a maximum radius Rmax and a minimum radius Rmin. To select appropriate microscopic model parameters, preliminary simulations of an unconfined compression test, a uniaxial tensile test, and a permeability test are conducted. Five microscopic parameters are adjusted to represent five macroscopic mechanical properties obtained from these rock test simulations. The five microscopic parameters are the Young's modulus of the bonds Ep, the stiffness ratio of the springs α, the tensile strength σc, the shear strength of the bonds τc, and the initial aperture w0; the five macroscopic parameters are the Young's modulus, the Poisson's ratio, the unconfined compression strength (UCS), the uniaxial tensile strength (UTS), and the permeability. The Young's modulus, the Poisson's ratio, the UCS, and the UTS are estimated from the stress-strain curves obtained in the numerical simulations of the unconfined compression test and the uniaxial tensile test. The permeability is obtained from the flow rates and the difference in fluid pressure applied at the top and bottom of a rectangular numerical specimen. Conventionally, the heterogeneities of a DEM model are introduced by random setting of the particle radii and bond strengths; a heterogeneous model has a large maximum-to-minimum radius ratio. Past studies have shown that the distribution of particle radii has a great influence on the failure behavior and the macroscopic parameters of numerical models such as the Young modulus, and a variety of particle radii could make hydraulically induced fractures geometrically complex. In addition, Potyondy and Cundall set the tensile and shear strengths of the bonds randomly to follow the normal distribution, thereby introducing heterogeneities into their DEM model. However, the influence of a variety of particle radii and of normally distributed bond strengths on hydraulic fracturing has not been revealed yet. To study the effect of strength heterogeneities on the fracture network induced by hydraulic fracturing, microscopic strength heterogeneities should be modeled in the DEM and be controlled with input parameters. Therefore, we evaluate the strength heterogeneities introduced by the distributions of particle radius and bond strength, and attempt to express appropriate strength heterogeneities in our DEM model. In addition to the normal distribution, the Weibull distribution is examined for the bond strength, as a microscopic strength of rock following the Weibull distribution is often used in other numerical studies. To investigate the strength heterogeneities of the DEM models, we use a method originally proposed by McClintock and Zaverl and
Takahashi et al.McClintock and co-workers showed that microscopic strength heterogeneities of rock followed the Weibull distribution.Note that the microscopic Weibull coefficient obtained from laboratory uniaxial tensile tests is different from the Weibull coefficient of shape parameter which is regarded as homogeneity index for macroscopic strength.In McClintock and Zaverl, it is shown that plots of loading stress and AE counts obtained in uniaxial tensile test on logarithmic aligns in a straight line expressed as Eq. mentioned below, and this tendency proves that microscopic strength heterogeneities of rock follow the Weibull distribution.Moreover, McClintock and Zaverl mentioned that microscopic strength heterogeneities could be evaluated with Weibull coefficient obtained as the slope of the line expressed as Eq.This theory was verified by laboratory experiments in Takahashi et al. and Sato and Hashida."In these papers, microscopic Weibull coefficients of various rocks are estimated in McClintock's approach.These results show that microscopic Weibull coefficients of them range from 0.9 to 11.4 and the mean value is about 4.1.In this section, the method of evaluating microscopic Weibull coefficient and application to DEM are shown."To adopt the McClintock's approach in DEM, we perform numerical simulations of uniaxial tensile tests.Fig. 1 shows an example of our numerical model used for uniaxial tensile tests.Size of the model is 10 cm in length and 5 cm in width.Macroscopic properties of Lac du Bonnet granite are used for calibration.Targeted and calibrated properties in our simulations are shown in Table 1.For calibration, we use one model whose Rmax/Rmin is 1.5 and the Weibull coefficient for bond strength distribution is 3.0 because the average value of the Weibull coefficient of granite investigated in Takahashi et al. and Sato and Hashida is about 3.8.These microscopic model parameters are applied to the following numerical experiments.In the uniaxial tensile tests, particles located both top and bottom edges of the model are moved upward and downward at a fixed velocity.The loading rate is fixed to 0.1 cm/s so that stable macroscopic property could be obtained from uniaxial tensile tests and the calculation time is optimized as short as possible.The validity of the loading rate is described in detail in Appendix.The axial loading stress applied is calculated by totaling axial stress acting on the edge particles.The microscopic Weibull coefficient m is obtained by a fitting of the curve in Eq. in the cross-plot of the integrated number of broken bonds against the loading stress.We evaluate the strength heterogeneities of the numerical models considering the microscopic Weibull coefficient and the error of least square approximation as in McClintock and Zaverl.Small error means that the microscopic strength distribution follows Weibull distribution like real rocks.Since the DEM results depend on packing state of particles, we perform numerical simulations using 10 realizations with different arrangement of particles for each heterogeneous model.To investigate the strength heterogeneities due to the radius distribution, we use seven heterogeneous models whose Rmax/Rmin values are 1.5, 2.0, 3.0, 4.0, 5.0, 7.5, and 10.0.The close-up views of each heterogeneous model are shown in Fig. 
3. The average radius of the particles is fixed to 0.05 cm in all heterogeneous models. We eliminate unexpected values by the Thompson rejection test to avoid misinterpretation of the results; the significance level in our study is set to 0.1%. The unexpected values of the Weibull coefficient are caused by a very small number of bond breakages before macroscopic failure, i.e., some models are too homogeneous to produce enough breaking bonds for the estimation of the Weibull coefficient. Fig. 4 shows the Weibull coefficient of each sample and the mean value of the Weibull coefficient in each radius group after the rejection test. As shown in Fig. 4, the mean value of the Weibull coefficient m decreases as Rmax/Rmin increases. This indicates that a large Rmax/Rmin could indirectly make the numerical models heterogeneous in strength, since the value of the Weibull coefficient becomes smaller as a material becomes more heterogeneous. The mean value of the Weibull coefficient, however, seems to converge around 12, while most real rocks have a Weibull coefficient of less than 10. Therefore, we think that the heterogeneity introduced by the radius distribution is not sufficient for representing that of real rocks, even with a high value of Rmax/Rmin. As a result of the numerical simulations of the normal distribution models, the microscopic Weibull coefficient decreases as the standard deviation increases. On the other hand, the errors of the least square estimation in the normal distribution models increase as the standard deviation increases. Fig. 6 and Fig. 7 show the Weibull coefficient of each distribution model and the relation between the error of the least square estimation and the normal distribution model, respectively. As shown in Fig. 6, the mean value of the Weibull coefficient of “Normal0.75”, “Normal0.85”, “Normal0.95”, and “Normal1.0” is less than 10 and small enough to simulate the strength heterogeneities of real rock. However, the error of the least square estimation rapidly increases from “Normal0.85” to “Normal1.0”, as shown in Fig. 7. This means that the number of breaking points does not vary linearly with the loading stress. In the case of “Normal0.95” and “Normal1.0”, a few bonds break at comparatively small loading stress, and then the number of broken bonds increases rapidly. This failure behavior is caused by the microscopic strength distribution of the models. Fig. 9 shows the probability density of the bond strength of each normal distribution model on a logarithmic scale. The tendency of increasing bond breakages of all normal distribution models shown in Fig. 8 corresponds closely to the probability density in Fig. 9. When the bond strengths follow the normal distribution, the bonds break in a bimodal way against the stress, and the curve of the AE number shows an inflection with increasing stress in Fig. 8, which does not appear in past experiments. The bond strength distribution therefore has a dominant influence on the microscopic strength heterogeneities, and the microscopic strength distribution of a normal distribution model remains similar to the normal distribution. The bond strength in each Weibull distribution model is set in a random manner to follow the Weibull distribution. We create five models, “Weibull1.5”, “Weibull3.0”, “Weibull5.0”, “Weibull10.0”, and “Weibull20.0”, whose shape parameters of the Weibull distribution are set to 1.5, 3.0, 5.0, 10.0, and 20.0, respectively. The bond strength distribution of each Weibull distribution model is shown in Fig. 10.
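The two ingredients of this comparison, sampling bond strengths from a Weibull distribution with a prescribed shape parameter and recovering a microscopic Weibull coefficient from the cumulative AE count versus loading stress, can be sketched as follows. The synthetic "AE" data, the scale parameter and the sample size are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Assign bond strengths following a Weibull distribution (cf. "Weibull3.0").
shape_in, scale = 3.0, 10.0          # scale (MPa) is an illustrative value
n_bonds = 20000
bond_strength = scale * rng.weibull(shape_in, size=n_bonds)

# 2) Mimic a uniaxial tensile test: as the applied stress rises, every bond
#    whose sampled strength has been exceeded is counted as one AE event.
stress = np.linspace(1.0, 5.0, 40)                    # below macroscopic failure
cum_ae = np.array([(bond_strength <= s).sum() for s in stress])

# 3) McClintock-type estimate: the slope of ln(cumulative AE) vs ln(stress)
#    gives the microscopic Weibull coefficient m.
mask = cum_ae > 0
m, _ = np.polyfit(np.log(stress[mask]), np.log(cum_ae[mask]), 1)
print(f"input shape parameter: {shape_in}, recovered m: {m:.2f}")
```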
In the case of the Weibull distribution models, the relationship between the input and output Weibull coefficients is linear, except for “Weibull20.0”. The input Weibull coefficient is the one set for the bond strength distribution, while the output Weibull coefficient is obtained from the McClintock approach. We could therefore directly control the microscopic strength heterogeneities of the DEM models by assigning the strength distribution of the bonds based on the Weibull distribution, except for the highest value of the Weibull coefficient, i.e. “Weibull20.0”. Since the output Weibull coefficient for models with an Rmax/Rmin of 10.0 is about 10, as shown in Fig. 4, the output Weibull coefficient of “Weibull20.0” is determined by the radius distribution rather than by the microscopic distribution of the bond strength. In addition, the variance of the output Weibull coefficient increases as the input Weibull coefficient increases. This tendency is also observed in laboratory experiments and in other numerical simulations. Moreover, the error of the least square estimation of the Weibull distribution models is smaller than that of the normal distribution models. These observations may imply that the Weibull distribution model reproduces the microscopic behavior of real rocks better than the normal distribution model. As shown in Fig. 13, the cross-plot of the number of bond breakages against the loading stress follows the straight-line fit of Eq. The tendency in increasing bond breakages corresponds to the distribution of the bond strength, as shown in Fig. 14. The microscopic strength distribution of the Weibull distribution models thus follows the Weibull distribution, as does that of real rock. Therefore, the bond strength of a DEM model could be chosen to follow the Weibull distribution to simulate both the microscopic strength heterogeneities and the failure behavior with the DEM models. We perform a series of numerical simulations of hydraulic fracturing using the DEM models with the quantitative strength heterogeneities mentioned in section 3. The size, shape and borehole setup of the model used in our hydraulic fracturing simulations are shown in Fig. 15. The ratio of maximum to minimum particle radius is 1.5 for the simulation models. Confining compressional stresses of 10 and 5 are applied to the longitudinal and lateral walls, respectively, before the numerical simulation of hydraulic fracturing, and these confining stresses are kept constant during the simulation. Initially, the numerical model is assumed dry and the saturation of every pore is set to 0. During the simulation, a fluid is injected at a fixed flow rate to accumulate the fluid pressure in the borehole. The accumulated pressure causes the normal force calculated with eq.
placed against the borehole wall.Water is assumed as the injected fluid with the water viscosity of 0.001 .The mechanical model parameters are calibrated as described in section 2.2 to imitate the Lac du Bonnet granite.The macroscopic parameters of the numerical model after the calibration are shown in Table 4.To reveal the influence of the strength heterogeneities on the results of hydraulic fracturing, we prepare 4 numerical models.Three models have different bond strength distributions from each other.Their bond strengths of the three models follow the Weibull distribution with the Weibull coefficients of 1.5, 3.0, and 5.0 that are called as “Weibull1.5”, “Weibull3.0”, and “Weibull5.0,” respectively.We set a constant to all the bond strengths to the other model, which is called “Fixed.,As a result of our numerical simulations, geometrically complex fractures are created in the heterogeneous models.The created fractures and the micro cracks induced by the hydraulic fracturing as well as the infiltrated fluid in each model are shown in Fig. 16.The black and gray parts of the lines in the figure indicate fractures or micro cracks created by tensile and shear breaks, respectively.These lines are drawn, assuming that fractures and micro cracks are a series of pores connected to broken bonds.The shaded areas indicate the saturated areas with the injected fluid.Fig. 17 shows the sequence of the pressure in the borehole in each numerical experiment.After the injections of the fluid into the borehole, the borehole pressure increases until a hydraulic fracture is created at the borehole wall to allow the fluid to flow into the fracture.The borehole pressure decreases when the number of the infiltrated pores increases.The number of branches, micro cracks, and the connection of the main fracture with the micro cracks are shown in Table 5.As the Weibull coefficient decreases, the order of heterogeneities increases.The more heterogeneous the numerical model become, the more branches and micro cracks induced fractures have, so that the geometrical complexity of induced fractures increases.The strength heterogeneities have, therefore, explicitly a great influence on the formation of geometrically complex fractures in hydraulic fracturing.The orientation of hydraulically induced fractures at the borehole wall lies exactly in that of the maximum principle stress in the “Fixed” model.According to the theoretical solution, the first crack induced by hydraulic fracturing is to be generated in the direction of maximum principle stress due to stress concentration.On the other hand, hydraulic fractures on the wall of borehole in laboratory and field tests do not always coincide with the direction of maximum principle stress because of the influence from the other factors such as the shape of borehole and the strength heterogeneities.The orientation of hydraulically induced fractures initiated on the borehole wall in our heterogeneous models, “Weibull1.5”, “Weibull3.0”, and “Weibull5.0,” did not correspond to that of maximum principal stress, either, due to the strength heterogeneities as are visible in Fig. 
16. In our numerical experiments, we observed two mechanisms that cause the formation of geometrically complex fractures in hydraulic fracturing due to the strength heterogeneities. One of the mechanisms is the scattered creation of micro cracks around the tips of fractures that are about to extend. In the most heterogeneous model, “Weibull1.5”, many micro cracks are generated simultaneously around both ends of the main fracture. When a hydraulic fracture propagates, tensile stress acts on the matrix around the tip of the fracture due to the fluid pressure. Since many weak bonds are included in the models with strength heterogeneities, microscopic failures can take place at scattered locations around the tip of the main fracture due to this stress buildup. These micro cracks and the main fracture interact, propagate and coalesce with each other, and many branches and curvatures are then formed along the main fracture. On the other hand, in the homogeneous model “Fixed”, few micro cracks are generated even when the tensile stress acts on the matrix around the tip of the main fracture as it propagates. Since there is no difference in the bond strengths in the homogeneous model, micro cracks are not created at multiple locations, and a newly induced fracture is instead created by the expansion of the main fracture due to the buildup of the injected fluid pressure. As a result, geometrically simple fractures with few branches and minimal curvature are formed in the homogeneous model. Even with homogeneous bond strength, the distribution of particle radii generates a heterogeneous stress field and results in a small number of branches and flexions in the propagation of the main fractures. The other mechanism of the geometrical complication is branching at pores that have several weak bonds in the surrounding particles. Some pores can be arranged with their long axes laid perpendicular to the maximum principal stress direction, as shown in Fig. 20. Since the shape of each pore is determined randomly in the process of particle arrangement, such “long” pores can exist in our numerical model. Fluid pressure acting in such pores induces tensile stress parallel to the main fracture. Due to this tensile stress, a new fracture is generated at one tip of such a pore where the bond strength is weak, followed by the formation of another fracture at the other tip of the pore when the failure criterion is satisfied. Both newly induced fractures propagate in different directions, and branching of the main induced fracture takes place. However, the branching may not always happen even at “long” pores, and the condition for branching depends on the order of heterogeneities of the numerical model.
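The micro-crack mechanism above reduces to a per-bond failure check: a bond fails in tension or shear once the local stress, including the pore-pressure contribution, exceeds the strength sampled for that bond. The fragment below is a schematic illustration of such a check, not the authors' implementation; the stress decomposition and the pore-pressure term are simplified placeholders.

```python
from dataclasses import dataclass

@dataclass
class Bond:
    sigma_c: float   # sampled tensile strength of this bond (e.g. Weibull-distributed)
    tau_c: float     # sampled shear strength of this bond
    broken: bool = False

def check_bond_failure(bond, sigma_n, tau, pore_pressure):
    """Schematic failure check for one bond.

    sigma_n: bond-normal stress from particle interaction (tension positive)
    tau: bond shear stress
    pore_pressure: fluid pressure acting on the particle surfaces, added here
                   to the effective tensile loading of the bond (simplified).
    Returns the type of micro crack generated, or None.
    """
    if bond.broken:
        return None
    if sigma_n + pore_pressure > bond.sigma_c:
        bond.broken = True
        return "tensile"          # counted as a tensile micro crack / AE event
    if abs(tau) > bond.tau_c:
        bond.broken = True
        return "shear"            # counted as a shear micro crack / AE event
    return None

# Example: under identical loading near the fracture tip, only the weak bond fails.
weak, strong = Bond(sigma_c=2.0, tau_c=6.0), Bond(sigma_c=9.0, tau_c=12.0)
print(check_bond_failure(weak, sigma_n=1.5, tau=0.5, pore_pressure=1.0))    # 'tensile'
print(check_bond_failure(strong, sigma_n=1.5, tau=0.5, pore_pressure=1.0))  # None
```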
Fig. 21 shows the results of hydraulic fracturing for the models “Weibull1.5”, “Weibull3.0”, and “Weibull5.0” applied to the same particle arrangement. The induced fractures and the infiltration of the injection fluid are displayed at the moment when the main fractures have propagated about half the lateral dimension of the model. The use of the same pore arrangement causes the main fractures of these models to propagate along the same path, and four long pores on the main fracture are filled with injection fluid in all cases. However, the difference in the bond strength distributions produces a difference in the branching. In “Weibull5.0”, no branching takes place at the long pores, while 2 and 3 branches are generated in the “Weibull3.0” and “Weibull1.5” models, respectively. The results of our experiments indicate that the necessary condition for branching is not only the existence of long pores but also sufficient heterogeneity in the bond strength to have several weak bonds around the long pores. The branching of hydraulically induced fractures is thus another cause of geometrical complexity in the fracture network, and it depends on the microscopic heterogeneities. To investigate the influence of brittleness on the mechanism of geometrical fracture complication, we use two numerical models with different Brittleness Indices, named BI20 and BI60. The numerical model for hydraulic fracturing is rectangular with a borehole at the center and has the same shape as the model used in section 4. The Brittleness Indices of BI20 and BI60 are around 20 and 60, respectively. The Brittleness Index of each numerical model is defined as Eq., and the Young modulus, the Poisson's ratio and the other macroscopic properties are shown in Table 6. These properties are the results of the preliminary numerical experiments of unconfined compression tests, uniaxial tensile tests, and permeability tests described in section 2.2. The macroscopic properties are set to simulate the mechanical properties of shale rock. The Poisson's ratio of the two numerical models is the same and only the Young modulus is different, because the Poisson's ratio has little influence on the creation of geometrically complex fractures under the same stress condition. Unlike the numerical models in section 4, all bonds have the same tensile and shear strength. We use a lower flow rate of 0.002 in this section because the strength of the numerical model is lower than in the previous sections. As a result of our numerical experiments, a geometrically complex fracture network with many branches is formed in BI60, while a simple fracture is created in BI20. Rickman et al. pointed out the possibility that the geometrical complexity of the fracture network increases with BI, which Hiyama et al. later observed to take place in numerical experiments. The fractures induced by hydraulic fracturing and the infiltrated injection fluid in each model are shown in Fig. 22, and the number of branches, micro cracks, and tensile and shear events on the main fractures in each model is shown in Table 7.
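The Brittleness Index equation itself is not reproduced in the text above; a commonly used elastic-property formulation in the style of Rickman et al. is sketched below as an assumption, not as the paper's Eq. The normalisation bounds and the example inputs are placeholders rather than values from Table 6.

```python
def brittleness_index(young_gpa, poisson,
                      e_min=7.0, e_max=55.0, nu_min=0.15, nu_max=0.40):
    """Rickman-style brittleness index in percent (assumed form, not the paper's Eq.).

    Young's modulus and Poisson's ratio are each normalised to 0-100 and averaged;
    the normalisation bounds are conventional placeholder values.
    """
    e_term = (young_gpa - e_min) / (e_max - e_min) * 100.0
    nu_term = (poisson - nu_max) / (nu_min - nu_max) * 100.0
    return 0.5 * (e_term + nu_term)

# Hypothetical inputs sharing one Poisson's ratio but differing in Young's modulus,
# chosen so the two cases land near BI = 20 and BI = 60 under this assumed form.
print(round(brittleness_index(young_gpa=16.6, poisson=0.35), 1))   # ~20
print(round(brittleness_index(young_gpa=55.0, poisson=0.35), 1))   # ~60
```

Under this assumed form, two models that share the same Poisson's ratio but differ only in Young's modulus separate cleanly in BI, which mirrors the BI20/BI60 setup described above.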
The difference in the number of shear AE events between the models BI20 and BI60 in Table 7 indicates that the shear failures contribute to the branching of fractures. These branches extend perpendicularly to the propagation direction of the main fracture. At some tips of the branches, new fractures are generated and extend in the maximum principal stress direction, and several fractures parallel to each other are formed. Some parallel fractures are connected again by shear failure occurring at the tips of the parallel fractures. In this way, a complex network is formed by shear failures in the numerical model with high brittleness. On the other hand, almost no shear event occurs, and a simple fracture without a branch is formed, in hydraulic fracturing with the low brittleness model. The results of our numerical experiments imply that the brittleness is an indicator of the number of induced shear failures and of the formation of geometrically complex fractures in hydraulic fracturing. The shear failure occurring in the numerical model with the large Young modulus is caused by the strong fluid pressure acting on the fracture surface. Fig. 23 and Fig. 24 show the fracture propagation and the fluid pressure transition in each model, and the borehole pressure transition. The fluid pressure in the borehole increases with fluid injection, and a micro crack is generated on the borehole wall at the same fluid pressure in BI20 and BI60. In the model BI20, the injection fluid starts flowing into the micro crack as well as into another crack generated on the opposite side of the borehole, and the fluid pressure in the borehole decreases. The injection fluid fills the fractures and tensile stress acts at the tip of the fracture on the left hand side. The tip of the main fracture opens due to the tensile stress and the fracture extends without branches; the fluid pressure acting on the fracture surface is not strong enough to create branches. On the other hand, in the model BI60, the injection fluid hardly flows into the cracks on the borehole wall, and the fluid pressure accumulates in the borehole. Since the high Young modulus causes the BI60 model to minimize the deformation of the matrix, the influx of the injected fluid into the main fracture is minimized. Fig.
25 shows the sequence of main fracture width against time in each model.The width of the main fracture in the model BI60 is kept narrower than that in the model BI20 even after cracks are generated on the borehole wall.The fluid pressure continues to increase after the cracks are generated on the borehole wall.High fluid pressure acts on the fracture surface in the model BI60 due to the fracture extension and the subsequent fluid infiltration into the fracture).Fluid pressure in the main fracture is much stronger than the compressional stress acting on the wall of numerical model.High compressional stress caused by the fluid pressure and by the confining stress induces high shear stress in the vicinity of the main fracture.Due to the high shear stress, shear failure would occur to create new fractures around the main fracture.The results of the numerical experiments suggests that the shear failure could be a cause of the formation of geometrically complex fracture network in the model BI60).Less shear failure in the model BI20 could explain the generation of a simple fracture path without branching).We conducted a series of numerical experiments to investigate the influence of the strength heterogeneities of rocks on the geometrical complication of the fracture network in hydraulic fracturing.We first examined what is the optimum way to represent the strength heterogeneities in our DEM model in a quantitative manner, and then conducted numerical simulations of hydraulic fracturing using models with different microscopic Weibull coefficients.Finally, we also investigated the geometrical complication of fracture network for different Brittleness indices.As a result, we obtained the following conclusions.Although the particle radius distribution has some influence on the heterogeneities of our DEM model, realistic Weibull coefficients could not be obtained.The Weibull distribution of the bond strengths is confirmed appropriate to represent the heterogeneous nature of rocks, and is used to simulate the strength heterogeneities with our DEM model.Geometrical complication of hydraulically induced fracture network could be observed in the heterogeneous model as the microscopic Weibull coefficient decreases.The branching of the main fracture to plural micro cracks attributes to the complication.Geometrical complication of hydraulically induced fracture network could also be observed as the Brittleness Index increases.The complication takes place due to shear failures in the vicinity of the main fracture and is due to a completely different mechanism from that in the heterogeneous model.These observations indicate that the geometrical complication of induced fracture network could be pre-estimated more accurately before hydraulic fracturing using the Weibull coefficient for microscopic strength distribution as an index of strength heterogeneities.
It is imperative to develop deep understanding of the behavior of hydro fracture propagation, in which numerous rock-physics factors could be involved, to manage or control fracturing to form a network of interconnected fluid pathways. Since it is known that the strength heterogeneities of rock mass are one of the factors, we conduct numerical experiments using the discrete element method to estimate how the formation of fractures are quantitatively influenced by the strength heterogeneities in the process of fracture network formation. We first justify the validity to introduce the heterogeneities in the numerical models under an assumption that the microscopic strengths of our numerical models conform to the Weibull distribution, and then the simulation of hydraulic fracturing is conducted. In our heterogeneous models, a complex fracture network growth is induced by branching of micro cracks around the tip of main fracture and pores depending on the uniformness of the surrounding stress field. It is revealed that the complication caused by the strength heterogeneities is quite different from that introduced by the brittleness, although the complication factor is thought directly related to the brittleness in practice in the field. Our results indicate that the effect of the strength heterogeneities of rock should be considered as a key factor of the complication of fracture networks, and the evaluation accuracy could be improved by taking into account the strength heterogeneities of rock.
137
Understanding chemistry-specific fuel differences at a constant RON in a boosted SI engine
The US Department of Energy Co-Optimization of Fuels and Engines initiative aims to foster the codevelopment of advanced fuels and engines for higher efficiency and lower emissions.A guiding principle of Co-Optima is the central fuel properties hypothesis, which states that fuel properties provide an indication of the performance and emissions of the fuel, regardless of the fuel’s chemical composition.CFPH is important because many of the fuel candidates being investigated in the Co-Optima initiative are bio-derived compounds with oxygen-containing functional groups not typically associated with commercial transportation fuels.The purpose of this investigation was to determine whether the fuel properties associated with knock resistance, namely the research octane number and the motor octane number, are consistent with CFPH.More than a century ago the complex abnormal combustion phenomenon in spark ignition engines known as “knock” was attributed to end-gas autoignition .However, barriers still exist to developing a fully detailed understanding of knock, which is problematic because it continues in higher efficiency SI engines.Recent downsizing and downspeeding trends with cars and trucks exacerbate the issue by driving engines toward higher power density and higher load duty cycles, where knock is more problematic.In the United States, gasoline is sold with an antiknock index rating that is the average of the RON and the MON.The RON test was originally introduced in 1928, but the MON test was not developed until 1932, motivated by a finding that real-world fuels were underperforming the certification results by 2.5–3.0 octane rating points .The MON test introduced a higher engine speed and a higher intake temperature.The MON test conditions were an effort to address the reality that the relative ranking of knock resistance among a set of fuels changes as the engine conditions change.As a measure of this, the concept of octane sensitivity, defined as the difference between RON and MON, was introduced.Leppard investigated the chemical origin of S, focusing on different chemical classes.For alkanes, there is a two-stage ignition process: low-temperature heat release followed by a negative temperature coefficient region wherein the reaction rate becomes inversely proportional to the temperature.Following these processes is a high-temperature heat release event.The chemical origins and dependencies of the two-stage ignition process were later elucidated in the development of chemical kinetic mechanisms for the primary reference fuels n-heptane and iso-octane , the paraffinic fuels that define the RON and MON scales.The PRF fuels exhibit similar knocking behavior in the RON and MON tests even though the intake manifold temperature increases significantly, thus the NTC behavior makes the fuels insensitive to changes in intake temperature.Because other paraffinic fuels also exhibit two-stage ignition behavior, low S is ubiquitous among paraffinic fuels.In contrast, Leppard showed that neither aromatics nor olefins exhibited the two-stage ignition behavior, and as a consequence, these fuels have high S. Recently, the chemical origins of the fuel S findings of Leppard have been studied by Westbrook et al. , illustrating that fuel S effects can be explained through local electron delocalization.Independent kinetic modeling studies performed by Yates et al. and Mehl et al. 
demonstrated that the RON and MON tests represented two different pressure-temperature trajectories.The MON trajectory had a higher temperature at a given pressure and consequently avoided the pressure-temperature conditions that resulted in LTHR before encountering the pressure-temperature conditions of the NTC region.In contrast, the RON trajectory had a lower temperature at a given pressure, leading to conditions that yield a much stronger LTHR event prior to entering the pressure-temperature conditions of the NTC region.Thus, the relative knock-resistance order of fuels can change depending on the pressure-temperature trajectory, and this ranking is largely dependent on S. Further, Yates et al. showed that carbureted engines operate at pressure-temperature trajectories between RON and MON, port fuel injection engines operate closer to the RON trajectory, and boosted direct injection engines can operate at pressure-temperature trajectories outside the bounds of RON and MON.While Yates et al. describe these engine technologies in terms of their fueling technologies, the movement toward the RON trajectory can be attributed to a variety of technologies that reduce the temperature of the charge at intake valve closing, with engine breathing technologies being particularly important.In this study we sought to determine whether OI adequately explained the knock behavior using a set of seven fuels under boosted conditions in an SI engine equipped with DI fueling.Three of the fuels investigated are bio-blendstock candidates that are potentially of interest to Co-Optima and represent unconventional fuel chemistries for SI engines.To assess fuel performance over a broad set of intake conditions, the engine was operated over a range of backpressure, exhaust gas recirculation, and intake manifold temperatures at the same nominal fueling rate.The experimental apparatus and much of the methodology used in this study have been previously reported .A 2.0 L GM Ecotec LNF engine equipped with the production side-mounted DI fueling system was used for this investigation.Engine geometry details are presented in Table 1.The engine was converted to a single-cylinder engine by disabling cylinders 1, 2, and 3.The combustion chamber geometry and camshaft profiles were unchanged from the stock configuration.The engine was operated using a laboratory fueling system with a pneumatically actuated positive displacement pump in conjunction with an electronic pressure regulator to provide fuel rail pressure.A constant fuel rail pressure of 100 bar was used throughout this study.A laboratory air handling system was constructed to allow for external EGR while using a pressurized facility and achieving a flipped pumping loop where the exhaust pressure is lower than the intake pressure.To accomplish this, pressurized and dried facility air having <5% relative humidity was metered to a venturi air pump using a mass air flow controller.The venturi created up to an 8 kPa pressure differential, with the vacuum side pulling EGR through an EGR cooler into the intake.A schematic of the air pump arrangement with the electromechanical backpressure and EGR valves is shown in Fig. 
1.The desired intake manifold temperature was achieved regardless of the EGR concentration using the combination of an EGR cooler and an electrical heater upstream of the intake surge tank.The EGR used in this study was not treated with an exhaust catalyst before being recirculated to the intake.EGR was measured using a nonintrusive method that utilizes pressure-compensated wideband oxygen sensors in both the intake and exhaust.A Drivven engine controller with the Combustion Analysis Toolkit package was used to control the engine and acquire crank angle ––resolved data; however, detailed analysis on reported data were performed using an in-house–developed code.For each condition tested, cylinder pressure, spark discharge, and camshaft position data were recorded at 0.2° CA resolution for 1000 sequentially fired cycles.Cylinder pressure was measured using a flush-mounted piezoelectric pressure transducer from Kistler, and camshaft position was recorded from the production hall-effect sensors.Fuel injection timing was started during the intake stroke and was held constant at 280° CA before firing top dead center.Spark timing was adjusted as needed to achieve the desired combustion phasing, and spark dwell was held constant with the stock ignition coil at 1.8 ms to maintain constant ignition energy.To prevent hot-spot runaway at high load, a spark plug two heat ranges colder than the production engine’s plug was used.The colder spark plug is not produced by or directly available from the engine or factory original spark plug manufacturer; it was identified through cross-referencing current production spark plugs with the same thread and reach as the factory original spark plug.A Denso Iridium Power spark plug was identified as being compatible and was used throughout the study.Engine fuel flow was measured with a coriolis-based fuel flow meter, and engine coolant temperature was maintained at 90 °C.Combustion analysis was performed using a LabVIEW-based routine, which was applied on a per-cycle basis.For each cycle, trapped mass was calculated using the cam position sensor feedback with in-cylinder pressure and by applying the method described by Yun and Mirsky with the measured polytropic expansion coefficient.Temperature at intake valve closing was then calculated using the heat capacity of the intake and trapped residual constituents, assuming the trapped residual temperature was equivalent to the measured exhaust temperature, with subsequent crank-angle resolved temperature solved through the ideal gas law.Apparent heat release was solved using the zero-phase-filtered in-cylinder pressure and the individual cycle trapped mass with the approach described by Chun and Heywood .Current analysis was based on apparent heat release on a cycle-by-cycle basis.A total of seven fuels were investigated as part of this work, with fuel properties given in Table 2.Three of the fuels, described as the “Co-Optima Core” fuels, were custom blended by Gage Products to produce a desired set of orthogonal fuel properties while varying the fuel composition.All three of these fuels have the same nominal RON while providing two nominal levels of S.The high S for the Co-Optima aromatic fuel is primarily attributable to a high concentration of aromatic compounds, while the high S for the E30 fuel is primarily attributable to a high concentration of ethanol, which also produces a high latent heat of vaporization.Thus, the chemical source of the high S for these fuels is different.The fourth fuel investigated was the 
Tier III certification fuel.This fuel is intended to be representative of regular-grade pump fuel in the United States and, as a result, contains 10 vol% ethanol and has lower RON and MON values than the Co-Optima Core fuels.The Tier III certification fuel was obtained from Haltermann under product number HF2021.The final three fuels contain fuel components that are of interest to Co-Optima: ethyl acetate, methyl butyrate, and anisole.The pure compound RON, MON, and S values for these compounds, and ethanol for reference, are given in Table 3.The ester compounds, EA and MB, are interesting because unlike ethanol, which has high RON and high S, these have high RON and low S, or as is the case with EA, negative S.According to Westbrook et al. , high S can be attributed to electron delocalization within a molecule effectively stabilizing the radical after hydrogen abstraction.Since EA and MB both have electron delocalization associated with the permanent dipole moments as part of the ester functionality, our expectation would be that these compounds would have a high S.It is interesting to note that the HoV of EA, MB, and anisole are all significantly lower than that of ethanol.These components were blended at a level of 25 mol% to produce a nominal RON of 98, as shown in Table 2.The EA, MB, and anisole were each blended at 25 mol% into five-component surrogate mixtures that contained 23 mol% toluene, 5 mol% 1-hexene, and 47 mol% saturates.The saturate composition was a mixture of n-heptane and iso-octane, with the ratio of these species varied to produce a RON of 98 for the blend, where the blending ratio to produce a RON of 98 was determined through trial-and-error.The detailed composition of all fuel blends is shown in Table 4.The EA, MB, anisole, and 1-hexene were obtained from Sigma-Aldrich.The toluene, n-heptane, and iso-octane were PRF grade and were obtained from Haltermann.In order to calculate the K factor for OI using the multi-variable linear regression method, Zhou et al. explain that the RON and MON values within the chosen fuel set have to be uncorrelated.Failure to meet this requirement causes the RON and MON to be nearly interchangeable with each other, and as a result causes the K factor to be erratic.To test this data set, Fig. 
2 shows that the correlation between RON and MON is poor, with a correlation coefficient of 0.266. Thus, the RON and MON values in this data set are sufficiently uncorrelated to allow the multi-variable linear regression analysis to determine the K factor. Throughout this experiment, the engine air flow rate was held constant at 900 g/min, the engine speed was held constant at 2000 rpm, and a stoichiometric air-to-fuel ratio was maintained. Combustion phasing sweeps were conducted at a total of eight engine operating conditions, as shown in Table 5. Spark timing authority was exercised to sweep combustion phasing from a very retarded midpoint combustion phasing to the knock-limited CA50 combustion phasing. Operating conditions 1–4 were conducted at a low intake manifold temperature, while operating conditions 5–8 replicated the same operating conditions at a higher intake manifold temperature. Operating conditions 1 and 5 were conducted without applying backpressure to the engine, while for operating conditions 2 and 6 the backpressure valve was actuated to increase the backpressure. For these conditions, the pressure differential between the intake and exhaust manifolds was 8 kPa, with the intake manifold at a higher pressure than the exhaust manifold. For operating conditions 3 and 7 the same pressure differential of 8 kPa was maintained, but 10% EGR by mass was included. Similarly, operating conditions 4 and 8 included 20% EGR by mass while maintaining the same pressure differential of 8 kPa. Because a stoichiometric air-fuel ratio and a constant air flow rate were maintained, the fuel energy delivered was not strictly held constant. However, as can be seen in Table 2, the lower heating value per kilogram of air changes only a small amount across the seven fuels investigated. Fig. 3 demonstrates that changes in engine load are primarily a function of combustion phasing, with very little fuel-to-fuel variability. Advanced combustion phasing produced a load of 18 bar indicated mean effective pressure (IMEP), and retarded phasing produced a load as low as 13 bar IMEP. OI, shown in Eq., is dependent on the fuel properties RON and S but is also dependent on K. K is a heuristic that allows the changing knock propensity at different engine conditions to be understood. By definition, K = 0 under RON conditions and K = 1 under MON conditions, but for all other operating conditions K is based on experimental data and does not have a fundamental underpinning. In his early work, Kalghatgi explained that K is derived empirically through a multivariable linear regression analysis of KLSA. Since then, a variety of methods have been introduced to determine K, including an empirical correlation dependent on the temperature at a compressive pressure of 15 bar, non-linear regression analyses, and direct comparisons with PRFs. Kalghatgi also explained that K is a descriptor of the trajectory of the compression process in the engine through the pressure-temperature domain. Pressure-temperature trajectories that are beyond RON have a negative K value, whereas pressure-temperature trajectories that are beyond MON have a K value that is greater than unity.
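The relationship can be made concrete with a small sketch using the standard Kalghatgi form OI = RON − K·S, with K obtained by regressing the knock-limited combustion phasing of several fuels against RON and S at one operating condition, in the spirit of the approach described above. The fuel property values and phasing data below are invented placeholders, not the measurements reported in this work.

```python
import numpy as np

def octane_index(ron, mon, k):
    """Standard Kalghatgi form: OI = RON - K * S, with S = RON - MON."""
    return ron - k * (ron - mon)

# Hypothetical fuel set: (RON, MON, knock-limited CA50 at one condition).
fuels = {                     # placeholder values, not the paper's fuels
    "A": (98.0, 90.0, 12.5),
    "B": (98.0, 96.5, 15.0),
    "C": (91.5, 84.0, 16.0),
    "D": (98.2, 88.0, 12.0),
    "E": (97.8, 93.0, 14.2),
}

ron = np.array([v[0] for v in fuels.values()])
sens = ron - np.array([v[1] for v in fuels.values()])
ca50 = np.array([v[2] for v in fuels.values()])

# Multivariable linear regression: CA50 = a*RON + b*S + c.  Then K = -b/a,
# because CA50 depends on the fuel only through OI = RON - K*S.
X = np.column_stack([ron, sens, np.ones_like(ron)])
a, b, c = np.linalg.lstsq(X, ca50, rcond=None)[0]
k = -b / a
print(f"K = {k:.2f}")
print({name: round(octane_index(r, m, k), 1) for name, (r, m, _) in fuels.items()})
```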
To develop expectations for the K factor, the pressure-temperature trajectories of the RON and MON tests are shown in Fig. 4 for a compression ratio of 9.2:1, redrawn from an earlier publication. The RON and MON lines represent only the compression process and end at TDC; thus the compression heating that occurs in the end gas after combustion begins is not considered. The beyond-RON region has a pressure-temperature trajectory that is at a lower temperature for a given pressure than the RON condition. To provide a reference in crank angle space, data markers are shown on both the RON and MON trajectories at several discrete points. Pressure-temperature trajectories are presented in Fig. 5 for both the low and the high intake temperature conditions, where the pressure is experimentally measured and the temperature is calculated as described previously. The pressure-temperature trajectories only include data up to TDC for engine cycles where ignition occurred after TDC. Additionally, the data shown in Fig. 5 are for E30, which was the only fuel not to exhibit pre-spark heat release at any operating condition. The role of pre-spark heat release in stoichiometric SI combustion was first presented in one of our previous publications and is discussed in detail in the combustion analysis section of this paper. Fig. 5 reveals that all operating conditions have pressure-temperature trajectories that are beyond RON in the pressure-temperature domain, where K < 0 is expected. As the engine operating conditions progress from low backpressure to the conditions with the higher backpressure and higher EGR, the compressive temperature and pressure increase. This is expected because each of these conditions increases the trapped mass in-cylinder and dilutes the fuel-air charge, causing the ratio of specific heats to increase. Thus, while each of these conditions has the same amount of air and fuel, at a constant intake manifold temperature the TDC pressure and temperature can increase by as much as 8 bar and 75 K. It is also worth noting that the higher TDC pressure and temperature are produced for the conditions with the higher intake manifold temperature. Subsection 3.1 established an expectation for K < 0 for all engine operating conditions investigated. However, the pressure-temperature trajectory changes significantly across the eight conditions; thus significant variation in K is expected while maintaining K < 0.
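The temperature along these trajectories is calculated "as described previously", and that description is not part of this excerpt. As a rough illustration of one common approach (not necessarily the authors' method), the bulk in-cylinder temperature can be estimated from the measured pressure, a slider-crank cylinder volume, and the trapped mass via the ideal gas law; the geometry, trapped mass, gas constant, and synthetic pressure trace below are placeholder assumptions.

```python
import numpy as np

# Illustrative sketch only: estimate the bulk in-cylinder temperature during compression
# from a measured pressure trace using the ideal gas law, T = p*V / (m_trapped * R_mix).
# Engine geometry, trapped mass, gas constant, and the synthetic pressure trace are all
# placeholder assumptions, not values from the study.

R_MIX = 287.0  # J/(kg K), assumed air-like specific gas constant for the unburned charge

def cylinder_volume(theta_deg, vd=0.55e-3, cr=9.2, conrod_ratio=3.5):
    """Slider-crank cylinder volume [m^3] versus crank angle [deg, 0 = TDC]."""
    theta = np.radians(theta_deg)
    vc = vd / (cr - 1.0)  # clearance volume
    r = conrod_ratio
    x = 1.0 - np.cos(theta) + r - np.sqrt(r**2 - np.sin(theta)**2)
    return vc + 0.5 * vd * x

def bulk_temperature(theta_deg, p_pa, m_trapped_kg):
    """Ideal-gas bulk temperature [K] from measured pressure and computed volume."""
    return p_pa * cylinder_volume(theta_deg) / (m_trapped_kg * R_MIX)

# Example: build a pressure-temperature trajectory from intake valve closing up to TDC
theta = np.linspace(-120.0, 0.0, 241)                                       # deg aTDC
p_meas = 1.0e5 * (cylinder_volume(-120.0) / cylinder_volume(theta))**1.32   # synthetic polytropic trace
t_bulk = bulk_temperature(theta, p_meas, m_trapped_kg=6.0e-4)
```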
Fig. 6 shows OI as a function of K for each of the seven fuels investigated for a range of −2 < K < 0, where OI is a proxy for the expected knock resistance of the fuels. The fuels with the highest OI are the aromatic and E30, which have nearly identical RON and S. The three bio-blendstock fuels have nominally the same RON but have lower S. As a result, these fuels are expected to provide the same knock resistance for K = 0 but reduced knock resistance as K decreases. The alkylate fuel also has the same nominal RON as the aromatic and E30 fuels but has very low S. As a result it is expected to provide even less knock resistance as K decreases. Finally, the Tier III regular grade fuel has a lower RON but a relatively high S. For K = 0, this fuel is expected to provide the worst knock resistance. However, as K decreases the knock resistance of the Tier III fuel improves relative to the alkylate fuel until a crossover point occurs at K = −1. For K > −1, the alkylate fuel is expected to provide more knock resistance, and for K < −1, the Tier III fuel is expected to provide better knock resistance. By its original definition, the K factor was determined using KLSA. However, KLSA is a proxy for combustion phasing, and further, significant changes in the spark timing are expected with the addition of EGR due to a rapid decrease in the laminar flame speed and a resulting increase in the duration of the flame kernel development process. As a result, the knock-limited CA50 combustion phasing was used in this investigation instead of KLSA. The knock-limited CA50 combustion phasing was defined as the phasing at which 10% of the engine cycles exhibit a peak-to-peak knock intensity > 1 bar. This is illustrated in Fig. 7 for engine operating condition 1. At late or retarded combustion phasing, no engine cycles show a significant amount of knock. As combustion phasing is advanced, the peak-to-peak KI increases rapidly, going from 0% of cycles up to 80% of cycles with KI > 1 bar in a window of 2–3° CA. Using linear interpolation, the knock-limited CA50 combustion phasing was calculated for each fuel and each condition. Using RON, MON, and the knock-limited CA50 combustion phasing, the K value for each engine operating condition was calculated using a multivariable linear regression analysis as described by Kalghatgi. The results from the multivariable linear regression analysis are shown in Fig. 8 for both the low intake manifold temperature and the high intake manifold temperature. As expected based on Fig. 5, the multivariable linear regression analysis confirmed K < 0 for all of the operating conditions, ranging from −0.13 to −1.28. As expected, higher OI does provide improved knock resistance, with the low OI fuels generally having a more retarded knock-limited CA50 combustion phasing. Fig. 6 illustrated that there was an expected crossover point between the Tier III fuel and the alkylate fuel at K = −1. That crossover was confirmed and can be observed in Fig. 8. For the low intake manifold temperature condition, the alkylate fuel has the lowest OI and the most retarded knock-limited CA50 phasing for both condition 1 and condition 2. However, this changes for conditions 3 and 4, where the Tier III fuel has the lowest OI and the most retarded knock-limited CA50 phasing. For the high intake manifold temperature, a similar transition occurs between conditions 5 and 6.
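The two calculations described above (knock-limited CA50 by linear interpolation of the knocking-cycle fraction, then K from a least-squares fit of knock-limited CA50 against RON and MON across the fuels) can be summarised with a short sketch. This is only an illustration of the general method; the numerical values are invented placeholders, not data from the study.

```python
import numpy as np

# Sketch of the procedure described above; all numbers are invented placeholders.

def knock_limited_ca50(ca50_sweep, knock_fraction, threshold=0.10):
    """CA50 at which the fraction of cycles with peak-to-peak KI > 1 bar crosses the threshold."""
    order = np.argsort(knock_fraction)                     # np.interp needs increasing x
    return np.interp(threshold,
                     np.asarray(knock_fraction)[order],
                     np.asarray(ca50_sweep)[order])

def k_factor(ron, mon, ca50_kl):
    """One operating condition: fit CA50_KL = a + b*RON + c*MON, then K = c/(b + c),
    which follows from OI = (1 - K)*RON + K*MON."""
    X = np.column_stack([np.ones_like(ron), ron, mon])
    (a, b, c), *_ = np.linalg.lstsq(X, ca50_kl, rcond=None)
    return c / (b + c)

# Placeholder fuel set (seven fuels), constructed so that K comes out near -0.5
ron     = np.array([98.0, 98.2, 97.8, 98.1, 97.9, 98.0, 91.0])
mon     = np.array([87.0, 90.5, 89.0, 88.5, 96.5, 88.0, 83.5])
ca50_kl = np.array([9.65, 9.80, 9.78, 9.71, 10.14, 9.70, 10.53])  # deg aTDC
print(f"K = {k_factor(ron, mon, ca50_kl):.2f}")
```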
Fig. 8 can also be used to assess whether individual fuels are over- or under-performing expectations. In each of the plots, fuel-specific data points on the trend line meet the expected performance based on OI. If a point is above the line, it has a later knock-limited CA50 phasing than expected based on the OI and is therefore underperforming expectations. Similarly, if the point is beneath the trend line, it has a more advanced knock-limited CA50 phasing than expected based on OI and is overperforming expectations. Based on this, there are two notable observations. The first is that the fuel containing anisole consistently overperforms the OI expectations at both the low intake manifold temperature and at the high intake manifold temperature. The second observation is that the aromatic fuel underperforms expectations based on OI for the high intake manifold temperature conditions. The correlation coefficient for the knock-limited CA50 combustion phasing as a function of OI is shown in Table 6. Additionally, R2 is also shown for the correlation of the knock-limited CA50 phasing to RON and AKI. OI provides the best agreement of the three metrics, with R2 > 0.75 for all but one of the operating conditions. RON produces R2 > 0.6 at half of the operating conditions. The conditions where RON shows reasonable agreement are the conditions for which the K value is approaching zero. Because OI and RON are numerically the same value when K = 0, it is not surprising that RON provides reasonable correlation for K approaching zero. However, at all conditions the R2 of OI is improved compared to the R2 of RON. Finally, AKI does not provide reasonable correlation coefficients for any of the engine operating conditions. While this finding is consistent with previous studies, it is worth pointing out that AKI, the fuel property metric used for gasoline sales in the United States, is not predictive of the knock propensity under these boosted operating conditions. The IMEP that corresponds to the knock-limited CA50 combustion phasing is shown for each operating condition in Fig. 9. The IMEP loss between the best fuel and worst fuel at a given condition is 1.3–2.7 bar, a difference which is attributable to the knock-limited CA50 differences. This is a 7–16% loss in engine load at a nominally constant fueling, demonstrating the significance that combustion phasing differences of this magnitude have on efficiency and performance. While OI is a superior metric for knock resistance relative to AKI or RON, it is problematic that OI cannot be determined based solely on fuel properties and instead requires experimental results at an engine operating condition so that K can be calculated through a multivariable linear regression analysis. However, because K is also intended to be a descriptor of the physical trajectory of the compression process in the pressure-temperature domain, by acting as a weighting factor between RON and MON, a better understanding of the physicality of K may enable an improved a priori estimation of K. Fig. 10 shows the same experimental pressure-temperature trajectories as Fig.
5, except that the K metric for each pressure-temperature trajectory is also shown.While the progression of the pressure-temperature trajectory is monotonic as backpressure is increased and as EGR is added, the progression of the K metric is not.For both the low and high intake manifold temperature conditions, the 10% EGR condition has the highest K value, with K decreasing again with 20% EGR.Further, the physical proximity of these trajectories is far from the RON line, where KRON = 0 by definition.Thus, while the beyond RON and MON framework is a useful tool for conceptualizing OI, the K metric obtained through the multivariable linear regression analysis does not provide a physical representation of the pressure-temperature trajectory in this domain.One reason that the trajectory in the pressure-temperature space is nonphysical is that the end point of the trajectory isn’t captured or accounted for.To illustrate this, Fig. 11 shows the pressure-temperature trajectory for the RON and MON tests at three different compression ratios redrawn from where the trajectories were calculated using adiabatic compression from intake valve closing conditions.The trajectory in the pressure-temperature domain is the same for all compression ratios, but the 13 : 1 compression ratio achieves much higher temperature and pressure at TDC than the 7.8:1 compression ratio condition, making the 13:1 compression ratio condition much more knock prone.Due to the changing temperature and pressure boundary conditions with the eight engine operating conditions in this study, a similar phenomenon is occurring.While the pressure-temperature trajectories of the operating conditions investigated varied significantly, so too did the distance they traveled on the trajectory.As a result of this, there is no universal correlation between the knock-limited combustion phasing and OI across the eight engine operating conditions, as is illustrated in Fig. 12 with R2 = 0.302.By including the TDC pressure and temperature for each operating condition, the trajectory as well as the end-state can be accounted for.This provides a significant improvement in the correlation between the predicted and actual knock-limited combustion phasing, as seen in Fig. 13.It is noteworthy that by using this method with the combined data set of all eight fuels, the correlation coefficient is nearly as good as any individual operating condition, as shown in Fig. 8.While prediction of the knock-limited combustion phasing can be improved by including an estimation of the TDC pressure and temperature, any OI framework is an oversimplification of the complex autoignition chemistry that leads to knock.To illustrate the complexities of the fuel-specific differences in the processes, this section focuses on combustion analysis from the perspective of engine operating conditions 1, 5, and 6.Fig. 
14 shows the combustion duration from spark timing to 5% heat release, which represents the early flame kernel development process.For operating condition 1, the spark–CA05 duration is a strong function of the CA50 combustion phasing.This is because when the spark timing is retarded past TDC, the turbulence field around the spark plug diminishes, prolonging the transition from laminar to turbulent combustion.However, no fuel-specific differences were observed for condition 1.In contrast, for condition 5, the alkylate fuel has a much shorter spark–CA05 duration than the other six fuels, which all exhibit similar behavior.Additionally, while the strong dependence of the spark–CA05 duration on CA50 combustion phasing remained for most fuels, the trend was diminished or absent for the alkylate fuel.Finally, for condition 6, the Tier III fuel, like the alkylate fuel, had a significantly shorter spark–CA05 duration than the other five fuels, which exhibited more fuel-specific separation.Fig. 15 shows the cylinder pressure and heat release rate for the same three engine operating conditions at the most retarded common combustion phasing.A late combustion phasing was selected to maximize the time available to observe the end gas chemistry before it was obscured by the heat release from the deflagration event.For simplicity, these plots show data from the Co-Optima core fuels and the Tier III fuel only, as the behavior of these fuels served to bracket the behavior of the whole data set.For condition 1, no substantial pre-spark heat release was observed for any of the fuels.For condition 5, only the alkylate fuel showed a substantial amount of pre-spark heat release.For condition 6, pre-spark heat release, at varying levels, was present for three different fuels.For the alkylate fuel, the pre-spark heat release progressed into a two-stage ignition event, with LTHR followed by an NTC region.The next most prominent pre-spark heat release was from the Tier III fuel, followed by the aromatic fuel.The pre-spark heat release phenomenon for stoichiometric SI combustion was described in an earlier publication , where the magnitude of the pre-spark heat release was altered by the intake temperature.The presence of pre-spark heat release is the root cause of the decreased spark–CA05 duration shown in Fig. 14.Pre-spark heat release occurs in the bulk-gas and doesn’t involve a propagating flame front.As a result of the bulk-gas pre-spark heat release, CA05 is achieved at an earlier stage of flame propagation.In the previous work , a higher S value decreased the pre-spark heat release propensity.In this study, the same general trend applies: the fuels with lower S values have the highest magnitude pre-spark heat release.However, both the aromatic and the E30 fuels have the same S value, yet this only leads to pre-spark heat release in the aromatic fuel.Thus, the fuel properties associated with the RON and MON tests are useful, but fuel-specific variations beyond these properties also play a role in the pre-spark heat release and knock tendency.When calculating a K value for a given operating condition, there is an assumption that all fuels share the same pressure-temperature trajectory during compression, prior to spark ignition, for a given speed-load operating condition.This is observed in a study by Kalghatgi et al. 
where a correlation between K and the compressive temperature at 15 bar is introduced. However, fuel-specific properties cause differences in the pressure-temperature trajectory. The pressure-temperature trajectories for the Co-Optima core fuels and the Tier III fuel are shown in Fig. 16 for operating conditions 1, 5, and 6. All pressure-temperature trajectories are plotted for the compression stroke and end at the ignition timing; note that the E30 trajectories are the same as those shown in Figs. 5 and 10. For condition 1, all of the fuels exhibit a “cooling hook” near TDC, caused by heat losses; the in-cylinder temperature reaches a local maximum prior to TDC and then decreases prior to spark timing. It is notable that while the alkylate, aromatic, and Tier III fuels all shared a near-common pressure-temperature trajectory, the in-cylinder temperature of the E30 fuel was 10–15 K lower. This is primarily an effect of charge cooling due to the high HoV of the E30 fuel and is consistent with the magnitude of temperature decrease expected with E30, but a reduced ratio of specific heats may also contribute. For condition 5 the E30 fuel again had a lower temperature by 10–15 K, and the E30, Tier III, and aromatic fuels again exhibited the “cooling hook” behavior before spark. However, instead of a “cooling hook,” the pre-spark heat release for the alkylate fuel caused heating of the mixture before spark. As a result, at the time of ignition the cylinder contents with the alkylate fuel were about 50 K hotter than the cylinder contents with the E30 fuel. For condition 6, the alkylate, aromatic, and Tier III fuels all exhibited pre-spark heat release, which caused either a net heating effect or a diminished “cooling hook.” The E30 fuel is the only fuel at condition 6 that did not exhibit pre-spark heat release and thus retained the “cooling hook” behavior. This combustion analysis section focused on heat release from the fuel-air mixture in the unburned zone prior to spark. However, the kinetic activities that lead to knock continue in the unburned zone through the flame propagation process. To accurately track the temperature of the unburned zone, the kinetics must be accurately modeled. The modeling effort is an ongoing activity and will be reported as part of a future study. The purpose of this study was to test the CFPH under boosted engine operating conditions in a stoichiometric SI engine. The CFPH states that a fuel's properties determine its performance in an engine regardless of the fuel's chemical composition. On one hand, it was found that OI was a much better predictor of knock resistance than either AKI or RON and therefore is a better fuel property to use for fuel rating. Within this context, the fuels with unconventional chemistry performed within the expected range based on their fuel properties, though it is worth mentioning that the anisole-containing fuel consistently overperformed based on its expected knock propensity. On the other hand, OI is a fuel metric that relies on an empirically derived constant, K, and requires knock-limited data at the same operating conditions with multiple fuels. From a knock-prediction standpoint, it is essential to ascribe some physical meaning to K to allow it to be calculated a priori. While K has been described as a weighting factor for the pressure-temperature trajectories from the RON and MON tests, this work demonstrated that K lacks physicality because it does not consider the end state in the pressure-temperature domain. If the end-of-compression states are considered, the knock propensity can be more accurately predicted over a wider range of conditions.
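The idea of accounting for the end-of-compression state can be illustrated with a simple pooled regression. The linear form below (knock-limited CA50 regressed on OI plus TDC pressure and temperature across all conditions) is an assumption chosen for illustration; the paper does not specify its exact model, and in practice the inputs would be the measured values.

```python
import numpy as np

# Illustrative only: pool all fuels and operating conditions, then regress the
# knock-limited CA50 on OI plus the TDC pressure and temperature of each condition.
# The linear form and variable names are assumptions for illustration.

def fit_knock_model(oi, p_tdc_bar, t_tdc_k, ca50_kl):
    X = np.column_stack([np.ones_like(oi), oi, p_tdc_bar, t_tdc_k])
    coeffs, *_ = np.linalg.lstsq(X, ca50_kl, rcond=None)
    pred = X @ coeffs
    ss_res = np.sum((ca50_kl - pred) ** 2)
    ss_tot = np.sum((ca50_kl - np.mean(ca50_kl)) ** 2)
    return coeffs, pred, 1.0 - ss_res / ss_tot   # coefficients, predictions, R^2
```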
Finally, this work also demonstrated the complexity of the autoignition processes leading to knock with the presence of pre-spark heat release. This bulk-gas heat release phenomenon will also occur in the end-gas after spark and similarly increase the temperature of the unburned zone by 50 K or more. Large differences in pre-spark heat release can occur in fuels with similar OI, for example, the aromatic fuel and E30. As a result, the complex chemistry leading to knock cannot readily be accounted for with metrics such as RON, AKI, or OI. In this investigation, the validity of the CFPH was investigated under boosted operating conditions in an SI engine. It was found that OI was a superior knock-resistance metric relative to either AKI or RON. It was also found that fuels of unconventional chemistry, namely ester and ether fuels, behaved within an expected range based on their fuel properties, supporting CFPH. Despite the high R2 associated with OI, though, knock-limited combustion phasing was found to vary as much as 4° CA from the expected behavior. This lack of agreement was possibly due to the fact that the pressure-temperature trajectories of the boosted operating conditions had a large variation in TDC pressure and temperature. Taking this into account led to an improved correlation of knock-limited CA50. In addition, it was found that there is a lack of physicality for the K factor that is used to calculate OI, thereby preventing this from being calculated a priori. One major barrier to the development of a fuel-based metric for knock propensity predictions is the complexity of the chemistry leading to knock, including the pre-spark heat release. This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy. The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan.
The goal of the US Department of Energy Co-Optimization of Fuels and Engines (Co-Optima) initiative is to accelerate the development of advanced fuels and engines for higher efficiency and lower emissions. A guiding principle of this initiative is the central fuel properties hypothesis (CFPH), which states that fuel properties provide an indication of a fuel's performance, regardless of its chemical composition. This is an important consideration for Co-Optima because many of the fuels under consideration are from bio-derived sources with chemical compositions that are unconventional relative to petroleum-derived gasoline or ethanol. In this study, we investigated a total of seven fuels in a spark ignition engine under boosted operating conditions to determine whether knock propensity is predicted by fuel antiknock metrics: antiknock index (AKI), research octane number (RON), and octane index (OI). Six of these fuels have a constant RON value but otherwise represent a wide range of fuel properties and chemistry. Consistent with previous studies, we found that OI was a much better predictor of knock propensity than either AKI or RON. However, we also found that there were significant fuel-specific deviations from the OI predictions. Combustion analysis provided insight that fuel kinetic complexities, including the presence of pre-spark heat release, likely limit the ability of standardized tests and metrics to accurately predict knocking tendency at all operating conditions. While limitations of OI were revealed in this study, we found that fuels with unconventional chemistry, in particular esters and ethers, behaved in accordance with CFPH as well as petroleum-derived fuels.
138
Current status of recycling of fibre reinforced polymers: Review of technologies, reuse and resulting properties
Fibre reinforced resins, thermosets as well as thermoplastics, are increasingly used to replace metals in a number of industrial, sporting and transport applications. One of the biggest challenges posed by fibre reinforced composites is their recycling. Environmental legislation is becoming more and more restrictive, and the environmental impact of these materials disposed of in landfills alone is accelerating the urgency to reach more industrial-scale solutions to the recycling of composites. Landfill is a relatively cheap disposal route but is the least preferred waste management option under the European Union's Waste Framework Directive, and opposition to it is expected to increase over the coming years; it is already forbidden in Germany, and other EU countries are expected to follow this route. Many different recycling techniques have been studied for the last two decades: mechanical processes, pyrolysis and other thermal processes, and solvolysis. Some of them, particularly pyrolysis, have even reached an industrial scale and are commercially exploited: for example, ELG Carbon Fibre Ltd. in the United Kingdom use pyrolysis, Adherent Technologies Inc. in the USA use a wet chemical breakdown of composite matrix resins to recover fibrous reinforcements and, in France, Innoveox propose a technology based on supercritical hydrolysis. Pyrolysis is the most widespread technology as it is a proven and heavily used process in the chemical industry. However, as the fibres degrade at high temperatures, solvolytic processes have attracted increasing interest, especially over the last decade. Supercritical fluids have received much attention because of their tuneable properties depending on operating conditions; however, the associated equipment can be very expensive due to the severity of the conditions. Recent investigations have considered less severe conditions, but at the expense of process time. Solvents and/or catalysts are used that can be toxic and difficult to dispose of or separate. The fibres can also be more damaged by the use of catalysts. If the objective of recycling is to recover fibres, this cannot be done to the detriment of environmental aspects. A complete evaluation must be carried out in order to compare the different technologies in terms of environmental impact, efficiency and commercial viability. It is also quite clear that the choice of the separation/recycling method depends on the material to recycle and on the reuse applications of the fibres in particular. The recovered products, mainly fibres and products from resin decomposition, are most often systematically characterised and show that they can be reused. In particular, recycled carbon fibres have been incorporated with success in a few trials. Recycled glass fibres can also be reused, mainly as fillers in thermoplastics or as short fibres in BMC. On the other hand, fractions containing products from resin degradation by solvolysis have received very little consideration. Pyrolysis products from the resin have been mainly considered as a source of energy to feed back into the process. Currently, solutions do exist to recycle composite materials. It can be seen in the literature that many different processes and methods have been applied and have shown the feasibility of recycling such materials, some of them being more commercially mature than others. However, industrial applications using recycled fibres or resins are still rare, partly because of a lack of confidence in the performance of rCFs, which are considered to be of lower quality than virgin carbon fibres, but also because rCFs are not completely controlled in terms of length, length distribution, surface quality or origin.
Furthermore, recycling of composites is globally not closed-loop in terms of resource efficiency, as recycled fibres cannot be reused in the same applications as their origin. For example, if they come from an aircraft structural part, they cannot be reused in a similar part. In light of this, relevant applications have to be developed and specific standards are required in order to manage those reclaimed fibres. Demonstrators have been manufactured with rCFs, showing potential new applications. The article presents a review of technologies that have been investigated for about 20 years to recycle composite materials, from industrially exploited processes to methods still at the research stage. The review covers applications for both reclaimed fibres and products recovered from degraded resins, as well as some economic and Life Cycle Assessment aspects. Recent articles have updated the state of the art of the recycling of composite materials, but they are orientated towards carbon fibre reinforced composites and near- and supercritical solvolysis, whereas the vast majority of composites being recycled concerns glass fibre reinforced composites. CFRCs show a high commercial value due to the presence of carbon fibres, which is probably the reason why they benefit from more attention. A review of the different existing methods applied to recycle composites is presented and discussed, and highlighted according to the materials to recycle and the potential reuse applications. This technique consists of grinding materials more finely after a first crushing or shredding step into smaller pieces; the latter is common to all the recycling techniques. Generally, different sizes of recyclates can be recovered and separated by sieving into resin-rich powders and fibres of various lengths that are still embedded in resin. Flakes of materials can also be found in the recyclates. Mechanical grinding has been applied more to glass fibre reinforced composites, in particular SMC and BMC, but work on CFRCs also exists. One of the first evaluations of the potential of this technique was published in 1995 by the Clean Washington Center, where Kevlar® aramid and carbon fibre reinforced scrap edge trimmings from moulded composites and cured prepreg composites were considered. The use of ground composite materials can have two purposes: filler or reinforcement. Their use as filler is not commercially viable because of the very low cost of virgin fillers such as calcium carbonate or silica; this is particularly pertinent if CFRCs are considered. The incorporation level of filler material is quite limited because of the deterioration in mechanical properties and increased processing problems at higher contents. CF reinforced polyether-ether-ketone resin was studied by Schinner et al.
.The authors observed that cutting mills gave more homogeneous fibre length distribution and longer fibres than hammer mills although the cutting blades wore faster.The ground materials were then successfully incorporated into a virgin PEEK resin and moulded by injection or press up to 50 wt.%.A direct reforming process without grinding was also successfully performed, however this was not applied to end-of-life materials.The fibrous fractions of ground thermoset materials are reported to be more difficult to reuse, in particular as reinforcement and, even with low reincorporation, the resulting mechanical properties are significantly impaired due to a poor bonding between the recyclates and the new resin.However Palmer et al. showed that when the fibrous fractions similar to virgin fibre bundles were incorporated at about 10 wt.% in DMC, the mixing time can affect positively the mechanical properties of the resulting materials.A longer mixing time of the paste with the recyclate enabled improved mechanical properties compared to a standard mixing time and furthermore properties comparable to the standard material could be achieved.This is explained by an improved interface between the recyclate and the new resin.Industrial applications of this technique actually exist largely among glass fibre reinforced composite manufacturers, like Mixt Composites Recyclables, a subsidiary of Plastic Omnium in France, Filon Products Ltd. in United Kingdom and a few others as shown in Table 1.There is no known grinding process exploited industrially to treat CFRCs.Other processes which allow a separation of the fibres and the matrix are preferred for carbon fibres as they can be recovered without contamination and potentially reused as reinforcement in new composites.Recently a novel grinding process was proposed by Roux et al. 
that used electrodynamic fragmentation to shred carbon fibre reinforced thermoplastic. In this method the material is placed in water between two electrodes, and a high voltage, between 50 and 200 kV, is then applied to fragment the material into smaller pieces. Thermal processes include pyrolysis, fluidised-bed pyrolysis and microwave-assisted pyrolysis. These techniques allow the recovery of fibres, and possibly fillers and inserts, but not always the recovery of valuable products from the resin. The resin is volatilised into lower-weight molecules and produces mainly gases such as carbon dioxide, hydrogen and methane, for example, and an oil fraction, but also char on the fibres. The processes operate between 450 °C and 700 °C depending on the resin. The lower temperatures are suited to polyester resins, whereas epoxides or thermoplastics, like PEEK for example, require higher temperatures. Even higher temperature processes are also possible but concern combustion with energy recovery in cement kilns, in which the composite waste is converted into energy and into raw material components for the cement. However, no more than 10% of the fuel input to a cement kiln could be substituted with polymer composites reinforced with glass fibres. Indeed, the presence of boron in E-glass fibres was found to affect the performance of the cement. However, Jean-Pierre Degré, Senior Vice President, Holcim Group Support, Sustainable Development – Alternative Resources, confirmed that co-processing of glass fibre reinforced composite waste as alternative fuel at Lagerdorf has no negative effect on the quality of the cement produced; however, the incorporation proportions are not given. According to the European Plastics Converters, the European Composites Industry Association and the European Recycling Service Company, “the European composites industry considers the cement kiln route to be the most sustainable solution for waste management of glass fibre reinforced thermoset parts”. However, at present this solution is not economical compared to landfill, where landfill is an option. The most studied thermal process is pyrolysis, performed in the absence or presence of oxygen, and even more recently in the presence of steam. The matrix degradation produces an oil, gases and solid products, including char. The fibres are contaminated by this char and require a post-treatment in a furnace at at least 450 °C to burn it off, for example for GFRC. This also leads to a higher degradation of the fibres. This process has been developed more to recycle carbon fibre reinforced matrices and has reached a commercially exploited industrial scale, as shown in Table 2. Glass fibres suffer from the high temperatures and their mechanical properties are decreased by at least 50%, especially as the minimal process temperature is 450 °C. Carbon fibres are less sensitive to temperature but they can be contaminated by a char-like substance remaining from the degradation of the resin, which prevents a good bond with a new resin. At 1300 °C this substance is completely removed and the fibres are perfectly clean with a highly activated surface, but their strength is significantly reduced. A small number of mechanical property values measured by different work groups were gathered by Pimenta and Pinho. Additional values were also found and are reported in Table 3. They show that the tensile strength can be reduced by up to 85%, but can also be unaffected by the treatment. The treatment conditions thus play a major role in the resulting fibre properties. A lower
reduction in tensile strength was observed when fibres were reclaimed from a composite than when they were heated in air on their own. Above 600 °C the tensile strength of the rCF was reduced by over 30%. The fibres also seem to have different sensitivities to pyrolysis conditions depending on their type. For example, Hexcel AS4 carbon fibres showed strong oxidation from 550 °C in oxygen, whereas Toho-Tenax high tenacity carbon fibres were not oxidised below 600 °C in air. Air is a weaker oxidant than neat oxygen. In oxidant conditions epoxy resins are more easily degraded than in inert conditions, and at temperatures in the range of 500–600 °C it is possible to completely remove resin residues. A compromise is thus necessary between the resulting mechanical properties and the amount of remaining resin residue. HTA fibres recovered after a first step at 550 °C in nitrogen for 2 h and a second step at 550 °C in oxidant conditions retained more than 95% of their tensile strength without resin residue on the surface. A pyrolysis temperature in the range of 500–550 °C therefore appears to be the upper limit of the process in order to maintain acceptable strength for carbon fibres, whereas glass fibres retain less than 50% of their mechanical properties even at the minimal temperature of 400 °C. This type of technique therefore appears to be better suited to recovering carbon fibres. Finally, in a real industrial process recycled carbon fibres are reclaimed from a diverse feedstock based on different types of carbon fibres with varying properties. They are mixed together during the separation process, so that it becomes hard to compare the properties of single fibres to those of virgin fibres. Fibres reclaimed from industrial processes ultimately present a distribution of properties. As explained by ELGCF, they blend their products in order to minimise property variation. They claim that the fibre properties are approximately 90% of those of virgin fibres following their pyrolysis process. It was reported that laboratory-scale or pilot-plant pyrolysis led to better results in terms of fibre surface quality and mechanical properties than those obtained with industrial-scale processes. The fluidised-bed process has been applied to the recycling of glass fibre reinforced composites and CFRCs. This pyrolysis-based process uses a bed, of silica sand for example, fluidised by hot air, so conditions are oxidant. It enables rapid heating of the materials and releases the fibres by attrition of the resin. As in classical pyrolysis, a small amount of oxygen is required to minimise char formation. A rotating sieve separator was implemented in Pickering's process to separate fibres from fillers of recycled GFRCs. The organic fraction of the resin was further degraded in a secondary combustion chamber at about 1000 °C, producing a clean flue gas. At 450 °C glass fibre tensile strength was reduced by 50%, while at 550 °C the reduction reached 80%. Carbon fibres show a lower strength degradation of about 25% when processed at 550 °C. Analysis of their surface showed only a small reduction in oxygen content, indicating that the fibres have good potential for bonding to a polymer matrix. The advantage of this process is that it can treat mixed and contaminated materials, with painted surfaces or foam cores in composites of sandwich construction or metal inserts. This process is therefore particularly suitable for end-of-life waste; however, it has not been widely applied to reclaim fibres, in particular carbon fibres. Furthermore, the
fluidised-bed process does not allow recovery of products from the resin apart from gases, whereas pyrolysis can enable the recovery of an oil containing potentially valuable products. Carbon fibres seem to be more damaged than with pyrolysis; however, the process has not been optimised. In addition to the high temperature, attrition by the fluidised sand might also damage the fibres. Microwave-assisted pyrolysis has also been considered over the last ten years to recover carbon and glass fibres, by Lester et al. at the University of Nottingham, by the American company Eltron Research, and more recently by Åkesson et al. at the University of Borås. The main advantage of microwaves is that the material is heated in its core, so that thermal transfer is very fast, enabling energy savings. Microwave-assisted pyrolysis heats composite wastes in an inert atmosphere, degrading the matrix into gases and oil. The first application of this heating method to recycle composites was studied by Lester et al. in 2004. They used quartz sand to suspend the samples made of carbon fibres and epoxy in a microwave cavity and glass wool to prevent solids leaving the cavity. Microwave treatment was also used by the American company Firebird Advanced Materials at the same time, but they stopped their activity due to a lack of investors in 2011. In Sweden, the project involving the University of Borås and Stena Metall Group used this method to recycle wind turbine blades made of glass fibres and a thermoset resin. Three kg of ground materials were pyrolysed in a 10 L reactor at 440 °C for 90 min, and an oil and glass fibres were recovered and analysed. A non-woven mat was then manufactured with the recycled fibres and used in a new material by alternating these new mats with virgin glass fibre mats. However, as the recovered fibres were coated with residual char, the adhesion with a new matrix was unsatisfactory, which led to poor mechanical properties when only recycled fibres were in the composites. The amount of recovered fibres has to be limited to 25 wt.% in order to obtain acceptable mechanical properties, but even then the properties were not as high as those of the equivalent material made of vGFs. The quality of the new mat also affected the mechanical properties of the overall composite, as the fibre length distribution was quite wide and the fibres were heterogeneously distributed in the mat. Solvolysis consists of a chemical treatment using a solvent to degrade the resin. This technique was first considered about 30 years ago and applied to unsaturated polyesters and SMCs, as UP is one of the most widely used thermoset resins, in particular in SMCs. Hydrolysis between 220 and 275 °C, either with or without added solvent or catalyst, was used by Kinstle et al.
in the 1980s to degrade UP into its monomers and a styrene–fumaric acid copolymer .Since then many different conditions and solvents were tried in order to recycle thermoplastics, thermosets and their fibre reinforced composites .Solvolysis offers a large number of possibilities thanks to a wide range of solvents, temperature, pressure and catalysts.Its advantage, compared to pyrolysis, is that lower temperatures are generally necessary to degrade the polymers, in particular UP and epoxides.However when supercritical conditions, of water for example, are reached, reactors can become expensive as they have to withstand high temperatures and pressures, as well as corrosion due to modified properties of the solvents .A reactive solvent, sometimes in mixture with a co-solvent or with a co-reactive solvent, diffuses into the composite and breaks specific bonds.It is therefore possible to recover monomers from the resin and to avoid the formation of char residues.Depending on the nature of the resin, more or less high temperatures and pressures are necessary to degrade the resin.Polyester resins are generally easier to solvolyse than epoxy resins and so require lower temperatures to be degraded.During the last decade this method has been more intensively used to recycle composites, in particular CFRP, as the recovery of carbon fibres has become a commercial interest.Among all the tested solvents, water appears as the most used, sometimes neat , and sometimes with a co-solvent .Often it is used with alkaline catalysts like sodium hydroxide or potassium hydroxide , but less often with acidic catalysts .Acidic catalysts were mainly used to degrade more resistant resins, for example PEEK, or to degrade epoxy resins at low temperatures.A few other solvents have also been used, mainly alcohols like methanol, ethanol, propanol, and acetone or even glycols, with or without additives/catalysts .Numerous lab-scale experiments have been carried out, but only a few studies have reached industrial or semi-industrial scale.ATI and more recently Innoveox have proposed to sell or licence their technology.In addition Panasonic Electric Works show a willingness to exploit their hydrolysis process to recycle 200 tons of GFRC manufacturing wastes annually .Depending on the amount of solvent and on temperature, the fluid can be vapour, liquid, biphasic or supercritical.When the fluid is in vapour phase or has a gas-like density in the supercritical fluid state, the process is more a thermal process than a solvolysis.Since 2000, supercritical conditions have gained more attention due to the tuneable solvent properties which significantly change from subcritical to supercritical conditions.Supercritical fluids show properties intermediate between liquid and gas phases.They have low viscosities, high mass transport coefficients, high diffusivities, and a pressure dependent solvent power .The involved chemistry is affected by these changing solvent properties.This offers the capability of controlling the solvent properties and reaction rates and selectivities through pressure manipulations .Water in particular has been considered because of its temperature and pressure dependent properties.Depending on the conditions, it can support ionic, polar non-ionic or free-radical reactions, so that it is said to be an adjustable solvent .Supercritical water has been mainly applied to CFRC in order to recover carbon fibres of good quality without paying much attention to the products of the resin degradation.This is because of 
the potential high commercial value of rCF; however, the intense hydrolysis conditions require specific and expensive reactors. Alternative solvents with lower critical temperatures and pressures have been considered, mainly ethanol, methanol, propanol and acetone, as well as additives or catalysts added to water in order to moderate the operating conditions. Supercritical alcohols or acetone, for example, actually require temperatures as high as those for pure water in order to achieve a sufficient elimination of resin from the carbon fibres; however, the pressure is much lower than with water. Only a catalyst and semi-continuous conditions enabled a significant reduction in the temperature. The catalyst appears to modify the reaction pathway. It furthermore appears that efficient temperature and pressure levels depend strongly on the type of epoxy resin and, more widely, on the type of resin. Indeed, Elghazzaoui observed that RTM6 epoxy resin requires supercritical conditions of water to be degraded, whereas subcritical conditions at about 350 °C were sufficient to degrade the 914 epoxy resin, both from Hexcel. A bisphenol-A epoxide cured with 1,2-cyclohexane dicarboxylic anhydride could be completely eliminated from carbon fibres in supercritical methanol at 350 °C, whereas LTM26EL epoxy resin from Cytec, which contains cresol and BPA cured with amine agents, requires about 450 °C to be eliminated. In light of this, it is hard to compare results obtained at different conditions when the resins are different. Regardless of epoxy resin type and conditions, phenol was the main degradation product. Other phenolic compounds were identified, such as isopropyl phenol and cresol depending on the conditions and the epoxy type, as well as anilines when the curing agent was an amine. Gases were also produced but were not always mentioned or analysed. Elghazzaoui identified hydrogen, oxygen, carbon dioxide, methane, ethane and propane in particular from the degradation of the 914 epoxy resin from Hexcel. Thermoplastic resins like PEEK appear to be harder to degrade due to their high thermal stability. It is generally necessary to reach at least their melting temperature, which is about 345 °C for PEEK. Until now, only supercritical water has been reported to be effective at degrading PEEK, producing mainly phenol as the degradation product. Otherwise, strongly acidic or alkaline conditions are necessary. For UP, lower temperatures and pressures are required, generally below 300 °C. Mainly water was used to degrade UP, as hydrolysis is the reverse reaction of esterification. Mixtures with alcohols and amines were also tried, or with ketones, as well as neat alcohols. The addition of a catalyst like sodium hydroxide did not significantly lower the required conditions and was extremely damaging for the glass fibres above 300 °C. Mixtures with alcohols and amines showed a significant effect on the process yield, which was explained by transesterification occurring in the presence of alcohols. In most cases only the resin degradation was studied; GF were almost never considered, except in one reference. This is due to the low commercial value of GF and to their fragility when they are exposed to thermal, acidic and alkaline conditions. At small scale, the reaction rate is not limited by diffusion; however, at larger scale, diffusion-limiting effects due to the scale up and to the selection of larger pieces of composites could be observed, as shown in Fig.
3.For this reason SCFs have been used as they behave like gases, showing low viscosities, high mass transport coefficient and diffusivity and a pressure dependent solvent power .It should also be noted that if the resin concentration is too high, the liquid medium becomes saturated and the reaction is slowed down .Semi-continuous conditions were then used by Pinero-Hernanz et al. to improve those aspects.This enhanced the diffusion processes; it may also avoid the deposit of resin residue on the fibres and the degradation of valuable products released from the resin.After solvolysis in a batch reactor the recovered fibres are coated with an organic residue that requires a long and expensive rinsing to remove .The characterisation of fibre surface after semi-continuous solvolysis in supercritical propanol showed that the oxygen content was much lower for rCF than for vCF, which led to lower interfacial shear strength .In oxidant conditions, it was possible to completely remove the resin and to leave no residue on the fibre surfaces, which then have an oxygen content higher than virgin fibres , however the tensile strength significantly decreased.LTP solvolysis is generally carried out below 200 °C and at atmospheric pressure.Catalysts and additives are necessary in order to degrade the resin as the temperature is very low, stirring can also be necessary .Acid medium has been mostly used in comparison to HTP where alkaline conditions have been tested most often.Some acid solutions are very strong and can be very dangerous in terms of safety.The only advantage of this method is that it offers a better control of the occurring reactions, and as the temperature is low, secondary reactions do not seem to occur.This enables a higher recovery of epoxy monomers, but not necessarily molecules of curing agent.As shown in Table 5, the effects of LTP solvolysis on fibre properties are comparable to the ones observed at HTP, due to the use of strong acid or oxidant conditions.Finally, LTP methods use solutions that can be difficult to dispose of or to recycle.Pyrolysis and solvolysis are the most preferred techniques to recycle composites, in particular CFRCs, both having the objective to reclaim fibres.Both techniques also require a first step of shredding or crushing into smaller pieces because of the size of some industrial parts in relation to the size of reactors.They both have proven to enable the recovery of carbon fibres largely maintaining their reinforcement capability, whereas glass fibres are quite damaged.Compared to pyrolysis, solvolysis is able to avoid the formation of char that contaminates the fibre surface and prevents good interaction between rCFs and a new matrix .Semi-continuous conditions seem to be necessary to reclaim clean fibres without using oxidant conditions, otherwise a post-rinsing step is required to remove the organic residue that deposits during the cooling phase in batch reactors.Fibres recovered by solvolysis can be cleaner than those recovered by pyrolysis but that depends on the conditions employed.When the fibres are woven in the composite material, it seems harder to remove the resin in particular between two intersecting tows by pyrolysis .It would be pertinent to compare the efficiency of a solvolysis process to that of pyrolysis for the removal of residual resin as a result of improved mass transfer.The purity of recycled fibre surface has an effect on its capability to adhere to a new resin.When resin residues remain at the surface after solvolysis or 
pyrolysis, the single fibre tensile strength seems to be improved. However, when reincorporated into a new composite, this residue hinders good interaction with the new matrix, leading to poorer mechanical properties. A study realised at North Carolina State University compared rCFs recovered from end-of-life F18 stabiliser components by Milled Carbon's pyrolysis process and by ATI's process. The analysis of the fibre surface showed that fibres recovered from pyrolysis and from ATI's process are comparable to each other in terms of oxygen content. However, fibres recovered after ATI's LTP process showed that some catalyst molecules remained on the fibre surface, resulting in poor resin-to-fibre adhesion and consequently in poor mechanical properties when incorporated in an injection moulded polycarbonate resin. ELGCF improved its pyrolysis process and produced rCFs clean enough to enable good adhesion with a new matrix, as can be seen in Fig. 4. Tables 3 and 4 gather the most relevant results of single fibre tensile tests realised with rCFs and rGFs. It can be seen that solvolysis has a smaller effect than pyrolysis on the mechanical properties of both CFs and GFs. However, in strong solvolysis conditions CFs can be seriously damaged as well. GFs are much affected by the treatment temperature in particular, but it appears possible to find some solvolysis conditions in which their degradation can be minimised, even if they are still significantly damaged compared to virgin fibres. The global tendency for both glass and carbon fibres is that the lower the temperature, the lower their degradation. It has been observed that the chemistry involved in the solvolysis of resins has not been studied in depth in the literature. And yet a successful application of solvolysis at HTP with water or other solvents as reaction media requires the right combination of the chemistry involved in the resin degradation with that of the solvent properties provided at HTP. Indeed, Elghazzaoui observed that the epoxy resin of Hexply 914 cured prepreg was more easily degraded in subcritical conditions than in supercritical conditions of water, with the latter conditions leaving the recovered fibres coated with resin residues and glued to one another. The degradation pathway of UPs in subcritical HTP hydrolysis at about 275 °C has been shown to be an Aac2 mechanism. Under these conditions the ionic product reaches its maximum and the reaction mechanism is similar to that for model molecules or for simple esters at ambient temperature. The main secondary reactions were also identified: decarboxylation of the produced dicarboxylic acids and dehydration of glycols, which were also acid-catalysed. Fully hydrocarbon compounds are generally resistant to HTP hydrolysis. Compounds particularly susceptible to HTP hydrolysis are those containing a saturated carbon atom attached to a heteroatom-containing functional group. The hydrolysis mechanisms of epoxy or PEEK resins at HTP have not been much investigated; however, we think that C–O, C–N, or O–S–O bonds, which are heteroatom-containing bonds in epoxy resins, are hydrolysable. The amino-alcohol group as well as the SO2 group are the main hydrophilic sites in these epoxy-based materials. The chemical structure of PEEK resin, shown in Fig.
6, presents CO bonds that are hydrolysable but the steric hindrance is such that they are actually harder to break, explaining why a higher solvolysis temperature is required.However when the solvent is not water, no mechanisms have been proposed.Solvolysis at HTP is consistent with accelerated ageing, in particular when the solvent is water.Work on hydrothermal ageing has shown that water diffusion in a resin and in a composite material and the subsequent resin degradation are influenced by interactions of water molecules with the resin’s hydrophilic sites and also with the fibre–matrix interface.Water can build relatively strong hydrogen bonds with polar groups in the resin network, in particular between groups in close proximity to form a complex with a water molecule.In all inventoried studies about hydrolysis, the water was always de-ionised or distilled but at industrial scale the amount of solvent would be greater so mains water might preferably be used.Laboratory-scale experiments must therefore use mains water to confirm the industrial scale operation.Post-treatment of recovered organic fractions has also received very little attention in the inventoried studies, in terms of recovery of valuable products or just recycling as they may contain toxic substances.If the objective is to recover only fibres, and to dispose of the organic fractions in energy recovery, then pyrolysis would appear as the more economical technique.We have also observed that no mixtures of resin type or composites have been studied in solvolysis, for example epoxide with polyester and/or thermoplastics, depending on the reinforcement.It might be interesting to investigate if a synergistic effect could arise from the degradation of resin mixtures, as well as to consider resin mixtures reinforced with different types of fibres.A mixture of glass and carbon fibres for example might be interesting in certain applications, where a replacement of vGFs by rCFs was tried.Finally, all the existing techniques to recycle composites have advantages and drawbacks.All the inventoried work showed that it was possible to obtain grades of differing quality of fibres depending on the conditions.However a trend emerges: mechanical grinding is more suited for GFRCs and pyrolysis or solvolysis for CFRCs, when the damage to GFs by thermo-chemical processes and the potential commercial value of rCFs are considered.According to ATI , pyrolysis, HTP and LTP solvolysis have decisive drawbacks; however combined together they can produce rCFs of optimum quality.For this reason they have developed a three step process, which included a thermal pre-treatment followed by two solvolysis steps, first at LTP and then at HTP if resin residues remain on the fibres.Products from resins are mainly recovered from thermolysis and solvolysis.Thermolysis produces gases and oil from resin degradation.Pyrolysis of glass fibre reinforced SMCs, that initially contain only about 25 wt.% resin, produced about 75 wt.% solid residue, 14 wt.% oil and the remainder as gases .The oil recovered from the degradation of polyester resin contains monomers that could be reused in a new resin , providing a cost-effective recovery method is used.However material recovery from the polymer was judged not economically viable .This condensable product is a complex mixture of numerous different compounds, mainly aromatics due to the presence of styrene and phthalic acid in the resin structure, with a broad spread of boiling points and a high content of oxygen.It might be 
difficult therefore to separate them according to their boiling temperature.Carboxylic acids like phthalic or isophthalic acids, which are main monomers of polyesters, could be easily separated during condensation as they are solid crystalline products .Furthermore, not all the products are valuable, and the amount of valuable products should be present in sufficient amount to justify their separation.The remaining solution of non-valuable products would still require disposal.The energy recovery therefore seems to be the most suitable solution up to now to reuse the liquid fraction.The liquid fraction has a gross calorific value estimated around 32–37 MJ/kg, which is low compared to conventional hydrocarbon liquid fuels and is due to the high oxygen content .Once the lighter components have been distilled, it could be reused as a fraction for blending with petrol .Without post-treatment refining the composition of this oil led to a closed-cup flashpoint below the limits specified by both UK and US health and safety legislation.Users would have therefore to face extra safety obligations in order to use it .After distilling off the lighter volatile compounds, less than 70 wt.% of the oil with a flashpoint greater than 55 °C is recovered and about 40 wt.% of the oil was in the distillation range equivalent to petroleum .The remaining 60 wt.% could be reused as commercial heating fuel; however the oil fraction represents only about 14 wt.% of the initial amount of composite waste.After pyrolysis and post-treatment less than 10 wt.% of the composite is recoverable as fuel.The gases were mainly CO2 and CO for pyrolysis of polyesters and provided a low GCV as well, nevertheless high enough to heat the process .Pyrolysis of composites made of epoxy resins produced gases rich in methane, with a high GCV .The condensable products for such composites have been less studied as they generally contain carbon fibres that have higher value.The degradation of epoxy produced mainly aniline in the liquid condensable products and water in the gaseous products .In a fluidised-bed process only gases are recovered from the resin, with a composition similar to the one observed for pyrolysis.It is assumed that the gases produced might be used for self-heating the process, but was not considered in the literature.Solvolysis produces mainly a liquid fraction from the resin degradation, in which products are dissolved according to their solubility.Depending on the solvent used, it is possible to observe sedimentation of components that are not soluble or form due to saturation at ambient conditions.Depending on the conditions small amounts of gases were produced, showing the same composition as those found with pyrolysis .The interest in the use of a solvent is that thermal reactions occur, which can reduce the production of gas, tar and coke .The solvent is generally a reactant and dissolves the products of the reaction, preventing or delaying higher order reactions just by dilution and thus potentially enabling the recovery of resin monomers or at least more valuable products than with pyrolysis.It has been shown that hydrolysis of polyesters, for example, was the reverse reaction of esterification and leads to the production of resin monomers and of a styrene–fumaric copolymer resulting from the styrene network created during polymerisation .Glycols were separated from the liquid fraction and reused in a new UP and the SFC was modified to a low profile additive by using 1-octanol and sulphuric acid .The LPA thus 
obtained was successfully incorporated in a new GFRC.A similar linear polystyrene derivative was also recovered from UP by hydrolysis in presence of an aminoalcohol .A post-reaction was performed with maleic anhydride and the resulting functionalised polystyrene was then crosslinked with styrene to give a solid polystyrene-based polymer.The latter was submitted to the same hydrolysis treatment to give back the linear polystyrene derivative.The authors do not suggest any applications; however we may assume that the obtained polymer could replace existing polystyrene-based materials.Another work enabled the manufacture of an unsaturated polyester resin by the recovery of dimethyl phthalate from GFRC by solvolysis in supercritical methanol in presence of N,N-dimethylaminopyridine.Black oil was separated from methanol and contained about 31 wt.% DMP, which was then recovered and purified by washing with water.A new polyester resin was manufactured with the purified DMP, ethylene glycol and maleic anhydride and crosslinked with styrene.The first results were not satisfactory because of the presence of DMAP in the oil, but after purification the authors successfully manufactured samples containing different proportions of DMP with hardness comparable to that of the resin without recovered DMP.Less work has been done on reuse of products recovered from epoxy resin.Globally epoxy resins cured with amines lead to phenolic mixtures .Only one work realised by ATI displayed interest in the reuse of this mixture .The recycling process developed by ATI produced a mixture that the authors found to be similar to the complex mixtures of phenols recovered from biomass pyrolysis.The work showed that the mixture could potentially be used to produce phenolic resin.However this was performed with a mixture made of model compounds and not on a real one.A hand-made epoxy resin crosslinked with a dicarboxylic anhydride was solvolysed in supercritical methanol .The ester bonds created were easily broken by methanol and the epoxy network was preserved.The recovered thermoplastic epoxy was then re-crosslinked with acid anhydride.Reuse of products from polyester degradation has been more considered in the knowledge that rGFs alone could not be expected to make a separation process viable economically.For CFRC, carbon fibres are so valuable that products recovery from resins has almost been neglected.The only considered solution was energy recovery.Due to the broad range of resin formulations, organic fractions recovered by either thermolysis or solvolysis may lead to very complex mixtures of products.However it was shown that ortho- and isophthalic polyester resins, which represent more than 95% of the total volume of GFRC, could be treated together as they have similar behaviour towards hydrolysis; whereas a specialist UP modified by dicyclopentadiene showed a different behaviour and was reported to require a specific treatment .It also appears to be necessary to consider epoxy resins according to their curing agent.A classification is necessary therefore according to resin structure and behaviour towards separation processes, which requires an understanding of the reaction mechanisms.Thermoplastic resins like PEEK would similarly necessitate a classification.Whatever the chosen solution, energy or product recovery, reuse is not straightforward and requires separation and purification steps.The few studies realised up to now have shown significant potential in the reuse of products from resins.Further investigations 
and developments are necessary to propose suitable recovery solutions.In light of this, new resins that are recyclable have started to be developed, in particular by Adesso Advanced Materials in Japan and Connora Technologies in USA .Adesso proposes either recyclable curing agent for epoxy resins or recyclable epoxy systems.Connora has developed a recyclable curing agent for existing epoxy systems and a low-energy based recycling solution that produces after solvolysis treatment an epoxy thermoplastic by breaking a specific bond in the crosslinking agent.They are also working on the development of a crosslinking agent for polyesters.Different types of fibre waste exist according to the step in the manufacturing process.Dry fibre waste inherent in the production of reinforcements is generally already treated by manufacturers such as Hexcel and Toray.Fibre waste, either dry or wet, is produced during the first steps of the production of composite parts.It can also arise from prepreg rolls that did not pass the quality control, that stayed out of the recommended storage conditions for too long and those outside the guaranteed shelf life.After a part is manufactured, finishing steps produce processed material waste that cannot be directly reused, in particular waste from thermosets; some parts can be scrapped after quality control.A significant amount of waste is produced during these steps.According to Alex Edge from ELGCF, the majority of the carbon fibre waste they treat actually arises from these steps and not from end-of-life carbon fibre composite parts.This is due to the long service life of these materials, but also certainly because the waste stream has to be implemented between the dismantling sites and the recyclers.Ground carbon and Kevlar® fibres from cured thermoset moulded parts have been incorporated into sporting goods prototypes like snow and water skis .Carbon fibres gave better improvement in the mechanical properties of the resulting material than Kevlar® fibres.Experiments showed that when the amount and/or length of fibres were too high, the dough was too viscous, difficult to mix and the resin did not uniformly impregnate the ground fibres.The best result was achieved with fibres of 0.5 mm length at a loading of 1% in an epoxy resin, giving a strength increase of 16%.The incorporation in polyurethane foam at a maximum amount of 0.5% gave approximately 15% improvement against failure under load.It was indicated that the two-step process cost was estimated to less than $2 per pound of recycled material in 1995.Carbon fibres, as well as glass and aramid fibres, recovered from a grinding and sifting process were also characterised by Kouparitsas et al. and the fibre-rich fractions were reincorporated in thermoplastic resins.The incorporation of ground GF in polypropylene at 40 wt.% and of ground aramid fibres in ionomer resin at 15 wt.% gave tensile strength comparable to the same resins reinforced with virgin fibres.In contrast ground CFs incorporated at 20 wt.% in ionomer showed a tensile strength reduced by about 35% compared to the same resin reinforced with vCFs.More recently ground thermoset CFRCs were reincorporated in new SMCs by Palmer et al. 
.A classification and sieving method enabled separation of the products into four grades.One grade contained small, fine bundles of fibres approximately 5–10 mm in length.They were similar in length and stiffness to glass fibre bundles normally used in automotive SMC.This fraction corresponded to 24 wt.% of the recyclate and showed a fibre content of 72 wt.%.About 20 wt.% of the vGFs were replaced by this grade of carbon fibre recyclate.The obtained SMC composite showed mechanical properties comparable to that of the class-A standard automotive grades and good surface finish.However, at the end only a quarter of the original material could be reused and at a replacement yield of 20 wt.% in a new material.Takahashi et al. crushed thermoset CFRCs into square flakes of about 1 cm2 that were then incorporated into thermoplastic resins to manufacture materials by injection moulding.Compared to the unreinforced resin, the material with 30 vol.% crushed CFRCs showed better mechanical properties except for the flexural strain at fracture, and even comparable to ABS reinforced with the same amount of vCFs.The same tendency was observed with PP reinforced with crushed CFRCs, and the authors also showed that repeating four times the injection moulding process did not affect significantly the mechanical properties.The obtained properties appeared to be comparable to current glass fibre reinforced thermoplastics.Similarly, Ogi et al. also crushed thermoset CFRCs into small pieces with average size of 3.4 mm × 0.4 mm and further ground some into 1–10 μm particles.The mechanical characterisation showed that crushed CFRCs provide a better reinforcement than milled CFRCs thanks to longer fibres.The incorporation of crushed CFRCs showed an optimum level above which the mechanical properties deteriorated with increasing content.According to the authors, the crushing process would have removed epoxy resin from a part of the fibres which enabled them to be coated with the thermoplastic resin.However the nonlinear stress–strain curves obtained at different fibre contents were attributed to microscopic damage during the testing due to debonding and resin cracking.This would mean that the adhesion of the thermoplastic resin with the fibres was globally the weak point of the material.The fibres also seemed to be somewhat oriented in the direction of the injection flow and present as single fibres as well as fibre bundles in the material.Other work also exploited the unique properties of this type of recyclate : as a core made of ground glass fibre composites, for example.Ground CF reinforced PEEK resin incorporated into virgin PEEK resin in different weight proportions and processed by injection moulding gave a material with mechanical properties comparable than the same virgin material .The elastic modulus increased with the proportion of ground C/PEEK, while the tensile strength decreased but to a low extent.The strain capacity was more affected by the ground C/PEEK content: it was almost not affected from 30 to 40 wt.% but from 40 to 50 wt.% a decrease of 18–36% was observed.Press moulding of C/PEEK without adding virgin PEEK resin led to improved flexural properties compared to the same virgin material; however the higher the moulding pressure, the poorer the flexural properties.This might be explained by less resin between the fibres due to a spinning-like phenomenon.Finally, the same C/PEEK was directly reformed without grinding post-treatment.This led to a material with almost unchanged mechanical 
properties .This was not undertaken with end-of-life materials but it may be appropriate for production waste.Pull-out testing would also be necessary in order to analyse the mechanical bonding between the resin residue on the fibre and the new resin.Fibrous fractions recovered from mechanical methods appear to be more suitable for reuse in bulk or sheet moulding compounds, however they actually seem to better suit incorporation into thermoplastic resins, whether initially reinforced or not.This might be due to a better adhesion to TP resins as their processing does not rely on a chemical reaction to obtain the final material and thus the adhesion is more mechanical than chemical.Considering the effect of this incorporation on the resulting material properties, such recycled fibres cannot be reused in structural applications.Glass fibres recovered by fluidised-bed pyrolysis at 450 °C showed a tensile strength reduced by 50% while at 550 °C the reduction reached 80%.It was however possible to restore some fibre strength with a potassium salt at high temperature.Potential applications of such fibres are limited by their discontinuous and fluffy nature.Incorporation of glass fibres reclaimed from SMC into DMC for compression moulding did not affect tensile, flexural or impact properties at concentrations up to 50%, but beyond this percentage all properties significantly deteriorated .According to the authors, light duty parts such as vehicle headlight housing and instrumentation panels offer significant potential for commercial applications.The University of Strathclyde announced a patent application covering cost effective, industrially applicable, treatments to regenerate the strength of thermally recycled glass fibres .This could increase the value of rGF and widen their potential applications, but it will also increase their cost, which may not be competitive compared to vGF.Glass fibres recovered from HTP hydrolysis were incorporated into BMC using two different methods to improve the adhesion to the new resin: the refunctionalisation of the rGF and the incorporation of coupling agents directly into BMC semi-products.The results showed that both methods could improve the adhesion between the rGF and the new resin.The maximum fibre reincorporation percentage was 20% in order not to induce a significant decrease in the mechanical properties.Real parts were successfully manufactured without changing any process parameters; however the surface finish was of lower quality.GFs recovered from thermolysis at 550 °C were reused to successfully produce a glass–ceramic material that could find applications in architecture .The factor of interest is that separation is not necessary for GFRC containing fillers, such as calcium carbonate, as they can be processed with the recovered GF.In light of the above, it appears that GF recovered by thermolysis or solvolysis does not allow a reuse of higher value than that for GF recovered by mechanical methods.Considering the price of those methods, which is evidently higher than that of mechanical grinding, we might conclude that such fibre–matrix separation techniques are not suitable for GFRC.It might be worth using these techniques only on composite materials reinforced with long high value GF, but this needs to be demonstrated.Carbon fibres recovered by thermo-chemical techniques have been more considered for reusing in new composite materials.Random and aligned discontinuous fibres have been investigated but very few real prototypes have been 
manufactured.Furthermore, two types of rCF can be identified: random short fibres from either woven or non-woven fibre composites and pieces of woven fabrics.The size of recovered pieces of woven fabrics depends on the shredding pre-treatment; however the fabrics can actually retain their woven shape after the recycling treatment.This can be very interesting in terms of fibre alignment and so in terms of reinforcement retention.This type of reinforcement has received little attention , whereas it represents more than 60% of CF waste just in Europe .Most often rCFs are not woven; in a wide range of lengths the fibres are intermingled like a tuft of hair.Different trials have been performed using rCF either as is or reshaped by different techniques.The other important issue is the absence of sizing on the recycled fibre surface and the presence of residues that can coat the fibres.Due to their short length, rCFs were first incorporated into random discontinuous fibre materials like SMCs and BMCs.They generally need to be chopped again after reclamation in order to conform to virgin fibre length commonly used for this type of material .Some trials used rCFs as they were , but the resulting materials were not homogenous in fibre distribution, thus in thickness, resulting in a quite low fibre content and high void content.Despite a fibre content lower than that of a BMC with vCF, the flexural properties were equivalent to the virgin BMC : ultimate tensile strength decreased by 5% and flexural modulus decreased by 15%.The flexural properties obtained for an SMC were between 87% and 98% of those obtained for the SMC with vCFs, fibre contents being comparable .Other methods used two manufacturing processes, compounding and injection moulding, to manufacture BMC materials.If the rCFs were too long, they gave pellets that were difficult to process for BMC .The materials were made with TP resins and rCFs from pyrolysis and from solvolysis.The material containing ELGCF rCFs showed properties very comparable to the material with vCFs.ATI rCF materials showed an average decrease of about 36% in stiffness and strength properties; however the material was harder to process because of the longer fibres, but probably also because the fibres were coated with resin and catalyst residues, and insufficiently dried.The measurement of resistivity for both polycarbonates reinforced with vCFs and with rCFs revealed that the fibres are more randomly distributed in the material containing rCFs.Compared to the unreinforced resin, the incorporation of CFs, either virgin or recovered by pyrolysis, enabled an almost doubling in the tensile and bending strengths.This confirmed the reinforcement potentiality of rCFs.The properties of the PPS materials were compared to those of equivalent composites containing vCFs.The authors observed that the rCFs showed a broader distribution in single fibre tensile modulus and strength due to the various types of materials that can be found in feedstock.Consequently, the average mechanical properties of PPS composites containing rCFs were not as good as those obtained with vCFs.However the rCFs/PPS materials were completely comparable to existing PPS resins reinforced with short vCFs, and in some cases slightly better.rCFs were essentially aligned along the injection direction and homogeneously distributed in the matrix, similar to the commercial materials.Despite the absence of sizing on the rCFs, good fibre–matrix adhesion was observed giving a failure more by fibre rupture than by 
fibre pull-out.The replacement would be rated a complete success if the rCFs were cheaper than the short vCFs used, which are low-cost fibres.In terms of Life Cycle Assessment, it may be assumed that the environmental impact is in favour of the rCF for this type of material.These aspects are further discussed in the next section.Mats and veils have been manufactured by wet papermaking methods .The method induced a preferred orientation in the mats, with 60% of fibres sharing the same in-plane orientation and a very small proportion oriented out of plane .The mats were then used to manufacture composite laminates by compression moulding with films of either epoxy or PP .Mechanical testing showed that the epoxy materials obtained had specific mechanical behaviour, especially in terms of failure mode .The fibre–matrix adhesion was good but the tensile strengths were low.The extremely high moulding pressures required to manufacture rCFRCs with high fibre content led to severe breakage during compression, which degraded the fibre length considerably and left more than 60% of the fibres with a length below the critical length.Furthermore, the presence of fibre bundles in the mats was shown to improve the local fracture toughness depending on their geometry, location and orientation.However, it seems that this was actually detrimental to the strength of rCFRCs.More investigations are necessary in order to improve the understanding of the mechanical response of rCFRCs.It was reported that TP materials gave a lower void content, probably due to better compression enabling improved resin diffusion .Furthermore, the epoxy resin required a chemical reaction to achieve its final structure, which may have slowed resin diffusion into the mats.In both cases, fibre pull-out was observed, indicating a poor fibre–matrix adhesion.The veils obtained by Wong et al.
were moulded onto a GF-reinforced polyester in order to provide electromagnetic interference shielding.Given that retention of fibre strength is not critical as in other applications, rCFs would be suitable for this type of application thanks to their low electrical resistivity.The shielding effectiveness increased with increasing veil area density of rCF up to 60 gsm.The performance could be improved by removing the long fibre portion from the rCF length distribution in order to achieve a more homogenous distribution.As mentioned previously woven fabrics can be found among recovered CFs.Composites reusing woven rCF fabrics recovered from uncured prepreg by ELGCF pyrolysis were manufactured by Resin Film Infusion at Imperial College London .The pyrolysis conditions led to two types of plies: rCFs-B plies that lost their nominal rigidity and were thus more drapable, more deformed but also with significantly impaired mechanical properties; and to rCFs-D plies that kept their shape and nominal rigidity due to resin residues in the less aggressive conditions and showed a retention of 95% in single fibre tensile strength.When the shape and the rigidity of the plies were retained, the fibre re-impregnation by RFI was harder, and significant inter-tow and inter-fibre voids were observed in the composite, whereas the re-impregnation was better with no voids when the plies were more flexible.The autoclave manufacturing process improved the quality of the resulting material with fewer voids, but the compaction was still worse than the original material.The composite containing rCFs-B gave fibre content and elastic properties very similar to the original material, however resulting tensile strengths were significantly reduced.On the other hand, the composite containing rCFs-D gave lower elastic moduli, and in particular, lower compression strength compared to the original material but also to the composite with rCFs-B.The autoclave process improved the resulting properties but not sufficiently.The impaired mechanical properties would thus mainly be due to the lack of fibre wetting and resin diffusion into the rCFs-D plies during the remanufacturing process, but also to the residual resin on the fibres which prevented good adhesion to the new matrix.The new resin could not effectively penetrate the weave, all the more so, since the reclaimed fabric maintained its rigidity.Woven rCFs, which were also recovered from uncured prepreg by ELGCF pyrolysis, were reused by Meredith et al. 
.The fabrics were re-impregnated and then all laid at 0°, vacuum bagged and cured in autoclave under pressure.The modulus of the obtained composite was slightly reduced compared to the same material with vCFs.Tensile and flexural strengths were significantly reduced, however the material retained 94% of its specific energy absorption capability, indicating that this type of material could be used in applications that require high performance energy absorption.Carberry from Boeing stated that the key to unlocking the manufacturing future for recycled fibre beyond injection moulded applications is aligning the fibres.Different methods have been tried, in particular by MIT-LLC and by the University of Nottingham .The 3-DEP process developed by MIT-LCC enables to make three-dimensional preforms and is able to control the fibre placement and orientation.A demonstrator of the front lower wheelhouse support for the Corvette, currently manufactured with GFs was produced .The materials obtained would be suitable for structural but not safety–critical applications.The University of Nottingham have also developed two methods for realigning the fibres: one is based on a modified papermaking technique and the other is a centrifugal alignment rig .They enabled the achievement of up to 80% and up to 90% of realignment respectively, however they were used to produce quite low density veils and mats.Very recently, at the University of Bordeaux, a process to unweave and realign the recovered tows was developed .It enabled them to produce 50 mm wide tapes with aligned discontinuous fibres and an areal density of about 600 g/m2.A prototype was built and is reported to be able to treat pieces of 200 mm × 500 mm with a yield of 3 kg/h.However the process requires improvement in terms of realignment quality and homogeneity of the area density.It can be concluded that there is no satisfactory realignment process and this issue is still in progress.In order to really benefit from the reinforcement capability of rCFs, the other possibility would be to reshape the rCFs in continuous yarns.This has been done in the Fibre Cycle project , in which yarns were produced using rCFs recovered by ELGCF pyrolysis comingled with polypropylene fibres made by existing textile techniques.Only fibres of length higher than 500 mm were considered and then cut to 50–55 mm to be processed.They were subsequently blended with long PP fibres at different rCF/PP ratios.The crimped PP fibres acted as carriers for the un-crimped rCFs, providing the required fibre-to-fibre cohesion.Wrap-spun yarns of about 1 k tex linear density were obtained and wound onto a steel frame unidirectionally.The layers were then compressed and consolidated using hot compression moulding.The characterisation of the obtained yarns indicated a rCF content of 25 wt.% and 40 wt.% for ratios of 30:70 and 50:50, respectively.Fibre breaking occurred during the carding process, which led to lower fibre content than that of the initial blend.Most of the fibres showed a length decreased to 15 mm and down to 5 mm.Furthermore, the higher the initial rCF/PP ratio, the greater the breaking of fibres during carding, however higher ratios gave higher tensile and flexural strengths and moduli.After compression moulding, the rCFs were spread out uniformly in the PP matrix without evidence of voids.It also appeared that the majority of the rCFs were parallel and aligned with the yarn axis as shown in Fig. 10a. 
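The drop in rCF content between the initial blend and the finished yarn gives a rough measure of how much fibre is lost to carding. The short Python sketch below estimates that loss from the weight fractions quoted above (30:70 giving 25 wt.% and 50:50 giving 40 wt.%), under our own simplifying assumption, not stated in the cited work, that the PP carrier mass is conserved during processing.

```python
# Back-of-envelope estimate of the rCF lost during carding of the wrap-spun yarns
# described above. Assumption (ours, not from the source): the PP carrier mass is
# conserved, so any drop in rCF weight fraction reflects rCF breakage and loss.

def rcf_loss_fraction(initial_rcf_wt: float, final_rcf_wt: float) -> float:
    """Fraction of the initial rCF mass lost, assuming the PP mass is unchanged."""
    before = initial_rcf_wt / (1.0 - initial_rcf_wt)  # rCF mass per unit PP, before
    after = final_rcf_wt / (1.0 - final_rcf_wt)       # rCF mass per unit PP, after
    return 1.0 - after / before

for initial, final in [(0.30, 0.25), (0.50, 0.40)]:
    loss = rcf_loss_fraction(initial, final)
    print(f"{initial:.0%} rCF blend -> {final:.0%} rCF yarn: ~{loss:.0%} of the rCF lost")
```

On this assumption roughly 22% and 33% of the rCF would be lost for the two blends, consistent with the observation that higher rCF/PP ratios suffer greater fibre breakage during carding.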
Nevertheless, SEM analysis of the fracture surface indicated that failure occurred largely by fibre pull-out indicating a weak fibre–matrix adhesion.According to us, this might be explained by the presence of resin residues or char on the fibre surface, considering the results of the fibre characterisation prior to processing.The interface between this coating and the rCFs might be weak and furthermore the smooth aspect of the fibre surface might have been detrimental to mechanical adhesion to the PP matrix.In the AERDECO project , the best method to separate and realign the rCFs was found to be dispersing the fibres by blowing with compressed air; carding was not efficient and broke the fibres.First trials of rCF reincorporation without post-treatment separation or surface treatment led to the absence of fibre–matrix adhesion, leading to the conclusion that residues were present on fibre surface.Realignment of long rCF bundles, found in batches of rCFs, was realised by hand to make unidirectional plies.They were then laid and vacuum moulded to make coupons.Due to the small size of the coupons, only bending tests could be performed and compared to real unidirectional laminates made with vCFs.The obtained flexural properties were almost the same as for the real unidirectional composite.This shows that rCFs are equivalent to vCFs in terms of reinforcement capability.At larger scale than that of the coupons, rCFs could enable the production of composite materials with aligned discontinuous rCF comparable to materials containing vCFs, but ensuring alignment, distribution and area density equivalent to a virgin material.This also shows that the advantages of rCF could be exploited if it is possible to make reinforcements with rCFs comparable to existing aligned discontinuous reinforcements.In all cases cited previously the surface quality appeared to be the weakness of the rCFs.Whatever the thermo-chemical method used to recover fibres, measurements of fibre diameter and observation and characterisation of fibre surface indicated that the rCFs were often coated by a layer of either char or resin residues.This resin residue might consist of material from the fibre/matrix interface, comprising the sizing initially present on the fibre surface.As seen previously, this coating caused failures by fibre pull-out due to a weak fibre-to-matrix adhesion.For sized vCF, the surface oxygen content and the nature of the functional groups present on the fibre surface could influence the fibre–matrix adhesion.Different post-treatments have been tested in order to improve the fibre surface quality but this has not been performed systematically.The objective was to modify the chemical structure of fibre surface in order to promote adhesion to a new resin either by oxidative thermal treatment, nitric acid treatment or plasma treatment.Greco et al. 
studied different post-treatments, thermal and chemical.Thermal post-treatment was not satisfactory as it produced either more damaged fibres or fibres without evident surface activation, but fibre/matrix interfacial shear strength was improved by 40%.On the other hand, chemical treatment in nitric acid led to limited damage of the fibres while improving the surface chemistry.The use of DGEBA epoxy resin as sizing on rCFs treated by nitric acid showed a significant improvement of the materials in which they were incorporated compared to the same material containing untreated rCFs.However, the harsh conditions of such post-treatments lead to fibre damage, sometimes even more than the reclamation process itself .Work by Pimenta and Pinho indicated that when the rCFs showed high surface purity the re-impregnation and adhesion to a new epoxy resin were better.This would indicate that re-sizing of the reclaimed fibres is not necessary before reuse.A specific re-sizing of rCFs recovered by fluidised-bed pyrolysis presented a very significant improvement of the mechanical properties of polypropylene composites and of the adhesion to polypropylene.Pimenta et al. however, showed that the fibre–epoxy interfacial shear strengths were comparable for vCFs and rCFs recovered by pyrolysis at ELGCF, and in some cases it was even slightly higher for rCFs.The presence of either striations or roughness seems to play a role in fibre–matrix adhesion and optimisation of mechanical properties .Carbon fibres are furthermore known to be microporous and it was suggested that absorption of epoxy resin into these pores is a key factor determining interfacial bond strength in CF/epoxy composites .However these pores may become clogged by organic residues in rCFs.It was found that the presence of functional groups is not necessarily the most important factor, depending on the type of fibres : for HM fibres removing the surface oxygen did not change their IFSS, whereas for HT fibres the IFSS follows the concentration of surface oxygen.In the case of epoxy resins reinforced by HM fibres, IFSS would depend more on the physical surface area which determines the strength of mechanical bonding .A comparison of fibre diameters measured on vCFs and on rCFs recovered by subcritical and supercritical hydrolysis at 340 °C and 380 °C respectively showed that the recycled fibres were coated with an organic residue even after washing several times with solvents such as acetone or dichloromethane.Atomic Force Microscopy confirmed that the surfaces of rCFs were not comparable to those of vCFs.The latter had smooth surfaces whereas the rCF’s surfaces were rougher, even more so after supercritical hydrolysis.The remaining coating was then removed according to French standard NF EN ISO 10548, used to determine the sizing content of carbon fibres.The tensile modulus was halved compared to the value obtained before Soxhlet extraction.Similar behaviour is seen for glass fibres, where the modification of the structure of the fibres may be used to support this result.Commercially, rCFs are available for reuse.Some applications have been proposed, but no products made of rCFs are available on the market.The studies carried out so far show that a direct reuse is difficult due to the fluffy character of the fibres after reclamation.In order to facilitate their reuse it might be necessary to develop semi-products similar to those made of virgin fibres to manufacture SMC, BMC, laminates, preforms, etc.The fibre re-alignment techniques require more 
development, as well as the techniques to make discontinuous fibre yarns.The characterisation of the fibre surface and of the fibre–matrix adhesion also requires further investigation.All those efforts are necessary for widespread industrial and customer acceptance .If manufacturing costs were more affordable, the use of rCFs could attract interest in applications requiring less strength than for highly structural parts like those found in aircrafts or cars.Ideally rCFs should be incorporated in resins that are easier to recycle, for example TPs, or even in newly developed recyclable resins such as those mentioned previously in Section 3.1.Such materials could be used in the automotive industry, sports goods or niche markets like orthopaedic, laptops, mobile phones and even in aircraft interior components.Different grades of CF exist depending on the precursor and the carbonisation conditions.Most of the available CFs are based on polyacrylonitrile.The carbonisation phase is used to enhance physical and chemical characteristics of the fibres and the conditions determine the carbon content of the fibres.It would be relevant to classify the rCFs and related waste according to the fibre grade in order to optimise fibre reuse.Economic and environmental aspects have attracted more attention in the case of CFRC than for GFRC, because CFRCs have a larger impact on advanced engineering and contain more expensive fibres than GFRCs.However, if we consider the volumes consumed, CF composites represent only 1.5 wt.% of the world production, the vast majority being GFRC .Nevertheless, the global demand for CFs is expected to double by 2020 .All the studies on recycled GFRC lead to the conclusion that the best way to recycle these materials would be co-incineration in cement kilns; although significant controversy surrounds the environmental impact of using this strategy .The price of vGF is so low that no process currently available can provide rGF with the same characteristics as virgin fibres at a competitive price.They would struggle to be used as fillers when considering the low cost of commonly used fillers; the economic balance is less clear than for CFRC.Two UK companies, Hambleside Danelow and Filon, are claiming that they can cost-effectively recycle GFRC through mechanical processes using waste from production .When companies recycle their own waste in this way, the cost and the environmental impact are improved by the fact that there is no transport.The situation might be different when it concerns end-of-life waste.rGF are also too significantly damaged by thermochemical processes to be used as reinforcement, even at low concentrations.Nevertheless, it might be relevant to distinguish short GF composites from long GF composites.The latter can be found in boat building or even more in wind turbine blades and are generally made with quite high value fibres.Investigations carried out in the EURECOMP project in twelve west European countries revealed that boat building was the sector producing the largest volume of GFRC, generating production waste as well as existing end-of-life waste.In modern cars the proportion of composite is relatively low .The subcritical hydrolysis treatment developed to recycle GFRC in this project was also economically and environmentally evaluated by comparison to mechanical recycling and pyrolysis .The study revealed that the developed solvolysis treatment would produce a rGF that is more expensive than vGF and showed an environmental efficiency lower than 
mechanical recycling but comparable to that of pyrolysis.The main drawback appeared to be the energy consumption of the process.Based upon scale-up of a pyrolysis pilot plant, Pickering et al. evaluated the economic viability of their fluidised-bed process to recycle GFRCs.Their assumptions, including waste collection and grinding post-treatment, led to a break-even point at a scrap capacity of about 9000 tonnes/year.This appeared a little unreasonable with assumptions of waste sourced within 80 km radius and of rGFs retaining 80% of their initial mechanical properties.Improvements are necessary to make these types of treatments applicable to GFRC.A first step could be the study of milder solvolysis conditions.In contrast, this is not possible for pyrolysis as a minimum temperature is required to volatilise the resins.In order to make thermo-chemical recycling treatment more economical, it would also be necessary to find a way to recover valuable products from the resins.Resin prices follow the evolution of petroleum prices, so higher monomer prices might justify this approach.Due to the much higher price of vCF, solvolysis and pyrolysis processes appear to be more applicable to CFRC, at least in terms of the economics.CFs cost more because they require more expensive raw materials, and moreover, more energy to be produced in comparison to GFs.Furthermore, if we consider the manufacture of laminates using long CF prepregs, the high energy consumption is due to the energy necessary to produce the CFs, then the prepregs and finally the composites by the autoclave process.These kinds of materials are particularly and increasingly used in aeronautics, but their recycling will not generate rCFs that can be reused in the same applications, unless a recycling technique able to preserve CF characteristics is developed.Currently available fibres reclaimed from cured composites are generally unsized, limited in length and available in random directional distribution, whereas virgin fibres are sized and available in every length between short and long, in unidirectional tapes or fabrics.rCFs cannot therefore be described as identical to vCFs.Reclaimed fibres could replace short virgin fibres in applications using discontinuous fibre composites.However rCFs are quite fluffy and thus difficult to manipulate.Although some studies have shown their reuse potential, the proposed applications mainly concerned down-cycling or reuse in unreinforced material.The incorporation of rCFs in such materials may lead to a higher cost and a lower environmental efficiency.Boeing estimates that CFs can be recycled at approximately 70% of the cost to produce virgin fibres as given in Table 11.The prices for rCFs commercially available from ELGCF indeed are in this range.It was suggested that vGFs or vCFs could be replaced by rCFs , for example in SMCs and BMCs.The resulting materials show a higher cost compared to vGF materials even if the mechanical properties are better.Furthermore, considering the energy consumption of thermo-chemical processes it may also lead to a poor environmental balance.Indeed a LCA study revealed that when rCFs replace vGFs in proportions giving equivalent material stiffness, incineration with energy recovery generally has a smaller impact.Whereas when rCFs replace vCFs with proportions giving equivalent material stiffness, recycling showed clear environmental benefits.An analysis comparing CO2 emissions related to hypothetical automotive components utilising vCF, rCF and vGF showed that the 
components using rCFs in replacement of vGFs can begin to be profitable, thanks to the weight saving, when travelling about 41,000 km.It is important to consider the component use life in evaluating impact on the environment, and not only the manufacturing and recycling processes.Recycling by pyrolysis would therefore be the most environmentally favourable option for waste treatment compared to incineration and landfilling.Furthermore, it appears that when CF is recycled and reused to replace vCF, the benefit to the environment arises from avoiding vCF production impacts; similar conclusions were also reached by Hedlung-Åström .The same kind of study has to be done to evaluate the other available recycling techniques and compare them against each other; this is one objective of the EXHUME project , and also of the French project SEARRCH .It is necessary to define indicators to assess the economic and environmental impacts of each recycling technique for each type of material class and according to remanufacturing applications.Finally, unprocessed and out-of-date production waste should be reused without going through a recycling process in order to reduce the economic and environmental cost of its recycling.Unprocessed CF or GF have the advantage of still being sized, and are therefore easier to manipulate.They are also comparable to virgin fibres in terms of mechanical properties.Fibres of thermosetting prepreg waste could be separated by low-temperature processes, in particular solvolysis, which could give improved recovery of valuable products from the uncured resin.In the present article the most widely reported techniques to recycle composite materials were reviewed; these included both mechanical recycling and thermo-chemical processes.The inventoried studies led us to the conclusion that the recycling technique must be selected according to the material being recycled and also to the reuse applications.In light of this, a first classification arises as shown in Table 13.Mechanical recycling appears to be more suitable for GFRCs and possibly even for CF composites that are reinforced with either lower grade CFs and/or short CFs.This is currently done in-house by some manufacturers, in particular for GFRCs, but this concerns only production waste.EoL waste is currently more often landfilled.The European composites industry has stated that the cement kiln route is the most suitable solution to dispose of GFRCs.We assume that this solution is also suitable to treat EoL GFRC waste.Thermo-chemical processes are up to now not viable for GFRCs considering the low price of vGF and the degraded mechanical properties of GFs recovered in this way.These processes have been largely used to recover valuable products from the resin to reuse in new resins.Most of the currently available CFRC waste comes from production.The durability of those materials and their recent rise in use mean that this waste will increase.However, EoL waste will be prevalent soon, largely from aerospace, and will require a proven recycling solution.Thermo-chemical processes are suitable for CFRC recycling due to the high value of the CFs.Pyrolysis is currently commercially exploited and two solvolysis processes are available for commercial exploitation.Both techniques are able to provide clean and high quality CFs; however they also consume more energy due to the high processing temperatures.These are hard to lower for pyrolysis whereas solvolysis can potentially be optimised.Both techniques have drawbacks and advantages and,
according to the American company Adherent Technologies, are considered as complementary rather than competing.The best treatment for them is a combination of a wet chemical processing followed by pyrolysis to remove residual organic substance that may remain on the CF surface.A drawback common to both techniques however exists: the rCFs are fluffy in nature and show a specific surface quality.Due to this their reuse is hardly straightforward.Depending on the type of rCF, it seems that the surface oxygen content is not always a parameter that influences adhesion in a new resin.In general a re-sizing step significantly increases the IFSS, regardless of the thermoset or thermoplastic matrix.It is particularly relevant when thermoset resins are chemically bonded to the fibres during the curing phase, whereas in thermoplastics fibre-to-matrix adhesion is also generated mechanically.In light of this the reuse of ground CFRTPs in the same class of thermoplastic resin is very relevant.Furthermore, as CFs are short and randomly distributed after thermo-chemical reclamation, they cannot be used in highly structural applications, for example in parts from which they are recovered.rCF properties also strongly depend on the fibre length: the longer the fibres, the higher the probability of defects induced by the reclamation process, making the fibres more fragile.Composites are reinforced with CF of different types, and consequently the recovered fibres are a mixture of different grades that may be reprocessed together.This also explains the deterioration in mechanical properties generally observed.A classification according to CF grade would thus be relevant in order to optimise the reuse of rCFs, as well as a classification according to length, which appears to be necessary.The shorter fibre fractions could be reused in SMCs or BMCs in replacement of vCFs.The longer fibre fractions could be reused in more structural applications.This necessitates a re-alignment of the rCFs, a process which is still under development.Another possibility is reshaping of the rCFs into continuous yarns.Existing spinning techniques have been tested by the University of Leeds and gave promising results, but further work is necessary to improve the resulting yarns.Among all the CF waste in Europe, 60% consists of woven fabrics.It has been shown that they can retain their woven shape after a thermo-chemical treatment, which could be interesting to exploit in terms of structural reinforcements.It might be wise to consider EoL waste separately from production waste, and among the latter cured thermoset from uncured thermoset and dry fibres.Uncured CF materials and dry fibre waste would not need to go through a fibre–matrix separation process.If necessary pre-impregnated fibre waste could be treated in a separation process but under significantly milder conditions.In order to facilitate the reuse of rCFs it seems to be necessary to develop ready-to-use semi-products, like those using vCFs to manufacture SMCs, BMCs, laminates or preforms.This could be the key to unlocking the reuse of rCFs in real applications.Specific characterisation standards are also required to give a framework and reference points to potential reusers.All these efforts are necessary to encourage widespread industrial and customer acceptance.It was suggested in the literature that vGFs could be replaced by rCFs, however LCA studies showed that this was not economically viable as rCFs are more expensive than vGFs.Furthermore, the global environmental impact 
is also worse.Nevertheless, it was confirmed that benefits to the environment can arise from the replacement of vCFs by rCFs, mainly thanks to the energy saved by avoiding the production of CFs.The organic fractions recovered after thermo-chemical separation have received little attention.It has been stated that this is not economically viable, but in spite of this valuable products have been identified after resin degradation and successfully reused in new resins.Considering that resin prices follow the evolution of the petroleum prices, it might be of interest to develop viable methods to recover monomers from resins.In a similar light, recyclable resins have begun to be developed.These could judiciously be used to manufacture composites with recycled fibres, so that it would be unnecessary to go through the same reclamation processes several times.Indeed, we may assume that the number of recycling treatments gradually affects the reinforcement properties of the fibres, at least with regard to the required size-reduction post-treatment.Highly structural applications are unlikely to incorporate recycled fibres, in particular those made from long CFs; new fibres will always be necessary.Efforts are also necessary in order to produce cheaper carbon fibres.The recycling of composite materials is on the right track, but challenges still have to be taken-up in order to finally make it a commercial reality.The innovation has just started in this field and for this reason it is also a source of opportunities.
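The automotive break-even distance of about 41,000 km quoted earlier follows from a simple trade-off: the extra CO2 emitted to produce the rCF component is paid back by the fuel saved through the lighter part. The sketch below shows only the structure of such a calculation; every input value is a hypothetical placeholder and none is taken from the cited LCA studies.

```python
# Illustrative structure of a use-phase CO2 break-even calculation for a lighter
# component (e.g. rCF replacing vGF). All input values are hypothetical placeholders.

def breakeven_distance_km(extra_production_co2_kg: float,
                          mass_saving_kg: float,
                          co2_per_kg_per_km: float) -> float:
    """Distance at which the use-phase CO2 saving offsets the extra production CO2."""
    return extra_production_co2_kg / (mass_saving_kg * co2_per_kg_per_km)

extra_co2 = 25.0        # kg CO2: assumed extra burden of making the rCF component
mass_saving = 2.0       # kg: assumed weight saving of the component
co2_per_kg_km = 3.0e-4  # kg CO2 per kg carried per km: assumed vehicle factor

print(f"break-even after ~{breakeven_distance_km(extra_co2, mass_saving, co2_per_kg_km):,.0f} km")
```

With these placeholder numbers the break-even falls a little under 42,000 km, but the real figure depends entirely on the component, vehicle and production data used in the assessment.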
A complete review of the different techniques that have been developed to recycle fibre reinforced polymers is presented. The review also focuses on the reuse of valuable products recovered by different techniques, in particular the way that fibres have been reincorporated into new materials or applications and the main technological issues encountered. Recycled glass fibres can replace small amounts of virgin fibres in products but not at high enough concentrations to make their recycling economically and environmentally viable, if for example, thermolysis or solvolysis is used. Reclaimed carbon fibres from high-technology applications cannot be reincorporated in the same applications from which they were recovered, so new appropriate applications have to be developed in order to reuse the fibres. Materials incorporating recycled fibres exhibit specific mechanical properties because of the particular characteristics imparted by the fibres. The development of specific standards is therefore necessary, as well as efforts in the development of solutions that enable reusers to benefit from their reinforcement potential. The recovery and reuse of valuable products from resins are also considered, but also the development of recyclable thermoset resins. Finally, the economic and environmental aspects of recycling composite materials, based on Life Cycle Assessment, are discussed.
139
Ethyl pyruvate attenuates acetaminophen-induced liver injury in mice and prevents cellular injury induced by N-acetyl-p-benzoquinone imine, a toxic metabolite of acetaminophen, in hepatic cell lines
Acetaminophen is a common analgesic/antipyretic in numerous medicinal and over-the-counter drug formulations.Although APAP shows few side effects at therapeutic doses, overdoses can produce severe hepatic injury.APAP-induced liver injury is the most frequent cause of acute liver failure in the United States , the United Kingdom and other countries .APAP hepatotoxicity is characterized by extensive centrilobular necrosis and infiltration of inflammatory cells .The hepatocellular necrosis and inflammation are initiated by a reactive metabolite of APAP, N-acetyl-p-benzoquinone imine, mainly produced by cytochrome P450 2E1 .NAPQI appears to injure hepatocytes through glutathione depletion, oxidative and nitrosative stress, and inflammation.To detoxify NAPQI, the glutathione precursor N-acetylcysteine was identified and has been used as the only approved antidote against APAP hepatotoxicity.However, NAC sometimes shows limited hepatoprotective effects in APAP-overdose patients because of delay in starting treatment.Therefore, the development of a novel effective antidote is needed.Numerous researchers have looked for effective candidate compounds against APAP-induced liver injury .Ethyl pyruvate, an ethyl ester derivative of pyruvic acid, was identified as a promising candidate by Yang et al. .EtPy was developed as a highly membrane-permeable derivative of pyruvic acid, a glycolysis intermediate with multiple physiological properties including potent anti-inflammatory and anti-oxidative effects .Yang et al. reported that EtPy inhibited increases in transaminase levels and tumor necrosis factor-α concentrations in serum, hepatic centrilobular necrosis, and infiltration of inflammatory cells in a mouse model of APAP overdose, and suggested the preventive effects of EtPy against APAP hepatitis in mice.Suppressive effects of EtPy on activated inflammatory cells, such as macrophages and neutrophils, and reduction of cytokine release from cells, such as TNF-α, may be involved in the hepatoprotective mechanism of EtPy.However, EtPy also has a scavenger action against reactive oxygen species, such as hydrogen peroxide, and cytoprotective potential against cellular injury through anti-apoptotic and anti-necroptotic effects .Therefore, the antioxidative action of EtPy and its direct protection of hepatocytes against NAPQI-induced injury, independent of any inhibition of inflammatory cells, should also be considered.Little is reported on the hepatoprotective effects of EtPy in APAP-induced liver injury models.This study was conducted to evaluate the hepatoprotective action of EtPy against APAP hepatotoxicity with regard to its antioxidative and cytoprotective effects.Using a mouse model of APAP hepatotoxicity, we examined the effects of EtPy on the increase in serum transaminase levels, centrilobular necrosis, DNA fragmentation, and nitrotyrosine formation in the liver.We also conducted an in vitro study to determine the scavenger potential of EtPy against reactive oxygen species, such as hydroxyl radicals and peroxynitrite, which play important roles in the development of APAP liver injury.In addition, we evaluated whether EtPy shows direct hepatocyte protection during APAP-induced liver injury using an in vitro system that excludes the influence of inflammatory cells, such as macrophages and neutrophils.We examined the effects of EtPy on NAPQI-induced cell injury in a representative hepatocellular model, mainly using HepG2 cells, a human hepatoma cell line.Then we compared the effects of
EtPy with pyruvic acid, a parent compound of EtPy, and phosphopyruvic acid, a precursor of PyA in glycolysis, against NAPQI-induced cell injury.EtPy was purchased from Alfa Aesar and Sigma-Aldrich.Sodium pyruvate was obtained from Nacalai Tesque.Sodium phosphopyruvate was kindly donated by Ube Kousan.APAP, N-acetylcysteine and NAPQI were purchased from Sigma-Aldrich.A cell counting kit was obtained from Dojindo Laboratories.An annexin V-FITC apoptosis detection kit was purchased from R&D Systems.All other reagents and solvents were of reagent grade.Deionized and distilled bio-pure grade water was used in the study.Male wild-type C57BL/6JJcl mice were used.Animals were housed in cages under controlled conditions at 24 °C on a 12-h light/dark cycle and had free access to food and water.All experimental procedures conformed to the Animal Use Guidelines of the Committee for Ethics on Animal Experiments at Kumamoto University.The study protocol was approved by the ethical committee.The mouse model of APAP hepatitis was developed as described previously .Briefly, APAP was dissolved in phosphate-buffered saline at 55–60 °C and prepared as a 2% solution.An overdose of APAP was intraperitoneally injected into mice to induce hepatotoxicity.EtPy was dissolved in Ringer’s solution and administered intraperitoneally at 0.5, 2, 4, and 6 h after the APAP injection.Animals were divided into three groups: a vehicle group; a low-dose EtPy group; and a high-dose EtPy group.A treatment schedule is provided in Fig. 1A. Mice in the low-dose EtPy group were treated with 20 mg/kg EtPy at 0.5 h after the APAP injection, and 10 mg/kg of EtPy at 2, 4, and 6 h after the injection.Mice in the high-dose EtPy group were treated with 40 mg/kg EtPy at 0.5 h after the APAP injection, and 20 mg/kg of EtPy at 2, 4, and 6 h after the injection.Mice in the vehicle group were administered Ringer’s solution at 0.5, 2, 4, and 6 h after the APAP injection.Mice were killed at 8 h after the APAP injection and blood and tissue samples were collected.The protective effects of EtPy were evaluated by examining serum alanine aminotransferase and aspartate transaminase levels and by histological analysis using hematoxylin and eosin staining of liver samples.DNA fragmentation and nitrotyrosine formation were evaluated using the terminal deoxynucleotidyl transferase dUTP nick end labeling assay and immunohistochemistry, respectively.The assay was performed as described previously .Blood samples were centrifuged at 4000 × g at 4 °C for 10 min after coagulation and serum was collected.Serum ALT and AST levels were determined using a bio-analyzer.Liver tissue samples were fixed in 10% neutral buffered formalin and embedded in paraffin.Microtome sections were prepared and stained with hematoxylin and eosin.The TUNEL assay was performed using the ApopTag® Peroxidase In Situ Apoptosis Detection Kit according to the manufacturer’s instructions.To evaluate nitrotyrosine formation, immunohistochemical analysis using an anti-nitrotyrosine polyclonal antibody was performed.Microtome sections were incubated overnight at 4 °C with the anti-nitrotyrosine antibody and stained with Histofine® Simple Stain MAX PO.Following the wash step, 3,3′-diaminobenzidine was applied to the sections and the sections were incubated with Mayer’s hematoxylin.Histological evaluation was performed in an unblinded manner.HepG2 cells, a human hepatoma cell line, were purchased from RIKEN BioResource Center.HepG2 cells were cultured in minimum essential medium containing 10% fetal bovine serum,
100 IU/mL penicillin, 100 mg/mL streptomycin, and 0.1 mM non-essential amino acids. Cells were cultured under 5% CO2 and 95% air at 37 °C. RLC-16 cells, a rat hepatocyte cell line, were purchased from RIKEN BioResource Center and cultured under the same conditions as HepG2 cells. Cellular injury induced by NAPQI was evaluated using methods described previously. HepG2 or RLC cells were seeded at 1 × 104 cells/well into a 96-well plate. After 24 h to allow cells to adhere, the medium was replaced with fresh medium containing NAPQI, with or without test compounds such as EtPy, PyA, or PEP. NAC was used as the positive control against NAPQI-induced cellular injury. Mitochondrial dehydrogenase activity was estimated 24 or 48 h after NAPQI treatment using a modified MTT assay and a cell counting kit. Apoptotic or necrotic-like cellular injury was measured using an annexin V-FITC apoptosis detection kit. HepG2 cells were seeded at 1 × 105 cells/dish into a 35-mm dish for 24 h. The culture medium was then replaced with fresh medium containing NAPQI with or without the test compounds. Cells were stained using the annexin V-FITC apoptosis detection kit and analyzed using a BD FACSCalibur™. To determine the antioxidative effects of EtPy, we evaluated its scavenging activity against peroxynitrite and hydroxyl radicals, which play important roles in the development of APAP hepatotoxicity. The methods have been described previously. We used dihydrorhodamine, a reduced form of rhodamine, as a probe for peroxynitrite activity to examine the scavenging effect of EtPy. Peroxynitrite was diluted with 0.3 M NaOH to give a 7-mL solution. A mixture containing 100 mL sodium phosphate buffer, 0.3 mg/mL gelatin, 25 mL dihydrorhodamine, and EtPy was pre-incubated at 37 °C for 5 min, and the reaction was started by adding 200 mL of the mixture to 2 mL of the peroxynitrite solution. After incubation at 37 °C for 5 min, the rhodamine generated was estimated using the fluorescence measurements described above. The peroxynitrite-dependent increase in fluorescence was then converted into a rhodamine concentration using an external rhodamine standard. Hydroxyl radicals (OH) were generated using an H2O2/ultraviolet radiation system and measured by electron paramagnetic resonance (EPR) spin-trapping with 5,5-dimethyl-1-pyrroline N-oxide (DMPO). The reaction mixture contained 100 μM diethylenetriamine-N,N,N′,N″,N″-pentaacetic acid, 9 mM DMPO, and 500 μM H2O2 in the absence or presence of EtPy. This mixture was immediately transferred to EPR flat cells and irradiated with UV for 30 s.
EPR spectra were obtained immediately after UV-irradiation and were recorded at room temperature on a JES-TE 200 EPR spectrometer.After recording EPR spectra, the signal intensities of the DMPOOH adducts were normalized to that of a manganese oxide signal, in which Mn2+ served as an internal control.An EPR spectrum of DMPO spin adducts of OH was generated, and the relative quantification of the scavenging activity was evaluated using DMPO spin adduct intensities of OH.Statistical analyses were performed using GraphPad Prism.Multiple comparisons were used to examine the statistical significance of the data.When uniform variance was identified using Bartlett’s analysis, one-way analysis of variance was used to test for statistical significance.When significant differences were identified, data were further analyzed using the Tukey multiple range test, as appropriate.To confirm the effects of EtPy against APAP hepatotoxicity, we used a mouse model of liver injury induced by an overdose of APAP.Mice were treated with multiple doses of EtPy after the APAP injection, for totals of 50 and 100 mg/kg of EtPy for each group.Treatment with EtPy significantly reduced serum ALT and AST levels at 8 h after the APAP injection.In the vehicle treatment group, numerous areas of centrilobular necrosis with bleeding were observed, which is a typical manifestation of APAP hepatotoxicity.Although slight histological changes, such as swelling of hepatocytes around central veins, were observed, little centrilobular necrosis was visible in both EtPy groups.To investigate the effects of EtPy on nuclear DNA fragmentation, the TUNEL assay was performed.As shown in Fig. 2A, many lobes in the vehicle group showed some TUNEL-stained cells.In contrast, only a few TUNEL-positive cells were visible in the EtPy groups.Hepatic nitrotyrosine adduct formation was evaluated using immunohistochemical staining for nitrotyrosine in hepatic histological sections.Extensive nitrotyrosine staining of centrilobular hepatocytes was observed in all groups.Treatment with EtPy did not seem to inhibit the nitrotyrosine adduct formation induced by APAP.To evaluate the cytoprotective potential of EtPy against hepatocellular injury induced by NAPQI, a toxic metabolite of APAP, we examined the effects of EtPy on mitochondrial dehydrogenase activity in NAPQI-treated HepG2 and RLC cells.A significant decrease in mitochondrial dehydrogenase activity was observed with NAPQI treatment in HepG2 cells.Treatment with EtPy significantly prevented the decrease in mitochondrial dehydrogenase activity induced by NAPQI in a dose-dependent manner in the cells.These preventive effects exerted by 1 mM EtPy were comparable with the effects of 1 mM NAC, a therapeutic antidote against APAP hepatotoxicity.The same preventive effect of EtPy was observed in another hepatocellular model using a rat hepatocellular cell line.We compared the cytoprotective effects of EtPy with the parent compound, PyA, and a glycolysis intermediate, PEP, a precursor of PyA, in NAPQI-treated HepG2 cells.The 1 mM PyA group showed a significant inhibition in the decrease of mitochondrial dehydrogenase activity induced by NAPQI in HepG2 cells, as well as the EtPy and NAC groups.However, 1 mM of PEP did not exert any effect on NAPQI-induced mitochondrial dehydrogenase inactivity in HepG2 cells.We evaluated the effects of pyruvate derivatives on cellular apoptosis- or necrotic-like cell death induced by NAPQI in HepG2 cells.Representative data from FACS analysis of annexin V-FITC- and 
propidium iodide-stained HepG2 cells are shown in Fig. 6A. Significant increases in both annexin V and propidium iodide fluorescence intensities were observed with NAPQI treatment compared with the control group.Treatment with EtPy attenuated the increase in annexin V and propidium iodide fluorescence intensities induced by NAPQI, with statistical significances being observed in quantitative measurements of numbers of annexin V-stained cells and annexin V–propidium iodide double-stained cells.The attenuating effects were comparable with the effects of NAC.Although PyA also tended to prevent the increase in annexin V and/or propidium iodide fluorescence, no statistically significant differences were observed.PEP did not show any significant effects on annexin V and propidium iodide fluorescence intensities in NAPQI-treated HepG2 cells.As shown in Fig. 7A, 1 and 10 mm EtPy slightly scavenged peroxynitrite, reducing levels by approximately 10%.Fig. 7B shows data for the radical scavenging activity of EtPy as measured by EPR.The PR signals generated by the H2O2/UV radiation system showed typical OH signals and were slightly blocked by 10 mM EtPy.EtPy has been identified as a possible antidote against APAP-induced hepatotoxicity in mouse models .However, the precise mechanism of the in vivo hepatoprotective action of EtPy remains unclear.In this study, we confirmed that EtPy prevented serum transaminase elevation, hepatocellular centrilobular necrosis, and DNA fragmentation, but did not prevent nitrotyrosine adduct formation, induced by APAP in mice.The results support the previous report by Yang et al. , and suggest the preventive mode by which EtPy attenuates APAP hepatotoxicity, at least in part, through the inhibition of hepatocellular injury without oxidative stress.Another objective of this study was to assess whether EtPy has a direct hepatocellular protective effect against cellular injury induced by NAPQI, a highly reactive toxic metabolite of APAP, in an in vitro hepatocellular experimental system.In this study, we found that EtPy prevented changes in cellular injury parameters, such as mitochondria dehydrogenase inactivity, and increases in annexin V and propidium iodide fluorescence intensities induced by NAPQI treatment in HepG2 cells.The results indicate that EtPy has protective potential against cellular injury induced by NAPQI in cultured hepatic cell models.Numerous previous studies have demonstrated the potent anti-inflammatory potential of EtPy .EtPy can inhibit the release of cytokines from macrophages, such as TNF-α and IL-6 , and reactive oxygen species , which are associated with the development of APAP-induced liver injury in mice .Therefore, these anti-inflammatory properties of EtPy seem to play an important role in the attenuation of hepatitis observed in APAP-overdosed mice; that is “indirect” hepatocyte protection of EtPy exerted through inhibition of macrophages, cytokines, and reactive oxygen species.Oxidative stress is an important factor in APAP hepatotoxicity, and mitochondrial oxidative stress seems to play a critical role in the development of hepatotoxicity .Jaeschke et al. 
reported that nitrotyrosine formation is a marker of mitochondrial oxidative stress in APAP-induced liver injury.Nitric oxide reacts with superoxide in mitochondria and forms the reactive nitrogen peroxynitrite, which can then bind to the tyrosine residue of cellular proteins and develop into nitrotyrosine.Previously, we found that hydrophilic C6010 nanoparticles can attenuate APAP hepatotoxicity via a potent scavenger action against reactive oxygen and nitrogen, including peroxynitrite .In this study, although extensive nitrotyrosine formation was observed in hepatocytes around central veins, EtPy did not inhibit nitrotyrosine formation in the mouse liver.Consistent with previous reports, we also found that EtPy showed a scavenger effect against peroxynitrite and hydroxyl radicals.Although the effects were statistically significant, higher concentrations were needed than the concentration required to exert a cytoprotective effect.From this, we consider that peroxynitrite and hydroxyl radicals may not be a main target molecule of EtPy in the protection against APAP hepatotoxicity.Further studies are needed on this point.The results also indicate that inhibition of hepatocellular death induced by NAPQI may also be involved in antidote potential of EtPy against APAP hepatitis.The preventive effects of EtPy against NAPQI-induced cellular injury were comparable with the effects of NAC, an approved drug for APAP hepatitis, in this study.The results also suggest that EtPy is an attractive therapeutic candidate against APAP hepatotoxicity.NAC seems to detoxify NAPQI toxicity through glutathione supplementation and an anti-oxidative action during APAP hepatotoxicity .Wang et al. demonstrated that EtPy has a preventive action on dopamine-induced cell death in PC12 cells, which may be because of the anti-apoptosis action of EtPy.Some in vivo studies suggest anti-apoptotic and/or necrotic effects of EtPy.In the current study, EtPy inhibited the increase in annexin V- and/or propidium iodide-stained cells in NAPQI-treated HepG2 cells.Annexin V stain is a representative marker for apoptosis- or necrosis-like dead cells.Therefore, EtPy may attenuate cellular injury induced by NAPQI through the inhibition of an apoptosis- or necrosis-like cell death pathway.PyA, a parent compound of EtPy, and PEP, a precursor of PyA, have been demonstrated to show anti-inflammatory and anti-oxidative potential in previous studies .Although PEP did not show significant protection against NAPQI-induced cellular injury, PyA tended to attenuate the cytotoxicity induced by NAPQI in the current study.Therefore, the protective action of EtPy might be derived from the parent compound, PyA.It is known that PyA rapidly forms parapyruvate, a potent inhibitor of the Krebs cycle, via an aldol-like condensation reaction in aqueous solution .Therefore, the use of PyA per se as a therapeutic reagent is problematic; EtPy has been developed to circumvent this problem.Indeed, EtPy has been demonstrated to be a suitable derivative with the inherent advantages of PyA as an anti-inflammatory and anti-oxidative drug .Clinical trials on EtPy have been performed to evaluate the safety and therapeutic effects in high-risk cardiac surgery patients with systemic inflammation and in horses .Although no significant therapeutic benefit has been identified in clinical trials to date, the possibility of the clinical use of EtPy cannot be denied because of its safety observed in the trials.Further studies are necessary to evaluate the clinical benefits 
in other diseases.From previous and current findings, we suggest that EtPy should be investigated for its potential benefits as a new drug in patients with APAP overdose.In this study, while we observed the attenuating effects of EtPy in mouse and cellular models of APAP-induced liver injury, the study does have some limitations.First, the protective mechanism of EtPy against APAP hepatotoxicity was not fully clarified in this study.We first thought that inhibition of oxidative stress, particularly with regard to nitrotyrosine formation, was a central part of the protective mechanism of EtPy.However, treatment with EtPy did not inhibit nitrotyrosine formation in the mouse liver treated with APAP.Therefore, we now consider that other factors in APAP hepatotoxicity, such as mitophagy , endoplasmic reticulum stress , DNA fragmentation by endonuclease G, and apoptosis-inducing factor , may be critical targets of EtPy.Further studies are needed to clarify the precise mechanism.Second, we used mitochondrial dehydrogenase activity as a measure of cellular dysfunction.Although mitochondrial dehydrogenase activity measured using WST or MTT assays has been used as a parameter in viable cells, it does not always reflect cell viability.Studies have indicated that cellular mitochondrial dehydrogenase activity was inhibited without cell death under certain experimental conditions .Indeed, we also observed inconsistencies in the results between mitochondrial dehydrogenase activity and the annexin V/PI assay in this study.Therefore, the decrease in mitochondrial dehydrogenase activity observed in NAPQI-treated cells may indicate other phenomena, such as mitochondrial dysfunction, rather than only cell death.Third, to evaluate the direct hepatoprotective effect of EtPy against NAPQI-induced toxicity, we used immortalized human hepatoma HepG2 cells for the hepatocyte model.Although HepG2 cells are used as a hepatocellular model, there seems to be some weakness when used in an APAP hepatotoxicity model .HepG2 cells show lower expression of CYP2E1 and do not react well to APAP treatment.Antidote NAC cannot protect HepG2 cell injury induced by APAP .Therefore, the mode of cell death in HepG2 cells induced by APAP was quite different when compared with that of primary cultured hepatocytes .In the current study, we used an active metabolite NAPQI instead of APAP to induce cytotoxicity in HepG2 cells, and observed preventive effects with NAC as well as EtPy.The cytoprotective action of EtPy and NAC was also confirmed in another cell line.Even though the experimental system using NAPQI does have some limitations, the results suggest that EtPy has cytoprotective potential against NAPQI in cells.Further studies are needed to determine the direct hepatocyte protective effect of EtPy and the mode of action using other models, such as human primary hepatocytes or pluripotent stem cell derived hepatocytes .We confirmed that EtPy showed hepatoprotective action against APAP hepatitis in mice.Although hepatic DNA fragmentation was inhibited with EtPy treatment in mice, nitrotyrosine formation was not prevented.EtPy also significantly attenuated cellular injury induced by NAPQI, an active toxic metabolite of APAP, in hepatic cell lines.The results suggest that the protective potential of EtPy against APAP hepatotoxicity observed in animal models is exerted, at least in part, through direct hepatocellular protection against NAPQI.Minako Nagatome, Yuki Kondo: Conceived and designed the experiments; Performed the experiments; 
Analyzed and interpreted the data.Mitsuru Irikura, Tetsumi Irie: Conceived and designed the experiments.Yoichi Ishitsuka: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper.The authors declare that they have no competing interests.This work was supported by the Japan Society for the Promotion of Science.No additional information is available for this paper.
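The quantitative workflow described in the methods above (estimation of mitochondrial dehydrogenase activity from the modified MTT assay, Bartlett's test for uniform variance, one-way ANOVA, and the Tukey multiple-range test when significant differences are found) can be summarized in a short worked sketch. The snippet below is a hypothetical re-implementation in Python with scipy and statsmodels rather than GraphPad Prism, which the authors used; the group names, absorbance values, and the percent-of-control normalization are assumptions introduced only for illustration, not data or code from the study.

```python
# Hypothetical re-implementation (not the authors' code) of the analysis
# pipeline described in the methods: WST/MTT absorbances are normalized to
# the untreated control, variance homogeneity is checked with Bartlett's
# test, then one-way ANOVA is run and, if significant, followed by Tukey's
# multiple-comparison test. All numbers are invented placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {                     # raw absorbance readings per well (placeholders)
    "control":    [1.02, 0.98, 1.05, 1.00],
    "NAPQI":      [0.41, 0.38, 0.44, 0.40],
    "NAPQI+EtPy": [0.78, 0.74, 0.81, 0.76],
    "NAPQI+NAC":  [0.80, 0.77, 0.79, 0.82],
}

# Express mitochondrial dehydrogenase activity as percent of untreated control
# (a common convention, assumed here for illustration).
control_mean = np.mean(groups["control"])
percent = {k: 100.0 * np.asarray(v) / control_mean for k, v in groups.items()}

# 1) Bartlett's test for homogeneity of variance.
_, bartlett_p = stats.bartlett(*percent.values())

if bartlett_p >= 0.05:  # variances judged uniform, as in the stated workflow
    # 2) One-way ANOVA across all groups.
    _, anova_p = stats.f_oneway(*percent.values())
    print(f"ANOVA p = {anova_p:.3g}")
    if anova_p < 0.05:
        # 3) Tukey's multiple-range test for pairwise comparisons.
        values = np.concatenate([np.asarray(v) for v in percent.values()])
        labels = np.concatenate([[k] * len(v) for k, v in percent.items()])
        print(pairwise_tukeyhsd(values, labels, alpha=0.05))
else:
    # The methods do not state which alternative is used when variance is non-uniform.
    print(f"Variances not uniform (Bartlett p = {bartlett_p:.3g})")
```

The decision rule (Tukey comparisons only after a significant ANOVA under uniform variance) follows the statistical methods stated above; the normalization step is an illustrative assumption.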
Acetaminophen, a common analgesic/antipyretic, is a frequent cause of acute liver failure in Western countries. The development of an effective treatment for acetaminophen hepatotoxicity is therefore crucial. Ethyl pyruvate, an ethyl ester derivative of pyruvic acid, has been identified as a possible candidate against acetaminophen hepatotoxicity in animal experiments. However, the mode of the hepatoprotective action of ethyl pyruvate remains unclear. We examined the hepatoprotective effect of ethyl pyruvate against hepatocyte injury and oxidative stress in a mouse model of acetaminophen hepatotoxicity. In addition, to examine whether ethyl pyruvate provides direct hepatocellular protection against acetaminophen hepatotoxicity independently of the influence of inflammatory cells, such as macrophages, we examined its effects on cellular injury induced by N-acetyl-p-benzoquinone imine, a toxic metabolite of acetaminophen, in a human hepatocyte cell line, HepG2 cells. Treatment with ethyl pyruvate significantly and dose-dependently prevented the increases in serum transaminase levels and the hepatic centrilobular necrosis induced by an acetaminophen overdose in mice. Although hepatic DNA fragmentation induced by acetaminophen was also attenuated by ethyl pyruvate, nitrotyrosine formation was not inhibited. Ethyl pyruvate significantly attenuated the loss of mitochondrial dehydrogenase activity induced by N-acetyl-p-benzoquinone imine in HepG2 cells, and the same attenuating effect was observed in a rat hepatocyte cell line. Increases in annexin V- and propidium iodide-stained cells induced by N-acetyl-p-benzoquinone imine were also prevented by ethyl pyruvate in HepG2 cells. Pyruvic acid, the parent compound of ethyl pyruvate, tended to attenuate these changes. The results indicate that ethyl pyruvate exerts direct hepatocellular protection against the N-acetyl-p-benzoquinone imine-induced injury that occurs in acetaminophen overdose. Together, the in vivo and in vitro results suggest that ethyl pyruvate attenuates acetaminophen-induced liver injury, at least in part, through this cellular protective potential.
140
Chemoattraction of bone marrow-derived stem cells towards human endometrial stromal cells is mediated by estradiol regulated CXCL12 and CXCR4 expression
CXCR4 belongs to the CXC family of chemokine receptors.Interaction of CXCR4 with its ligand, stromal derived factor plays a key role in the mobilization and homing of stem cells.CXCR4, expressed on the surface of stem cells, serves as a target for modulating migration.CXCL12 is produced by the stromal cells and endothelial cells of many organs including bone marrow, endometrium, skeletal muscle, liver and brain.In human endometrium, CXCL12 is expressed by stromal cells.Estradiol stimulates CXCL12 production from endometrial stromal cells suggesting a role in stem cell recruitment to the uterus.BM-derived cells including hematopoietic stem cells, mesenchymal stromal cells, and endothelial progenitor cells, significantly contribute to peripheral tissue repair and angiogenesis.Therefore, factors influencing BM-derived cell migration and function are likely to have a broad impact.Overexpression of CXCR4 in stem cells enhances MSCs homing in vivo to bone marrow as well as migration in vitro towards CXCL12.Recently it has been demonstrated that estrogen receptor is expressed in EPCs in vivo and in vitro.EPCs proliferation is induced during the menstrual phase and the proliferation can be affected by estrogen through ERα activation.These studies suggested the potential regulation of stem cells by sex steroids.Previous studies from our laboratory showed that BM-derived stem cells can engraft in the murine endometrium.We have shown that ischemia–reperfusion injury, toxicant exposure, and medications can alter the migration of BM-derived stem cells to the uterus, however the molecular mechanism responsible for the recruitment and engraftment of these cells is unknown.Here we report the effects of female sex hormones estradiol and progesterone on CXCR4 and CXCL12 expression, and the role of this chemokine and its receptor in migration of BMCs towards hESCs.Bone marrow derived stem cells migrate to the uterine endometrium of both mice and humans.This migration likely has a key role in the repair of the uterus after damage.Indeed, our group has previously demonstrated that migration and engraftment of BM derived stem cells to the uterine endometrium is increased after ischemic injury and decreased by environmental toxins such as tobacco.Further, BMC delivery to mice after injury improved reproductive performance and mBMCs express several nuclear receptors.Characterization of the chemokines that regulate stem cell engraftment may allow increased engraftment of endogenous stem cells injury.It has been previously shown in other tissues that increased CXCL12 production at a site of injury enhances stem cell recruitment and promotes functional recovery.Here we demonstrate that bone marrow cells will migrate towards endometrial cell conditioned media; this chemoattraction of CXCR4 expressing bone marrow cells is similarly mediated by CXCL12 production by endometrial cells.CXCL12 has been previously identified as an estrogen-regulated gene in estrogen receptor-positive ovarian and breast cancer cells.Here we show that in the endometrium, E2 significantly increased CXCL12 expression, suggesting a mechanism by which stem cells are recruited to the uterus in reproductive age women; it is likely that this recruitment ceases after menopause when the uterus is not longer needed for reproduction.Similarly, an increase in CXCR4 in bone marrow in response to estrogen enhances the mobility of these cells when needed for reproduction and in response to uterine signaling.Interestingly BM cells also produce CXCL12 at a 
high level.It is likely that local CXCL12 serves to retain these cells in the BM, preventing depletion.Elevated CXCL12 from the endometrium likely competes with BM derived CXCL12 as a chemoattractant for the BM stem cells.Elevated E2, which reaches the levels used here in the late proliferative phase, may ensure an adequate mobilization of stem cells near the time of ovulation and embryo implantation.P4 also induces the production of CXCL12 and may lead to further mobilization of stem cells in support of pregnancy.The regulation of stem cells by sex steroids is likely a widespread phenomenon.Nakada et al. showed that E2 promotes the HSCs self-renewal and the replicative activity of the HSC pool is augmented in female versus male mice.Li et al. reported that E2 enhanced the recruitment of BM-derived EPC into infarcted myocardium and induced CXCR4 expression in mice.Similarly, Foresta et al. have observed an increase in the number of CXCR4+ EPC during the ovulatory phase, which was likely caused by E2 activation.Sex steroid induced alterations in stem cell renewal and mobilization may underlie many sex specific differences in health and disease.In the ectopic endometrium of endometriosis, high E2 biosynthesis and low E2 inactivation lead to an excess of local E2.These provide a favorable condition for inducing BM-derived stem cell migration to normal and ectopic endometrium.Consistent with this theory, we have previously shown that stem cells are attracted to the ectopic endometrium.The ectopic lesions compete for a limited pool of circulating BM-derived cells, depriving the uterus of the normal number of recruited stem cells.Insufficient uterine repair and regeneration may contribute to the infertility associated with endometriosis.The identification of CXCL12 as the primary chemokine that recruits BM-derived cells to the uterus may allow therapeutic use in endometriosis and other uterine disease to restore fertility.The expression of CXCL12 in mouse endometrial cells is far less than endometrial cells in humans.This may be the cause for the decrease in the number of stem cells recruited to the uterus in mouse.Moreover, mice do not menstruate and thereby may not be a need to attract new cells with every cycle while humans menstruate and have a greater need to regenerate the endometrium from stem cells.We conclude that estradiol plays a key role in normal and ectopic endometrium by augmenting the migration of BM-derived stem cells to the endometrium.Estradiol regulates stem cell migration by inducing CXCL12 expression by endometrial stromal cells and CXCR4 expression by BM-derived cells.Sex steroid induced stem cell recruitment may explain many health related sex differences.Estradiol or CXCL12/CXCR4 may prove to be useful therapeutic agents in stem cell mediated diseases.Mouse bone marrow cells were prepared from 8–10 weeks old female C57 BL/6 mice by flushing bone marrow from the tibia and femur, and filtering the marrow through sterile 70-μm nylon mesh.The filtered mBMCs were grown at a density of 2.5 × 106 cells/ml in DMEM/F-12 medium supplemented with 15% fetal bovine serum, containing penicillin and streptomycin.After 48 h the cells were gently washed with PBS and fresh medium added; the medium was subsequently changed for every 3–4 days until two weeks when the cells were used for experiments described below.Mouse uterine cells were prepared from 6–8 weeks old female C57 BL/6 mice by enzymatic digestion of the uterus in 0.125% type IA collagenase for 1 h at 37 °C, and then filtered 
through a 70-μm filter.Human endometrial stromal cells were obtained from human endometria in the proliferative phase as described by Ryan et al.Both mUCs and hESCs were cultured in DMEM/F12 medium supplemented with 10% FBS and penicillin/streptomycin for one week.The cells were then washed with PBS, trypsinized, plated and cultured for an additional 48 h before carry out the experiments.Experiments used to obtain the mouse and human cells were conducted under approved Yale Institutional Animal Care and Use Committee and Human Investigations Committee protocols, respectively.Cells grown on glass microscope slides were fixed with freshly prepared 4% formaldehyde for 10 min and rinsed three times for 5 min each with PBS.The cells were blocked with 4% BSA in PBS for 30 min and incubated with the primary antibody in a humidified chamber overnight at 4 °C.For ABC-ICC, the cells were incubated with the secondary antibody in 1% BSA for 30 min at room temperature.The ABC staining and 3, 3′diaminobenzidine kits were used to visualize the immunocytochemical reaction under light microscope.For fluorescence-ICC, the cells were incubated with the secondary antibody in the dark for 30 min at room temperature and 4′, 6-diamidino-2-phenylindole was added on to the cells.The slides were examined under inverted fluorescence microscope.After two weeks of culture, mBMCs were analyzed for mesenchymal stromal cells, and endothelial progenitor cells by flow cytometry.The cells were incubated with the fluorescent-labeled antibodies against CD90, CD105, CD34, Flk-1 and CD31, or with isotype-matched irrelevant antibody for 30 min on ice in dark.The cells were then washed with flow cytometry staining buffer 3 times for 5 min at 3,000 rpm and the cell pellet was resuspended in 1 ml ice cold staining buffer for cell sorting.Flow acquisition was performed on LSRII Fortessa, LSRII, or FACSCalibur analyzers, and data were analyzed using Diva software."CXCL12α was assayed from the supernatants of cell cultures using ELISA kit according to the manufacturer's instructions.mBMC, hESCs and mUC were cultured in DMEM/F12 supplemented with 10% FBS and 1% penicillin and streptomycin in a 6-well plate.The supernatants were collected from 48 h old cell cultures.For steroid treatment, the 48 h old mBMC and hESCs cells were serum starved overnight and treated for 24 h with E2 or progesterone at concentrations of 1 × 10− 8, 1 × 10− 7, 1 × 10− 6 M.The supernatants were then collected.The migration assay for mBMC and hESC cells was carried out using 8-μm pore size polycarbonate membrane.The serum free conditioned medium collected from 48 h old cultures from both cell types was added into the lower chamber and 200 μl of cells was placed into the upper insert.The cells in the upper insert were serum starved overnight and treated with either E2 for 24 h at 1 × 10− 7 M, or AMD3100 antagonist of CXCR4 for 30 min before the migration assay.After 16 h in a humidified CO2 incubator at 37 °C, the non-migrating cells were scraped with a cotton swab from the top of the membrane.The cells migrating across the membrane were fixed, stained, and counted.Results are reported as chemotactic index, defined as the number of cells migrating in response to the conditioned supernatants divided by number of cells that responded to the serum-free DMEM/F12 medium.Ethanol was used as a vehicle control to exclude the nonspecific effects on CXCR4.Protein extracts from different cells, as well as treated mBMCs, were subjected to SDS-PAGE and immunoblotting using 
standard methods.Anti-CXCR4 and anti-α-tubulin antibodies used were from Santa Cruz Biotechnology while secondary antibody conjugated with horseradish peroxidase was obtained from Cell Signaling.The CXCR4 protein band densities were quantified using Quantity One software from BioRad and relative band density was calculated as a ratio of sample to α-tubulin."RNA was isolated using TRIzol and purified on RNeasy minicolumns, with on-column deoxyribonuclease digestion, as per the manufacturers' instructions.First-strand cDNA was reverse transcribed using iScript cDNA Synthesis Kit while iQ SYBR Green Supermix based assays were performed for mCXCR4, mCXCL12 and α-tubulin for qRT-PCR analysis.The CXCR4 primers were as follows: forward 5′-TTTCAGATGCTTGACGTTGG-3′; and reverse 5′-GCGCTCTGCATCAGTGAC-3′; the CXCL12 primers were, forward 5′-ACTCACACTGATCGGTTCCA-3′ and reverse 5′-AGGTGCAGGTAGCAGTGACC-3′ and the primers for α-tubulin were, forward, 5′-ATGGAGGGGAATACAGCCC-3′ and reverse, 5′-TTCTTTGCAGCTCCTTCGTT-3′.For each experimental sample, a control without reverse transcriptase was run to verify that the amplification product arose from cDNA and not from genomic DNA.The relative expression levels normalized to α-tubulin, were determined using the comparative CT method.Results are presented as the mean ± S.D. Statistical significance was determined using one-way ANOVA with the Newman–Keuls multiple comparisons test.All statistical analyses were carried out using Graph Pad Prism 4.00 for Macintosh.The CXCR4 and CXCL12 genes are remarkably conserved across diverse species.The human and murine CXCL12 differs by one amino acid and is cross reactive, providing us with an opportunity to conduct the study of murine CXCR4 with human CXCL12 signaling.The mBMCs were cultured for two weeks, washed with PBS, and trypsinized.The cell pellet was resuspended in FACS staining buffer and incubated with fluorescent labeled antibodies against CD90, CD105, CD34, CD31 and Flk-1.The cells were then and analyzed by FACS.As shown in Fig. 1A, 25.6% of mBMCs expressed CD90 while CD105, CD34, CD31 and Flk-1 were expressed on 20.7%, 67.8%, 60.5% and 68.5% of mBMCs respectively.CD90+ and CD105+ were considered MSC-specific surface markers while CD34+, CD31+, and Flk-1+ represented the EPC.Cell lysates were prepared from 48 h old cells and 25 μg of protein from each cell type was subjected to SDS-PAGE followed by immunoblotting.As shown in Fig. 2A mBMCs had the highest CXCR4 expression among the three cell types while lowest expression was seen in hESCs.The differential expression of CXCR4 protein levels among these cell types was correlated with mRNA levels confirmed by qRT-PCR.The density of specific bands was quantified using Quantity One software.The relative band density was calculated as a ratio of sample to ɑ-tubulin.The CXCL12α was measured from the conditioned medium collected from the 48 h old cells using ELISA kit.As shown in Fig. 2B, CXCL12 was predominantly expressed in mBMCs; however hESCs also expressed CXCL12 at high level while mUCs showed very low level, CXCL12 expression.We localized the expression of CXCR4 in mBMCs with fluorescent ICC.As shown in Fig. 3, the CXCR4 is expressed intracellularly in 37.5% of mBMCs.Fig. 
3A shows CD45 expression on mBMCs while 3B shows CXCR4 expression; 3C shows DAPI staining for nuclei and 3D shows the merge of CD45 and CXCR4 expression.CXCR4 expression is predominantly seen in the CD45 negative cells.A migration assay was carried out to determine the chemotactic activity of CXCL12, using the conditioned media.We detected the migratory capacity of mBMCs towards hESC supernatant and hESCs towards mBMC supernatant.As shown in Fig. 4, hESC supernatant significantly induced the mBMC migration.Pretreatment of mBMCs with the CXCR4 antagonist AMD3100 blocked the mBMC migration in a dose-dependent manner, and 100 μg/ml AMD3100 completely abolished the mBMC migration.qPCR analysis demonstrated that E2 caused a significant increase in mRNA expression levels of CXCR4 in mBMCs in a dose dependent manner at 6 h but at 24 h only physiological levels of E2 continued to drive CXCR4 expression in mBMCs.As shown in Fig. 5C, progesterone alone at a physiological concentration of 10− 7 M also induced CXCR4 in mBMCs.The combination of E2 and P4 resulted in a similar level of expression as treatment with either sex steroid alone.Ethanol was used as a vehicle to determine the non-specific effects on the CXCR4 expression.Ethanol treated cells does not showed any change in the CXCR4 expression comparing to control cells which are not treated either by ethanol or E2 or P4.Western blot analysis confirmed that CXCR4 protein induction was significantly increased after treatment with 10− 7 M E2 for 24 h while E2 at a concentration of 10− 8 M showed no induction.Conversely, E2 at a concentration of 10− 6 M did not increase CXCR4 protein levels compared to untreated cells.In summary, physiological concentrations of E2 and P4 results in increased expression of CXCR4 in mBMCs.Based on the results of steroid-induced CXCR4 expression, physiological levels of E2 and P4 were selected for examination of the effects of sex steroids on CXCL12 production.As shown in Fig. 5E, neither E2 nor P4 had any effect on CXCL12 production in mBMCs.However, in hESCs, E2 caused a significant increase in CXCL12 production compared to control and surprisingly P4 effectively inhibited E2-induced CXCL12 production in hESCs.In the both cell types mBMC and hESCs the protein levels were correlated to the mRNA levels confirmed by qRT-PCR.To determine if the enhanced CXCL12 and CXCR4 production induced by E2 would increase migration of BMCs to hESCs, we treated hESCs and mBMCs with E2 and performed a migration assay using the conditioned media, with and without the CXCR4 antagonist.The hESC supernatants were collected from 48 h old cultures.The mBMC in the upper insert were pretreated with 1 × 10− 7 M E2 for 24 h after overnight serum starvation.Migration of mBMC was observed after 16 h.As shown in Fig. 6, mBMC migrated towards the E2-induced hESC supernatant in greater numbers compared to the untreated hESC supernatant.The number of migrated mBMCs decreased 42–51% when mBMCs were pretreated with the CXC4 antagonist AMD3100.The E2 induced migration of BMCs to hESCs was mediated by CXCL12/CXCR4.
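Two simple calculations recur in the methods and results above: the chemotactic index (cells migrating toward conditioned medium divided by cells migrating toward serum-free DMEM/F12) and the comparative CT normalization of qRT-PCR data to α-tubulin. A minimal Python sketch of both is given below; the helper function names and all numeric values are hypothetical and serve only to illustrate the arithmetic, assuming the standard 2^-ΔΔCT form of the comparative CT method.

```python
# Hypothetical illustration of two calculations described in the methods:
#   * chemotactic index = migrated cells toward conditioned medium /
#                         migrated cells toward serum-free DMEM/F12
#   * relative mRNA expression by the comparative CT (2^-ddCT) method,
#     normalized to alpha-tubulin and to an untreated control sample.
# Function names and all numbers are placeholders, not the authors' code.

def chemotactic_index(cells_conditioned: float, cells_serum_free: float) -> float:
    """Ratio of cells migrating toward conditioned vs. serum-free medium."""
    return cells_conditioned / cells_serum_free

def relative_expression(ct_target: float, ct_tubulin: float,
                        ct_target_ctrl: float, ct_tubulin_ctrl: float) -> float:
    """Comparative CT method: fold change = 2 ** -(dCT_sample - dCT_control)."""
    d_ct_sample = ct_target - ct_tubulin            # normalize to alpha-tubulin
    d_ct_control = ct_target_ctrl - ct_tubulin_ctrl
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Example with invented values (e.g., E2-treated mBMCs vs. untreated control)
print(chemotactic_index(cells_conditioned=185, cells_serum_free=60))   # ~3.1
print(relative_expression(ct_target=24.1, ct_tubulin=18.0,
                          ct_target_ctrl=26.0, ct_tubulin_ctrl=18.1))  # ~3.5-fold
```

With the placeholder values shown, the conditioned medium would give a chemotactic index of about 3.1 and the treated sample roughly a 3.5-fold relative expression; the actual values reported in the study are those in the figures cited above.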
Bone marrow-derived cells engraft in the uterine endometrium and contribute to endometriosis. The mechanism by which these cells are mobilized and directed to the endometrium has not been previously characterized. We demonstrate that human endometrial stromal cells (hESCs) produce the chemokine CXCL12 and that bone marrow cells (BMCs) express the CXCL12 receptor, CXCR4. Treatment with physiological levels of estradiol (E2) induced CXCL12 expression in hESCs and CXCR4 expression in BMCs. BMCs migrated towards hESC-conditioned media; a CXCR4 antagonist blocked migration, indicating that CXCL12 acting through its receptor, CXCR4, is necessary for chemoattraction of BM cells to human endometrial cells. E2 increased both CXCL12 expression in endometrial cells and CXCR4 expression in BM cells, further enhancing chemoattraction. E2-induced CXCL12/CXCR4 expression in the endometrium and BM, respectively, drives migration of stem cells to the endometrium. The E2-CXCL12/CXCR4 signaling pathway may prove useful in developing treatments for endometrial disorders, and may be antagonized to block stem cell migration to endometriotic lesions.
141
Push-pull farming systems
All farming systems require crop protection technologies for predictable and economic food production.Pesticides currently serve us well, with no convincing evidence for legally registered pesticides causing problems of human health or environmental impact .In terms of risk analysis, risks associated with use of pesticides have been extremely low for some time .However, for sustainable pest management, seasonal inputs requiring external production and mechanical application need to be replaced by approaches involving direct association with the crop plants themselves .Current synthetic chemical pesticides have often been designed from natural product lead structures or are themselves natural products and, although they are in no way more benign than synthetic pesticides, there are, in nature, genes for their biosynthesis which could be exploited for delivery to agriculture via crop or companion plants, or via industrial crops.Production by the latter is not sustainable because of the need for extraction and then application to the crop, although on-farm extraction, or at least some processing, could be employed where the necessary quality control and safety can be achieved.Many crop plants incorporate biosynthetic pathways to natural pesticides which could be enhanced by breeding.Alternatively, pathways can be added by genetic engineering, for example, for Bacillus thuringiensis endotoxin production or with genes for entire secondary pathways, for example, for toxic saponins such as the avenacins , including from other plants or organisms entirely.Pheromones and other semiochemicals have long been regarded as presenting opportunities for pest management and many biosynthetic pathways have been elucidated .For semiochemicals, there is a further advantage in that beneficial organisms can also be advantageously manipulated .Thus, semiochemicals that recruit predators and parasitoids, or in other ways manage beneficial organisms, can be released by crop or companion plants, thereby providing new approaches to exploiting biological control of pests.Although biological control is sustainable in the example of exotic release of control agents, registration may not be granted because of potential environmental impact, and inundative release requires production and delivery.Therefore, managing the process of conservation biological control, which exploits natural populations of beneficial organisms, expands the potential value of releasing semiochemicals from crops or companion plants .Many semiochemicals are volatile, for example those acting at a distance as attractants or repellents.Also, in order that the signal does not remain in the environment after use, these compounds are often highly unstable chemically, which again promotes the concept of release from plants.From the attributes of a natural product pest control agents, as described above, follows the concept of stimulo-deterrent or push–pull farming systems.The main food crop is protected by negative cues that reduce pest colonisation and development, that is, the “push” effect.This is achieved either directly, by modifying the crop, or by companion crops grown between the main crop rows.Ideally, the modified crop, or the companion crop, also creates a means of exploiting natural populations of beneficial organisms by releasing semiochemicals that attract parasitoids or increase their foraging.The “pull” involves trap plants grown, for example, as a perimeter to the main crop and which are attractive to the pest, for example by promoting 
egg laying.Ideally, a population-reducing effect will be generated by trap plants, such as incorporating a natural pesticide, or some innate plant defence.Push–pull may use processes, largely semiochemical based, each of which, alone, will exert relatively weak pest control.However, the integrated effect must be robust and effective.The combination of weaker effects also mitigates against resistance to the overall system of pest control because of its multi-genic nature and lack of strong selection pressure by any single push–pull component.Smallholder farmers in developing countries traditionally use companion crops to augment staple crops such as cereals.Development of the push–pull farming system for these farmers employed the companion cropping tradition in establishing an entry point for the new technology.“Push” and “pull” plants were identified initially by empirical behavioural testing with lepidopteran stem borer adults.Having begun experimental farm trials in 1994 and moving on-farm in 1995, farmers very swiftly adopted the most effective companion crops and the benefits soon became apparent.The semiochemistry underpinning the roles of the companion plants in this push–pull system was then investigated by taking samples of volatiles released from companion plants and analysing by gas chromatography, coupled with electrophysiological recordings from the moth antennae .In addition to well-known attractants from the trap plants, including isoprenoidal compounds such as linalool and green leaf alcohols from the oxidation of long chain unsaturated fatty acids, other semiochemicals arising through the oxidative burst caused by insect feeding offered negative cues for incoming herbivores.These are isoprenoid hydrocarbons, for example,-ocimene and-caryophyllene, and some more powerful negative cues, the homoterpenes, that is, homo-isoprenoid, or more correctly, tetranor-isoprenoid hydrocarbons .Most importantly, these latter compounds also act as foraging recruitment cues for predators and parasitoids of the pests , and molecular tools for investigating other activities are being developed .Technology transfer for this push–pull system requires new approaches, and although such transfer benefits by a tradition of companion cropping, training is required for extension services and farmers, and availability of seed or other planting material, although, being perennial, these companion plants are one-off inputs.All the companion plants are valuable forage for dairy husbandry and potentiate zero grazing, which is advantageous in the high population density rural areas in which most of the population live in sub-Saharan Africa.The legume intercrop plants, Desmodium spp., also fix nitrogen, with D. uncinatum being able to add approximately 110 kgN/ha/yr and contributing approximately 160 kg/ha/yr equivalent of nitrogen fertilizer .Desmodium spp. 
intercrops also control parasitic striga weeds, for example, Striga hermonthica , via release of allelopathic C-glycosylated flavonoids , which represents another facet of push–pull in providing weed control .Overall, there is a high take-up and retention in regions where the technology is transferred; for example, in western Kenya in 2013, nearly 60,000 farmers are using these techniques .Although this represents a very small percentage of the millions of people who could benefit, so far there have been very few resources for technology transfer.A recent EU-funded research initiative, ADOPT, has sought companion plants that can deal with drought, a rapidly growing problem in sub-Saharan Africa as a consequence of climate change, and new companion crops have already been identified and taken up by farmers .The “push” plants imitate damaged crop plants, particularly maize and sorghum which produce the homoterpenes, and although normally too late to be of real value in economic pest management, production of these compounds is induced by the pest.Recently, we found that this can also be caused by egg-laying, specifically on the open pollinated varieties of maize normally grown by the smallholder farmers , but not on hybrids .An egg-related elicitor enters the undamaged plant and the signal travels systemically, thereby inducing defence and causing release of the homoterpenes.Exploitation of this phenomenon will offer new approaches to push–pull farming systems.New approaches to breeding by alien introgression of genes from wide crosses, including from the wild ancestors of modern crops , as well as incorporation of heterologous gene incorporation by GM , genome engineering and creation of synthetic crop plants by combining approaches including new crop genomic information , can contribute to push–pull farming systems.Mixed seed beds are now in use for cereals, even in industrial agriculture, and push–pull could be created without separated “push” and “pull” plants, including regulated stature facilitating selective harvesting.The new generation of GM and other biotechnologically derived crops could revolutionise the prospects for push–pull in industrialised farming systems by offering crop plants that could themselves embody the “push” trait, thereby obviating the need for labour to manage the intercrop.The expression of B. 
thuringiensis derived genes against certain insect pests has been highly successful , but we are now able to manipulate secondary metabolite pathways to produce pesticides, related to the synthetic versions, with a much greater range of activities, for example, cyanogenic glycosides , glucosinolates and avenacins .The latter, and also the benzoxazinoids , are biosynthesised by pathways involving a series of genes co-located on plant genomes, potentially facilitating enhancement or transfer to crop plants by GM .These pathways could be expressed in “pull” plants for population control.They could also enhance the “push” effect.However, for both, attention must be directed towards obviating interference with the “push” and “pull” mechanisms.Already, in sub-Saharan African push–pull, the value of the homoterpenes can be seen .Laboratory studies have demonstrated the principle, more widely, of enhancing production by GM .Biosynthesis of both the alcohol precursors and the homoterpenes has been demonstrated with, for the latter, Cyp82G1 being the enzyme in the model plant Arabidopsis thaliana .This is now being explored for insect control in rice.Pheromones also offer opportunities and, after demonstrating the principle in A. thaliana , the heterologous expression of genes for the biosynthesis of-β-farnesene, the alarm pheromone of many pest aphid species, after success in the laboratory, is being field tested as a means of repelling aphids and attracting parasitoids to the crop.Nonetheless, as well as overcoming the demanding issues of GM, these sophisticated signals will need to be presented in the same way that the insects themselves do, which, for the aphid alarm pheromone, is as a pulse of increased concentration.Indeed, as well as demands of behavioural ecology, complicated mixtures may also be necessary to provide the complete semiochemical cue.However, it is already proving possible to make relatively simple targeted changes in individual components of mixtures , which could allow an economic GM approach.The latter is likely to become even more appealing with the development of new technologies arising from genome editing .Genes for biosynthesis of the aphid sex pheromone could be used to establish a powerful “pull” for the highly vulnerable overwintering population, but would need to be isolated from the insects themselves so as to avoid the presence of other plant-related compounds that inhibit the activity of the pheromone.Recent discoveries in plant biosynthesis of compounds related to aphid sex pheromones will facilitate this quest.Attractant pheromones of moth pests may also become available as a consequence of attempts to use GM plants as “factories” for biosynthesis.A number of biosynthetic pathways to plant toxicants and semiochemicals are subject to induction or priming .Elicitors can be generated by pest, disease or weed development.Volicitin) and related compounds produced in the saliva of chewing insects induce both direct and indirect defence, often involving the homoterpenes, but require damage to transfer the signal to the plant.The egg-derived elicitor should overcome the problem.Plant-to-plant interactions mediated by volatile compounds, for example, methyl jasmonate and methyl salicylate, related to plant hormone stress signalling, are associated with these effects and can induce defence.However, there can be deleterious or erratic effects in attempting to use such general pathways .cis-Jasmone signals differentially to jasmonate and, without phytotoxic effects, 
regulates defence, often by induction of homoterpenes in crops even without genetic enhancement, for example, in wheat , soy bean , cotton and sweet peppers .In addition to aerially transmitted signals that could be used to induce “push” or “pull” effects, signalling within the rhizosphere directly , or via the mycelial network of arbuscular mycorrhizal fungi , is now showing exciting promise.The “pull” effect can be enhanced by raising the levels of inducible attractants, provided there is no interference with the population controlling components of the push–pull system.However, attractive plants, without population control or with a late expressed control, could be valuable as sentinel plants.Thus, highly susceptible plants, either engineered or naturally susceptible, could, on initial pest damage, release signals via the air or rhizosphere that could, in turn, switch on defence in the recipient main crop plants, creating elements of the push–pull farming system as a fully inducible phenomenon activated without external intervention.Push-pull is not only a sustainable farming system, but can also protect the new generation of GM crops against development of resistance by pests.Although considerable work still needs to be done for all the new tools of biotechnology to be exploited in push–pull, agriculture must sustainably produce more food on less land as it is lost through diversion to other uses and climate change, and so presents an extremely important target for new biotechnological studies.Papers of particular interest, published within the period of review, have been highlighted as:• of special interest,•• of outstanding interest
Farming systems for pest control, based on the stimulo-deterrent diversionary strategy or push-pull system, have become an important target for sustainable intensification of food production. A prominent example is the push-pull system developed in sub-Saharan Africa for smallholder cereal production, initially against lepidopterous stem borers, which uses a combination of companion plants delivering semiochemicals as plant secondary metabolites. Opportunities are being developed for other regions and farming ecosystems. New semiochemical tools and delivery systems, including GM, are being incorporated to exploit further opportunities for mainstream arable farming systems. By delivering the push and pull effects as secondary metabolites, for example (E)-4,8-dimethyl-1,3,7-nonatriene, which repels pests and attracts beneficial insects, problems of high volatility and instability are overcome and the compounds are produced when and where they are required.
142
Acute toxicity classification for ethylene glycol mono-n-butyl ether under the Globally Harmonized System
Ethylene glycol mono-n-butyl ether is a high production volume glycol ether solvent and is a component of a variety of products including hydraulic brake fluids, water-based coatings, and hard-surface cleaners.The hazards and risks associated with this solvent have been extensively reviewed.Currently the harmonized hazardous substance classification for acute toxicity in the European Union for EGBE under the Dangerous Substance Directive is Xn: R20/21/22.This corresponds with the translated harmonized hazardous substance classification for acute toxicity of EGBE under Regulation No. 1272/2008 which is Acute Toxicity Category 4 for oral, dermal and inhalation exposures.The Globally Harmonized System classifications taken from the Dangerous Substances Directive represent the equivalent categories and do not take into account the different classification thresholds.In early studies in rodents and rabbits, it was recognized that EGBE produces a hemolytic response characterized by the appearance of hemoglobinuria and by changes in a variety of blood parameters.Although first recognized following inhalation exposures, such effects are also present following oral and dermal administrations.Hemolysis is observed following both single and repeated exposures to EGBE, with apparent tolerance to EGBE-induced hemolysis developing in sub-acute or sub-chronic studies.Ghanayem et al. have shown that this tolerance to hemolysis is a consequence of the replacement of older and more susceptible erythrocytes with less susceptible, younger cells.Other reports have subsequently confirmed these findings.The acute toxicity of EGBE varies among test species, but would generally be classified as low to moderate.Acute signs of toxicity in sensitive experimental animals include lethargy, labored breathing and ataxia, generally accompanied by clear evidence of hemolysis.Pathological effects often noted in these acute studies and that are considered to be secondary to hemolysis may include hemorrhagic lungs, mottled livers, congested kidneys and spleens, red-stained fluid in urinary bladders, and hemoglobinuria.Hematuria was often observed in animals that died or were seriously affected.In reliable studies1 reviewed under the European Union REACH registration process, LD50 values from acute oral toxicity studies in experimental animals ranged from 615 mg/kg bw to values in excess of 2000 mg/kg bw; LD50 values from acute dermal toxicity studies ranged from 435 mg/kg bw to values in excess of 2000 mg/kg bw; and LC50 values from inhalation studies ranged from ca. 
400 ppm to values in excess of 3.9 mg/L.The acute toxicity database for EGBE indicates the guinea pig to be relatively insensitive to the hemolytic effects that are observed in other sensitive species.This is reflected in generally low levels of acute toxicity in this species.In reliable studies conducted using the guinea pig, a LD50 value from an acute oral toxicity study of 1414 mg/kg bw has been reported; LD50 values from acute dermal toxicity studies in excess of 2000 mg/kg bw have been reported; and LC50 values from inhalation studies of approximately 400 ppm for a 7 h exposure, as well as values in excess of the maximum achievable vapor concentration of 633 ppm or 691 ppm for 1-h exposures have been reported.As in the case of the guinea pig, humans are generally insensitive to the hemolytic toxicity of EGBE.Under conditions of controlled human exposures, no overt toxicity or signs of hemolytic effects have been observed.In well-documented cases of intentional ingestions of large amounts of EGBE-containing products, coma with respiratory and other complications has been reported, but with little conclusive evidence of hemolysis.Metabolic acidosis is the critical effect observed in these poisonings with survival of all individuals following appropriate supportive care.Under the guidelines for classification of acute toxicity according to GHS, the rat is the preferred species for evaluation of acute oral and inhalation toxicity and the rabbit the preferred species for acute dermal toxicity.Strictly applying the classification thresholds of this guideline may result in possible classifications for EGBE of Acute Toxicity Category 3 for both dermal and inhalation exposures.Such classifications are overly conservative and do not represent actual hazard from human exposure.This review provides evidence for the use of acute toxicity data from the guinea pig as being most appropriate for setting acute toxicity hazard classifications for EGBE.For the purposes of classification, acute toxicity is defined as an adverse effect occurring following a single oral or dermal dose of a chemical.An adverse effect resulting from multiple doses administered within 24 h would also be considered an acute effect.An acute inhalation effect is defined as an adverse response following a 4-h exposure.The allocation of substances to one of five toxicity categories under GHS is shown in Table 1.The values shown represent numeric cut-off values for each category.The general guidance provided for classifications under the GHS classification system as well as the related classifications under Regulation No. 
1272/2008 on classification, labelling and packaging of substances and mixtures recognize that animal data from acute toxicity studies will generally provide the acute toxicity estimates indicated by the cut-off values in Table 1. Although presented as estimates, in practice these default values become, ipso facto, the required values for classification. In other words, if reliable animal test data are available, and in the absence of reliable or contradictory human data, the values in Table 1 represent the default values for classification.
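To make the role of these default cut-offs concrete, the sketch below assigns an acute oral toxicity category from a single LD50 value. The numeric boundaries are the published GHS acute oral cut-offs and stand in here for Table 1, which is not reproduced in this excerpt; the function itself is illustrative and deliberately ignores the weight-of-evidence considerations discussed next.

```python
def ghs_acute_oral_category(ld50_mg_per_kg_bw):
    """Illustrative default GHS acute oral toxicity category from an LD50 (mg/kg bw).

    The cut-offs follow the published GHS scheme (Categories 1-5); this sketch
    reproduces only the default, animal-data-driven assignment, not the
    weight-of-evidence evaluation described in the text.
    """
    cutoffs = [(5, 1), (50, 2), (300, 3), (2000, 4), (5000, 5)]
    for upper_bound, category in cutoffs:
        if ld50_mg_per_kg_bw <= upper_bound:
            return category
    return None  # LD50 above 5000 mg/kg bw: not classified for acute oral toxicity

# Example: the guinea-pig oral LD50 of 1414 mg/kg bw discussed below falls in Category 4,
# while a rabbit LD50 of 320 mg/kg bw sits just above the Category 3 boundary of 300 mg/kg bw.
print(ghs_acute_oral_category(1414), ghs_acute_oral_category(320))  # -> 4 4
```

Because reported LD50 values from different species fall on different sides of these sharp boundaries, the choice of test species largely determines the outcome, which is the central issue examined in this review.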
As stated in the guidance for the GHS classification system, the rat is the preferred test species for acute oral and inhalation exposures. Thus, acute oral LD50 values from reliable studies in rats have precedence over other acute toxicity data from experimental animals. Similarly, acute inhalation 4-h LC50 values from studies conducted in rats are the preferred values for setting the acute inhalation classification. Acute toxicity data from either the rat or the rabbit are preferred for the evaluation of dermal toxicity. However, the GHS text also states that when experimental data for acute toxicity are available in several animal species, scientific judgement should be used in selecting the most appropriate LD50 value from among valid, well-performed tests. Therefore, when determining the acute toxicity of a substance, it is acceptable to deviate from the standard species where there is a sound scientific basis for doing so. It is clearly stated in the GHS guidance that the protection of human health and the environment is a primary goal of the harmonized hazard classification scheme. For the purpose of acute toxicity classification, reliable epidemiological data as well as experience gained from occupational exposures, medical surveillance, case reports or reports from national poison centers are all recognized sources of information. A weight of evidence approach is to be used for classification, with both human and animal data considered. In cases of conflict, the quality and reliability of the results must be assessed to determine the final classification. Expert judgment must be applied in weight of evidence determinations, both to assess quality and reliability and to assess confounding factors. Summarized in Tables 2–4 are acute toxicity data for EGBE from experimental animals following oral, dermal or inhalation exposures. The studies listed have been reviewed for reliability under the requirements of the EU REACH registration program and most received acceptable ratings of 1 or 2 using the method of Klimisch et al. Table 2 lists acute toxicity data from experimental animals exposed to EGBE by the oral route. Acute oral toxicity has been most commonly measured in young adult rats, with reported LD50 values varying considerably over a range of 615 mg/kg bw to 2100 mg/kg bw. Hemolysis was observed in the majority of these studies, sometimes accompanied by renal and hepatic lesions presumed to be a consequence of the hemolysis. In animals dying prior to study termination, pathological effects often noted included bloody urine and/or blood in the stomach and intestines and diffuse necrosis and hemorrhage of the gastric mucosa, indicative of a gastric irritant. In a single reliable acute oral toxicity study conducted in mice, both fasted and fed male mice were treated by gavage with undiluted EGBE. Fasted mice appeared somewhat more sensitive to EGBE than fed mice, with LD50 values of 1519 mg/kg bw versus 2005 mg/kg bw. In two older and less reliable acute oral toxicity studies by Carpenter et al., rabbits were reported to be the most sensitive of the species tested, which included rats, mice and guinea pigs. The LD50 values reported in these studies were 320–370 mg/kg bw. These values seem low compared with those of other species but are consistent with more recent data for this species indicating complete mortality in both male and female rabbits at an administered dose of 695 mg/kg bw. Two reviewed studies are available in guinea pigs, one of which is a more recent GLP guideline study. The LD50 values reported in these studies were comparable: 1414 mg/kg bw in the more recent guideline study of Shepard and 1200 mg/kg bw in the older study of Carpenter et al. (1956). The same or similar clinical signs and pathology as displayed in other species were seen in these studies. However, in the more recent study, there was no evidence of hemolysis or of any effects on erythrocytes. Gastrointestinal irritation, as evidenced by sialorrhea and changes in the gastric mucosa, may have contributed to the toxicity in this study. Listed in Table 3 are acute dermal toxicity data for EGBE in experimental animals. In most cases, experimental conditions included occlusive exposures of 24-h duration. Data are presented for three species: the rat, rabbit and guinea pig. In male and female rats, both occlusive and semi-occlusive exposures to undiluted EGBE are available and, using a weight of evidence approach, the LD50 value in the rat can be considered to be >2000 mg/kg bw under both occluded and non-occluded conditions. In rabbits, the results shown in Table 3 are generally consistent and indicate an increased acute dermal toxicity of EGBE in this species. In guinea pigs, variation in LD50 values was seen among the studies, with a range from 230 mg/kg bw to >2000 mg/kg bw. An older study reported by Roudabush et al.
indicates a much higher dermal toxicity than that seen in more recently reported studies. In particular, the more recent guideline study of Shepard reports an LD50 value of greater than 2000 mg/kg bw. In this study, no adverse effects were described. The value reported by Shepard is consistent with the value of >1200 mg/kg bw reported by Wahlberg and Boman. Listed in Table 4 are acute inhalation toxicity data for EGBE in experimental animals. Exposure durations reported in these studies ranged from 1 to 8 h, complicating a direct comparison of the study results. No attempt has been made to correct for exposure durations. When interpreting the findings from these inhalation studies, it is also important to note that the calculated saturated vapor concentration of EGBE under ambient conditions is 791 ppm. Thus, values in excess of this may represent nominal values. Maximum measured achievable vapor concentrations below this have been reported. The acute inhalation toxicity of EGBE has been most commonly measured in the rat. Acute 4-h inhalation LC50 values in male and female rats were 2.4 and 2.2 mg/l, respectively. LC50 values at or near the theoretical maximum vapor concentration have been reported for exposure durations from 1 to 8 h. There is some limited evidence to suggest that female rats are more sensitive than males. Clinical signs and pathology indicative of hemolytic toxicity are seen. In six 7-h exposure studies in male rabbits at approximate vapor concentrations of 400 ppm, an LC50 of 400 ppm was reported, representing a mortality rate of 10/24 animals as an aggregate of the six studies. In a more recent and well-documented acute inhalation toxicity study in male and female guinea pigs, test animals were exposed to maximum achievable vapor concentrations of EGBE for 1 h. Neither mortality nor adverse clinical effects were noted in this study. There were also no significant adverse pathological effects reported. The LC0 values for guinea pigs in this latter study were 3.1 mg/l and 3.4 mg/l, representing the maximum measured vapor concentrations achieved in the study. It is worth noting that at a theoretical maximum saturated vapor concentration of 3.9 mg/l, and given a standard respiratory minute volume in the guinea pig of 0.66 l/min/kg bw, a 4-h inhalation exposure in a guinea pig would result in an internal dose of 618 mg/kg bw, which is approximately half of the acute oral LD50 value in this species. Thus, at experimentally achievable vapor concentrations, an LC50 value from a 4-h exposure cannot be obtained in this species.
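The internal-dose estimate above follows from simple ventilation arithmetic; a minimal sketch, assuming (as the estimate itself does) complete retention of the inhaled vapor and a constant minute volume:

```python
# Illustrative check of the guinea-pig internal-dose estimate quoted above.
# Assumptions, as stated in the text: saturated vapor at 3.9 mg/l, a respiratory
# minute volume of 0.66 l/min per kg body weight, and 100% retention of inhaled EGBE.
vapor_concentration = 3.9    # mg EGBE per litre of air
minute_volume = 0.66         # litres of air inhaled per minute per kg body weight
exposure_minutes = 4 * 60    # 4-h exposure

internal_dose = vapor_concentration * minute_volume * exposure_minutes
print(f"{internal_dose:.0f} mg/kg bw")  # 618 mg/kg bw, roughly half the oral LD50 of 1414 mg/kg bw
```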
No evidence for hemolysis has been reported in any controlled laboratory exposures of human volunteers to EGBE by inhalation. In early studies by Carpenter et al., there was neither increased osmotic fragility of erythrocytes nor other signs of hemolysis in 3 individuals exposed by inhalation at 195, 113 or 98 ppm of EGBE for up to 8 h. Seven male volunteers were exposed to 20 ppm of EGBE for 2 h while performing light work. None of the exposed subjects in this latter study showed any of the adverse effects related to EGBE exposures. Reported cases of acute human poisonings with EGBE are rare and generally involve either accidental ingestion in pediatric cases or intentional ingestion in adults. Dean and Krenzelok reviewed 24 pediatric poisoning cases reported to the Pittsburgh Poison Center during a five-month period from December 1990 to April 1991. These all involved glass or window cleaners containing EGBE at concentrations ranging from 0.5% to 9.9%. The ages of the children involved ranged from 7 months to 9 years. All incidents were reported to the PPC within 5 min of the actual exposures. The estimated quantities ingested ranged from 5 to 300 ml and all children were reported to be asymptomatic immediately following the ingestions. In the single greatest exposure among this group, a 2-year-old was reported to have swallowed 300 ml of an 8% EGBE-containing glass cleaner, representing approximately 24 ml of EGBE. In this latter case, the child underwent gastric lavage and was hospitalized for 24 h. Evidence of the toxicity of EGBE as expressed in animals, including hemolysis, central nervous system depression, metabolic acidosis and renal compromise, was completely absent in these pediatric cases. In 22 of the reported 24 cases from this study, the patients were treated at home with simple dilution. All cases remained asymptomatic throughout an additional 48 h of telephone follow-up. Summarized in Table 5 are a number of reported cases of intentional ingestion by adults of large amounts of EGBE-containing products. Reviews of these cases have been previously published. Severe metabolic acidosis and coma are consistently reported in these poisoning cases. All patients required aggressive support with administration of fluids and mechanical ventilation. However, it is important to note that in all reported cases, patients recovered fully without subsequent symptomology. In those reported cases for which blood levels of 2-butoxyacetic acid (BAA), the major metabolite of EGBE, were measured, concentrations typically peaked at 2 days or later, with little still present by 3 days. Maximum concentrations of BAA ranged as high as 3.64 mM and were generally in excess of levels that cause hemolysis in blood from sensitive species, but well below a concentration of 10 mM, a level reported to show only the most minimal hemolytic effects in human blood. In all but two of the cases listed in Table 5, hemodialysis was employed to remove un-metabolized EGBE. In these reports there was no clear evidence for hemolysis as seen in sensitive laboratory species. In reports by Bauer et al.
and Hung et al., non-hemolytic anemia was attributed to hemodilution as a result of hemodialysis.Other reported effects included renal insufficiency, thrombocytopenia and disseminated intravascular coagulation.As discussed by Udden, these other effects reported were most likely not directly related to hemolysis.However, the exact etiology of these latter effects in certain human poisoning cases cannot be explained based on the limited data available.It can be concluded from these adult poisoning cases that EGBE is generally of a low order of acute toxicity in humans.High doses of EGBE in humans do not cause the characteristic hemolytic effects which have been shown to be critical for the acute toxicity expressed in rats, mice and rabbits.Intravascular hemolysis is the major effect reported in acute toxicity studies in rats, mice and rabbits following EGBE administration.Both a dose- and concentration-dependent hemolytic anemia develops in rats following the administration of a single dose of EGBE.The major urinary metabolite of EGBE, 2-butoxyacetic acid, was originally confirmed to be the proximate hemolytic agent by Carpenter et al.In human poisoning cases as discussed above, metabolic acidosis is a commonly reported effect.Lactic acidosis is also observed in most of these cases of EGBE ingestion and it has been suggested that this may be a consequence of EGBE metabolism in humans.Thus, a combination of lactate production and BAA lead to the metabolic acidosis observed in human poisonings.Other factors such as dialysis and hypotension may also contribute to the observed consequences of human ingestions.There are no comparable reports or studies of EGBE-induced acidosis in laboratory animals that can be used as a direct comparison to the effects seen in humans.At least in the case of the rat, the most extensively studied laboratory species, there is evidence that metabolic acidosis may be of much less consequence.In particular, this species more rapidly metabolizes and eliminates EGBE and its metabolites than do humans.Also, the tolerance induced in rats following either repeated or single sub-lethal doses of EGBE strongly argues for hemolysis as the primary toxicological response in this species, with other factors of only secondary importance.A number of detailed hematological investigations of EGBE toxicity have been conducted in the Fischer strain of rat.In a sub-acute oral toxicity study reported by Grant et al., four to five week old male F344 rats received 500 or 100 mg/kg EGBE for 4 days.These rats displayed decreased erythrocyte counts; increased relative weights of spleen, liver and kidneys; thymic atrophy; and lymphocytopenia.Microscopic examination of blood in this study was consistent with intravascular hemolysis and revealed increased numbers of circulating nucleated erythrocytes, pronounced anisocytosis, polychromasia and the presence of Howell Jolly bodies.All of these effects resolved within a 22-day recovery period, with the exception of relative weights of liver and spleen, which remained slightly raised.In studies by Ghanayem et al., groups of adult or young male F344 rats were dosed EGBE by single gavage treatment at 32, 63, 125, 250 or 500 mg/kg bw.Significant decreases in circulating erythrocytes, hemoglobin concentrations, and hematocrit were seen in adult rats at doses of 125 mg/kg bw and above but in young rats only at 250 mg/kg bw or higher.The greatest changes occurred within 4–24 h.The onset of hemoglobinuria followed the decline in plasma hemoglobin levels and 
was again more pronounced for adult rats.Hematological changes were mostly resolved by 48 h. Histopathological changes in the liver including focal disseminated coagulative necrosis of hepatocytes and evidence of hemoglobin phagocytosis by Kupfer cells and hepatocytes were present at the two highest dose levels in adult rats but were absent in young rats.In concurrent metabolism studies, young rats excreted a greater proportion of the dose as carbon dioxide or urinary metabolites.It was proposed that the greater susceptibility of the older rats to the hemolytic toxicity of EGBE may be due, at least in part, to a greater proportion of BAA formed and to a depressed urinary excretion.In a comparison of the in vivo hemolytic toxicity between rats and guinea pigs, both species were given a sub-lethal gavage dose of EGBE at 250 mg/kg bw and blood parameters measured for up to 25 h.As expected, rats showed significant declines in MCV, HCT, HGB and erythrocyte counts, associated with hemolysis.In guinea pigs, no changes in any of these parameters were recorded.Thus, sensitive laboratory species such as rats, when exposed to acutely toxic doses of EGBE, display a number of responses secondary to hemolysis and including enlarged kidneys, blood in the bladder, bloody urine, and splenic lesions.In contrast, guinea pigs display less sensitivity to the acute toxicity of EGBE than either rats, mice or rabbits and do not display the adverse pathological effects associated with hemolysis as seen in the other sensitive species.Carpenter et al. first identified BAA as the metabolite responsible for the hemolytic toxicity of EGBE by incubating the acid with blood from a variety of animal species and humans.In these studies, blood from rats, mice, and rabbits was more rapidly hemolysed than blood from monkeys, dogs, humans or guinea pigs when incubated at 37 °C in a 0.1% saline solution of sodium butoxyacetate.Also, inhalation exposures of rats, mice and rabbits led to increased osmotic fragility of erythrocytes while no similar effects were reported in monkeys, dogs, humans or guinea pigs, thus confirming the relevance of the in vitro responses.Ghanayem and Sullivan have assessed the in vitro hemolytic response of BAA in blood from a variety of species including rats, mice, hamsters, rabbits, guinea pigs, dogs, cats, pigs, baboons and humans.In these studies, blood collected with 7.5% EDTA as anticoagulant was treated with 1.0 or 2.0 mM BAA concentrations.These concentrations were selected because previous in vivo studies indicated these blood levels were found to cause intermediate levels of toxicity.Blood was incubated at 37 °C and samples collected at 1, 2 and 4 h and spun hematocrits obtained.Complete blood counts were obtained using an automated hematology analyzer and included the following: white blood cell counts, platelet counts, red blood cell counts, mean cell volume, mean corpuscular hemoglobin and mean corpuscular hemoglobin concentration.Blood from rodents, as represented by rats, mice and hamsters, displayed a time and concentration dependent increase in HCT and MCV when incubated with BAA, with more than a 45% increase above the corresponding control in MCV at 2 h and 6% decrease in RBC counts in rats and mice.At the higher concentration, RBC counts decreased 30% below control levels by 4 h.In hamsters, MCV increased greater than 15% and 35% above control levels at 1 or 2 mM BAA for 4 h, respectively.Although significant swelling occurred, no hemolysis of hamster blood was observed.Blood from 
rabbits incubated at 2 mM displayed greater than 20% increases in MCV above control levels at 2 h with a further increase to 39% by 4 h. Despite this extensive swelling of the RBCs, there was no significant change in RBC counts or hemoglobin concentrations suggesting hemolysis had not occurred under these in vitro conditions.The time- and concentration-dependent swelling of the erythrocytes of sensitive species leading to increased HCT and MCV is the effect most relevant to in vivo hemolysis.Such changes lead to decreased deformability of the erythrocytes, decreased ability to pass through small capillaries and subsequent removal of these damaged cells from circulation by the spleen.Blood samples from a number of species tested by Ghanayem and Sullivan were insensitive to the hemolytic effects of BAA.Thus, blood from the guinea pig was essentially unaffected by incubations with 1 or 2 mM BAA for up to 4 h. Blood from two primate species were also assayed in these studies.Incubations of blood collected from 4 healthy male adult humans with 2 mM BAA for up to 4 h caused only slight and statistically insignificant changes in MCV and HCT.In the case of blood from the yellow baboon, MCV increased in a time- and concentration-dependent manner to nearly 35% above control values after 4 h with 2 mM BAA.HCT in this species increased in a parallel fashion with MCV and hemolysis was significant at either the 1 or 2 mM BAA concentration by 4 h.The contrasting results from baboons and humans, both of the order primate, suggest the hemolytic sensitivity to BAA cannot be predicted solely based on order or class of mammal.Udden has studied the effects of BAA on human erythrocytes from young versus old adults as well as blood from patients with hemolytic disorders.The increased toxicity of EGBE in older rats has been attributed to older erythrocytes.Incubation of erythrocytes from young versus old humans with 2.0 mM BAA produced no significant hemolysis.Similarly, erythrocytes from patients with sickle cell disease and those with hereditary spherocytosis were unaffected.It is notable that erythrocyte swelling with loss of the discocyte morphology is characteristic of BAA-induced changes in rats and this change is also a characteristic of hereditary spherocytosis.Although when incubated at concentrations of 2.0 mM and below no significant effects on human erythrocytes have been observed, sub-hemolytic effects have been reported when human erythrocytes are incubated at high BAA concentrations.Incubations of human erythrocytes at concentrations 7.5 mM and 10 mM for 1–4 h resulted in decreased deformability, as measured by filtration, and slight but significant increases in MCV at the 10 mM concentration.Similar effects were observed when rat erythrocytes were incubated at 0.1 mM for 4 h. Thus, at 10 mM BAA, a 40% increase in filtration pressure was seen for human erythrocytes compared with a 64% increase in filtration pressure for rat erythrocytes incubated at a 100-fold lower concentration of 0.1 mM.Thus, a minimum 100-fold lower sensitivity of human versus rat erythrocytes was seen in this study.Starek et al. 
have reported similar but less pronounced differences in the hemolytic effects of BAA between rat and human erythrocytes. Washed erythrocytes from healthy human donors or male Wistar rats were incubated in 10 mM Tris buffer for up to 3 h with BAA concentrations ranging from 6.0 to 18.0 mM or 1.0 to 5.5 mM, respectively. EC50 values for changes in red blood cell counts, packed cell volumes and mean corpuscular volumes were then obtained. It is important to note that the EC50 value for PCV changes with BAA was 13.1 mM, but a similar EC50 value could not be determined for MCV changes since these were not large enough to allow calculation of this value. There are several reasons why this work does not lend itself to direct comparison with the greater body of work reported on the comparative effects of hemolysis by EGBE in different species. In particular, no discussion is given by these authors to the possible effects of the very high concentrations employed in these studies. A possible explanation for the effects reported is that cell damage and hemolysis, resulting in lowered RBC values, occurred due to an uncontrolled, lowered pH, with these effects seen without the attendant cell swelling and the normally observed changes in PCV and MCV. These results also contradict the findings of other workers, including Udden and Ghanayem, who report only the most minimal and non-specific effects on human erythrocytes at 8.0 or 10 mM in vitro concentrations, and the work of Bartnik et al., who reported no hemolysis of human red blood cells incubated for 3 h with 15 mM BAA. Such high and physiologically irrelevant concentrations are considered of little or no use in the present human hazard assessment of EGBE. The application of expert judgment using a weight of evidence approach is a key feature of the hazard classification of chemicals under GHS. However, weight of evidence is a somewhat nebulous concept that does not have a set of well-defined tools and procedures for its implementation. In the ideal situation, all reliable scientific data relevant to the endpoint of concern undergoes expert review and is subsequently weighted to allow a final hazard classification. However, GHS is a Globally Harmonized System that is applied among many countries and regions of the world, with differences in the procedures used for its implementation. This often leads to discrepancies in the hazard classification of substances. This review is an attempt to perform a definitive assessment based on the hazard to humans that can be used as a basis to derive an acute toxicity classification for EGBE wherever GHS is implemented. The categorization of acute hazard under GHS requires an acute toxicity estimate for placement into one of five hazard categories. These categories are defined by sharp cut-off values, and the information used for the final category assignments is most often derived from inherently variable animal acute toxicity data. The problems associated with this method are clearly illustrated in a recent publication by Hoffmann et al.
in which a large dataset of acute oral toxicity data from experimental animals was compared for a total of 73 reference chemicals. Reliable studies, as defined by a Klimisch score of 1, "reliable", or 2, "reliable with restrictions", were scarce, with data often coming from studies several decades old. Statistical analyses revealed that only ∼50% of the substances would be unequivocally assigned to a single classification category; ∼40% would unequivocally be classified within two adjacent categories; and ∼10% would have LD50 ranges sufficiently large to span three or more classification categories. In the current report, data from acute toxicity studies in experimental animals, along with information from mode of action studies and human experience, have been used to categorize the acute toxicity of EGBE. In sensitive rodent species, hemolysis of erythrocytes caused by the major metabolite of EGBE, BAA, has been shown to be responsible for the acute effects and mortality observed with this solvent. The central role of BAA as the proximate hemolytic agent in sensitive species has been further confirmed through in vivo studies employing metabolic inhibitors such as pyrazole and cyanamide to block metabolism to the acid through alcohol and aldehyde dehydrogenases, respectively. Incubation of erythrocytes isolated from a variety of animal species with BAA indicates a large variation in the sensitivity to this hemolytic agent, with erythrocyte swelling and hemolysis occurring most readily and at low concentrations in blood from rats, mice and rabbits but absent in blood from humans and guinea pigs. Similar incubations of erythrocytes isolated from healthy young or old humans or from patients suffering from congenital hemolytic disorders produced no significant hemolysis or morphological changes such as those seen in erythrocytes from the rat. In fact, more than 100-fold higher concentrations of BAA are required to produce even a minimal pre-hemolytic effect in human versus rat erythrocytes. In more recent guideline studies conducted in guinea pigs, EGBE displayed a low acute toxicity following oral, dermal or inhalation exposures. An acute oral LD50 value of 1414 mg/kg bw was reported in these studies, and animals dying prior to study termination demonstrated pathology suggesting EGBE was a gastrointestinal irritant. There was no evidence of hemolytic toxicity in this acute oral study. Dermal administration under occlusive wrap at a limit dose of 2000 mg/kg bw produced no mortality or other signs of toxicity in the guinea pig. Inhalation exposures at maximum achievable vapor concentrations of 633 and 691 ppm produced no mortalities or clinical signs of toxicity. Perhaps the most convincing evidence of the low acute toxicity of EGBE in humans comes from accidental ingestions reported in pediatric cases or intentional ingestions in adults. In the 24 pediatric poisoning cases reviewed, only two required hospitalization, with the remainder treated at home with palliative care. In a number of adult poisoning cases, severe metabolic acidosis was consistently observed. However, in all reported poisoning cases the patients survived and recovered without subsequent symptomology. In two exceptional cases, the estimated quantities of EGBE consumed were 80–100 g or 150–250 ml. In one case, blood concentrations of BAA as high as 3.64 mM were reported but with no evidence of hemolytic anemia. Acute oral, dermal and inhalation toxicity values derived from rats and rabbits often serve as the basis for acute toxicity classifications. Both of these
species show increased sensitivities to the intravascular hemolysis caused by BAA, the hemolytic metabolite of EGBE. This increased sensitivity in turn leads to an overly conservative acute hazard classification for this chemical. An extensive body of work over many decades and from a number of different investigators has clearly shown that the effects on the erythrocytes of sensitive species such as the rat and rabbit are not representative of the human response to this chemical and that the guinea pig is a more appropriate surrogate species for the acute hazard classification of EGBE. Although varying considerably within and among species, the majority of the LD50 values presented in Table 2 are consistent with a Category 4 acute oral toxicity classification under GHS. Specifically, all values are above the upper cut-off value of 300 mg/kg bw for the Category 3 classification, while a few of the reported LD50 values fall near or slightly above the cut-off for the Category 4 classification, i.e., into Category 5. In both the rat and rabbit, signs of the acute systemic toxicity of EGBE are evident and are consistent with exposures by other routes. As in the case of acute oral exposures, there is compelling evidence to suggest the rabbit is the most sensitive of the experimental animals when tested dermally. In the rat, clinical and pathological signs of hemolysis are evident generally at or above acute dermal toxicity LD50 values of 2000 mg/kg bw and are more pronounced following occlusive exposures. Similar effects are observed in the rabbit but at acute dermal toxicity LD50 values ranging from 435 to 1060 mg/kg bw. Given the increased sensitivity of both the rat and rabbit to the hemolytic effects of EGBE, an effect that is not representative of the situation in humans, the guinea pig is considered the most representative species for assigning an acute dermal toxicity hazard for human exposures. Based on reliable and more recent studies of the acute dermal toxicity of EGBE in the guinea pig, as well as supportive mechanistic and well-documented human exposure data, a Category 5 acute dermal toxicity classification under GHS is the most appropriate outcome for EGBE. In the rat and rabbit, two species sensitive to the hemolytic toxicity of EGBE, reported acute inhalation toxicity LC50 values ranged from approximately 2 mg/l to values exceeding the theoretically calculated maximum achievable vapor concentration of EGBE. Reported clinical and pathological effects from these studies confirm an acute hemolytic response in these species. In contrast, dogs exposed for up to 7 h at 2.0 mg/l were unaffected. Similarly, guinea pigs exposed at either 2.0 mg/l for 7 h or at maximum achievable vapor concentrations for 1 h were unaffected. Given the increased sensitivity of both the rat and rabbit to the hemolytic effects of EGBE, the guinea pig is considered the most representative species for assigning an acute inhalation toxicity hazard for human exposures. Based on reliable studies of the acute inhalation toxicity of EGBE in the guinea pig, as well as supportive mechanistic and well-documented human exposure data, no classification of EGBE for acute inhalation toxicity under GHS is warranted. The authors declare no conflicts of interest in the preparation of this publication.
Acute oral, dermal and inhalation toxicity classifications of chemicals under the United Nations Globally Harmonized System of Classification and Labelling of Chemicals (GHS) should typically be based on data from rats and rabbits, with the tacit assumption that such characterizations are valid for human risk. However this assumption is not appropriate in all cases. A case in point is the acute toxicity classification of ethylene glycol mono- n-butyl ether (EGBE, 2-butoxyethanol, CAS 111-76-2), where acute toxicity data from rats or rabbits leads to an overly conservative assessment of toxicity. Hemolysis is the primary response elicited in sensitive species following EGBE administration and the proximate toxicant in this response is 2-butoxyacetic acid (BAA), the major metabolite of EGBE. The sensitivity of erythrocytes to this effect varies between species; rats and rabbits are sensitive to BAA-mediated hemolysis, whereas humans and guinea pigs are not. In this publication, a weight of evidence approach for the acute hazard classification of EGBE under GHS is presented. The approach uses acute toxicity data from guinea pigs with supporting mechanistic and pharmacokinetic data in conjunction with human experience and shows that adopting the standard method results in over-classification. © 2013 The Authors.
143
Proteomic analysis of glycosomes from Trypanosoma cruzi epimastigotes
Whereas in almost all eukaryotic organisms glycolysis is a process that occurs in the cytosol, in Kinetoplastea the major part of the pathway is localized in organelles called glycosomes. The glycosomes of T. cruzi contain the enzymes converting glucose into 3-phosphoglycerate; only the last three enzymes of the glycolytic pathway are present in the cytosol. A consequence of this organization is that, inside these organelles, the ATP consumed by hexokinase and phosphofructokinase is regenerated by a phosphoglycerate kinase and, in two auxiliary branches of glycolysis, by a phosphoenolpyruvate carboxykinase and/or pyruvate phosphate dikinase. The regeneration of ATP by PEPCK and/or PPDK implies that entry of cytosolic PEP is absolutely necessary. Similarly, the NADH formed in the reaction catalyzed by glyceraldehyde-3-phosphate dehydrogenase is re-oxidized inside the organelles by reduction of oxaloacetate to malate by a glycosomal malate dehydrogenase and by a subsequent reduction of the fumarate produced from this malate to succinate by a soluble NADH-dependent fumarate reductase. Another route that may contribute to the regeneration of glycosomal NAD+ involves an NAD-dependent glycerol-3-phosphate dehydrogenase that catalyzes the reduction of dihydroxyacetone phosphate to glycerol 3-phosphate. Subsequently, the electrons are transferred to oxygen via a mitochondrial electron-transport system and a redox shuttle comprising a putative transporter in the glycosomal membrane which exchanges Glyc3P for DHAP. The transporter remains to be identified, but its existence is inferred from the requirement for strict coupling of the fluxes by which the two triosephosphates are exchanged. All glycosomal enzymes exhibit high latency upon isolation of the organelles, which is in accordance with the notion that the glycosomal membrane acts as a permeability barrier for the enzymes and their cofactors, whereas it, like the membrane of peroxisomes in other organisms, may allow passage of small metabolites through channels. In addition, another important pathway of glucose oxidation present in glycosomes is the pentose-phosphate pathway. The PPP usually has two major roles, namely the reduction of NADP+ to NADPH and the production of ribose 5-phosphate, which is used as a substrate for the synthesis of a variety of cell components. Besides glycolysis and the PPP, the presence of enzymes involved in other metabolic routes has also been described for glycosomes of different trypanosomatids: gluconeogenesis, purine salvage, β-oxidation of fatty acids and biosynthesis of ether-lipids, isoprenoids, sterols and pyrimidines. It has been observed that several of the enzymes belonging to these pathways are essential for the survival of the bloodstream form of T. brucei. Therefore, some of these glycosomal enzymes are considered potential pharmacological targets. Glycosomes, like peroxisomes, contain peroxins, proteins involved in the different routes of the biogenesis of the organelles, such as the import of proteins synthesized in the cytosol into the matrix of the organelles. Many, but not all, of the glycosomal matrix proteins exhibit a peroxisomal-targeting signal comprising a partially conserved motif at their C-terminus or close to their N-terminal end, called PTS1 or PTS2, respectively.
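The closed cofactor balance described above (ATP consumed by hexokinase and phosphofructokinase regenerated by the glycosomal phosphoglycerate kinase, and GAPDH-derived NADH reoxidized within the organelle) can be made explicit with a small bookkeeping sketch. The stoichiometries are the standard glycolytic ones for the enzymes named in the text; the list-based representation is purely illustrative.

```python
# Net ATP/NADH bookkeeping inside the glycosome for 1 glucose -> 2 x 3-phosphoglycerate.
# Per-reaction cofactor changes follow the textbook stoichiometry of the enzymes
# named in the text; the data structure is an illustrative simplification.
steps = [
    ("hexokinase",                      {"ATP": -1, "NADH": 0}),
    ("phosphofructokinase",             {"ATP": -1, "NADH": 0}),
    ("GAPDH (x2 triose phosphates)",    {"ATP": 0,  "NADH": +2}),
    ("phosphoglycerate kinase (x2)",    {"ATP": +2, "NADH": 0}),
]

balance = {"ATP": 0, "NADH": 0}
for _, change in steps:
    for cofactor, delta in change.items():
        balance[cofactor] += delta

print(balance)  # {'ATP': 0, 'NADH': 2}
# ATP is balanced by the glycosomal PGK (with PEPCK and/or PPDK as auxiliary routes),
# while the 2 NADH must be reoxidized inside the organelle, e.g. via malate
# dehydrogenase (oxaloacetate -> malate) or the G3PDH / Glyc3P-DHAP shuttle.
```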
In the study reported in this paper, we have purified glycosomes from T. cruzi and separated the soluble matrix from the membranes. Subsequently, the proteins attached peripherally to the membrane were separated from the integral membrane proteins through treatment with Na2CO3 and exposure to an osmotic shock. To obtain information about the growth-dependent variation of the glycosomal membrane and soluble proteins, we have chosen to compare glycosomes from exponentially growing epimastigotes, whose metabolism is essentially glycolytic, with those from stationary-phase epimastigotes, in which the metabolism has shifted to catabolism of amino acids as their carbon and energy source. Trypanosoma cruzi epimastigotes were axenically cultivated in liver infusion-tryptose medium at 28 °C as previously described. Cells were harvested when they reached the mid-exponential and stationary growth phases at OD600 values of 0.6 and 1.2, respectively. The parasites were centrifuged at 800 × g for 10 min at 4 °C, and washed twice with isotonic buffer A and hypotonic buffer B. Homogenates of exponential and stationary phase parasites were obtained by grinding washed cells with silicon carbide in the presence of a 1/100 dilution of a protease inhibitor cocktail (including 100 μM sodium ethylene diamine tetraacetate and 500 μM phenylmethylsulfonyl fluoride). Parasite disruption was checked by light microscopy and was at least 90% complete. The homogenates were centrifuged first for 10 min at 1000 × g at 4 °C in order to remove silicon carbide, nuclei and intact cells. The resulting supernatant was centrifuged for 20 min at 5000 × g at 4 °C to remove a large granular fraction as a pellet. A small granular pellet was obtained after a centrifugation step at 3000 × g for 20 min at 4 °C. This pellet was resuspended in 1.5 ml of buffer B containing a protease inhibitor cocktail and loaded on top of a 35 ml linear 0.25–2.5 M sucrose gradient. Centrifugation was performed for 2 h at 170,000 × g and 4 °C using a vertical rotor. The glycosome-enriched fraction was applied to a second sucrose gradient. Fractions of 1.9 ml were collected from the bottom of the tube after puncture. For the sodium carbonate or osmotic shock treatment, one volume of the glycosomal fractions, with protein concentrations of 7 mg.ml−1 and 5 mg.ml−1 for the samples derived from the exponential and stationary growth phase, respectively, was mixed with about 100 volumes of cold 100 mM sodium carbonate or with cold milliQ water, and incubated at 0 °C for 30 min before ultracentrifugation at 105,000 × g for 2 h at 4 °C. The pellet was homogenized in milliQ water with a Potter homogenizer and further centrifuged at 105,000 × g for 2 h at 4 °C. This procedure allowed the separation of matrix proteins and detached peripheral membrane proteins in the supernatant from a membrane fraction in the pellet: after the sodium carbonate treatment the pellet contained the integral membrane proteins, whereas after the osmotic shock treatment it contained both peripheral and integral membrane proteins. The proteins obtained in the supernatants of the sodium carbonate and osmotic shock treatments were lyophilized in a Labconco freeze dryer and stored at –80 °C until further use. Proteins were digested with trypsin using the FASP protocol as described previously. Peptides were solubilized in 2% acetonitrile with 0.1% trifluoroacetic acid and fractionated on a nanoflow uHPLC system before online analysis by electrospray ionisation mass spectrometry on an Orbitrap Elite MS. Peptide separation was performed on a Pepmap C18 reversed phase column. Peptides were desalted and concentrated for 4 min on a C18 trap column followed by an
acetonitrile gradient for a total time of 45 min. A fixed solvent flow rate of 0.3 μl.min−1 was used for the analytical column. The trap column solvent flow was 25 μl.min−1 of 2% acetonitrile with 0.1% v/v trifluoroacetic acid. Eluate was delivered online to the Orbitrap Elite MS, operating a continuous duty cycle of a high-resolution precursor scan at 60,000 resolving power, while the top 20 precursors were simultaneously subjected to CID fragmentation in the linear ion trap. Singly charged ions were excluded from selection, while selected precursors were added to a dynamic exclusion list for 120 s. Protein identifications were assigned using the Mascot search engine to interrogate T. cruzi CL Brener protein coding sequences in the NCBI database, allowing a mass tolerance of 10 ppm for the precursor and 0.6 Da for MS/MS matching. To classify type-1 and type-2 peroxisomal targeting signals in the glycosomal proteins revealed in this study, a C-terminal tripeptide motif was used to recognize proteins having the PTS1 sequence, while an N-terminal motif containing a stretch of five variable residues (X5) was used to identify proteins with the PTS2 sequence. PTS1- and PTS2-containing sequences identified in our analysis were searched in the TriTryp and GeneDB databases. A comparison was made of the protein repertoires found in the glycosomes of epimastigotes grown to exponential and stationary phases. Protein concentration was determined with the Bio-Rad Bradford protein assay. Hexokinase (HK) and glutamate dehydrogenase (GDH) activities were assayed spectrophotometrically as described previously. The activities were measured in a Hewlett-Packard 8452 diode array spectrophotometer at 340 nm and 28 °C, in the presence of 0.1% Triton X-100 and 150 mM NaCl in order to solubilize the membranes and eliminate latency. Glycosomes were isolated from epimastigotes in the exponential and stationary growth phases by differential centrifugation followed by two rounds of isopycnic ultracentrifugation on a sucrose gradient. Figure S1 shows that most glycosomes were recovered in fractions corresponding to densities of 1.23 to 1.24 g·cm−3 after the isopycnic ultracentrifugation. In contrast, most GDH, a mitochondrial marker enzyme, was recovered in fractions corresponding to densities of 1.15 to 1.17 g·cm−3. A small overlap of the HK activity with that of GDH was observed, which could be due to rupture of glycosomes during isolation. Analysis of the HK and GDH activities showed that fractions 3 to 6 contained about 35% of the total HK activity measured, and only 5% of the total GDH activity. For this reason, we decided to use fractions 3 to 6 for glycosomal membrane protein isolation by sodium carbonate and osmotic shock treatment. The isolation procedures and the mass spectrometry data acquisition using these samples are described in Materials and methods, sections 2.3 and 2.4. Studies with peroxisomes demonstrated the involvement of vesicles derived from mitochondria as well as the endoplasmic reticulum in the biogenesis of the organelles, implying that the classification of some proteins as "contaminants" in a peroxisomal preparation must be carefully considered. In this respect, we used three criteria to assign proteins as authentic to glycosomes: (i) the results of our proteomic analysis; (ii) the presence of a glycosomal import signal of type PTS1 or PTS2; and (iii) previous publications about the association of proteins with T. cruzi glycosomes and a comparison with the data reported in proteomics studies of glycosomes from other kinetoplastids.
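To illustrate criterion (ii), the sketch below flags candidate PTS1 and PTS2 motifs in protein sequences. The regular expressions encode the commonly cited consensus patterns for these signals, namely a C-terminal (S/A/C)(K/R/H)(L/M) tripeptide for PTS1 and an (R/K)(L/V/I)-X5-(H/Q)(L/A) motif near the N-terminus for PTS2; they are assumed stand-ins for the exact motif definitions applied in the study, which are not reproduced in this excerpt, and the 40-residue N-terminal window is likewise an arbitrary choice.

```python
import re

# Commonly cited consensus patterns (assumptions; the study's exact motif
# definitions are not reproduced in this excerpt).
PTS1_RE = re.compile(r"[SAC][KRH][LM]$")        # C-terminal tripeptide
PTS2_RE = re.compile(r"[RK][LVI].{5}[HQ][LA]")  # motif expected close to the N-terminus

def classify_pts(sequence, n_terminal_window=40):
    """Return which glycosomal/peroxisomal targeting signals a sequence may carry."""
    sequence = sequence.upper()
    hits = []
    if PTS1_RE.search(sequence):
        hits.append("PTS1")
    if PTS2_RE.search(sequence[:n_terminal_window]):
        hits.append("PTS2")
    return hits or ["no canonical PTS"]

# Hypothetical example sequence: a protein ending in ...SKL is flagged as PTS1-containing.
print(classify_pts("MSTEEDFNAAGGPDTWAAQYEANPDVVGNPEEWAKSKL"))  # ['PTS1']
```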
For this last criterion, special emphasis was placed on previously published work with the organelles of procyclic T. brucei, because that study can be considered to present a minimal level of contamination as a result of the powerful epitope-tagging method of glycosome isolation used. The analysis of the glycosomal preparations showed the presence of known and novel glycosomal matrix and membrane proteins, both in exponentially growing parasites and in cells grown to the stationary phase. Based on the data obtained, we constructed schemes for predicted glycosomal metabolic pathways of T. cruzi epimastigotes, as shown in Figs. 1–4. Classical glycolytic enzymes previously reported to be present in glycosomes of T. cruzi were indeed detected in the proteomes of both exponential and stationary phase cells. Additionally, glucokinase, a glucose-specific phosphorylating enzyme present in T. cruzi but not in T. brucei, was also detected, in agreement with its PTS1 motif and previously described subcellular localization. An interesting case is constituted by PGK, which in T. cruzi is encoded by three genes, designated PGK-A, PGK-B and PGK-C. Our previous studies showed that all three isoforms of the enzyme are expressed in epimastigotes, with both PGK-A and PGK-C present in glycosomes, the former being associated with the membrane and the latter located in the matrix of the organelles. Meanwhile, PGK-B is cytosolic and accounts for 80% of the total cellular PGK activity. Interestingly, three PGKs with different predicted molecular weights were detected in the current mass analysis. The sequence of the 64.87 kDa PGK has a long extension of 85 amino acids at the N-terminus. Analysis of this N-terminal extension with the TMHMM software predicts two transmembrane helices, indicating that the sequence of this PGK form likely corresponds to the previously reported 56 kDa membrane-associated PGK-A isoenzyme. The 44.78 kDa protein corresponds to the previously identified 46 kDa PGK-C. Remarkably, an unusual PGK-like protein of 100.44 kDa was found in the pellet obtained by treatment of the glycosomal membrane preparation with sodium carbonate. Using the TMHMM software, we could establish in this PGK the presence of three candidate transmembrane helices. These correspond to residues 745 to 767, 787 to 805 and 820 to 842 in the C-terminal region. Also intriguing was the finding of two glycosomal PGK-like proteins of 58.4 kDa with a PAS domain. PAS domains are important signaling modules that can monitor changes in light, redox potential, oxygen, small ligands, and the overall energy level of a cell. These domains have been identified in proteins from all three domains of life: Bacteria, Archaea, and Eukarya. A PGK-like protein with a PAS domain was previously also found in an analysis of glycosomal membrane proteins of Leishmania tarentolae. Enzymes of both the oxidative branch and the non-oxidative branch of the PPP have been detected previously in glycosomes of the three developmental forms of T. cruzi. However, in our analysis, only transaldolase was detected in the supernatant obtained after sodium carbonate treatment of glycosomes from exponentially growing parasites. Intriguingly, we found sedoheptulose-1,7-bisphosphatase, having the PTS1 motif SKL, in both exponential and stationary phase cells. This enzyme has previously been predicted by genome analysis in T.
cruzi and T. brucei .Sedoheptulose-1,7-bisphosphatase is a Calvin cycle enzyme that is usually found in plastids of plants, algae, some fungi and ciliates .The presence of this enzyme suggests that possibly the parasites employ this enzyme as part of a modified PPP.Special attention is deserved for the presence of a hypothetical protein with a PTS1-like motif that shows similarity with glucose-6-phosphate-1-epimerases.Since glucose-6-phosphate dehydrogenase is specific for β-D-glucose 6-phosphate, the presence of a glucose-6-phosphate-1-epimerase might be important for the PPP.This has been hypothesized by Gualdrón-López et al. for trypanosomatids, especially for T. brucei which lacks a glucokinase gene.Glucokinase and hexokinases have each a preference for distinct anomers of glucose, the beta and alpha anomer, respectively.Although T. cruzi contains both enzymes, it cannot be ruled out that glucose-6-phosphate-1-epimerase can also participate in the regulation of the interconversion rate between the G6P anomers inside the glycosomes of T. cruzi .Several enzymes involved in the synthesis of sugar nucleotides were detected in the glycosome preparations.First, two isoforms of phosphomannose isomerase, both enzymes having a PTS1 motif and previously found to be essential for T. brucei growth .The enzyme is involved in the reversible isomerization of fructose 6-phosphate to mannose 6-phosphate.Another enzyme detected is a phosphomanno mutase-like protein.This enzyme was previously found in glycosomes of T. brucei and has the pecularity of acting as a phosphoglucomutase during glycolysis .Other enzymes of the sugar-nucleotide synthesis with PTS motifs were also found.Synthesis of glycoconjugates requires a constant supply of sugar nucleotides, molecules that are needed for parasite survival, invasion and/or evasion of the host immune system .Indeed, galactokinase, the first enzyme involved in the salvage pathway of UDP-galactose was found associated with the glycosomal preparation.It is present in T. cruzi but not in T. brucei, as two isoenzymes of 51–52 kDa, encoded by distinct genes both having a PTS1 .Trypanosomatids contain enzymes for succinic fermentation, forming an auxiliary branch to the glycolytic and gluconeogenic pathways.The branch contains PEPCK, malate dehydrogenase, fumarate hydratase and a soluble NADH-dependent fumarate reductase .This branch was previously located in glycosomes of T. brucei .Indeed the enzymes were also detected in our T. cruzi glycosomal preparation.Noteworthy, the glycosomal localization of one of the enzymes of the branch, FH, has been subject to discussion; this enzyme appeared to be predominantly present in the cytosol of T. brucei .In addition, no authentic PTS motif was found in FH, but the presence of a cryptic PTS1 at the C-terminal extremity of T. brucei FH, as well as in L. major FH and T. cruzi FH has been suggested as a possible explanation for a dual localization in cytosol and glycosomes .However, data from our current study indicate that the FH of some T. 
cruzi strains indeed possess a cryptic PTS1, whereas in other strains such a cryptic PTS1 is missing.Interestingly, a potential PTS2 was found near the N-terminus of the FH lacking the cryptic PTS1 of these latter strains.Another important auxiliary branch involves the enzyme PPDK that may play a role in the adenine-nucleotide homeostasis inside glycosomes, coupled to inorganic pyrophosphate metabolism of trypanosomes.PPDK as well as an adenylate kinase were detected in our glycosomal preparations.In T. cruzi epimastigotes a PPDK of 100 kDa can be post-translationally modified by phosphorylation and proteolytic cleavage into an inactive protein of 75 kDa.This modified form of PPDK was located peripherally at the glycosomal membrane, oriented towards the cytosol .Meanwhile, three isoforms of adenylate kinase were detected in the proteomic analysis.One of them contains a PTS1-type sequence at its C-terminus.Interestingly, we identified two possible novel auxiliary routes for the regeneration of NAD+ in glycosomes.First, a D-isomer specific 2-hydroxyacid dehydrogenase; this enzyme is a NAD-linked oxidoreductase whose catalytic properties resemble those of mouse and rat LDH-C4.This enzyme has two molecular forms.Isozyme I is responsible for the weak lactate dehydrogenase activity found in T. cruzi extracts and it might be an alternative for the reoxidation of glycosomal NADH through the reduction of the pyruvate produced by the PPDK.The second enzyme potentially playing a role in glycosomal NAD+ regeneration is a PTS1-containing putative aldehyde dehydrogenase that could reduce and decarboxylate pyruvate to acetaldehyde.This enzyme was detected only in glycosomes from parasites in the exponential growth phase.Its mammalian homolog has been described as part of the α-oxidation process in peroxisomes catalyzing the oxidation of pristanal to pristanic acid .Additionally, the presence of these enzymes was also reported in the proteomic analysis of purified glycosomes of T. brucei .Interestingly, an enzyme detected in both exponential and stationary phase glycosomes is Tc00.1047053511277.60, belonging to a class of oxidoreductases that can catalyze the reversible oxidation of ethanol to acetaldehyde with the concomitant reduction of NAD+.The presence of an ADH in T. cruzi has previously been described by Arauzo and Cazzulo ; it was reported as a cytosolic enzyme.However, in contrast to the putative aldehyde dehydrogenase, the HADH isoenzymes and ADH do not present a PTS motif, nor have they been described by Güther et al. and Vertommen et al. for the T. brucei glycosomal proteomes.However, it should be noted that also no orthologs were detectable in the T. brucei genome when searching TriTrypDB.Because of the lack of a typical PTS, we cannot yet entirely exclude the possibility that the two T. cruzi proteins represent a cytosolic or mitochondrial contamination of the glycosomal sample, but the presence of a possible internal glycosomal-targeting signal or import mediated by a piggy-back mechanism through their association with other glycosomal proteins remain options.These three enzymes could thus be alternatives for regeneration of NAD+ when PEP is not available to follow the branch PEPCK/MDH/FH/FRD.Since, however, this proposed role is only based on bioinformatic analysis and in vitro assays, further studies are required to determine how these enzymes are involved in the glycosomal intermediary metabolism of T. 
cruzi.Additionally, enzymes involved in glycerol metabolism were also detected in glycosomes from trypanosomes in both phases of growth: glycerol-3-phosphate dehydrogenase, in agreement with our previous report and glycerol kinase, the latter having a PTS1 motif.The initial evidence for identification of ether-lipid biosynthesis as a glycosomal process was the finding that activity of the first two steps of this pathway is associated with the organelles from procyclic forms of T. brucei and promastigotes of L. mexicana .From the proteomic data obtained here on purified T. cruzi glycosomes we could determine the presence of alkyl-dihydroxyacetone phosphate synthase, the second enzyme of the pathway.This enzyme, presenting a PTS1 motif, has also been reported in glycosomes of other kinetoplastids .Nevertheless, the enzymes catalyzing the first and third step of this pathway, dihydroxyacetone phosphate acyltransferase and acyl/alkyl dihydroxyacetone phosphate reductase, also reported in glycosomes of T. brucei and Leishmania spp. , were not detected in the T. cruzi proteomic analysis.β-Oxidation is a catabolic pathway that has been located in peroxisomes and mitochondria.In glycosomes, this route has only been reported in T. cruzi and T. brucei and recently detected by proteomics in glycosomes of T. brucei, L. donovani and L. tarentolae .According to our results, the T. cruzi glycosomes contain all enzymes involved in the β-oxidation, in both the exponential and stationary growth phase, and each of the sequences has a PTS1 or PTS2.We found two isoforms homologous to the trifunctional enzyme enoyl-CoA hydratase/enoyl-CoA isomerase/3-hydroxyacyl-CoA dehydrogenase, both containing a PTS2, whereas the 3-ketoacyl-CoA thiolase presents a PTS1-like sequence.An acyl-CoA-binding protein also detected in the T. cruzi glycosomal proteome shows a typical PTS1.It was previously also found in the high-confidence proteomic analysis of T. brucei glycosomes .ACBP binds medium- and long-chain acyl-CoA esters and has been reported to be essential in bloodstream-form T. brucei .While mammalian cells can synthesize purines de novo, protist parasites including T. cruzi and other trypanosomatids such as T. brucei and Leishmania spp. have to recycle them through the purine salvage pathway for the synthesis of AMP and GMP .In our proteomic analysis, three enzymes of the purine salvage pathway were detected: adenine phosphoribosyltransferase, hypoxanthine-guanine phosphoribosyltransferase and inosine-5′-monophosphate dehydrogenase.For APRT, four isoenzymes were identified, all with a PTS1 motif.The mass analysis detected also four isoenzymes of HGPRT, but only two of them showed a PTS1.For IMPDH the mass analysis indicated the presence of two isoforms which have high sequence identity and an unequivocal PTS1.Additionally, two isoenzymes of guanosine monophosphate reductase were detected in the glycosomal fraction, both with a PTS1.Importantly, these enzymes have also been detected in the proteomic analysis of T. brucei glycosomes .The nucleotide salvage pathway provides an alternative that is energetically more efficient than de novo synthesis for the parasites.AMP deaminase, which interconverts AMP and IMP, was also found in T. 
cruzi glycosomes and this enzyme contains a PTS1. Similarly, this enzyme was detected in glycosome proteomic studies of other trypanosomes. Unlike the situation for purines, de novo synthesis of pyrimidines is performed by trypanosomatids. In contrast to mammalian cells, de novo synthesis of UMP in glycosomes involves two enzymatic steps, catalyzed by orotate phosphoribosyltransferase and orotidine-5-phosphate decarboxylase. These two activities are carried out by a bifunctional enzyme with a C-terminal SKL motif encoded by a single fused gene. The activities were found to be associated with the glycosomes of T. brucei, and the organellar localization was confirmed in proteomic studies. In our analysis of T. cruzi glycosomes, two isoforms of the bifunctional orotate phosphoribosyltransferase/orotidine-5-phosphate decarboxylase enzyme were detected, with predicted molecular weights of 29 and 50 kDa, respectively, but only the latter showed the classical PTS1. Additionally, a hypothetical protein detected in the mass analysis showed 98% identity with a protein putatively annotated in the TriTryp database as a phosphoribosylpyrophosphate synthetase present in other strains of T. cruzi. The sterol synthesis pathway, also known as the isoprenoid pathway, is an essential metabolic pathway present in eukaryotes, archaea, and some bacteria. Interestingly, peroxisomes of eukaryotes contain a set of enzymes involved in cholesterol biosynthesis that were previously considered to be cytosolic or associated with the endoplasmic reticulum. Moreover, some of them contain a conserved putative PTS1 or PTS2, supporting the notion of targeted transport into peroxisomes. Our analysis detected, associated with glycosomes, a 55 kDa hypothetical protein that shows similarity with 3-hydroxy-3-methyl-glutaryl Coenzyme A synthase. HMG-CoA synthase, catalyzing the first step of the mevalonate pathway, was previously located in peroxisomes. Another key enzyme located in glycosomes is HMG-CoA reductase, which is consistent with our previous report in which 80% of the activity of this enzyme was associated with glycosomes. The next enzyme of this pathway, mevalonate kinase, has been shown to be almost exclusively located in glycosomes of T. brucei and L. major and, in the proteomic analysis of T. brucei, was reported with high confidence as a glycosomal protein. In our analysis, this enzyme is present as two isoforms, but only one of them possesses a PTS1 motif. Other enzymes of the pathway, such as isopentenyl-diphosphate Δ-isomerase, squalene monooxygenase, lanosterol 14-α-demethylase, NAD-dependent steroid dehydrogenase protein, sterol 24-C-methyltransferase, C-8 sterol isomerase and sterol C-24 reductase, were also detected. Of these enzymes, only isopentenyl-diphosphate Δ-isomerase and C-8 sterol isomerase were detected in proteomic analyses of glycosomes purified from procyclic-form T. brucei, and only isopentenyl-diphosphate Δ-isomerase, squalene monooxygenase and C-8 sterol isomerase present PTS1 or PTS2 motifs. Interestingly, HMG-CoA reductase, squalene synthase, sterol 24-C-methyltransferase and mevalonate kinase were already previously reported as enzymes of the mevalonate pathway present in glycosomes of trypanosomes. It should be noted that in analyses of peroxisomes from mammals, phosphomevalonate kinase, isopentenyl diphosphate isomerase, acetoacetyl-CoA thiolase, HMG-CoA synthase, HMG-CoA reductase, mevalonate kinase, mevalonate diphosphate decarboxylase, and farnesyl diphosphate synthase were all found to possess a PTS motif, and their peroxisomal localization has been determined experimentally. Interestingly, the acetoacetyl-CoA thiolase from mammals, which condenses two molecules of acetyl-CoA to give acetoacetyl-CoA, contains both a mitochondrial signal peptide at the amino terminus and a PTS1 at the carboxy terminus. In our mass analysis, we detected the presence of several of the enzymes involved in this anabolic pathway; some of them have a clear PTS1 motif, but others showed neither a PTS1 nor a PTS2. The presence of these latter enzymes in glycosomes may be attributed to the existence of non-consensus targeting sequences or to piggy-back transport. Iron-dependent superoxide dismutase is an enzyme involved in the dismutation of the highly reactive, toxic superoxide radical with the formation of hydrogen peroxide. It has been detected in various kinetoplastids. Four distinct SOD activities have been characterized in the epimastigote form of T. cruzi, and selective membrane permeabilization with digitonin showed that these SODs are primarily cytosolic, with small amounts associated with glycosomes and the mitochondrion. In our proteomic analysis two Fe-SODs of 23.5 kDa were identified. Their presence and location are consistent with the glycosomal proteome of procyclic T. brucei and with previous reports on the characterization of these isoenzymes in T. brucei. Additionally, two isoenzymes of tryparedoxin peroxidase were detected in the glycosomal fraction. These enzymes participate in decomposing H2O2 using electrons donated either directly from trypanothione, or via the redox intermediate tryparedoxin. It is important to mention the detection of a glutathione peroxidase-like protein with a PTS2 motif in the glycosomal mass analysis that could contribute to this process. Tryparedoxin, having a predicted PTS2, was also detected in the analysis. Additionally, this protein was reported in the glycosomal proteome of T. brucei. The flavoprotein trypanothione reductase is a key enzyme in the antioxidant metabolism of trypanosomes and represents a potential drug target. This enzyme from various kinetoplastid parasites has been characterized in detail. In our analysis two isoenzymes of 42 and 54 kDa were detected, with the latter presenting a PTS1. The presence of this enzyme in glycosomes is corroborated by the glycosomal proteome of T. brucei. Two fructose-1,6-bisphosphatase isoforms were detected in the proteomic analysis of T. cruzi glycosomes; they show 98% sequence identity and both have a classical PTS1. This is a hallmark enzyme of gluconeogenesis whose presence was previously reported in an extract of epimastigotes of T. cruzi and in the proteomic analyses of glycosomes of T. brucei and L. donovani. Interestingly, FBPase and PFK can be expressed simultaneously in glycosomes of T. cruzi and other kinetoplastids, potentially creating a futile cycle causing loss of ATP. However, activation of the FBPase may also function to partially re-direct glucose 6-phosphate from glycolysis to the PPP, as has been shown in Toxoplasma and hypothesized for Leishmania. Nonetheless, studies with T. brucei are suggestive of a reciprocal regulation of the activities of these enzymes, possibly by post-translational modification. In peroxisomal membranes various types of transporters have been described: peroxins, half-size ABC transporters, pore-forming proteins and proteins of the mitochondrial carrier family. More recently, peroxins and ABC transporters have also been identified in glycosomal membranes, and the existence of pore-forming proteins was also established; however, the identity of these latter proteins remains to be determined. Peroxins are a group of proteins involved in peroxisome biogenesis, including the process of matrix-protein import. Most of them are integral or peripheral membrane proteins, whereas some are soluble cytosolic proteins, either permanently or transiently interacting with the membrane. For trypanosomatids, peroxins have been studied in detail in T. brucei and several of them were also identified in its glycosomal proteome. In our proteomic analysis of purified T. cruzi glycosomes, we also detected several PEX proteins. First, peroxins implicated in matrix-protein import such as PEX2, PEX10 and PEX12 were detected in the pellet fraction of glycosomes treated by osmotic shock and with Na2CO3. PEX14 was equally detected in the pellet fraction of glycosomes treated by osmotic shock. A search for proteins that align with the sequence of Tc00.1047053503811.40, detected in the pellet of the Na2CO3-treated glycosomes, showed 75% identity with the PTS2 receptor PEX7 of T. brucei. Other integral membrane proteins that were detected are a protein that presents 33% identity with T. brucei PEX16, i.e. a peroxin involved in insertion of peroxisomal membrane proteins, as well as homologs of T. brucei PEX11 and of GIM5A, which is related to PEX11. PEX11 is known to be involved in the morphology establishment and proliferation of peroxisomes, but has also been identified, in yeast, as a pore-forming protein. The second group of known transporters in the glycosomal membrane of T. brucei comprises three homologs of peroxisomal half-size ABC transporters designated GAT1-3. Only for GAT1 has a function been shown, i.e. the transport of acyl-CoAs from the cytosol into the glycosomal lumen. The function of GAT2 and GAT3 has not yet been established. Interestingly, in the membrane fraction of T. cruzi glycosomes treated with Na2CO3, various ABC transporters were detected. Three ABC transporters show a considerably high identity with GAT1, 2 and 3 of T. brucei. The protein annotated as a hypothetical glycosomal transporter in the database shows 49% identity with TbGAT1, whereas the other ABC transporters detected show 52% and 54% identity with GAT2 and GAT3 of T. brucei, respectively. The three proteins were also reported with high confidence as glycosomal proteins in the T. brucei proteomics analysis. The third group of molecules involved in the transport of metabolites through the peroxisomal and glycosomal membrane comprises pore-forming proteins. Such pores have been well characterized for peroxisomes, notably from mammalian cells, and, besides PEX11, one other pore-forming protein has been identified. The sizes of the peroxisomal pores have been determined electrophysiologically and allow permeation of molecules with a Mr up to about 300–400 Da. Furthermore, three main channel-forming activities were detected in membranes of the glycosomal fraction from bloodstream-form T. brucei, permitting currents with amplitudes of 70–80 pA, 20–25 pA, and 8–11 pA, respectively. In analogy to the situation in peroxisomes, it has been proposed that such pores would allow entry into or exit from the glycosome of metabolites of low molecular mass, such as inorganic ions (e.g. phosphate and PPi), glucose, G6P, oxaloacetate, malate, succinate, PEP, 1,3-bisphosphoglycerate, 3PGA, DHAP, Glyc3P, etc., but not larger molecules such as ATP, ADP, NAD+, NADH, CoA, and fatty acids or acyl-CoAs. In our glycosome analysis, two proteins were identified as cation transporters. However, for the first one, no additional report is available as yet to confirm its localization in glycosomes, whereas the second protein was previously identified as part of the contractile vacuole complex of T. cruzi and may thus have been a contaminant in the glycosomal fraction. In addition, a transporter protein involved in maintaining the redox balance within glycosomes through a Glyc3P/DHAP shuttle-based mechanism has not yet been described, but its existence is presumed. Translocation of amino acids across the glycosomal membrane is also necessary; it is likely that amino acids could diffuse through the glycosomal pores. Nonetheless, the plasma-membrane arginine transporter AAP3 has, under certain culturing conditions, also been detected in the membrane of Leishmania glycosomes. As shown in Tables S1 and SVI, two proteins were found in the pellet fraction obtained after carbonate treatment of glycosomal membranes. Their sequences are identical to that of TcHT, previously identified as the plasma-membrane glucose transporter by Silber et al., 2009. Intriguingly, based on their immunolocalization studies, these authors reported the association of this transporter also with glycosomes and reservosomes. However, the functionality of TcHT as a hexose transporter in the glycosomal membrane seems doubtful, not only because so far no evidence has been found for functional transporters for small solutes in the membrane of peroxisome-related organelles of any organism – the current notion is that translocation occurs via channels – but also because membrane insertion of proteins of the plasma membrane and of peroxisomes/glycosomes is known to occur by distinct mechanisms, involving specific topogenic signals and proteins. Even if similar proteins were routed to these different destinations after insertion into the endoplasmic reticulum membrane, they would have opposite topologies in the plasma membrane and the glycosomal membrane. Additionally, it would be difficult to imagine how, in view of the different extra- and intracellular glucose concentrations, transport of this substrate across the different membranes would be mediated by carriers with identical kinetic properties. A possible explanation for the finding of TcHT also in intracellular membranes is a non-specific association of excess transporter molecules with hydrophobic
environments, either directly with organellar membranes or after association with the specific domains of the endoplasmic reticulum destined to be routed to proliferating glycosomes.This same explanation may be invoked for the Leishmania arginine plasma-membrane transporter AAP3 that, under certain conditions is also found associated with glycosomes as mentioned above.Previously, a protein profile of glycosomal membranes from T. cruzi epimastigotes revealed a most abundant protein of 75 kDa which was later identified as PPDK .This protein was initially detected in the glycosomal membrane pellet obtained after treatment with Na2CO3, but also in the soluble phase confirming its dual location in both the membrane and the glycosomal matrix .Interestingly, the PPDK associated to the glycosomal membrane appeared to be post-translationally modified by phosphorylation and proteolytic cleavage .Moreover, studies of T. cruzi epimastigotes permeabilized with digitonin and incubated with anti-PPDK showed significant inhibition of glucose consumption when PEP was included in the assay.This result may indicate the participation of the 75 kDa form of PPDK in the entry of PEP into glycosomes.Several studies have indicated that T. brucei HK has a strong tendency to remain associated with the glycosomal membrane .Some authors even suggested that the entry of glucose into the glycosomes and the association of HK to the glycosomal membrane might be functionally linked.Interestingly, oligomycin inhibits the linkage via a mechanism that appeared not to be correlated to the energy charge of the cell .In addition, our experience in the purification of T. cruzi HK showed that when a glycosome-enriched fraction was treated with 80% ammonium sulfate, the enzyme’s activity was recovered as a flocculate floating on the solution .This latter observation would support the notion that HK of T. cruzi might be associated with insoluble regions of the membrane, possibly in association with a pore as has been proposed .Further support for the hypothesis is the detection, in our mass analysis, of HK in both the soluble fractions and the pellets obtained after treatment with Na2CO3 and osmotic shock.Other evidence that might suggest a close association between some enzymes and glycosomal transporters or pores results from studies performed on isolated glycosomes.In these studies the glycosomal latency values of enzymes showed some inconsistencies.For example, assays performed on glycosomes from T. cruzi and T. brucei showed high latency for HK, whereas in these same preparations PGK showed only very low or no latency , our unpublished results].A reinterpretation of these data could be that the glycosomal PGK is tightly associated with the membrane, for example forming a complex with a pore to facilitate specifically the translocation of 3PGA between the glycosomal matrix and the cytosol.A third group of peroxisomal membrane proteins is formed by some MCF representatives .Three different proteins of this family have been reported in the glycosomal proteome of T. 
brucei. These are homologs of transporters for phosphate, dicarboxylates and ATP/ADP exchange. However, the unambiguous localization of MCF transporters in glycosomes and the assignment of substrate specificity for such transporters, in general and for trypanosomes in particular, is not always evident, as shown in functional studies. Nonetheless, a MCF transporter with broad specificity for different, bulky compounds such as ATP, ADP, AMP, CoA and NAD+ has been identified in human, yeast and plant peroxisomes. In our mass analysis we detected homologs of a mitochondrial phosphate transporter as well as an ADP/ATP mitochondrial carrier protein, which have high identity with proteins annotated as mitochondrial carrier protein 11 and ADP/ATP mitochondrial translocase, respectively, in the T. brucei genome database. However, the functions of these proteins in T. brucei have not yet been proved. Interestingly, both proteins were reported as high-confidence glycosomal proteins in the proteomic analysis of T. brucei. The ADP/ATP mitochondrial carrier homolog of T. cruzi presents a putative mitochondrial carrier sequence signature as well as the non-canonical mitochondrial ATP/ADP motif sequence RRRMMM. It is attractive to speculate that this protein, if present in the glycosomal membrane, could provide additional ATP necessary for intra-glycosomal biosynthetic processes when PGK and PEPCK would not be able to provide it. In contrast to other antiport systems, the ADP/ATP translocase is an electrogenic antiporter, exchanging ATP, which has four negative charges, against ADP, with only three, resulting in the net translocation of a negative charge. This translocator has been characterized in the inner mitochondrial membrane, where the electrochemical proton gradient is the driving force for the exchange of ATP and ADP. In the case of glycosomes such a translocator would be expected to act in most instances to import ATP – thus in the opposite direction of its normal mitochondrial function. In addition, a putative tricarboxylate carrier of 36.2 kDa was detected; a homolog has also been detected by proteomic analysis of the glycosomal membrane of T. brucei and L. tarentolae. This putative tricarboxylate carrier of T. cruzi showed 69% and 56% identity with the corresponding protein of T. brucei and L. tarentolae, respectively. Furthermore, we detected a protein of 51 kDa with 62% identity with a putative amino-acid permease/transporter of T. brucei, and two other candidate amino-acid transporters, each possessing 11 predicted transmembrane helices. It should be realized that for many of the glycosomal membrane proteins, whether integral or peripheral, identified in the proteome analyses of both T. cruzi and other trypanosomatids, further studies will be required to prove unambiguously that they are authentic glycosomal proteins and not contaminants in the glycosomal fractions analyzed. Moreover, functional studies to prove the identities of the membrane proteins have only been performed for the peroxins and one of the T.
brucei ABC transporters. For all other proteins discussed above, the possible functional identity is predicted from homology with molecules in non-trypanosomatid organisms. However, it is important to realize that glycosomes have a very different metabolic repertoire from peroxisomes of other organisms, so they need different metabolite transporters. Trypanosomatids may have achieved this by recruiting appropriate transporters from the available repertoire in other membranes, such as the mitochondrial inner membrane, and/or by changing the substrate specificity of transporters inherited from the peroxisomes that evolved into glycosomes. Functions assigned solely on the basis of sequence comparison should therefore be considered tentative and require confirmation by future functional studies. A variety of other proteins bearing a classical PTS1 but with unknown function were detected in our proteomic analysis. Several of these proteins present high identity with hypothetical proteins identified with high confidence as glycosomal proteins in the proteomic analysis of T. brucei. Moreover, using the transmembrane-domain prediction software TMHMM, we could establish that several of these hypothetical proteins present potential transmembrane alpha-helical segments. These proteins have a variable number of alpha-helices, ranging between one and 17. This characteristic suggests that these proteins, passing several times through the membrane, might fulfill the function of channels or transporters of important ions or molecules involved in the metabolism occurring within glycosomes. An interesting finding is the presence of a protein with 58.9% identity to the T. brucei serine/threonine phosphatase PIP39. Moreover, it has the same PTS1 as TbPIP39. TbPIP39 is part of a protein phosphatase cascade that regulates differentiation between the parasite's developmental forms. Prior to the trigger for differentiation, TbPIP39 is kept inactivated in a cytosolic assembly with the tyrosine phosphatase TbPTP1, which dephosphorylates it. Upon the differentiation trigger – the uptake of citrate from the blood into the trypanosome – the PTP1/PIP39 complex is disrupted and PIP39 becomes phosphorylated, activated and translocated to the glycosomes. Future studies may reveal whether PIP39 and glycosomes are also involved in life-cycle differentiation of T. cruzi. In this manuscript we have presented insight into the different biochemical processes that are carried out in the glycosomes of T. cruzi epimastigotes, through a proteomic analysis of a purified fraction containing this organelle from cells grown to the exponential and stationary phases. The intraglycosomal enzymatic equipment and the proteins present in the glycosomal membrane show little variation between these two growth phases of T. cruzi epimastigotes, indicating that the protein composition of this organelle is – at least in a qualitative sense – largely independent of the carbon source. In addition to known glycosomal routes, we have found new glycosomal constituents, which were identified by electrospray ionization mass spectrometry in an Orbitrap Elite MS. These results provide information on possible new metabolic pathways in the organelles, as well as the identification of proteins possibly present in the glycosomal membrane that could fill the void of solute-translocation mechanisms necessary for metabolism within the glycosomes. Several enzymes were found that had not yet been reported as glycosomal but have motifs such as PTS1 and PTS2, e.g. aldehyde dehydrogenase, PAS-domain-containing PGK, nucleoside diphosphate kinase, arginine kinase, ribokinase, L-ribulokinase, sedoheptulose-1,7-bisphosphatase, dihydroxyacetone kinase 1-like protein, and phosphoribosylpyrophosphate synthetase. To corroborate the glycosomal localization of the new components identified in this study, it will be necessary to carry out complementary experiments, such as immunolocalization, that can confirm the presence of the respective enzymes in the organelle.
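Because many of the localization arguments above rest on the presence of a C-terminal PTS1-like tripeptide, a minimal screening sketch is given below. It is illustrative only: it assumes the classical PTS1 consensus (S/A/C)(K/R/H)(L/M) at the extreme C-terminus rather than the relaxed, PTS1-like or non-consensus signals discussed in the text, and the example sequences are hypothetical, not actual T. cruzi entries.

import re

# Classical PTS1 consensus: [SAC][KRH][LM] at the extreme C-terminus.
# This is a deliberately strict, simplified pattern; kinetoplastid "PTS1-like"
# signals (e.g. the thiolase signal mentioned in the text) are more permissive.
PTS1_PATTERN = re.compile(r"[SAC][KRH][LM]$")

def has_classical_pts1(protein_sequence: str) -> bool:
    """Return True if the last three residues match the classical PTS1 consensus."""
    return bool(PTS1_PATTERN.search(protein_sequence.strip().upper()))

# Hypothetical example sequences (not real database entries).
examples = {
    "candidate_with_SKL": "MAGTQLDKAGSKL",    # ends in -SKL, matches the consensus
    "candidate_without_PTS1": "MAGTQLDKAGSEE", # does not match
}

for name, seq in examples.items():
    label = "PTS1-like C-terminus" if has_classical_pts1(seq) else "no classical PTS1"
    print(name, "->", label)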
In Trypanosoma cruzi, the causal agent of Chagas disease, the first seven steps of glycolysis are compartmentalized in glycosomes, which are authentic but specialized peroxisomes. Besides glycolysis, activity of enzymes of other metabolic processes have been reported to be present in glycosomes, such as β-oxidation of fatty acids, purine salvage, pentose-phosphate pathway, gluconeogenesis and biosynthesis of ether-lipids, isoprenoids, sterols and pyrimidines. In this study, we have purified glycosomes from T. cruzi epimastigotes, collected the soluble and membrane fractions of these organelles, and separated peripheral and integral membrane proteins by Na 2 CO 3 treatment and osmotic shock. Proteomic analysis was performed on each of these fractions, allowing us to confirm the presence of enzymes involved in various metabolic pathways as well as identify new components of this parasite's glycosomes.
144
Colloid chemistry pitfall for flow cytometric enumeration of viruses in water
Viruses are the most numerous microbial group and have a fundamental impact on aquatic ecosystem dynamics.They influence biogeochemical cycles through gene regulation and configuring microbial communities, and by “killing the winning” prokaryotic or eukaryotic species, thereby maintaining the diversity and dynamic functioning of natural and artificial ecosystems.Key features include short-duration virus infection cycles, highly abundant viromes and rapid changes in species abundance and diversity.To investigate viruses in environmental waters, transmission electron microscopy was one of the first methods to be utilized, which demonstrated much higher abundances of viruses in marine waters compared to plaque forming unit enumeration.With the development of sensitive fluorescent dyes, TEM was replaced by epifluorescent microscopy, which has demonstrated even higher counts, compared with TEM.Though sensitive, these direct methods are labor intensive and time consuming.Flow cytometry enumeration of viruses has neither of these shortcomings, and was first reported in 1979, but was not widely used in ecological studies until twenty years later with the availability of bright fluorescent DNA-binding dyes.Since then, flow cytometric virus enumeration has become a standard approach in water research.The efficiency of virus-targeted FCM is usually estimated by its comparison with TEM or EFM virus counts in environmental samples.To our knowledge only Tomaru and Nagasaki attempted to compare FCM counts with most probable number estimates, based on a culture and extinction dilution method using single virus cultures.In general, SYBR® Green I is preferred for virus staining since this fluorescent dye is affordable and results in higher virus counts when compared to other dyes.The aims of this study were to illustrate likely artifacts and understand their mechanisms when staining bacteriophages with SYBR® Green I for FCM enumeration, and to estimate the sensitivity and accuracy of FCM for lambda, P1, and T4 bacteriophage enumeration compared to PFU estimations.Bacteriophages of three genome sizes: 48,502 bp dsDNA lambda; 93,601 bp dsDNA P1; and 168,903 bp dsDNA T4 were propagated in E. coli hosts TG1, MG1655, and BL21DE3 respectively.The E. 
coli cultures were grown in LB broth at 37 °C and 250 rpm to optical densities of 0.6–0.7, then infected with the appropriate bacteriophage, and the incubation was continued overnight at 37 °C without shaking. Overnight cultures were centrifuged at 4,000 g for 30 min to precipitate bacterial cell debris; the supernatant was filtered through a 0.22 μm syringe filter into a sterile Amicon Ultra 100 K centrifugal filter device and centrifuged again at 4,000 g for 20 min to eliminate any influence of growth media on the flow cytometry analysis. Bacteriophage remaining on the filter part of the device was treated with DNAse I to remove residual host DNA by adding 25 μL of 10x DNAse I buffer and 1 μL of 2.5 mg/mL DNAse I (dissolved in storage buffer) to the bacteriophage suspension and incubating for 45 min at 37 °C. All chemicals were purchased from Sigma, unless stated otherwise. After the incubation, bacteriophage samples were rinsed with 10 mL of 1x HyClone PBS that had been filtered through a 1 kDa Macrosep Advance Centrifugal device, resuspended in PBS to the initial volume and analysed. Solid and soft Trypticase Soy Agar were prepared from BBL Trypticase Soy Broth with the addition of 1.5 and 0.6% agar, respectively. Triplicate decimal dilutions of bacteriophage samples were prepared in 900 μL of 1x HyClone PBS and the double-layer agar assay was carried out as described previously. Standard deviations and P-values were calculated with Microsoft Excel™. The molecular structure of SYBR® Green I implies a hydrophobic compound, which is not fully soluble in aqueous solvents. Hence, to estimate the fluorescence of colloidal SYBR particles, we prepared stabilized emulsions of SYBR with each of the following surfactants: Triton X-100, IgePal-630, Tween 20, NP-40, Brij 35, and Sodium Dodecyl Sulfate. SYBR Green I was added to a 1% solution of each surfactant in 1 kDa-filtered Tris-EDTA (TE) buffer, pH 8.0, to a final concentration of 50x. All samples and the SYBR Green I stock in this study were diluted, stained, and stored in black microcentrifuge tubes. Duplicate dilutions of SYBR in TE were prepared at 0.5x, 1x, 5x, and 50x concentrations; one set was heated at 80 °C for 10 min, and the other was analysed unheated. All TE buffer was 1 kDa-filtered before use. Crimson fluorescent 0.2 μm FluoroSpheres® were added to a final concentration of 3.4 × 10⁷ beads.mL−1 for quality control. The working stock of SYBR Green I should not be filtered, due to interactions that remove this hydrophobic dye from solution. This effect is based on well understood selective wettability and capillary force mechanisms in colloid systems. Fluorescence was observed with a conventional benchtop UV transilluminator as well as an EVOS FL fluorescent cell imaging system. For the wet mount, 25 μL of fresh samples were placed on new pre-cleaned microscope slides and glass coverslips. The EVOS images were captured in the TxRed, GFP, and TRANS channels and image overlays were created. SYBR® Green I samples were diluted in TE buffer to final concentrations of 0.1x, 0.2x, 0.5x, 1x, and 2x, with one set heat-treated and the other not, as described above. Bacteriophage decimal dilutions were prepared in triplicate in TE buffer and stained as described with 0.5x and 1x SYBR® Green I.
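As a point of reference for the plaque-based titres reported in the Results, the arithmetic behind the double-layer agar assay counts can be sketched as follows; the plate counts, plated volume and dilution in this example are hypothetical placeholders, not values from this study.

# Minimal sketch of the plaque-count arithmetic behind the double-layer agar assay.
# The plate counts, plated volume and dilution below are hypothetical illustrations.
import statistics
import math

def pfu_per_ml(plaque_counts, dilution, plated_volume_ml):
    """PFU/mL = mean plaque count / (dilution x volume plated)."""
    return statistics.mean(plaque_counts) / (dilution * plated_volume_ml)

counts = [95, 102, 88]          # triplicate plates at the 1e-8 dilution (hypothetical)
titre = pfu_per_ml(counts, dilution=1e-8, plated_volume_ml=0.1)
print(f"{titre:.2e} PFU/mL ({math.log10(titre):.2f} log PFU/mL)")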
TE buffer was also prepared with the SYBR dye as a negative control. Flow rate was estimated with 1 μm latex bead FluoroSpheres®. The beads were first briefly vortexed and then bath-sonicated for 1 min, as recommended by the manufacturer; vortexing alone gave inconsistent results. Triplicate 100-fold serial dilutions were prepared to 10−4, and then decimally to 10−6, immediately after the sonication step. Care was taken that no droplet was left on the outer side of the pipette tip. Dilutions used for analysis were briefly vortexed and sonicated again right before being analysed. As each batch of beads has a Certificate of Analysis with the number of beads per mL indicated, it was possible to calculate the number of beads per mL of the working dilution. To calculate the flow rate, the number of events in the bead population was divided by the bead concentration in the working dilution. Flow rate was calculated each time samples were analysed. Flow cytometric analysis was carried out with a BD LSRFortessa™ X-20 cell analyzer equipped with a 488 nm excitation laser and a standard filter setup. The trigger was set on green fluorescence. Data were collected using FITC-W/SSC-W dot plots. Events were gated based on SYBR-in-TE samples with no virus and on T4 SYBR-stained decimal dilutions. In addition, an older flow cytometer model, Gallios™, also equipped with a 488 nm excitation laser, was used to compare the sensitivity of the two instruments. Data were collected as FL1 INT/FL2 INT and/or FL1 TOF/SSC TOF plots, with the same no-virus and T4 SYBR-treated samples used on the BD LSRFortessa™. Microscopic examination of SYBR® Green I partly dissolved in TE buffer revealed the presence of fluorescent particles in all dilutions, in both heated and unheated samples. Critically for possible artifacts in FCM analysis, this dye produces small crystals or an amorphous mass, which may also lead to uneven distribution of the SYBR fluorophore among the aliquots used for sample staining. Centrifugation of the SYBR stock is likewise not recommended, since, like filtration, it is a well understood mechanical method for breaking an emulsion. Addition of surfactants to a 1% final concentration to aid colloid dispersion resulted in intense fluorescence of SYBR® Green I even with no DNA present. Similar results were obtained with SYBR Gold at 50x final concentration, Hoechst 33342 Ready Flow Reagent at 10% of the commercial stock concentration, and some other fluorescent dyes. Numerous fluorescing SYBR® Green I particles were observed by microscopy, and the FCM signal was also more intense when compared to controls with no surfactant added. Flow cytometric analysis of various concentrations of SYBR® Green I in TE demonstrated a distinct population of fluorescent particles. Event counts in some random sample tubes were much higher than in other replicate tubes with supposedly the same concentration of SYBR. Most likely, this variability was the effect of non-uniform dispersion of SYBR® Green I in the stock solution. Moreover, the event counts noticeably increased after bath-sonication, pipetting, or just hand shaking of the samples and decreased in the samples subsequently kept undisturbed, as illustrated in Fig.
5 when using the Gallios™ instrument and on the BD LSR Fortessa™ X-20. The double agar overlay plaque assay showed 9.98 ± 0.09 log PFU.mL−1 of T4, 10.36 ± 0.25 log PFU.mL−1 of P1, and 9.3 ± 0.15 log PFU.mL−1 of λ bacteriophages. However, both the Fortessa™ X-20 and Gallios™ instruments failed to detect the Lambda and P1 bacteriophages. On the other hand, bacteriophage T4 was resolved as a distinct population of events when analysed on the Fortessa™ X-20, but not with the Gallios instrument. Two distinct populations were identified, with only the number of events in P2 changing according to dilutions of the T4 bacteriophage, thus confirming that P2 largely contained the target population. T4 bacteriophage FCM counts of the same bacterial lysate showed no significant difference between the 0.5x and 1x SYBR-stained samples at either 10−5 or 10−6 dilutions, or when compared with the plaque assay counts. However, significant disturbance of the samples led to decreased FCM virus events, and the estimated numbers did not correspond to the plaque assay data. Therefore, care in sample handling is also important when quantifying bacteriophages by FCM. As SYBR® Green I is a hydrophobic chemical with low solubility in aqueous solvents, there are inherent problems in using such fluorophores when targeting small particles like viruses by FCM. Though not widely discussed in the microbiological literature, SYBR® Green I forms a disperse colloid rather than a homogeneous molecular solution. Disperse systems can be formed via two main routes: mechanical dispersion or condensation from oversaturated solutions. For example, heating samples to 80 °C during the staining procedure enhances oversaturation of the solution, and colloid particles start forming as the temperature decreases. The fact that fluorescent particles appear in both heated and unheated samples demonstrates that either route, or both routes together, might contribute to SYBR® Green I dispersion. A further indication of this dispersion was the increase of the SYBR-FCM signal upon shaking, sonication or pipetting, presumably due to the increase in auto-fluorescing dye colloidal particles. To the best of our knowledge, this is the first report of such a chemical aspect of fluorescent dyes and its associated interference with small-particle enumeration by FCM. Consequently, keeping samples undisturbed for certain amounts of time reduces these apparent ‘virus’ event counts. This is a typical behaviour of lyophobic disperse systems. In such systems, mechanically dispersed colloid particles tend to coagulate and, if the interaction energy between the particles allows, they will coalesce into larger particles. When the interaction energy is insufficient, due to the low concentration and small diameter of the remaining particles, coalescence becomes impossible and the disperse system self-stabilizes. Our findings demonstrate that SYBR® Green I, as it is used in flow cytometry for virus enumeration, appears to be a good example of a self-stabilized, or pseudolyophilic, system. Such behaviour is not unique to SYBR® Green I, as the fluorescent dye SafeView Plus™ was also shown to self-stabilize in solution in the same manner, which we confirmed by flow cytometry. Furthermore, the addition of surfactants to a panel of SYBR® Green I solutions generated and stabilized artifact particles into emulsions, which could be misidentified as virus populations by FCM. Hence, when high gain levels are used to enumerate small-particle virions by FCM, hydrophobic fluorophores may generate various levels of false positive
‘virus’ signals.The same phenomenon was observed earlier by Pollard, who compared the excitation and emission spectra of organic matter in water, in parallel with intact virus particles, and confirmed that about 70% of the fluorescent signal was associated with the matrix itself independently of the presence or absence of virus.Although Pollard did not use flow cytometry, his findings contribute to our observations that fluorescent colloid dye particles, present in dye-stained virus suspensions, can comprise a significant portion of the FCM signal.Hence, the use of fluorescent dyes for virus enumeration by flow cytometry may produce false-positive signals and lead to overestimation of total virus counts by misreporting colloid particles as virions, depending on instrument sensitivity.Further research is needed to optimize reporting procedures involving small-particle count in pseudolyophilic colloid systems, so as to address stained-virus and no-virus but stain-present controls as discussed below.To reduce misidentification of virions in environmental matrices, the instrument and assay sensitivity could be estimated using a panel of bacteriophages of various genome sizes.As such, the target population could be identified by gating it/them from the total stained suspension signal.As illustrated in the current work, serial dilutions of the sample need to be correlated with the decline in target signal, which should be independent of dye concentration and should appear as a defined target population.Once the population is identified and gated, FCM signal counts should correlate to bacteriophage enumeration by a second established method, such as culture-based plaque assay.Stained no-virus aqueous phase control should always be applied during target identification, in order to minimize false-positive signals.In addition, staining of virus particles with nucleic acid stains may require heating of the samples to 80 °C, in order to expose viral nucleic acid.Successful enumeration of nucleic acid targets relies on gentle handling of such heated samples.We speculate that in order for the number of fluorescent signals to correlate to the number of target nucleic acid molecules associated with virions, the freshly-heated and released viral DNA needs to remain compact.Rough handling of the sample could untangle the DNA molecule, creating distant contact points with the dye, and therefore decreasing the intensity of dye signal associated with a single DNA molecule.Commonly used fluorescent dyes create pseudolyophilic colloid systems, which auto-fluoresce as stained virus-like particles even in the absence of DNA.The presence of surfactants further enhances non-specific fluorescence of such dye colloids and, therefore, use of surfactants for sample preparation should be avoided.Altogether, these interfere with small-particle enumeration by fluorescence-based assays, such as flow cytometry.Successful enumeration relies on correct identification of the target population by the careful use of negative virus control samples.The instrument sensitivity should be assessed by comparison with established culture-based methods.Given the pseudolyophilic colloidal nature of fluorophores used in FCM, sample handling can additionally affect the accuracy of virus enumeration.Overall, further research is needed to optimize the use of fluorescent dyes for virus quantification from environmental matrices by sensitive assays, such as flow cytometry.The authors have no conflict of interest to declare.
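As a compact illustration of the bead-referenced quantification described in the Methods, the sketch below converts gated events to a particle concentration; all event counts, bead concentrations, dilutions and variable names are hypothetical placeholders, not data from this study or instrument-software parameters.

# Sketch of converting gated FCM events to virus-like particles (VLP) per mL,
# using the bead-referenced volume determination described above.
# Dividing the analysed volume by the acquisition time would give the flow rate.

def analysed_volume_ml(bead_events, bead_conc_per_ml):
    """Volume actually analysed = bead events / bead concentration in the working dilution."""
    return bead_events / bead_conc_per_ml

def vlp_per_ml(gated_events, blank_events, volume_ml, sample_dilution):
    """Background-corrected, dilution-corrected particle concentration."""
    return (gated_events - blank_events) / volume_ml / sample_dilution

# Hypothetical numbers chosen only to make the arithmetic concrete.
vol = analysed_volume_ml(bead_events=2_000, bead_conc_per_ml=1.0e5)   # -> 0.02 mL analysed
conc = vlp_per_ml(gated_events=20_000, blank_events=400, volume_ml=vol, sample_dilution=1e-4)
print(f"analysed volume: {vol*1000:.1f} uL; estimated concentration: {conc:.2e} VLP/mL")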
Flow cytomtery (FCM) has become a standard approach to enumerate viruses in water research. However, the nature of the fluorescent signal in flow cytometric analysis of water samples and the mechanism of its formation, have not been addressed for bacteriophages expected in wastewaters. Here we assess the behaviour of fluorescent DNA-staining dyes in aqueous solutions, as well as sensitivity and accuracy of FCM for enumeration of DNA-stained model bacteriophages λ P1, and T4. We demonstrate that in aqueous systems fluorescent dyes form a self-stabilized (pseudolyophilic) emulsion of auto-fluorescing colloid particles. Sample shaking and addition of surfactants enhance auto-fluorescence due to increased dispersion and, in the presence of surfactants, stabilization of the dye emulsion. Bacteriophages with genome sizes <100 kbp (i.e. λ & P1) did not generate a distinct population signal to be detected by one of the most sensitive FCM instruments available (BD LSR Fortessa™ X-20), whereas the larger T4 bacteriophage was resolved as a distinct population of events. These results indicate that the use of fluorescent dyes for bacteriophage enumeration by flow cytometry can produce false positive signals and lead to wrong estimation of total virus counts by misreporting colloid particles as virions, depending on instrument sensitivity.
145
XPS study of the surface chemistry of UO 2 (111) single crystal film
The data on UO2 stability and on its stoichiometric and ionic composition are important for uranium ore extraction, spent fuel storage and disposal, as well as for remediation of uranium-contaminated environments. Uranium oxide solubility depends strongly on the uranium oxidation state. U6+ compounds are much more soluble than U4+ ones. Therefore, the oxidation state of uranium ions in spent nuclear fuel correlates with solubility and corrosion rate, which determines the release rate of the majority of radionuclides. Polycrystalline uranium dioxide is known almost always to contain excess oxygen. Its general formula is UO2+x. As the oxygen excess grows, the UO2 cell shrinks according to ax = (5.4690 − 0.12x) Å. For example, the UO2.12 phase is suggested to correspond to the stoichiometric composition of the oxide U8O17 with a0 = 5.456 Å. UO2 oxidation in air starts with oxygen absorption on the grain surface. After diffusion, oxygen included in the UO2 lattice can take intermediate positions, such as at the edges and in the center of the unit cell. If these vacancies were completely filled, the unit cell could reach the composition of UO3, but at relatively low temperatures the actual saturation limit does not exceed UO2.30. On further filling, the tetragonal phase forms. Intrusion of oxygen ions into the UO2 lattice, accompanied by the UO2 lattice contraction, is only possible if uranium ions oxidize from U4+ to U5+ and U6+, i.e. the uranium ionic radii decrease as UO2+ and UO22+ ions form. Therefore, a complex oxide UO2+x was suggested to form on the surface of a single crystal UO2 film in atmospheric air. The stability of a cubic-octahedral cluster U6O12 was studied theoretically. This cluster can form in non-stoichiometric oxides UO2+x. Such structural stability is inherited from the molecular cluster U6O12. X-ray and other spectroscopic determinations of the stoichiometric composition of standard complex oxides UO2+x require single crystal uranium oxide films. These provide correct high-resolution spectra and reliable results. Therefore, the technique of film preparation and study warrants special attention. Photoemission spectroscopy and X-ray photoelectron spectroscopy are widely used for the ionic characterization of uranium oxides UO2+x. These methods are also used for uranium oxide surface characterization on various substrates. The work in Ref. considers the U 4f and O 1s XPS spectral structure of the oxide series UO2+x under different etching and annealing conditions. The determination of the uranium oxidation state employs the spectrum of the U 4f-electrons. The binding energy of the U 4f7/2 electrons grows with an increase of the uranium oxidation state in oxides. Special attention was paid to the study of the mechanisms of structure formation, which lead to widening of the main peaks and appearance of additional structure in the spectra. The XPS spectra of the U 4f-electrons of some oxides exhibit typical shake-up satellites. The relative satellite intensity Isat is calculated as the ratio of the satellite intensity to the basic peak intensity. The mechanisms of the shake-up satellite appearance are considered in Refs.
.The U 4f XPS structure is best resolved for single crystal oxide films.The U 4f spectrum from complex amorphous oxides UO2+x is often hard to separate unambiguously into components.This does not allow reliable quantitative information on uranium oxidation state and ionic composition.The earlier paper noted that Ar+ etching of the surface of the studied single crystal film caused a formation of an n-type semiconductive UO2-x phase.Upon a 527 °C annealing at 5 × 10−6 mbar of O2 a p-type semiconductive UO2+x phase forms.The UO2 phase is a Mott-Hubbard insulator .A short-time etching causes an increase in the satellite intensity at 6.9 eV in the U 4f XPS spectrum and appearance of the peaks attributed to metallic uranium form at the lower BE side from the basic oxide peak.The O 1s, 2s intensity drops, and the U 5f intensity grows during the etching .The goal of this work was to study uranium dioxide surface after etching and annealing.Therefore, the main attention was focused on the main XPS parameters of both core and valence electrons such as: binding energy; structure of the inner and outer valence molecular orbitals; intensities and position of shake-up satellites; intensities and widths of the U 4f, U 5f and O 1s peaks.As a result, the structure of the XPS spectra of the valent and inner electrons on the surface of uranium dioxide film was studied in this work.For this purpose a thin UO2 film on the YSZ substrate was prepared and the XPS study of the surface of the film was done, the influence of Ar+ etching and annealing was studied.The film was studied before and after the Ar+ etching and annealing at various times of etching and annealing.Epitaxial thin film of UO2 with surface orientation was produced by reactive sputtering onto YSZ substrate at the University of Bristol and thoroughly characterized in Refs. 
A dedicated DC magnetron sputtering facility with UHV base pressure was employed to grow the film. The YSZ substrate was kept at a temperature close to 600 °C. After contact with the atmosphere, the UO2 thin film, denoted UO2+x, was analyzed by XPS using a Kratos Axis Ultra DLD spectrometer. The quantitative elemental analysis of the surface of the studied sample was performed as described previously. The error in the determination of the BE and the peak width did not exceed ±0.05 eV, and the error of the relative peak intensity was ±5%. The background related to inelastically scattered electrons was subtracted by the Shirley method. 40Ar+ etching of a 2 × 2 mm2 sample area was conducted at an accelerating voltage of 2 kV and a current density of 25 μA/cm2, at 2 × 10−9 mbar and room temperature, for 20, 60, 120 and 180 s. The sputtering was performed consecutively at the same spot, after acquiring each sequential group of spectra, according to the scheme 20 + 40 + 60 + 60 s. The etching rate under these conditions for SiO2 was 7.1 nm/min. The ion flux was kept at ∼1.5 × 10¹⁴ ions/(cm2 s), corresponding to the applied current density. To study the surface of the film, etching and annealing of the sample were performed. Sample AP7 was annealed in the spectrometer preparation chamber at 600 °C for 1 h in order to outgas the sample and the sample holder before the experiment. The U 4f spectrum of UO2+x did not change significantly after this annealing. Afterwards the sample was etched with argon ions and annealed in the analytical chamber of the spectrometer. During this process the XPS spectra were collected from the same spot on the sample surface. Two methods are known for determining the oxygen coefficient kO = 2 + x and the ionic composition of complex UO2+x oxides on the basis of XPS data. The first method uses the line intensities and binding energies of the inner U 4f- and O 1s-electrons together with the photoemission cross-sections of these electrons. The magnitudes of the photoemission cross-sections are determined by calculations. The second method is based on the relative intensity of the U 5f-electron line, which is equal to the ratio of the U 5f- and U 4f7/2-electron intensities. This technique is described in detail in Ref. In the current work we provide only the main equation for determination of kO. The U 5f intensity I1, determined as the U 5f/U 4f7/2 intensity ratio without the shake-up satellites, can be presented as: I1 = 5.366 kO − 7.173. As mentioned before, determination of the uranium oxidation state and the UO2+x ionic composition employs both the traditional XPS parameters and the structure parameters of the core- and valence-electron spectra. These XPS parameters allow information to be obtained on the physical and chemical properties of uranium in the studied sample. The XPS survey-scan of the surface of the UO2+x single crystal thin film on YSZ is shown in Fig. 2. It does not differ much from the corresponding spectrum of the surface of a UO2+x single crystal thin film on LSAT in Ref., despite the different crystallographic orientation of the surface and the different substrate. Fig. 3 shows the spectrum of valence electrons of the AP7b film after the subsequent 1 h annealing at 380 °C. The observed structure agrees satisfactorily with the calculation results for UO2. The valence-electron region of the spectrum consists of two parts. The first part, from 0 to ∼15 eV, represents the structure related to the electrons of the outer valence molecular orbitals (OVMO). The second part, from ∼15 to ∼35 eV, comprises the structure of the inner valence molecular orbitals (IVMO). The line at 1.3 eV is related to quasi-atomic U 5f-electrons and its intensity is proportional to the number of these weakly bound electrons. It has a maximum intensity in the U4+O2 spectrum, and this line is absent in the spectrum of U6+O3. Usually, the intensity I1 is expressed as the ratio of the U 5f and U 4f7/2 electron line intensities (I1 = U 5f/U 4f7/2). In this case, the oxygen coefficient on the surface of a complex oxide UO2+x can be determined using the equation above and the other equations of the technique described in Ref. For example, for sample AP7: I1 = 0.019, kO = 2.20, and the fractions of the uranium ionic forms equal 18%, 61% and 21%. The spectrum structure of the OVMO electrons is observed as a line with a maximum at 4.5 eV and a width Γ = 4.2 eV. In the region of binding energies of the IVMO electrons there are three widened lines with maxima at 17.3, 22.4 and 28.1 eV. In Fig. 3 these lines are formally assigned to U 6p3/2-, O 2s- and U 6p1/2-electrons. If these lines arose from electrons of purely atomic levels, the ratio of their intensities would be approximately equal to the ratio of the corresponding photoionization cross-sections (2.89). However, the ratio of the intensities of these lines is equal to 5.67. This difference arises from the fact that these lines represent the spectrum of the IVMO electrons. Ar+ etching is known to remove the surface atoms and to break the chemical bonds and the crystal structure. In this case the XPS structure of the valence and the core electrons changes significantly. For example, the XPS of the solid VIA elements exhibits a single ns-peak after an Ar+ treatment instead of two IVMO peaks, due to the ns-ns overlapping. A short-time Ar+ etching of AP7 leads to an increase of the U 5f intensity, as well as to widening of the U 5f peak and its shift to the lower-BE side, due to the increase of the U4+ concentration and the decrease of the U5+ and U6+ concentrations on the surface. After the 20 and 60 s argon treatment the OVMO XPS of AP7 exhibits a structure with three peaks at 4.5, 6.6 and 9.1 eV, and the IVMO bands narrow. During this treatment the oxygen coefficient kO decreases, as compared to the initial sample AP7. At the beginning of the etching the U 5f intensity I1 and the FWHM Γ grow and then remain constant within the measurement error. The subsequent annealing at 100 °C to 250 °C leads to narrowing of the U 5f peak but does not affect its intensity significantly. The annealing at 380 °C leads to narrowing of the U 5f peak and widening of the OVMO bands to the values corresponding to the initial sample AP7. On the basis of this, one can suggest that the 180 s Ar+ treatment of the UO2+x film on the YSZ substrate in the spectrometer chamber leads only to the formation of a self-organized, stable UO2+x phase with the oxygen coefficient kO ≈ 2.11. This agrees with the formula UnO2n+1 for stable oxides at n = 8. The annealing of the samples leads to narrowing of the U 5f peak but does not affect its intensity significantly. This suggestion agrees with the XRD data. Formation of a stable self-organized phase containing the An4+ ions during the Ar+ etching was
observed for NpO2 and PuO2 films.Ar+ etching of AmO2 film leads mostly to formation of the Am3+ ions on the surface .The C 1s XPS spectrum of UO2+x surface consists of the basic peak at Eb = 285.0 eV used for BE calibration .Peaks at 286.2 and 287.1 eV are due to hydrocarbon bound with oxygen, and a peak at 288.8 eV is due to the carbonate group CO32− or carboxyl group COO− on the surface.After the 20 s Ar+ etching the C 1s intensity drops significantly.The O 1s spectrum of the initial sample AP7 consists of a relatively sharp basic peak at Eb = 530.1 eV, Γ = 1.1 eV and two low-intensity peaks at 531.4 eV and 532.2 eV with relative intensities of 79%, 15% and 6%, respectively.The peak at 531.4 eV can be attributed to the hydroxyl group, and the second one at 532.2 eV – to the CO32− group.The quantitative analysis based on the core U 4f7/2 and the basic O 1s peak intensities yielded the oxygen coefficient of 3.62, which exceeds the expected value of 2.The BE of the O 1s basic peak after the etching did not change significantly.In the beginning of the etching the O 1s FWHM slightly increased and afterward did not change significantly, while the intensity of the basic O 1s peak decreased.The hydroxyl-related O 1s peak intensity at 531.4 eV, in the beginning of the etching dropped noticeably and afterward changed insignificantly.The amount of oxygen of the basic peak at ∼530.0 eV BE on the surface can exceed significantly the coefficient of 2 as in UO2.The annealing after the etching leads to the increasing of the hydroxyl-related peak intensity, while the BE of the O 1s-electron does not change.The U 4f spectra are given in Fig. 7.After the 20 s Ar+ etching of sample AP7 the U 4f spectrum changed significantly, and the further etching up to 180 s did not cause any significant changes.Firstly, after the 120 s etching a small shoulder associated with the ions of oxidation state lower than U4+ at the lower BE side from the basic U 4f7/2 peak appeared at 377.0 eV.It slightly grows after the 180 s etching.Previously this peak was attributed to metallic uranium .Secondly, the XPS peaks narrowed significantly and simpler structure of the satellites was observed.This indicates a chemical bond and a long-range ordering in the lattice that self-organized after the etching in the spectrometer chamber.Since the U 4f spectra in this case do not show any explicit peaks attributed to different oxidation states of uranium ions, the decomposition of the spectrum into components is difficult.Therefore, a formal decomposition was done for a qualitative consideration of the ionic composition, and the ionic composition was evaluated only on the basis of the U 5f relative intensity.The technique described in Ref. 
yields that the considered phase, besides the U4+ ions, contains U5+ and U6+ ions. As mentioned above, evaluation of the oxygen coefficient in UO2+x on the basis of the core U 4f7/2 and O 1s intensities is complicated because of adsorbed oxygen-containing molecules on the surface. A significant difference was observed between the oxygen coefficient obtained on the basis of the O 1s/U 4f7/2 intensities and that obtained on the basis of the U 5f relative intensity. In order to prepare the sample surface, as mentioned, Ar+ etching and subsequent annealing of the AP7 sample were used. A 1 h annealing of the AP7 sample in the spectrometer preparation chamber at 600 °C did not bring any significant changes in the XPS spectra compared to the initial spectrum. A 60 s Ar+ etching led to the formation of oxide UO2.14 on the AP7 surface, which agrees with the composition UO2.19. The further 1 h annealing in the spectrometer analytical chamber at 380 °C leads to the formation of oxide UO2.53 on the AP7b surface, which leads to narrowing of the U 5f peak, widening of the U 4f peak and increasing oxygen and carbon concentrations on the surface, but does not cause a change in the oxygen coefficient kO of AP7b, as compared to AP7. The U5+-attributed shoulder 1.2 eV away from the basic peak and the shake-up satellite appear. The satellite parameters are sensitive to the ionic composition changes on the surface. After another 30 s etching, oxide UO2.30 formed on the AP7 surface. The subsequent 30 min annealing at 100 °C, 150 °C, 200 °C and 250 °C under 3.8 × 10−8 mbar in the spectrometer chamber leads to similar changes in the XPS spectrum of AP7c. The U 5f intensity I1 and the oxygen coefficient change insignificantly. An increase of the oxygen content on the surface during the annealing is difficult to explain by oxygen diffusion from the bulk of UO2+x. It is known that oxygen diffusion in UO2 is slow. That is why the oxygen from the 'bulk' would not have time to reach the surface during the annealing. Moreover, the film, with a thickness of 150 nm, does not contain that much excess oxygen. Apparently, during the annealing stage oxygen and oxygen-containing compounds desorb from the walls of the spectrometer and adsorb from the surrounding environment onto the surface of the sample. Supposedly, this results in the formation of complex UOyn− clusters, such as U2 and U4, on the surface, whose O 1s BE is ∼531.5 eV. On annealing of AP7, the narrowing of the U 5f peak and the widening of the U 4f peak of AP7b can be explained by an increase in the U5+ concentration and a decrease in the concentration of U4+, which contains two uncoupled U 5f electrons (samples AP7 and AP7b, Table 1). However, from the data for samples AP7 and AP7b it follows that kO practically does not change on annealing. The U 5f and U 4f FWHMs turn out to be more sensitive to the ionic compositions of the studied samples. The 30 min annealing of sample AP7 at 100 °C leads to growth of the oxygen concentration on the surface and does not significantly affect the oxygen coefficient kO compared to AP7. The U 5f peak FWHM does not change, and the peak shifts towards the higher BE range, while Γ does not change much. The further 30 × 3 = 90 min annealing of sample AP7c at 150, 200 and 250 °C leads to changes in the oxygen concentration on the surface of sample AP7d; the oxygen coefficient is comparable to the kO of sample AP7c. The U 5f peak narrowed, while Γ did not change. The considered data show that annealing at 100–250 °C after Ar+ etching does not lead to any significant changes of the oxygen coefficient and ionic composition of the UO2+x film. The etching leads to the formation of a self-organized, stable UO2+x phase with an oxygen coefficient kO ≈ 2.12, resulting from surface relaxation in the vacuum. In practice, stoichiometric UO2 was not observed on the surface of the single crystal UO2 film; a stable oxide UO2.12 forms on the surface. These data agree with the results of the study of the UO2 single crystal surface. In accordance with the goal of this investigation, the main XPS parameters described in the Introduction were studied. Thus, the measured mean U 4f7/2 BE of 380.0 eV and the mean O 1s BE of 530.1 eV are in good agreement with the data in Ref. The OVMO and IVMO structures, typical for uranium dioxide, agree well with the relativistic calculation results and the data of other studies. The mean shake-up satellite position of 6.9 eV and intensity of 28% agree with the data of other work. Since the outer electron configuration of the uranium U4+ ion is 6s26p65f2, the presence of two uncoupled U 5f electrons must lead to multiplet splitting and widening of the core peaks such as the U 4f one. For the U6+ ions with the configuration U6+6s26p65f0, not containing U 5f electrons, such a splitting is absent in the U 4f XPS. In this case the U 4f7/2 FWHMs for BaU6+O4, PbU6+O4 and Bi2U6+O6 are 1.2, 1.1 and 1.2 eV, respectively. The U 4f7/2 FWHM for the etched and annealed samples is higher. One of the reasons for the U 5f peak widening after the etching and its narrowing after the annealing of the AP7 sample can be the disappearance of the U5+ ions, with the configuration U5+6s26p65f1, after the etching and their appearance after the annealing. The XPS study of the UO2+x single crystal film surface on a YSZ substrate was carried out. The film was studied before and after Ar+ etching and annealing for various etching and annealing times. XPS determination of the oxygen coefficient kO = 2 + x of the oxide UO2+x formed on the UO2 film surface was performed on the basis of the U 4f and O 1s core-electron peak intensities, as well as on the basis of the dependence of the U 5f relative intensity on kO = 2 + x for synthetic and natural uranium oxides. The short-time Ar+ etching of the AP7 surface removes the excess oxygen and forms the stable self-organized phase as long as the oxygen coefficient remains kO ≥ 2.0. The evaluation yields kO ≈ 2.12 for this phase. On the basis of the spectral parameters one can conclude that this phase contains mostly U4+ and U5+ ions, with some U6+ ions also present.
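For convenience, the linear calibration quoted in the experimental section, relating the U 5f relative intensity to the oxygen coefficient, can be inverted to read kO directly from a measured I1. This is only an algebraic rearrangement of the relation given above and introduces no new data:

\[ I_1 = 5.366\,k_O - 7.173 \quad\Longrightarrow\quad k_O = \frac{I_1 + 7.173}{5.366} \]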
A (111) air-exposed surface of a UO2 thin film (150 nm) on (111) YSZ (yttria-stabilized zirconia) was studied by XPS before and after Ar+ etching and subsequent in situ annealing in the spectrometer analytical chamber. The U 5f, U 4f and O 1s electron peak intensities were employed to determine the oxygen coefficient kO = 2 + x of the UO2+x oxide on the surface. The initial surface (several nm) was found to have kO = 2.20. A 20 s Ar+ etching led to the formation of oxide UO2.12, whose composition does not depend significantly on the etching time (up to 180 s). Ar+ etching and subsequent annealing at temperatures of 100–380 °C in vacuum was established to result in the formation of a stable, well-organized UO2.12 structure, reflected in the U 4f XPS spectra as high-intensity (∼28% of the basic peak) shake-up satellites 6.9 eV away from the basic peaks, and virtually did not change the oxygen coefficient of the sample surface. This agrees with the suggestion that a stable (self-assembling) phase with oxygen coefficient kO ≈ 2.12 forms on the UO2 surface.
146
A method for examining the geospatial distribution of CO2 storage resources applied to the Pre-Punta Gorda Composite and Dollar Bay reservoirs of the South Florida Basin, U.S.A.
Florida is one of the most heavily populated states within the United States.Over a dozen coal-fired power plants are operating in the region in order to meet the energy demands of the State’s residents, visitors, businesses, and industries.According to a recent study by the Environmental Protection Agency, carbon dioxide emissions from fossil fuel combustion by end-use sectors in Florida has increased by ∼20% from 186.58 million metric tons CO2 in 1990 to 225.80 MtCO2 in 2013, making it one of the top ten state CO2-emitters in the U.S.Storage of CO2 in deep geologic formations of Florida has the potential to significantly reduce CO2 emissions in the State, and possibly in the surrounding regions which lack substantial CO2 sequestration reservoirs; however, to date the use of geologic storage has not been implemented on a commercial level in Florida.This paper presents a detailed investigation conducted on the reservoir quality and potential CO2 storage capacity of the Upper Jurassic-,Lower Cretaceous pre-Punta Gorda Anhydrite rocks and Lower Cretaceous Dollar Bay Formation within the South Florida Basin, and evaluates how estimated storage capacities, calculated using a deterministic geospatial approach, are potentially distributed throughout the subsurface of the basin.Several studies have characterized and/or identified potential CO2 storage reservoirs within the Jurassic through lower Paleocene carbonate rocks of the South Florida Basin.Roberts-Ashby and Stewart and Roberts-Ashby et al. provide detailed investigations of the Sunniland Formation and the Lawson-Cedar Keys formations potential CO2 storage reservoirs of the South Florida Basin, respectively; these papers evaluate the geospatial distribution and estimation of CO2 storage space within these reservoirs throughout the basin using the Department of Energy methodology for calculating CO2 storage capacity.The entire subsurface strata within the South Florida Basin has also been evaluated and assessed by the U.S. Geological Survey as part of a national assessment of CO2 storage resources.This paper builds upon previous studies by providing an in-depth investigation of the porous intervals and CO2 storage resource distributions within the pre-Punta Gorda Anhydrite rocks and Dollar Bay Formation using the geospatial approach for estimating storage resources presented in Roberts-Ashby and Stewart and Roberts-Ashby et al.; however, this study utilizes the USGS assessment equation for calculating CO2 storage capacity as opposed to the DOE equation used in Roberts-Ashby and Stewart and Roberts-Ashby et al.Furthermore, this study provides a modified application of the USGS storage resource assessment equation to evaluate and demonstrate the potential spatial distribution of the volume of CO2 that can be stored in each of these two reservoirs throughout their extent in the study area, as opposed to considering the estimated CO2 storage capacity as a cumulative volume for each with no visual of resource distribution.Although the geospatial examples described here use the USGS assessment equation and input parameters, the results were obtained using a deterministic model and not a probabilistic model like that used in the 2013 USGS CO2 storage resource assessment.The Pre-Punta Gorda Composite reservoir and the Dollar Bay reservoir analyzed in this study were previously defined and characterized as the Pre-Punta Gorda Storage Assessment Unit and Dollar Bay Formation SAU of Roberts-Ashby et al. 
and USGS GCDSRAT.Following the USGS methodology for conducting an assessment of CO2 storage resources, the potential storage reservoir boundaries used in this study were defined by: 1) the 914-m reservoir-top depth minimum and 2) the presence of a thick, regional seal.The regional seal for the Pre-Punta Gorda Composite reservoir is the overlying, massive anhydrite unit known as the Punta Gorda Anhydrite; the regional seal for the Dollar Bay reservoir are the anhydrites within the overlying Panther Camp Formation.The area assessed for CO2 storage within these reservoirs incorporates part of the South Florida Basin as well as the surrounding region, and encompasses an area of approximately 8,380,635 ha and 6,509,369 ha.These reservoirs extend across the south-central and southern regions of the Florida peninsula and likely continue offshore for some distance into the South Florida Basin and toward the basin center, which is located to the southwest of peninsular Florida around Florida Bay; however, since this investigation was meant to further evaluate the two USGS SAUs identified in Roberts-Ashby et al. and USGS GCDSRAT, it was only conducted onshore and outward to the State-water boundary to match the evaluated area of the USGS assessment.The South Florida Basin is located on the Florida Platform, a massive carbonate platform that extends from the shelf break off the east coast of Florida westward to the Florida Escarpment.Basement rocks of the Florida Platform are composed of Precambrian and Cambrian igneous and metamorphic rocks, Ordovician to Devonian sedimentary rocks, and Triassic and Jurassic volcanic rocks whose depths increase from central-northern peninsular Florida to southern Florida.The Florida Platform has been subjected to multiple fluctuations in relative sea level throughout its history, with the highest levels occurring during the Late Cretaceous when a large majority of the platform was submerged.During the mid-Cretaceous, relative sea levels and paleo-environmental stresses, considered by many to be the result of worldwide oceanic anoxic events, affected sedimentation rates on the platform.Because sediment production and accumulation could not keep pace with tectonic subsidence that was occurring at that time, the western portion of the Florida Platform ultimately became submerged.Regressive relative sea levels dominated the Tertiary, and by the Oligocene, relative sea levels were considerably less than modern-day sea levels, exposing much of the Florida Platform.Furthermore, during the Tertiary, the platform had an elevated eastern portion and a largely submerged western portion that was no longer suitable for major carbonate accumulation; therefore, the Florida Platform was transformed from a rim-type platform into a gently west-sloping ramp.The South Florida Basin has a maximum sediment thickness of 4570–5180 m and is positioned to the east of the Lower Cretaceous Reef trend, which generally runs parallel to the Florida Escarpment and continues into the Gulf of Mexico.The onshore part of the South Florida Basin and the Florida State-waters represent about half of the entire basin.During the middle Jurassic through to about the middle Oligocene, shallow-water marine deposition was dominant in the South Florida Basin and resulted in sequences comprised of carbonate and evaporite rocks that were deposited in water typically less than about 90 m deep.Organic-rich carbonate mud also accumulated during the intermittent occurrence of salinity-stratified interior lagoons.The 
Pre-Punta Gorda Composite reservoir is a deep saline reservoir that is composed of the Upper Jurassic,Wood River Formation and the Lower Cretaceous Bone Island, Pumpkin Bay, and Lehigh Acres formations.The Wood River Formation primarily consists of thick limestone, dolostone, and anhydrite sequences, with a basal section of approximately 30 m of shale and arkosic sandstone; average thickness of the formation is 914 m.The Bone Island Formation is predominantly a limestone with some dolostone, and interbedded anhydrite layers are common throughout its vertical extent; average thickness of the formation is 396 m.The Pumpkin Bay Formation is predominantly a micritic limestone with intermittent dolostone and anhydrite intervals; however, at the northernmost extent of the formation, the lithology is mostly a porous dolostone.On average, the Pumpkin Bay Formation is 245 m thick, as determined in this study.The Lehigh Acres Formation is divided into three members, which from oldest to youngest are the West Felda Shale Member, Twelve Mile Member, and Able Member.The West Felda Shale Member is a calcareous shale and argillaceous limestone that varies in thickness across the basin, but typically is less than 30 m thick.The Twelve Mile Member is predominantly a tight limestone and dolostone that contains the informal “Brown dolomite zone”, and is the section of the member which contains a majority of the porosity; average thickness of the member is 98 m.The Able Member is composed primarily of anhydritic and argillaceous limestone, with an average reported thickness of 88 m.The overall average thickness of the Pre-Punta Gorda Composite section throughout the South Florida Basin is around 1160 m; average depth to the top of the composite section is about 3615 m.Evidence of hydrocarbons has been observed in each of the formations that comprise the Pre-Punta Gorda Composite reservoir; however, no commercial petroleum production has occurred from these units to date.The regional seal for the Pre-Punta Gorda Composite reservoir is the overlying Lower Cretaceous Punta Gorda Anhydrite, a massive, wedge-like anhydrite unit that is locally interbedded with limestone, dolomite, shale, and more rarely halite; thickness ranges 61–640 m.In addition, intraformational and interformational anhydrite layers of significant thickness also exist throughout the Pre-Punta Gorda Composite reservoir, and would likely limit or impede the upward migration of injected CO2 prior to encountering the Punta Gorda Anhydrite regional seal.The Dollar Bay reservoir is located within the rocks of the Lower Cretaceous Dollar Bay Formation.The formation is composed of sequences of anhydrite and both porous and tight limestone and dolostone, all of varying thickness, which were deposited during transgressive-regressive relative sea-level cycles.Thin beds of calcareous shale, salt, and lignite are also present, and in some locations within the basin, limestone is the predominant lithology of the formation.Average thickness for the Dollar Bay Formation throughout the basin is around 185 m, with an average depth to the top of the formation of 2750 m.Hydrocarbon shows are common in the Dollar Bay Formation, and are typically confined to micritic limestones, finely crystalline dolostone, and biohermal deposits.Nodular and intraformational anhydrite occur throughout the Dollar Bay Formation, which form local seals and means of entrapment; however, the regional seal for the Dollar Bay reservoir are the thick anhydrite and gypsum beds of the 
overlying Panther Camp Formation.This study expands upon previous CO2 storage resource assessments of the South Florida Basin by providing a more detailed evaluation of certain physical parameters of the Pre-Punta Gorda Composite and Dollar Bay reservoirs, and ultimately by evaluating how the total estimated CO2 storage resources and Brennan et al.) for each of these reservoirs is distributed across the basin.For means of comparison, the study area boundaries for the Pre-Punta Gorda Composite and Dollar Bay reservoirs in this investigation are correlative with the Pre-Punta Gorda SAU and Dollar Bay Formation SAU of the USGS assessment.As further discussed in the USGS assessment, the areas of the Pre-Punta Gorda SAU and Dollar Bay Formation SAU were estimated within ± 10%.Geologic data from over 200 petroleum exploration or production wells were examined throughout the South Florida Basin for this study.These wells provided information for Wood River Formation, Bone Island Formation, Pumpkin Bay Formation, Lehigh Acres Formation, and Dollar Bay Formation top and bottom identification, and many provided information to derive formation porosity and net-porous-interval thickness.For those wells used to derive porosity and net-porous-interval thickness for a formation of interest, there were three major criteria for the well-selection process: 1) the well had to penetrate the entire formation, 2) the well had to have two available geophysical logs from which porosity can be derived that cover the entire formation, and 3) for wells that had intentional or unintentional curves in the wellbore, the porosity log had to be reported in true vertical depth.It should be noted that in some wells the entire Wood River Formation was not penetrated by the well, which affected the ability to calculate total reservoir thickness for the Pre-Punta Gorda Composite at those locations.However, in many cases there was still substantial vertical coverage of the formation within those wells that enabled average porosity and net-porous-interval thickness estimation for the Wood River Formation that could subsequently be incorporated into the average porosity calculation for the Pre-Punta Gorda Composite reservoir at those well points.Deep-well locations throughout the South Florida Basin are typically most densely clustered within the oil-producing counties of southern and southwestern Florida, where the commercially producing Lower Cretaceous Sunniland Formation has an average top-depth of around 3355 m.In an effort to capture good spatial distribution for modeling geophysical parameters across the South Florida Basin, at least three wells were selected from each county that had significant well density; however, where well density is sparser outside the petroleum producing regions of the basin, all wells were selected for those counties.Additionally, to improve modeling interpolations for formations as they trend offshore, geologic information from wells located seaward of the State-water line within the basin were also selected and included in the study.All geophysical well logs used are public/non-proprietary and were provided by the Florida Geological Survey and Bureau of Ocean Energy Management.Adequate rock samples needed to conduct core porosity determination for the Pre-Punta Gorda Composite and Dollar Bay reservoirs were not publically available for this study; therefore, data from paper copies or scanned images of geophysical logs were used to derive porosity values for study wells.Porosity interpretation 
from geophysical logs was conducted using standard industry graphs and cross-plots created for geophysical-log interpretation.The types of geophysical logs used in this study for the purpose of deriving porosity values included bulk density logs, borehole-compensated sonic logs, compensated neutron logs, and dual-porosity, compensated neutron-compensated formation density logs.First, average porosity was calculated for each formation within the Pre-Punta Gorda Composite reservoir, as well as for the Dollar Bay reservoir, at each well-point in the respective study area.When calculating the average porosity for each formation, only values ≥ 8% were considered, as this investigation is meant to examine the porous rocks within the Punta Gorda Composite and Dollar Bay reservoirs that would potentially be suitable for CO2 storage.This approach was similar to the approach taken in Roberts-Ashby et al. and USGS GCDSRAT when estimating most-likely porosity values, all of which followed methodology presented in Blondes et al. and Brennan et al.; however, more abundant and detailed porosity data were available in this current investigation.Average porosity at each well point was then interpolated across the study area using Radial Basis Function and Inverse Distance Weighted spline interpolation methods in ArcGIS™ in order to determine an estimated, spatially averaged value for each formation throughout the South Florida Basin.The net-porous-interval thickness is defined in Brennan et al. as, “the stratigraphic thickness of the storage formation with a porosity of 8 percent or higher.,Net-porous thicknesses were calculated for each formation within the Pre-Punta Gorda Composite reservoir, as well as for the Dollar Bay reservoir, at each well-point in the respective study area.Like the average porosity determination, net-porous-interval thickness values at each well point were then interpolated across the study area using RBF and IDW methods in ArcGIS™ in order to determine an estimated, spatially averaged value that represents each formation within these storage reservoirs throughout the South Florida Basin.Total CO2 storage resources were first determined for the Pre-Punta Gorda Composite reservoir and Dollar Bay reservoir in Roberts-Ashby et al. and USGS GCDSRAT, and correspond with the Pre-Punta Gorda SAU and Dollar Bay Formation SAU, respectively.Storage resources were calculated in the USGS assessment using the USGS probabilistic equations and methodology provided in Blondes et al. and Brennan et al.For this investigation, total storage resources and storage resource distribution estimates were calculated using a Geographic Information Systems-based interpolation method that incorporated the USGS total storage resource equation.The total storage resource calculations are presented in Table 1 and an explanation of the coefficients, variables, and constants used in the calculations are provided in Table 2.ArcGIS™ was used to interpolate and model the estimated CO2 storage resources throughout each reservoir study area.First, a spreadsheet model was created using the USGS total storage resource equation and input values from Roberts-Ashby et al. 
and USGS GCDSRAT, with the exception of interpolated average porosity and interpolated net-porous-interval thickness determined in this study.Next, the USGS resource assessment equation and input parameters were applied in GIS to examine the distribution of the variables in the storage resource equation across each study area.Specifically, best-fit geostatistical methods were used to determine how the storage resource equation could most accurately be applied across each study area.Then, a geospatially derived storage resource interpolation was developed by manipulating data from the spreadsheet model and interpolated surfaces using the ArcGIS™ Spatial Analyst toolbox.The purpose of developing the interpolations was twofold: 1) to develop estimates of the total storage resource for each reservoir in their respective study area; and 2) to develop storage distribution visualizations to illustrate the variability in storage resources across each study area.A similar deterministic geospatial approach for estimating CO2 storage resources was presented in Roberts-Ashby and Stewart and Roberts-Ashby et al.; however, this study utilizes the USGS assessment equation for calculating CO2 storage capacity as opposed to the DOE equation used in Roberts-Ashby and Stewart and Roberts-Ashby et al.It should be noted that there is no statistically significant difference between these two methodologies.The RBF and IDW spline interpolation methods were used to construct the storage resource interpolations conducted for the Pre-Punta Gorda Composite reservoir and the Dollar Bay reservoir, respectively.The interpolated surfaces were only developed within the areal extent of actual data points; therefore, simulations created interpolated raster-surfaces that were bound by the outermost data points in the study area.This resulted in portions of each reservoir boundary that contained no predicted raster surface, and were subsequently identified in each figure as having “No Data.,Specifically, data-availability constraints resulted in the interpolation 79% of the Pre-Punta Gorda SAU for the Pre-Punta Composite reservoir and 62% of the Dollar Bay Formation SAU for the Dollar Bay reservoir.Since information on the subsurface strata for this study was limited to areas of well exploration within the South Florida Basin, there was a paucity of data in certain regions of the study area.To maintain the best accuracy of data, interpolations and predictions of geologic parameters were not made beyond the boundaries of the well points.Within the study area region of the South Florida Basin, thickness-weighted average porosity within the porous intervals of the Pre-Punta Gorda Composite reservoir, as determined in this study, ranges from 7 to 23%, with an average value of 14%.Fig. 7 shows the distribution of thickness-weighted average porosity for this composite reservoir throughout the study area.On a formation level, average porosity of porous intervals for the Wood River, Bone Island, Pumpkin Bay, and Lehigh Acres formations is 15%, 14%, 14%, and 12%, respectively.Average porosity within the porous intervals of the Dollar Bay reservoir is higher than the Pre-Punta GORDA Composite reservoir, and is determined here to be 18%, with average porosities ranging from 7 to 29%.Fig. 
8 shows the distribution of average porosity for the Dollar Bay reservoir throughout the study area.The average net-porous-interval thickness for the Pre-Punta Gorda Composite reservoir is determined in this study to be 382 m, with a range of 90 to 1110 m.On a formation level, average net-porous-interval thickness for the Wood River, Bone Island, Pumpkin Bay, and Lehigh Acres formations is 20 m, 140 m, 120 m, and 220 m, respectively.Fig. 9 displays the distribution of net-porous-interval thickness for this composite reservoir throughout the study area.Net-porous-interval thickness for the Dollar Bay reservoir, as determined in this study, has a range of 45–250 m and an average thickness of 136 m. Fig. 10 displays the distribution of net-porous-interval thickness for the Dollar Bay reservoir throughout the study area.The total estimated CO2 storage resource calculated for 79% of the Pre-Punta Gorda Composite reservoir study area is 105,570 MtCO2.The total estimated CO2 storage resource calculated for 62% of the Dollar Bay reservoir study area is 24,760 MtCO2.With the GIS-based methods used in this study, it was not possible to calculate the estimated resource for the entire study area; the percentage of each study area assessed is a function of the geographic distribution of input well data points.Figs. 11 and 12, respectively, demonstrate how these storage resources are distributed across their respective study area within the South Florida Basin.Table 4 shows estimated total storage resource distribution summary statistics for each reservoir.Mention of limitations, uncertainties, and means for error in calculating CO2 storage capacity is essential for any investigation when presenting assessment results, as supported by Bradshaw et al.The following discusses factors and key points identified by the authors that likely affected the accuracy of storage capacity for the Pre-Punta Gorda Composite and Dollar Bay reservoirs as estimated in this study.No outcrops or areas of surficial exposure for rocks which make up the Pre-Punta Gorda Composite reservoir or Dollar Bay reservoir exist.Additionally, no continuous or sizable core samples of these reservoirs are publically available that would enable core-porosity determination or other more-detailed evaluations of these rocks.Core-chip and well-cutting samples are publically available for many Dollar Bay reservoir wells; however, very few rock samples are available for the entire Pre-Punta Gorda Composite rock section anywhere in the study area, and many of those that are available have been pulverized.As such, this study had to rely heavily on data interpreted from geophysical well-logs.Since there are few exploratory or discovery wells that penetrate the entire Pre-Punta Gorda Composite reservoir, characterizing the full extent of this rock section for CO2 storage via geophysical well-logs was difficult.Many exploratory wells in the South Florida Basin target the oil-bearing and commercially producing Lower Cretaceous Sunniland Formation which underlies the Dollar Bay Formation but overlies the Pre-Punta Gorda Composite reservoir; therefore, few difficulties were encountered when trying to characterize the vertical extent of the Dollar Bay Formation using geophysical logs, as opposed to the composite reservoir below.However, characterizing both the Pre-Punta Gorda Composite and Dollar Bay reservoirs spatially was limited because existing well coverage is dense in some regions of the South Florida Basin due to past or current oil production or 
exploration, and other regions have sparse or low-density spatial distribution of wells, such as the northern, eastern, and southern regions of both study areas.Well-coverage limitations affected the ability to interpolate reservoir structure tops, reservoir thickness, porosity, net-porous-interval thickness, and estimated CO2 storage resources throughout the entire extent of the study area for each of the reservoirs evaluated in this study.As discussed earlier, RBF and IDW spline interpolation methods were used to construct these various interpolations for both reservoirs, and interpolated surfaces were only developed within the areal extent of actual data points.This resulted in interpolated raster-surfaces that were bound by the outermost data points in the study area; therefore, portions of each reservoir boundary contained no predicted raster surface due to poor well coverage, and were subsequently identified in each figure as having “No Data.,The total estimated CO2 storage resource could only be calculated for 79% of the Pre-Punta Gorda Composite reservoir study area and 62% of the Dollar Bay reservoir study area due to the well-coverage limitations and data constraints.Further, mean prediction standard errors for the Pre-Punta Gorda Composite reservoir and Dollar Bay reservoir storage capacity models were 0.9% and 4.9%, respectively, with prediction standard errors in the Pre-Punta Gorda Composite reservoir ranging from less than 1%–9.8%, and less than 1%–9.4% in the Dollar Bay reservoir.Prediction standard errors in storage capacity values for these reservoirs were largely dictated by the manner in which the wells are distributed within each study area, as well as parameter/input value variance across the modeled area.Assessing the accuracy of utilizing GIS to evaluate the potential spatial distribution of reservoir parameters and storage capacity calculations when there are regions with little to no data points within a given study area can be difficult, and will likely vary on a case-by-case basis.However, this study as well as those such as Popova et al., Roberts-Ashby and Stewart, and Roberts-Ashby et al. are designed to present alternative means of calculating CO2 storage resources, specifically using geospatial methods, and demonstrate the unique contributions and considerations that the geospatial methods can provide.Like any methodology, however, there are limitations, uncertainties, and means for error.It is not uncommon to find a study area that on a regional scale does not have a complete and well-distributed dataset.In fact, as observed during the national assessment of CO2 storage resources conducted by the USGS, many basins in the U.S. 
can have a paucity of data and poor well coverage or data distribution for certain formations, yet there is still interest in CO2 storage resources for those regions and so they must still be assessed; this is likely no different on a global scale.The authors feel it is still relevant and significant to apply geospatial methods in calculating reservoir parameters and storage capacities wherever possible – what is important, is that the study is transparent and makes clear the errors and uncertainties associated with the work, especially in areas of sparse data.Future small-scale work would still be required when considering a specific site for CO2 storage, as studies such as these are meant to be regional evaluations.Although input values for average porosity and net-porous-interval thickness used in this study were derived using a detailed investigation of geophysical logs, it is important to note that other input values used in calculating total storage resources were largely derived from the USGS GCDSRAT.These USGS input values are single, study-area-wide constants; therefore, they are of a coarser resolution than the average porosity and net-porous-interval thickness estimates.It is possible that parameters such as density, buoyant pore volume, and buoyant and residual trapping efficiencies vary slightly within the Dollar Bay reservoir as opposed to remaining constant; however, variations in these parameters are more likely to occur within the Pre-Punta Gorda Composite reservoir due to variability in unit thickness, depths, and composition within this thick composite rock section.Refer to USGS GCDSRAT for a discussion of the uncertainties associated with each of these variables.As discussed in Roberts-Ashby et al. and USGS GCDSRAT, the areas of the Pre-Punta Gorda Composite and Dollar Bay reservoirs were estimated within ± 10%, as exact precision and accuracy when delineating the study area boundaries is difficult given the well distribution in the area and the current understanding of the subsurface stratigraphy, lithology, and chemistry of the groundwater systems.Furthermore, it is important to note that the study area boundaries are not exact lines, and more site-specific and small-scale investigations would provide better precision when estimating the area that is suitable for CO2 storage in these reservoirs.Additionally, reservoir-quality parameters for these resources is derived from geophysical well-logs and are subject to the scientist’s interpretation; differences and uncertainties can exist between individual scientists’ derivations.Net-porous-interval thickness for the Pre-Punta Gorda Composite reservoir ranges from 90 to 1110 m; thickness-weighted average porosity within these porous intervals ranges from 7 to 23%.Fig. 7 shows that average porosity within the reservoir increases in a northwesterly direction.Fig. 
9 indicates that net-porous-interval thickness is generally greatest in the central region of the study area, and increases to the northwest and south, while decreasing to southwest and northeast.From a regional perspective of the study area, porous intervals vary in thickness but are generally continuous laterally within the rocks of the Pre-Punta Gorda Composite reservoir.However, semi-confining anhydritic or dense, low-permeability, dolomitic or micritic layers ranging 0.3 to 30 + m thick occur between each formation, and create vertical heterogeneity within the composite reservoir, as indicated in geophysical well-logs and rock samples observed in this study.The relatively thick, low-permeability layers could potentially create one or more storage “sub-zones” for CO2 sequestration within the Pre-Punta Gorda Composite reservoir.Net-porous-interval thickness for the Dollar Bay reservoir ranges from 45 to 250 m; average porosity within these porous intervals ranges from 7 to 29%.Figs. 8 and 10 show that a majority of the high-porosity regions and areas of thickest net-porous intervals are both located within the onshore, central portion of the South Florida Basin in south-central peninsular Florida.Both average porosity and net-porous-interval thickness within the reservoir decrease to the northeast and increase to the southeast.Similar to the Pre-Punta Gorda Composite reservoir, regionally the porous intervals of the Dollar Bay reservoir vary in thickness but are generally continuous; however, vertical heterogeneity exists within the formation in the form of semi-confining anhydritic or dense, low-permeability, dolomitic or micritic layers that range in thickness from 0.3 to ∼15 m, as indicated in geophysical well-logs and rock samples observed in this study.The relatively thick, low-permeability layers could potentially create one or more storage “sub-zones” for CO2 sequestration within the Dollar Bay reservoir.As indicated in Table 3, values for average porosity and net-porous-interval thickness provided in this study differ from those reported in USGS reports Roberts-Ashby et al. 
and USGS GCDSRAT.This is not unexpected because this study resulted from an in-depth and detailed investigation using over 200 wells and 400 + geophysical well-logs within the study area, which was conducted over several years.Conversely, the USGS assessment was part of a major national assessment of all potential CO2 storage resources within the U.S., and was meant to be a broad evaluation of the CO2 storage reservoirs.As such, this study evaluates and presents these rocks in much finer detail, which helps narrow down and fine-tune geologic calculations being made to evaluate these storage resources.The total CO2 storage resource calculated in this study for the Pre-Punta Gorda Composite reservoir is 105,570 MtCO2; however, this value only accounts for 79% of the study area outlined in this investigation, which directly correlates with the study area outline for the Pre-Punta Gorda SAU of Roberts-Ashby et al.The distribution of CO2 storage resources in the Pre-Punta Gorda Composite reservoir indicates that storage resources increase to the northwest and decrease to the northeast and southwest within the study area.A similar trend is also seen with the distribution of average porosity and net-porous-interval thickness for the composite reservoir.ArcGIS™ was used in this study to employ and process the probabilistic USGS equation for estimating CO2 storage resources, and integrated each variable in the storage resource equation on a cell-by-cell basis, resulting in a deterministic, GIS-calculated CO2 storage resource for the reservoir.The spatial statistics for the storage-resource raster layer in ArcGIS™ indicates that the sum of the cells in the model is approximately 105,570 MtCO2, which is ∼96% of the CO2 storage resource calculated for the entire Pre-Punta Gorda SAU in Roberts-Ashby et al. and USGS GCDSRAT, even though the geospatially derived value calculated in this study only accounts for 79% of the study area.The difference in total CO2 storage resource values is likely attributed to the fact that the geospatial method is able to represent and incorporate spatial variability of high porosity and thick net-porous intervals within the reservoir study area, which thus increases the calculated volume of storage space that is potentially available for CO2 storage.Additionally, this investigation was able to evaluate reservoir quality parameters at a more detailed level across much of the study area, which in this case resulted in higher porosity and net-porous-interval thickness values than that estimated in Roberts-Ashby et al. and USGS GCDSRAT.The total CO2 storage resource calculated in this study for the Dollar Bay reservoir is 24,760 MtCO2, and accounts for only 62% of the areal extent of the study area outlined in this investigation, which directly correlates with the study area boundary for the Dollar Bay Formation SAU of Roberts-Ashby et al.Fig. 
12 demonstrates that CO2 storage resources within the Dollar Bay reservoir appear to be greatest in the central, western, and northern regions of the study area, and decrease to the northeast and south.Although average values of storage resource, porosity, and net-porous-interval thickness are generally located within the same region of the study area and all appear to decrease to the northeast, there is less of a correlation between these three parameters in other regions of the study area.Like the Pre-Punta Gorda Composite reservoir, ArcGIS™ was used to apply the probabilistic USGS equation for calculating CO2 storage resources in the Dollar Bay reservoir, resulting in a GIS-calculated CO2 storage resource for the reservoir.The spatial statistics for the storage-resource raster layer in ArcGIS™ indicates that the sum of the cells in the model is approximately 24,760 MtCO2, which is ∼3 times the total CO2 storage resource calculated for the entire Dollar Formation SAU in Roberts-Ashby et al. and USGS GCDSRAT, even though the geospatially derived value calculated in this study only accounts for 62% of the study area.Like the Pre-Punta Gorda Composite reservoir, the difference in total CO2 storage resource values is largely due to areas with high porosity and thick net-porous intervals being incorporated into the calculation, which thus increases the calculated volume of storage space that is potentially available for CO2 storage.However, as seen in Table 3, since reservoir parameters were evaluated on a more detailed level across much of the study area in this investigation, the average net-porous-interval thickness determined here is also ∼3 times the average net-porous-interval thickness estimated in Roberts-Ashby et al. and USGS GCDSRAT, and the average porosity was 4% higher.As such, these differences in reservoir parameter values probably had a significant impact on the 3-fold increase in estimated total CO2 storage resource.In addition to the geospatial modeling approach outlined here and in Roberts-Ashby and Stewart and Roberts-Ashby et al. for capturing heterogeneity and spatial variability in reservoir parameters when calculating geologic CO2 storage resources, Popova et al. 
presents a similar objective but uses a different type of geospatial model that utilizes spatial stochastic modeling and different CO2 storage resource assessment equations.Regardless, the same ultimate observation is made among these studies: accounting for spatial variability in reservoir parameters when estimating CO2 storage capacity for a potential geologic storage resource has the potential to improve storage estimates and can help in better-understanding the associated uncertainties.The USGS national assessment of geologic CO2 storage resources contains slightly more conservative estimates of CO2 storage space within sedimentary basins, such as the South Florida Basin.Because it was conducted using a more broad and regional approach with a less-refined conceptualization of the geology and reservoir-quality parameters for these resources, estimates of input-values for storage resource calculations were derived conservatively using an assessment panel to debate and form a consensus on input parameters in an attempt to avoid errors resulting in overinflated resource estimations.This investigation, however, was conducted over several years and was primarily focused on expanding and better-understanding the finer details of these storage resources, and did not require the same level of conservativeness as the USGS assessment.This fact is another contributor to subsequent differences in estimated total CO2 storage resources in the Pre-Punta Gorda Composite and Dollar Bay reservoirs between the two studies.Rocks that underlie the Punta Gorda Anhydrite and those that occur within the Dollar Bay Formation have been recognized as potential CO2 storage resources in the South Florida Basin.This study builds upon and expands previous CO2 storage resource assessments of the South Florida Basin by providing a more detailed evaluation of certain physical parameters of the Pre-Punta Gorda Composite and Dollar Bay reservoirs, and ultimately evaluates how the total estimated CO2 storage capacity for each of these reservoirs is potentially distributed across the basin when accounting for porosity and net-porous-interval thickness variability within the study area.Although the geospatial examples described here use the USGS assessment equation and input parameters, the results were obtained using a deterministic model and not a probabilistic model like that used in the 2013 USGS CO2 storage resource assessment.Furthermore, this study demonstrates a geospatial modification to the USGS methodology for assessing geologic CO2 storage resources, which may be applied to future USGS or other CO2 storage resource assessments.Both the Pre-Punta Gorda Composite and Dollar Bay reservoirs contain thick porous intervals with relatively high porosity.Within the study area region of the South Florida Basin, average porosity within the porous intervals of the Pre-Punta Gorda Composite reservoir ranges from 7 to 23%, with an average value of 14%.The net-porous-interval thickness ranges from 90 to 1110 m, with an average of 382 m. Average porosity within the porous intervals of the Dollar Bay reservoir is higher than the Pre-Punta Gorda Composite reservoir, ranging from 7 to 29% with an average of 18%; net-porous-interval thickness has a range of 45–250 m and an average thickness of 136 m.This study estimates average porosity and net-porous-interval thickness values that are higher than values calculated in Roberts-Ashby et al. 
and USGS GCDSRAT.This contributed to higher estimated CO2 storage resources for these reservoirs in this study, with the Pre-Punta Gorda Composite reservoir at 105,570 MtCO2 and the Dollar Bay reservoir at 24,760 MtCO2.This study also shows that incorporating the spatial variation and distribution of reservoir parameters, such as porosity and thickness of the net-porous intervals, across a study area while evaluating CO2 storage resources can provide different values of calculated storage resources when compared to applying a uniform, study-area-wide minimum, most-likely, and maximum value for these parameters, even though the same equation for calculating CO2 storage resource is being utilized.This is because the spatially derived method of calculating storage resources, at least for this study, accounts for the areas of higher porosity and thicker net-porous intervals, which thereby has the potential to provide a more refined estimation of the volume of pore space that is available for CO2 storage.Additionally, this investigation shows that incorporating a more detailed and robust study-area-wide dataset of reservoir parameters into the CO2 storage resource calculation can provide different insight into the volume of CO2 that can be stored in these deep rocks.The geospatially derived method for evaluating CO2 storage resources in a given study area also provides the ability to identify areas that potentially contain higher volumes of CO2 storage resources, as well as areas that might be less favorable.Although some degree of error is inherent when developing interpolated models, future studies should seek to maximize geologic data input and data distribution for these geospatial models in order to improve and refine the accuracy of predicted values.
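To make the per-well workflow described above concrete, the following Python sketch computes average porosity and net-porous-interval thickness for a single formation from a porosity log, retaining only samples at or above the 8% cutoff used in the study. The depth and porosity values are synthetic stand-ins for values derived from bulk-density, sonic or neutron logs.

import numpy as np

# Per-well calculation: average porosity and net-porous-interval thickness for one
# formation, keeping only log samples with porosity >= 8% (the cutoff used in this
# study). Depth and porosity values are synthetic, not taken from any study well.

POROSITY_CUTOFF = 0.08

depth_m = np.arange(3600.0, 3610.0, 0.5)                  # log sample depths, 0.5 m step
porosity = np.array([0.03, 0.05, 0.09, 0.12, 0.15, 0.14,
                     0.06, 0.04, 0.10, 0.11, 0.13, 0.07,
                     0.02, 0.09, 0.16, 0.18, 0.05, 0.03,
                     0.12, 0.10])                          # fraction, one value per sample

sample_step = np.diff(depth_m).mean()                     # vertical thickness per sample
porous = porosity >= POROSITY_CUTOFF

net_porous_thickness_m = porous.sum() * sample_step
avg_porosity = porosity[porous].mean()                    # average over porous samples only

print(f"net-porous-interval thickness: {net_porous_thickness_m:.1f} m")
print(f"average porosity of porous intervals: {avg_porosity:.1%}")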
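The well-point averages were then interpolated across each study area with RBF and IDW methods in ArcGIS; the sketch below shows a bare-bones inverse-distance-weighted interpolation onto a regular grid as a stand-in for those surfaces. Well coordinates, porosity values and the power parameter are illustrative assumptions, not data from the study.

import numpy as np

# Inverse-distance-weighted (IDW) interpolation of a per-well value (e.g. average
# porosity) onto a regular grid, standing in for the ArcGIS IDW/RBF surfaces.

def idw(xy_wells, values, xy_grid, power=2.0, eps=1e-12):
    """Interpolate well values onto grid points with inverse-distance weighting."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_wells[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)            # eps avoids division by zero at well locations
    return (w * values).sum(axis=1) / w.sum(axis=1)

xy_wells = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # km, synthetic
porosity_at_wells = np.array([0.12, 0.16, 0.14, 0.20])

gx, gy = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 10, 5))
xy_grid = np.column_stack([gx.ravel(), gy.ravel()])

porosity_grid = idw(xy_wells, porosity_at_wells, xy_grid).reshape(gx.shape)
print(np.round(porosity_grid, 3))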
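Finally, the cell-by-cell storage calculation can be sketched as follows. A simplified volumetric relation (cell area times net-porous thickness times porosity times an efficiency factor times CO2 density) is used here in place of the full USGS assessment equation, whose coefficients and trapping terms are given in the paper's Tables 1 and 2; the grids, density and efficiency values are illustrative only, and cells without data are excluded from the total, mirroring the "No Data" areas of the interpolated rasters.

import numpy as np

# Raster-style (cell-by-cell) storage estimate. The porosity and net-porous-interval
# thickness grids would come from the interpolated RBF/IDW surfaces; here they are
# small synthetic arrays with NaN marking "No Data" cells.

cell_area_m2 = 1_000 * 1_000          # 1 km x 1 km raster cells (illustrative)
rho_co2 = 700.0                        # kg/m3, representative supercritical CO2 density (assumption)
storage_efficiency = 0.02              # fraction of pore volume usable for storage (assumption)

porosity = np.array([[0.14, 0.18, np.nan],
                     [0.12, 0.16, 0.20]])        # fraction
net_porous_m = np.array([[300., 420., np.nan],
                         [250., 380., 500.]])    # net-porous-interval thickness, m

pore_volume_m3 = cell_area_m2 * net_porous_m * porosity
stored_mass_kg = pore_volume_m3 * storage_efficiency * rho_co2
stored_mt = stored_mass_kg / 1e9                 # 1 Mt = 1e9 kg

total_mt = np.nansum(stored_mt)                  # "No Data" cells drop out of the sum
print(f"Total storage resource over mapped cells: {total_mt:.1f} MtCO2")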
This paper demonstrates a geospatial modification of the USGS methodology for assessing geologic CO2 storage resources, applied here to the Pre-Punta Gorda Composite and Dollar Bay reservoirs of the South Florida Basin. The study provides a detailed evaluation of porous intervals within these reservoirs and utilizes GIS to evaluate the potential spatial distribution of reservoir parameters and of the volume of CO2 that can be stored. This study also shows that incorporating spatial variation of parameters using detailed and robust datasets may improve estimates of storage resources compared with applying uniform, study-area-wide values derived from small datasets, as many assessment methodologies do. Geospatially derived estimates of storage resources presented here (Pre-Punta Gorda Composite = 105,570 MtCO2; Dollar Bay = 24,760 MtCO2) were greater than previous assessments, which was largely attributed to the fact that detailed evaluation of these reservoirs resulted in higher estimates of porosity and net-porous thickness, and that areas of high porosity and thick net-porous intervals were incorporated into the model, likely increasing the calculated volume of storage space available for CO2 sequestration. The geospatial method for evaluating CO2 storage resources also provides the ability to identify areas that potentially contain higher volumes of storage resources, as well as areas that might be less favorable.
147
Chromatin structure profile data from DNS-seq: Differential nuclease sensitivity mapping of four reference tissues of B73 maize (Zea mays L.)
The sequence data are 50 bp paired-end reads from Illumina Hi-Seq.2500 and the files are named as follows.FRT1Ha_R1.fastq.gz refers to FSU-grown 1 mm Root Tips, Biological Replicate 1, Heavy digest, technical replicate a, read 1.For each tissue, there are four sets of libraries, heavy and light digests, and their replicates.These digest pairs are later used to produce the "difference" files which capture the differential nuclease sensitivity, DNS.For comparison, the heavy digests alone are typical of conventional nucleosome occupancy mapping data, whereas the light digests are needed for the difference calculation.In order to facilitate subsequent analysis, each data file described in the NCBI SRA BioProject Accession PRJNA445708 can be named using the above schema to capture uniquely identifying information about each sample in the file name.We have developed and refined Differential Nuclease Sensitivity as a procedure for mapping nucleosome occupancy and open chromatin in maize .Briefly, fixed nuclei are digested with a diffusible enzymatic probe, micrococcal nuclease.Two digest conditions, “light” and “heavy” are employed and the resulting genomic DNA fragments are quantified by Next Generation Sequencing to obtain the relative abundance of aligned fragments.The wet-bench protocol used to produce the DNS sequencing libraries is provided in Supplemental file 1.This protocol describes many tissue- and digestion condition- specific considerations for isolation of fixed nuclei, selection of appropriate MNase digestion conditions, and construction of Next-Generation Sequencing library pairs.The Differential Nuclease Sensitivity values are calculated as the difference between light and heavy relative read coverage.As shown in Fig. 1, plotting the DNS profiles along a genomic region reveals areas with both positive or negative values.DNS-seq profiles were calculated in this way for four reference tissues.The DNS-seq data processing pipeline provided in Supplemental file 2 describes the sequential computational steps required to produce UCSC genome-browser-ready data tracks from NGS paired-end Illumina reads.In order to delineate significant differences in global MNase-sensitivity, we segmented the profiles using the peak-calling program iSeg .Descriptive statistics of these DNS-seq peak segments are shown in Fig. 2.The computational pipeline used to produce the iSeg peak calls is described in Supplemental file 3.Maize B73 seeds were obtained from WF Thompson.Earshoot and endosperm tissues were collected from plants that were grown in the field at the Florida State University Mission Road Research Facility.Tissue harvest for field-grown tissues was done at 9–11 A.M. by flash-freezing in liquid nitrogen.Coleoptilar nodes were collected from greenhouse-grown seedlings.Root tips were collected from lab-grown seedlings as described below.Earshoots were harvested in the field at 9–11 A.M. and immediately frozen in liquid nitrogen.For each earshoot collected, stalks were cut at the base, the top or second from the top earshoot was removed, the husks were peeled off, the silks were gently but quickly rubbed off, the earshoot was cut at the base and measured.Earshoot samples were immediately frozen in liquid nitrogen, pooled by date of harvest, and transferred to -80°C freezers for storage.Endosperm from self-pollinated ears was harvested at 15 days after pollination by manual microdissection in the field at 9–11 A.M. 
and immediately frozen in liquid nitrogen.For each ear, husks were removed, kernels detached, and the endosperm was separated from the embryo, scutellum, and pericarp.For each ear, only 10–20 kernels were quickly removed in order to limit the time elapsed between ear removal and tissue freezing to less than 5 min.The process was repeated for multiple ears during the 9am-11am time window for each day of harvest.Endosperm samples were immediately frozen in liquid nitrogen, pooled by date of harvest, and transferred to -80 °C freezers for storage.Coleoptilar nodes were harvested from greenhouse-grown seedlings 4–5 days after planting.Seeds were planted 2/3 deep in 2 in.of soil) spread into planting flats on greenhouse tables with natural plus supplemental lighting.The soil was well watered on day 1 and kept moist until harvest.Coleoptilar nodes were harvested between 10 AM and 12 PM on the day that “spears” showed above ground.Seedlings were quickly removed from the soil and first cut with a razor blade just above the seed.A 5 mm segment centered on the CN was then excised and immediately transferred to liquid nitrogen.Seedling removal and CN collection was done sequentially in order to limit the handling time to less than 1 min per plant.The CN samples were pooled by date of harvest, and transferred to −80 °C freezers for storage.Root tips were harvested, fixed, and frozen from laboratory-grown seedlings using a modified protocol adapted from a technique developed to investigate plant DNA replication .Seeds were imbibed in running water overnight, and germinated in sterile Magenta boxes containing a damp paper towel, and seedlings were grown under constant light at 28 °C for 72 h.The root tips were collected by cutting off the terminal 1 mm of primary and seminal root tips, fixing in 1% formaldehyde for 20 min, rinsing 3X in Buffer A , and flash freezing in liquid nitrogen.This fixed and frozen tissue, unlike the other tissues, was broken open via gentle brief polytron of root tips from ~ 160 seedlings directly in MNase digestion buffer followed by microfiltration through 50 μm Partec filter.A general methodology for the isolation of plant nuclei, MNase digestion, and subsequent NGS library preparation are described in detail in supplemental file 1, “DNS Bench Protocol”.Methodology for the processing of raw NGS data to browser-ready DNS profile data tracks are described in detail in supplemental file 2, “DNS Pipeline”.Methodology for the segmentation of the DNS profiles are described in detail in supplemental file 3, “iSeg Pipeline”.
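A small parser for the library file-naming schema described above is sketched below in Python. Only the codes appearing in the worked example (F for FSU-grown, RT for 1 mm root tips, H for heavy digest) are taken from the text; the assumption that L denotes a light digest and that other sites and tissues follow the same pattern is ours.

import re

# Parser for library file names such as FRT1Ha_R1.fastq.gz
# (FSU-grown 1 mm Root Tips, biological replicate 1, Heavy digest,
#  technical replicate a, read 1). Codes beyond that example are hypothetical.

NAME_RE = re.compile(
    r"^(?P<site>[A-Z])"          # growth site code (e.g. F = FSU-grown)
    r"(?P<tissue>[A-Z]+)"        # tissue code (e.g. RT = 1 mm root tips)
    r"(?P<biorep>\d)"            # biological replicate
    r"(?P<digest>[HL])"          # H = heavy digest, L = light digest (assumed)
    r"(?P<techrep>[a-z])"        # technical replicate
    r"_R(?P<read>[12])"          # paired-end read 1 or 2
    r"\.fastq\.gz$"
)

def parse_library_name(filename):
    match = NAME_RE.match(filename)
    if match is None:
        raise ValueError(f"unrecognized library file name: {filename}")
    return match.groupdict()

print(parse_library_name("FRT1Ha_R1.fastq.gz"))
# {'site': 'F', 'tissue': 'RT', 'biorep': '1', 'digest': 'H', 'techrep': 'a', 'read': '1'}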
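The core DNS calculation, light-digest minus heavy-digest relative coverage, can be illustrated with the short sketch below. The exact normalization used to obtain relative coverage is defined in the published pipeline (Supplemental file 2); scaling each track to its own mean, as done here, is a simplifying assumption. Positive values correspond to regions with relatively higher light-digest coverage and negative values to regions with relatively higher heavy-digest coverage, matching the positive and negative peaks segmented by iSeg.

import numpy as np

# DNS difference profile: light-digest minus heavy-digest relative coverage over a
# genomic window. The normalization (scaling each track to its mean) is a stand-in
# for the steps defined in the published DNS-seq pipeline.

def dns_profile(light_cov, heavy_cov):
    """Return the DNS (light - heavy) profile from two per-base coverage arrays."""
    light = np.asarray(light_cov, dtype=float)
    heavy = np.asarray(heavy_cov, dtype=float)
    light_rel = light / light.mean()      # relative coverage, light digest
    heavy_rel = heavy / heavy.mean()      # relative coverage, heavy digest
    return light_rel - heavy_rel          # >0: relatively MNase-sensitive; <0: relatively resistant

# Toy example over a 10 bp window:
light = [5, 9, 14, 20, 18, 7, 3, 2, 4, 6]
heavy = [12, 10, 8, 6, 5, 9, 14, 16, 11, 9]
print(np.round(dns_profile(light, heavy), 2))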
Presented here are data from Next-Generation Sequencing of differential micrococcal nuclease digestions of formaldehyde-crosslinked chromatin in selected tissues of maize (Zea mays) inbred line B73. Supplemental materials include a wet-bench protocol for making DNS-seq libraries and the DNS-seq data processing pipeline for producing genome-browser tracks. This report also includes the peak-calling pipeline, which uses the iSeg algorithm to segment positive and negative peaks from the DNS-seq difference profiles. The data repository for the sequence data is the NCBI SRA, BioProject Accession PRJNA445708.
148
Perspectives on oblique angle deposition of thin films: From fundamentals to devices
Surface engineering is a technological area of high scientific and industrial interest thanks to the wide set of applications benefitting from its contributions.In addition to classical areas such as optics, tribology and corrosion/wear protection, where high compactness and a low porosity are important microstructural requirements, the advent of emerging technologies requiring porous or highly structured layers has fostered the development of thin film deposition procedures aimed at enhancing these morphological characteristics.A typical experimental approach in this regard is the use of an oblique angle geometrical configuration during deposition.Evidence of highly porous, optically anisotropic thin films grown using this technique were first reported more than a hundred years ago , but it was not until the 1950s–1970s that focus was given to the tilted columnar microstructure of these films and the factors controlling its development .Following these initial steps, the last 20–25 years have witnessed the systematic application of oblique angle deposition procedures for the development of a large variety of devices in fields such as sensor technology, photovoltaic cells, magnetism, optical devices, electrochemistry and catalysis; all of which require strict control over porosity, anisotropy and/or crystallographic texture of the film.A substantial number of papers, excellent reviews and books have been published during this period, providing a well-documented background into both, the principles and the scientific/technological impacts of this synthesis method as well as the films it can produce.The recent publication of a book on this exciting topic clearly demonstrates the ample, ongoing interest that the scientific community has in this subject, and readers are addressed to these aforementioned reviews and books to acquire an initial insight into the main features and possibilities of the OAD of thin films.In general, the term OAD or other widely used alternatives such as “glancing angle deposition”, and “ballistic deposition”, are all associated in the literature with the physical vapor deposition of thin films prepared by evaporation, which usually entails electron beam bombardment.Since the OAD concept can be more broadly applied whenever the source of deposition particles and the substrate surface are obliquely aligned, it is used in this review to describe a particular geometry in the deposition reactor rather than a particular technique.The intent here is to critically discuss the OAD of thin films from a perspective that is not restricted to evaporation, but also considers plasma- and laser-assisted deposition methods such as the magnetron sputtering technique, in which the presence of gas molecules may induce the scattering of particles and alter their otherwise rectilinear trajectory.Ion beam-assisted deposition procedures, in which a significant amount of energy is released onto the film surface by obliquely impinging particles, are also briefly discussed.A second scope in this review is to provide an atomistic insight into the mechanisms controlling the morphological development of OAD thin films, particularly its porosity, the tilt orientation of the nanostructures and any preferential texture.Although different geometrical models and empirical rules have been proposed in the last few decades to roughly account for these features in films prepared by e-beam evaporation, and to a lesser extent by MS, we believe that the time is ripe to discuss with better accuracy the 
atomic mechanistic effects controlling the development of morphology and crystallography of OAD thin films.Overall, the increasing interest shown by the scientific community in these films has been a direct consequence of their unique morphology, which has fostered the development of new applications and devices with specific functionalities.As such, the third and final scope of this review is the description of some of the more outstanding applications and new devices that incorporate OAD thin films.This review is organized into five sections in addition to this introduction.Targeting readers with no previous knowledge on this subject, Section 2 presents a phenomenological description of OAD thin films and their unique morphology, which is formed from tilted nanocolumns promoted from the so-called “surface shadowing” effects.The outline of this section is similar to that in other published reviews and books , thus readers may complete their understanding of the basic principles by consulting the relevant literature on the subject.Generally speaking, the OAD term refers to a configuration in which the material flux arrives at the surface of a substrate at an oblique angle.The most widely used approach to achieve this is the evaporation of material from a crucible through the impingement of electrons, although OAD thin films prepared by resistive evaporation under vacuum have also been reported .Starting with this concept, Section 3 describes the different thin film processing techniques in which restrictions imposed by the deposition method or the presence of a gas in the chamber may alter the directionality of the particles being deposited.Here, MS , plasma enhanced chemical vapor deposition , pulsed laser deposition and vacuum liquid solid deposition are all considered and subjected to specific analysis.We also briefly discuss in Section 3 the effect of interaction momentum and energy exchange between the substrate and obliquely impinging energetic particles.High-power impulse magnetron sputtering and ion-beam-assisted deposition are two typical examples of these latter procedures.For the presentation of the majority of the results, we have assumed that readers are already familiar with common material characterization techniques such as electron microscopy and X-ray diffraction.However, some mention has been made on less conventional methods such as grazing incidence small-angle X-ray scattering and reflection high-energy electron diffraction that have been used recently in the study of OAD thin films, and which have contributed to deepening the understanding of their properties.Contrary to the organization typically adopted by other reviews into OAD thin films, where basic properties such as their porosity, nanocolumnar shape or bundling association are correlated to their tilted nanocolumnar microstructure, we have opted to include a discussion on these features in Section 3.The main reason for this is that mechanistic factors other than the “surface shadowing effect” are strongly affected, not only by the preferential oblique directionality of vapor atoms, but also by additional specific energetic interactions that are discussed for the first time in this review.The discussion on methods other than evaporation, along with the analysis of bundling effects, adsorption properties and texturing effects in particular are all novel aspects that have not been systematically addressed in previous reviews on this subject.Section 4 accounts for the main OAD features described in Sections 2 and 
3 from a mechanistic perspective, aiming to explain the atomic-scale evolution of a thin film's microstructure during growth by e-beam evaporation and MS at oblique angles. Here, new fundamental ideas developed in the last years by our group are introduced, with these concepts allowing our understanding of the OAD phenomena to move beyond classical paradigms such as the tangent rule, as well as consider the film growth by methods other than evaporation. Particularly relevant in this regard is a systematic assessment of the mechanistic aspects involved in MS depositions that are not present in conventional e-beam evaporation. Section 5 describes a wide set of applications in which OAD thin films are incorporated into devices. Unlike previous reviews and books, where a systematic and thorough description of thin film properties constitutes the backbone of the analysis, we have elected in this review to go directly to the end use so as to critically assess the many successful cases of OAD films being employed. This exercise has resulted in an astonishing and overwhelming number of papers being identified from the last seven-to-eight years in which new applications or devices incorporating OAD thin films have been introduced. To cope with such a volume of results and to get a comprehensive description, we have relied on just a brief analysis of the expected performance of such films, providing a summary of the contributions made by different authors active in the field. Finally, Section 6 presents a short critical evaluation of the potential for up-scaling the OAD technology for industrial production. It will become apparent throughout this review that a critical shortcoming of OAD methods is their limited capability to be homogeneously applied over large surface areas, or to coat a large number of thin-film samples in a reproducible and efficient manner. This final section therefore discusses some of the more recent proposals and considerations to overcome these limitations and make the OAD methodology commercially competitive. Although the literature selected for this review encompasses the previously mentioned papers extending from the beginning of the twentieth century to the present, we have focused on only those published in the last 10–15 years; or, in the case of the applications and devices discussed in Section 5, on the literature that followed the comprehensive review of Hawkeye and Brett in 2007. We expect that this systematization of the most recent literature on the subject, its critical analysis, and the proposal of new paradigms in thin film growth and description of emerging applications will be of particular interest to researchers already active in the field and, most importantly, attract new scientists from other areas that may be able to contribute to the development of this exciting topic. In this section, we address the basic concepts involved in the deposition and processing of thin films in an oblique angle configuration by developing a comprehensive survey of the already well-established knowledge on the topic. Though we describe the fundamentals of OAD by referring to the available literature, this knowledge can be complemented by reading some of the cited reviews on evaporated OADs. This description is framed within the context of classical concepts in thin-film growth such as Thornton's structure zone model and other related approaches. The core concept of this section is the so-called "shadowing mechanism", which is the main governing process responsible for the appearance of singular nanostructures within the OAD of thin films.
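As a first quantitative illustration of how shadowing alone constrains the film geometry, it is useful to recall the two classical empirical relations that link the nanocolumn tilt angle β to the deposition angle α (both measured from the substrate normal), namely the tangent rule and Tait's cosine rule:

    \tan\beta = \tfrac{1}{2}\,\tan\alpha \qquad \text{(tangent rule)}

    \beta = \alpha - \arcsin\!\left(\frac{1-\cos\alpha}{2}\right) \qquad \text{(Tait's rule)}

Both expressions should be read as empirical guides rather than universal laws; as anticipated above and discussed critically in Section 4, the measured tilt angle depends on the material and on the deposition technique, and substantial deviations from these rules are common.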
The deposition of thin films at low temperatures, while keeping the substrate in a normal geometry with respect to the evaporated flux of material, gives rise to compact thin films. Within the classical SZM, the resulting microstructure is recognized as being Type I, which is characterized by very thin, vertically aligned structures and a high compactness. When the deposition flux arrives at an oblique angle at the substrate surface, an additional variable is introduced into the growth process that has a significant influence on the development of the film's microstructure and compactness. It is generally accepted that the mechanistic factor controlling the nanostructural evolution of the films is a "shadowing effect", which prevents the deposition of particles in regions situated behind initially formed nuclei. Under these conditions, the absence of any diffusion of ad-particles means that vapor particles are incorporated at their point of impact, a process that is usually recognized as ballistic deposition giving rise to tilted columnar and highly porous microstructures. Different descriptions and geometrical models accounting for these shadowing effects can be found in previous publications on the subject. Although the most recent mechanistic models for the formation of nanocolumns will be described in detail in Section 4, we will recall here some of the basic concepts that intuitively account for the importance of shadowing in controlling the microstructure of an OAD thin film. From a geometrical point of view, it is obvious that the shadowing effects associated with the directionality of the incoming particles will be relaxed when they are produced from a large area source and/or when their directionality is randomized through scattering processes by gas molecules, as is typically the case in MS, PECVD and PLD techniques. Yet, even under these conditions, a preferential direction of arrival can be preserved to some extent, thereby inducing the growth of OAD thin films with tilted nanocolumnar structures. Variation in the morphology of OAD thin films with deposition conditions was thoroughly explored by Abelmann and Lodder by considering that changes in temperature, deposition rate and the extent of surface contamination by adsorbed impurities may have a significant influence on the tilt angle of nanocolumns, as well as on other morphological characteristics such as their bundling association. According to these authors, the diffusion of ad-particles in either a random manner, or with specific directionality resulting from momentum exchange between the deposition particles and the thin film surface, can be affected by all these factors. In Section 4, we will come back to the importance of momentum exchange and other mechanisms in determining the morphology of OAD thin films. Similarly, the bundling association of nanocolumns, a morphological feature generally overlooked in the most recent literature on the subject, will be treated specifically in Sections 3.6.2 and 4.2.2. In addition to the tilt angle, the shape and size of the OAD thin film nanocolumns can be altered by changing the temperature and deposition rate. Although a quantitative evaluation of these changes is not yet available, various works have addressed this problem and studied the morphological changes and other nanostructures induced as a function of temperature. For example, using a simple nucleation model, Zhou et al. proposed that the evolution of the nanocolumn growth
depends on the diameter of the nuclei formed during the initial stages of deposition.That is, if larger nuclei are formed at high temperatures and/or they possess a low melting point, then thicker nanocolumns will develop during OAD.A thickening of nanocolumns and a decrease in their tilt angle has been clearly observed in different materials by Vick et al. and Tait et al. , who attributed these effects to an enhanced diffusion of ad-particles during growth.However, in light of other results , it is evident that the general use of these diffusion arguments to predict OAD thin film morphologies needs to be critically discussed.Another way to control the surface shadowing effect so as to achieve specific morphologies and layer properties is to carry out deposition using substrates with a well-controlled surface topography.The idea of modifying the nanostructure of a film using a pre-deposited template is quite simple: it exploits the shadow effect by utilizing features already present on the surface.This artificially generated shadowing mechanism is illustrated schematically in Fig. 2.3, where it is considered that no deposition may take place behind an obstacle.The key point, therefore, is to control the shadow cast by a series of patterned hillocks, protrusions or other well-defined features on the surface.Some model calculations have been proposed to determine the dimensions of these features in ordered arrays, their relative arrangement in the plane and the distances between them so as to effectively control the shadowing effect and ultimately obtain new artificially designed nanostructures .The simplest version of OAD on a template uses a rough substrate surface; the elongated mounds or facets promoting the accumulation of deposited material onto the most prominent features, while at the same time preventing the arrival of vapor flux at the shadowed regions .Expanding on this simple approach, the use of substrates with a well-defined lithographic pattern opens up a range of new and unexpected possibilities for the nano-structuring of OAD thin films.To this end, patterned substrates consisting of well-ordered nano-structured array patterns or seed layers have been prepared through a variety of lithographic methods, laser writing, embossing or other top-down fabrication methods .Among these, the use of colloidal lithography has gained much popularity over the last few years thanks largely to its simplicity .With the aid of selected examples, Fig. 
2.3 illustrates the possibilities that these approaches provide for the tailored growth of all-new supported nanostructures.Other options for controlling the thin film morphology arise when the deposition onto a template pattern is combined with the controlled movement of the substrate.For example, S-shaped 3D photonic structures used as polarizers have been fabricated by Phisweep or swing rotation of substrates with a pre-deposited template structure .At present, this use of templates in the OAD of thin films has transcended pure scientific interest, and is now developed for advanced technological applications.For example, mass-produced metal wire grid polarizers are being fabricated via the OAD deposition of antireflection FeSi and SiO2 layers on previously deposited aluminum columnar arrays .Other emerging applications in biomedical areas , or for controlling the wetting properties of substrates , also rely on the use of patterned surfaces rather than the direct OAD of thin films.In the search for new functionalities and properties, different authors have used these two aforementioned approaches to tailor the composition and distribution of components in a growing nanostructure.Simultaneous or alternant evaporations have been used along with control over the relative evaporation rate of each source to tailor the morphology and composition of OAD thin films.A survey of the different possibilities offered by this technique was recently undertaken by He et al. , who described new options such as controlling the nanocolumn composition along its length or diameter, or incorporating nanoparticles into either single or more complex nanostructures by simply moving the substrate.These principles have been applied by various authors to prepare laterally inhomogeneous nanocolumns , or to fabricate alloy or multicomponent nanocolumnar films .The combination of the template approach discussed in the previous section and evaporation from two sources opens the possibility of simultaneously controlling both, the microstructure and the lateral composition of the nanocolumns.Fig. 2.4 gives an example where OAD has been carried out using two sources on a substrate with packed nanospheres to produce thick nanocolumns with a laterally inhomogeneous chemical composition.Yet despite these advances, the possibilities offered by this method to further control the nanocolumnar morphology and other properties of the films remain largely unexplored.This was made clear in a recent work by He et al. 
, which demonstrated the possibility of enhancing the porosity of Si–Cu and SiO2–Cu nanocolumnar arrays through the selective removal of copper with a scavenger chemical solution.It is likely considered that e-beam, and to a lesser extent thermal evaporation, have been the most widely utilized methods for the OAD of thin films.This has certainly been true regarding the effective control of the geometry of the nanofeatures through shadowing effects; however, there are other alternatives that make use of the OAD geometrical configuration to develop new microstructures, textures and general properties in films.In these other methods, several physical mechanisms, including the thermal diffusion of ad-particles discussed in the previous section, can affect the deposition process and the efficiency of shadowing mechanisms.An interesting effect is the relocation of deposited particles through liquid–solid preferential agglomeration at sufficiently high temperatures, which constitutes the basis of the so-called vacuum liquid solid method .Another possibility is to alter the direction of the deposition particles by introducing a gas to scatter them during their flight from the source to the substrate, thereby inducing a certain randomization in their trajectories .Such a situation is encountered when deposition is carried out by MS, PLD and PECVD, among others.Total or partial randomization of the particle trajectory leads to a softening of the geometrical constraints imposed by a pure “line of sight” configuration, and in doing so causes the microstructure of the films to deviate from the hitherto discussed typical morphological patterns.In the presence of a gas, not only is the geometry of the deposition process critical to controlling the film microstructure, but also the mean free path of the deposition particles.Consequently, in order to retain at least some of the features typical of OAD growth, particularly the tilt orientation of the nanocolumns and a high porosity, it is essential that the deposition particles arrive at the film surface along a preferential direction, thereby allowing shadowing mechanisms to take over the nanostructural development of the films.The vapor–liquid–solid technique was first proposed by Wagner and Ellis as a suitable method for the fabrication of supported nanowires.It is a high temperature procedure, in which liquid droplets of a deposited material act as a catalyst to dissolve material from the vapor phase, which then diffuses up to the liquid solid interface and precipitates as a nanowire or nanowhisker.Typical VLS growth therefore proceeds through two transition steps, vapor-to-liquid and liquid-to-solid, making it vital to maintain the substrate temperature at a critical value capable of enabling these diffusion processes.The combination of VLS with OAD has proven very useful in tailoring nanostructures with very different shapes.For example, by keeping the substrate at typical VLS growth temperatures, crystalline Ge nanowhiskers with a nanostructure determined by the angle of material flux have been obtained .Fancy ITO 2D-branched nanostructures have also been fabricated by combining VLS and OAD while azimuthally rotating the substrate .In this case, liquid indium–tin alloy droplets act as both the catalyst and the initial nuclei for branch formation.This means that the final branched structure can be controlled by rotating the substrate or varying the zenithal angle according to a predefined pattern to adjust the shadowing effect created during the 
arrival of vapor flux .In the MS technique, a flux of sputtered atoms or molecules is produced by the bombardment of plasma ions onto an electrically biased target.The sputtered atoms are then deposited onto a substrate, which is usually placed in a normal configuration with respect to the flux direction and therefore parallel to the target .Owing to its high efficiency, reliability and scalability, this technique is widely used industrially .The MS technique has been also used in OAD configurations by placing the substrate at an oblique angle with respect to the direction normal to the target).Despite the complexity of the mechanisms involved, no systematic correlations have yet been established between the deposition parameters and the resulting thin film microstructure.Besides the existence of particle scattering events, another significant difference with respect to the e-beam evaporation is the large size of the material source).In Section 4, we will discuss some of the mechanistic effects that may induce changes in the microstructure of MS-OAD films.While e-beam evaporation has been used for the deposition of a large variety of materials, including amorphous and crystalline semiconductors , metals , oxides and even molecular materials , MS has been the most widely used method for the deposition of metals and oxides when control over the crystalline structure and/or surface roughness is an issue.Additional movement of the substrate, typically in the form of azimuthal rotation, has also been used during MS-OAD to create modified thin-film microstructures.In general, the effects obtained by incorporating this additional degree of freedom are similar to those reported for the evaporation technique, except the microstructure may now progressively evolve from separated nanocolumnar nanostructures to homogeneous porous layers by increasing the gas pressure or target–substrate distance .MS in OAD geometries has emerged as a powerful tool to tailor the crystallographic texture of films along both out- and in-plane directions; i.e., it makes it possible to obtain layers in which individual crystallites are aligned along preferential crystallographic directions perpendicular to the substrate and/or parallel to the substrate plane.Crystallization is very common in metals, even when deposition takes place at room temperature, and can occur in oxides if the substrate temperature is high enough.Thus, crystallization can be promoted by increasing the energy and momentum given to the surface via ion bombardment .A more detailed description of the texturization effects in OAD thin films will be presented in Section 3.7.In the PLD technique, a laser beam impinging onto a target ablates its uppermost layers that form a plasma plume with numerous energetic particles.This plume is directed toward a substrate, where the ablated material is deposited in the form of a thin film .Remarkably, a key feature in this technique is the conservation of the target stoichiometry when employing multicomponent targets.Moreover, in general conditions, the highly energetic character of the species in the plume promotes that thin films usually grow compact, although nanocolumnar and porous thin films can be grown by intentionally introducing typical OAD geometries .Much like the OAD of thin films prepared by evaporation or MS, the main purpose in most cases is the effective control of texture alignment so as to obtain a columnar microstructure and/or porous layers.For example, in cobalt ferrite oxide thin films, 
preferential orientation along the crystal direction and compressive strain tunable as a function of film thickness can be obtained by PL-OAD, whereas almost strain-free and random polycrystalline layers are produced when using normal-incidence PLD under the same conditions .Nanocolumnar mixed oxide thin films, including YBa2Cu3Ox and La0.7Sr0.3MnO3 , have been prepared by PL-OAD to take advantage of the preservation of the target stoichiometry.Thin films with simple composition, enhanced porosity and controlled microstructure have also been deposited by PL-OAD, for instance to grow porous nanocolumnar ZnO thin films by controlling both, the zenithal and azimuthal angles of deposition.Similarly, porous carbon thin films consisting of perpendicular nanocolumns ) or more complex zig-zag morphologies were obtained through azimuthal rotation of the substrate to an oblique angle configuration.In all these cases, even though the plasma plume was full of highly energetic neutral and charged species, shadowing effects are still of relevance during deposition, promoting the formation of an open film microstructure.This means that parameters such as the temperature of the plume species, the distance between the target and substrate, the pressure in the deposition chamber and the effective width/length of the plume need to be carefully controlled to effectively tailor the final morphology of a film.Plasma-assisted deposition, usually called plasma enhanced chemical vapor deposition, is a typical thin film fabrication technique employed to homogeneously coat large-area substrates that is only rarely used for the preparation of layers with singular nanostructures .In this process, a volatile-metal precursor is dosed in a reactive plasma chamber, where it decomposes to deposit a thin film, releasing volatile compounds that are evacuated together with the gaseous reaction medium.Depending on the plasma characteristics, thin films of metal, oxide, nitride, oxinitride or other compounds can be obtained.By definition, this procedure requires a certain plasma gas pressure to operate, a feature that seems to preclude the intervention of shadowing effects associated with the preferential arrival of deposition particles travelling in a particular direction.It should be noted, however, that this does not exclude nanostructuration due to shadowing effects of randomly directed species .However, mechanisms inducing certain preferential directionality of deposition species have been incorporated in plasma-assisted deposition by making use of particular effects.The best known example of this is the preparation at elevated temperatures of supported carbon nanotubes and other related nanostructures .The growth of CNTs is catalytically driven by the effect of metal nanoparticles that are initially deposited on the substrate, and which remain on the CNT tips during growth.During this process, it is assumed that the electric field lines within the plasma sheath are perpendicular to the surface, which is a decisive factor in the vertical alignment of the CNTs .As previously reported , tilted CNT orientations can be achieved by applying an intense external electric field at an oblique angle to the substrate and introducing a precursor gas parallel to it, which is believed to induce the preferential arrival of charged species along the externally imposed electric field.In the so-called remote PECVD processes, where the plasma source is outside the deposition zone, the preferential direction of arrival of precursor species 
moving from the dispenser toward the vacuum outlet has been used to fabricate metal-oxide composite layers consisting of well-separated vertical and tilted nanocolumns, as well as other types of branched nanostructures .An example of such a nanostructure is depicted in Fig. 3.2, which shows a series of zig-zag Ag@ZnO nanorods created by the plasma deposition of ZnO on substrates with deposited silver nanoparticles.Note that in this experiment, the substrates were tilted to impose a preferential direction of arrival to the precursor molecules.In addition to this geometrical OAD arrangement, plasma sheath effects and the high mobility and reactivity of silver under the action of an oxygen plasma seem critical factors in forming a nanocolumnar microstructure while inhibiting the growth of a compact film that would be expected with conventional PECVD .The possibilities offered by the combination of different plasma types and OAD to tailor the microstructure of thin films and supported nanostructures are practically unexplored, and therefore very much open to new methodological possibilities.A good example of work in this direction is that of Choukourov et al. , in which tilted nanocolumnar structures of Ti–C nanocomposites were fabricated by adding hexane to the plasma during the MS-OAD deposition of Ti, altering the composition of the film and its microstructure.A significant number of other interesting examples of the newly coined term “plasma nanotechnology” can be also be found in recent literature on the subject .In the previous section, it was indicated that in addition to deposition species, other neutral or ionized particles generated in the plasma may interact with the film during its growth, particularly in the case of PECVD, MS and PLD.That is, both the deposition particles and plasma species tend to exchange their energy and momentum with the substrate surface.In early models dealing with the analysis of OAD thin films prepared by evaporation, the influence of particle momentum on the evolution of the film’s microstructure was explicitly considered , but its effectiveness in controlling the microstructure was not always clear due to the small amount of momentum exchanged in most OAD processes.For instance, in thermal or electron beam evaporation processes, the energy of the deposition particles is only in the order of 0.2–0.3 eV, and so its influence on the development of a specific thin-film microstructure and/or crystalline structure can usually be neglected.Yet, in conventional PECVD or MS, the kinetic energy of some species can reach much higher values in the order of a few tens of eV due to ions impinging onto the growing surface, although the overall amount of energy exchanged per deposited species usually stays low enough to not induce any appreciable change in the film nanostructure.This situation changes, however, when the energetic interactions are increased beyond a certain level .In this section, we will address situations in which the arrival of deposition material at the substrate is accompanied by a massive transfer of energy and momentum carried by either the deposition particles themselves, or by rare gas ions and other molecules present in the system.The pronounced transfer of energy and momentum to a surface along an oblique orientation creates significant changes in the microstructure and other characteristics of thin films.Thus, even if the supplied energy contributes to a higher compactness by increasing the mobility of ad-particles , the film nanostructure 
may still keep some of the typical characteristics of an OAD film such as morphological asymmetry or a preferentially oriented crystallographic texture. Although a detailed description of the mechanisms involved is far beyond the scope of the present review, we will provide here some clues for the OAD of thin films prepared by high-power impulse magnetron sputtering and ion-assisted deposition. This deposition method operates under extremely high power densities in the order of some kW cm−2, with short pulses of tens of microseconds and a low duty cycle. Under these conditions, there is a high degree of ionization of sputtered material and a high rate of molecular gas dissociation. The impingement of these charged species and their incorporation onto the growing film also likely contribute to its densification, which is the most valuable feature of HIPIMS from an application point of view. Another advantage of this technique is the fact that it offers less line-of-sight restrictions than conventional MS, thus ensuring that small surface features, holes and inhomogeneities are well coated. The combination of high densification, homogenization and conformity is linked to the high energy and momentum of obliquely directed neutral particles and the perpendicular incidence of plasma ions produced in the HIPIMS discharge. This peculiar transport behavior results in a variation in the flux, energy and composition of the species impinging on the substrate as the deposition angle is changed. Specifically, the ion to neutral ratio is generally higher and the average ion energy lower when substrates are placed perpendicular to the target than in the classic parallel configuration. The composition of the deposited films is also modified at oblique angles due to the angular distribution of the neutral species, the ejection of which tends to be favored along the target normal direction. This combination of effects has been utilized in controlling the composition and microstructure of ternary Ti–Si–C films, an example of the so-called MAX phases that are characterized by their attractive combination of metallic and ceramic properties. Although the influence of the deposition angle in combination with other deposition parameters on the microstructure and texture of HIPIMS-deposited thin films has not been systematically investigated, differences between conventional MS and HIPIMS have been observed in the deposition of Cu and Cr films. That is, with these two metals, it has been found that the tilt angle of the nanocolumns is lower in HIPIMS-OAD than in conventional MS-OAD. For copper films grown at room temperature, the tilt angle is independent of the vapor flux's degree of ionization, whereas with Cr at elevated temperatures it is affected by ionization. These differences, as well as the changes induced in the crystallographic texture of the films, have been accounted for by a phenomenological model that incorporates atomic shadowing effects during the early nucleation steps. Ion-beam-assisted deposition is a classic method for producing highly compact thin films through the simultaneous impingement of deposition particles and energetic ions. This term encompasses a large variety of procedures, which vary depending on how the deposition species and ions are produced or the ion energy range. Herein, we will comment only on the effect of low energy ions; i.e., with typical energies in the order of some hundreds of eV or lower. In the first possible scenario under these conditions, thin film growth initially
occurs through the aggregation of neutral species, being assisted by ions stemming from an independent source.For the sake of convenience, we shall refer to this as ion-assisted OAD .In the second case, a thin film is formed by the impingement of a highly or completely ionized flux of deposition species arriving at the substrate along an oblique direction.We will refer to this experimental arrangement ion-beam OAD .A straightforward application of the IA-OAD method concerns the control of the crystalline texture of the deposited films ; i.e., the variation of the crystallographic planes of the crystallites with respect to the substrate by changing the impingement angle of the ions.Moreover, as neutral species are produced in IA-OAD by e-beam evaporation, this has been proposed as a method capable of also controlling the film morphology, and possibly also the birefringence of the films .Indeed, it has been demonstrated that the tilt angle of the nanocolumnar structures can be increased in a large set of oxides and fluoride materials , and their areal density generally decreased, when the growing films are bombarded with neutralized ions with a kinetic energy of 390 eV.An enhanced surface diffusion of ad-particles due to heating and/or momentum transfer from the accelerated species and/or sputtered particles have been suggested as possible reasons for these morphological variations.However, the large number of parameters involved in these experiments and the factors affecting the physical processes involved have so far prevented any predictive description of tendencies.The IB-OAD processes, wherein the majority of deposition species are ionized and impinge along a well-defined off-normal angle with respect to the substrate, has not yet been extensively studied.A nice example of the nanostructuration that can be achieved in the vacuum arc deposition of Ni and C, given in Fig. 
3.3, shows a selection of cross-sectional electron micrographs and GISAXS patterns of this type of thin film. The red arrow indicates the impingement direction of the ionized atoms which, in this experiment, had energies ranging between 20 and 40 eV. These micrographs show that the films are compact and formed from tilted layers or lamellas containing C and Ni. They also show that a decrease in the tilt angle of the layers and a widening of their periodic structure occur when the Ni content and ion energy are increased. These results suggest that, on average, the ion-induced atomic mobilization in the growing film is not random, but instead proceeds preferentially along the direction of momentum of the impinging ions. The importance of momentum and energy transfer is further highlighted by the opposite tilting orientation of the Ni and C lamellas with respect to that of the nanocolumns whenever they grow as a result of surface shadowing mechanisms; e.g., by e-beam or MS OAD. This figure also shows other GISAXS characteristics for tilted nanocolumnar films prepared by e-beam OAD. Although this characterization technique has not yet been extensively used to study OAD thin films, it has been used recently to obtain information regarding the tilting orientation of the nanostructures and the correlation distances from the asymmetry and position of maxima in the GISAXS patterns. In the previous section, we implicitly assumed that the development of tilted and/or sculptured nanostructures is the most typical morphological characteristic of OAD thin films. In this section though, we develop a more detailed discussion of other relevant microstructural features such as roughness, porosity or preferential association of the nanocolumns; features that grant OAD films some of the unique properties required for a large variety of applications. A discussion on the development of specific crystallographic textures will also be presented later in this section. Although most of these characteristics stem from shadowing mechanisms, the contribution of energetic interactions or other random deposition processes outlined previously will also be taken into account as part of this analysis. A simple power law description of nanocolumn width evolution with film thickness cannot hold true when shadowing is not the prevalent mechanistic factor of growth, as is the case in MS-OAD and other deposition processes discussed in Section 3.5. Thus, crystallization and growth in Zones II or T of the SZM clearly preclude any description of the growth mechanism according to simple rational schemes based on dynamic scaling concepts. For example, thin film crystallinity may significantly alter the dependence of roughness on thickness when facetted growth of nanocrystals suddenly occurs. An example of such a sudden modification in surface roughness in relation to crystallinity was reported by Deniz et al. in AlN MS-OAD thin films, where a sharp increase in RMS roughness was found when a given nitrogen concentration was added to the plasma gas.
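Where a power law of this type does hold (typically for evaporated films grown under shadowing-dominated, Zone I conditions), the growth exponent is readily extracted from a log–log fit of measured column widths (or roughness values) against film thickness. The short sketch below uses invented numbers purely to illustrate the procedure; it is not data from the cited studies.

    import numpy as np

    # Illustrative data: film thickness t (nm) and measured column width w (nm).
    t = np.array([50, 100, 200, 400, 800], dtype=float)
    w = np.array([12, 16, 21, 28, 37], dtype=float)

    # Fit w = w0 * t**p, i.e. log w = log w0 + p * log t
    p, log_w0 = np.polyfit(np.log(t), np.log(w), 1)
    print(f"growth exponent p = {p:.2f}, prefactor w0 = {np.exp(log_w0):.2f} nm")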
An initial assessment of the nanocolumnar arrangement in OAD thin films might suggest the absence of any clear organization and a random distribution of nanocolumns within the film. However, some experimental evidence from AFM and SEM analysis of the surface and morphology of thin films directly contradicts this simple view. For example, in the case of evaporated OAD thin films, it has been found that well-defined correlation lengths and intercolumnar distances emerge depending on the deposition conditions. What is more important, these can be formally described with a power law scaling approach. Similarly, through SEM analysis of the surface of a series of OAD thin films, Krause et al. identified repetitive correlation distances between surface voids separating the nanocolumns that were dependent on the deposition parameters. Using AFM, Mukherjee et al. were also able to determine the period of surface roughness features in different OAD thin film materials, and correlated the values obtained with specific growth exponents. To confirm the existence of correlations between nanocolumns in the interior of films, not just at their surface, we have previously used a bulk technique such as GISAXS. With this method, it is possible to determine both the tilting orientation of the nanocolumns and the correlation distances among them. Fig. 3.3 shows a series of GISAXS patterns corresponding to TiO2 thin films that were prepared by e-beam OAD at different zenithal angles. We see from this that while the asymmetric shape of the patterns clearly sustains a tilted orientation of the nanocolumns, the position of the maxima in each pattern provides a rather accurate indication of the correlation distances existing in the system. It is also interesting that these patterns depict two well-defined maxima, suggesting the existence of a common correlation distance of about 14 nm in all the films, as well as a second much longer correlation distance parameter that progressively increases from 30 to 200 nm with increasing film thickness and deposition angle. We tentatively attributed the smaller of these correlation distances to the repetitive arrangement of small nanocolumns/nuclei in the initial buffer layer formed during the first stages of deposition, with the larger of the two being attributed to the large nanocolumns that develop during film growth. Owing to the bulk penetration of X-rays, the values obtained were averaged along the whole thickness of the film, with the progressive increase observed for thicker films confirming that the nanocolumn width extends from the interface to the film surface.
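For readers unfamiliar with the technique, the correlation distances quoted above follow from the in-plane positions q_y of the GISAXS intensity maxima through the usual reciprocal-space relation d ≈ 2π/q_y. The peak positions below are illustrative values chosen only to reproduce the distances mentioned in the text.

    import numpy as np

    q_y = np.array([0.45, 0.031])  # illustrative in-plane peak positions (nm^-1)
    d = 2 * np.pi / q_y            # real-space correlation distances (nm)
    print(d)                       # roughly 14 nm and 200 nm for these example values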
In early studies of evaporated OAD thin-film materials, it was soon recognized that nanocolumns can associate in the form of bundles; i.e., laterally connected groupings of nanocolumns arranged in a direction perpendicular to the vapor flux. This bundling association has since been reported in a large variety of OAD thin films prepared by either evaporation or MS, and though development of this microstructural arrangement has been mainly reported for metal thin films, it has also been utilized as a template structure with oxides to develop new composite thin films. Surprisingly, aside from some detailed discussions in early reviews, this bundling phenomenon has not attracted much attention in recent investigations on OAD thin films. In the present review, we therefore highlight its importance and refer the reader to Section 4, where we present a critical discussion of the growth mechanisms contributing to the development of this microstructural arrangement. For many applications such as sensors, electrochemistry, catalysis, electrochromism, and antireflective layers, the porous character of an OAD thin film is a key feature in device fabrication. Here, two related concepts need to be taken into account: porosity and effective surface area. The former refers to the actual empty volume that exists within the films, whereas the latter represents the effective area created by the internal surface of these pores that makes possible the interaction with a medium through adsorption. Note that some pores might be occluded, in which case their internal area would not affect the adsorption properties of the films. Porosity can be assessed in an indirect way by determining the optical constants of thin films, and then deriving the fraction of empty space by means of the effective medium theory. However, as this approach requires relatively complex optical models based on "a-priori" assumptions regarding layer/pore structure and connectivity, its results can only be regarded as approximate at best.
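As an illustration of this indirect route, the sketch below inverts the simplest two-component Bruggeman effective-medium expression, treating the film as a mixture of dense material and empty pores; the measured effective refractive index and the index assumed for the dense material are the only inputs, and the numerical values used here are assumptions for illustration only, not data from the cited works.

    import numpy as np

    def void_fraction_bruggeman(n_eff, n_dense, n_void=1.0):
        """Void fraction of a porous film from its effective refractive index,
        using the two-component Bruggeman effective-medium approximation."""
        e_eff, e_m, e_v = n_eff**2, n_dense**2, n_void**2
        a = (e_v - e_eff) / (e_v + 2 * e_eff)   # response term of the void component
        b = (e_m - e_eff) / (e_m + 2 * e_eff)   # response term of the dense component
        return b / (b - a)

    # Illustrative numbers: an OAD TiO2 film with n_eff = 1.9 versus an assumed dense index of 2.4
    print(f"estimated void fraction: {void_fraction_bruggeman(1.9, 2.4):.2f}")

More refined analyses would have to include optical anisotropy and pore connectivity, which is precisely why such estimates should be regarded as approximate.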
A close relation seems to exist between porosity and the growth mechanisms and conditions. For example, the porosity of a Cu OAD thin film is qualitatively related to its deposition rate and the wetting ability of copper on a silicon substrate, in such a way that the initially formed nuclei control the posterior evolution of the thin film's microstructure and therefore also the final porosity of the film. Modeling the OAD thin film microstructure and growth by Monte Carlo simulation and other numerical methods has been carried out to better understand the evolution of porosity as a function of the deposition angle, with Suzuki et al. deducing that evaporated OAD thin films should present a maximum surface area at an evaporation angle of 70°. Interestingly, experiments exploring different adsorptions from the liquid phase have confirmed a maximum surface adsorption capacity at this "magic deposition angle". An important consequence of the high porosity of OAD thin films of an oxide or other transparent dielectric material is that, depending on the environment, their optical constants vary in response to the condensation of water in their pores. Thus, unless they are encapsulated, this effect precludes their straightforward incorporation into real-world optical devices. Surprisingly, this effect has been generally overlooked in many works regarding the use of OAD thin films as an antireflective coating or multilayer in solar cells or other related environmental applications. On the other hand, the development of humidity sensors takes advantage of this very change in the optical constants of OAD films and multilayers due to the adsorption of water vapor from the atmosphere. Porosity, surface area and chemical adsorption capacity are not homologous concepts. This idea, which is inherent to the field of adsorbents and catalysts, is not common when dealing with OAD thin film devices, as only very few works have specifically addressed this issue. Some time ago, Dohnálek et al. made use of a temperature programmed desorption technique, very well known in catalysis to study adsorption processes on flat surfaces, to examine the adsorption capacity of OAD MgO thin films. They experimentally determined that the fraction of active adsorption sites in these thin films was higher than that of films prepared at normal geometry, and that the distribution of sites with different adsorption binding energies changed with deposition temperature. Unfortunately, no systematic studies involving adsorption processes from the gas phase, together with the subsequent desorption mechanism, have been pursued thereafter. Working in the liquid phase with a photonic sensor incorporating dye molecules in transparent OAD thin films, we have observed that the pH of the medium greatly modifies the adsorption capacity of cationic or anionic organic molecules from the rhodamine and porphyrin families. To account for this pH dependent behavior, we have used the classical zero point of charge (zpc) concept, which is widely used in colloidal chemistry. According to this, the surface of colloidal oxides becomes either positively or negatively charged depending on whether the surrounding liquid has a pH that is lower or higher than the zpc of the investigated material. Thus, by simply adjusting the pH of the solution and the concentration of dissolved molecules, it is possible to control the amount of molecules adsorbed in the OAD thin film or selectively favor the adsorption of one type of molecule over another. Pre-irradiation of the films with UV light to modify their surface properties has also been utilized to control the type and amount of molecules adsorbed from a liquid medium. In the course of these studies, it was also demonstrated that the adsorption equilibrium of tetravalent porphyrin cations in an OAD TiO2 thin film follows a Langmuir-type isotherm, while the adsorption kinetics adjust to an Elovich model. Undoubtedly, these preliminary studies are insufficient to reach a sound general conclusion, but a tight collaboration between thin film material scientists and colloidal chemists should help to deepen our understanding of the adsorption properties of OAD thin films.
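For readers who wish to reproduce this type of analysis, a minimal sketch of a Langmuir fit to liquid-phase adsorption data is given below; the concentrations and coverages are invented for illustration and are not the data of the cited study.

    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(c, q_max, K):
        """Langmuir isotherm: equilibrium coverage versus solution concentration."""
        return q_max * K * c / (1.0 + K * c)

    # Invented example data: dye concentration (micromolar) and adsorbed amount (a.u.)
    c = np.array([0.5, 1, 2, 5, 10, 20], dtype=float)
    q = np.array([0.8, 1.4, 2.2, 3.3, 4.0, 4.4])

    (q_max, K), _ = curve_fit(langmuir, c, q, p0=(5.0, 0.5))
    print(f"q_max = {q_max:.2f}, K = {K:.2f} (illustrative fit)")
    # The Elovich kinetic model, q(t) = (1/b) * ln(1 + a*b*t), can be fitted in the same way.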
Crystalline and porous OAD thin films prepared by evaporation are needed for many different applications in which a combination of porosity and a well-controlled crystalline structure is essential. Most oxides deposited by e-beam evaporation at room temperature are usually amorphous, but both metals and ceramics become crystalline when their deposition is carried out at sufficiently high temperatures. Furthermore, the crystallization of metals, oxides and other dielectrics deposited in OAD geometries is promoted when using MS, PLD or other techniques that involve the exchange of energy and momentum with the growing film. In most cases, in addition to being crystalline, these thin films present a well-defined texture; i.e., a preferential orientation of the crystallographic planes of their individual crystallites. Both out- and in-plane preferential orientation may occur in these thin films depending on the deposition conditions. In the case of the former, the crystallites exhibit a preferential orientation with a given crystallographic axis perpendicular to the surface, whereas the other unit cell axes are randomly oriented. With the latter, however, individual crystals possess a similar orientation along the direction perpendicular to the plane and parallel to the surface plane. This second situation is similar to that of a single crystal, with the difference being that OAD polycrystalline thin films instead consist of small crystallites with similar orientation. The development of preferential orientations during the OAD of thin films has been the subject of an excellent discussion by Mahieu et al., who studied the out-of-plane texturing mechanisms of MS deposited thin films within the context of the extended SZM. According to their description, preferentially out-of-plane oriented films are obtained in Zones T and II of the SZM when sufficiently high ad-particle mobility leads to a preferential faceting of crystallites along either the planes with the lowest growth rate or those with the highest thermodynamic stability. As a result, the films become textured with the fastest growth direction perpendicular to the surface. In addition to this out-of-plane orientation mechanism, the growth of biaxial thin films by OAD is favored by the preferential biased diffusion of ad-particles when they arrive at the film surface according to their direction. Obviously, this situation can be controlled by adjusting the orientation of the substrate with respect to the target, in which case this preferential diffusion can be used to ensure grain growth in the direction of the most favorable faceted crystal habit facing the incoming flux of material. From a mechanistic point of view, both the mobility of ad-particles during growth and the angular distribution of the incoming material flux are critical for the effective growth of biaxially aligned thin films; the particles' mobility favoring biaxial alignment and their angular spread contributing to its randomization. Consequently, parameters such as pressure, target–substrate distance, deposition angle, film thickness, bias potential of the substrate, temperature and the presence of impurities may play an important role in determining the degree of biaxial orientation. This strong dependence that the crystalline structure of OAD thin films has on the experimental parameters means that although some trends in texture can be predicted, significant deviations associated with the use of different deposition conditions and/or techniques should be expected. A selection of crystalline OAD thin films is presented in Table 3.1 to highlight the specific features of their crystallographic structure. The gathered data broadly confirm that crystallization occurs when thin films are grown within Zones T and II of the SZM; i.e., in evaporated films prepared at high temperatures, or by using MS or other techniques that involve ion bombardment during deposition. Table 3.2 summarizes some select examples of biaxial thin films prepared by OAD. Most of these were prepared by MS, though the two metals prepared by evaporation have either a low melting point or are the product of thermally activated synthesis. This confirms the need to identify experimental conditions that favor the controlled diffusion of ad-particles during film growth to ensure effective control over the texture of the thin film. Another point worth noting from this table is the possibility of changing the facet termination of the individual crystallites by changing the deposition angle, and the fact that the selection of specific OAD conditions is generally critical to the development of a given biaxial orientation. In some cases, the clear prevalence of a given orientation confirms the importance of the energetic factors related to the development of a given crystallite facet for the control of texture. Thus, epitaxial effects and the influence of substrate roughness or film thickness in effectively controlling the texture confirm that mechanistic conditions mediating ad-particle diffusion are quite important to controlling the biaxial orientation. The bundling association of crystallites is another interesting feature present in some biaxially aligned OAD thin films.
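In practice, the degree of preferred out-of-plane orientation collected in these tables is commonly quantified from standard θ–2θ diffractograms through the Harris texture coefficient, which compares each measured reflection with the corresponding powder (random) intensity; values well above unity flag a preferred orientation. A minimal sketch with invented intensities is given below.

    import numpy as np

    def texture_coefficients(i_meas, i_powder):
        """Harris texture coefficient TC(hkl) for each reflection:
        TC = (I/I0) / mean(I/I0); TC >> 1 indicates preferred orientation."""
        r = np.asarray(i_meas, dtype=float) / np.asarray(i_powder, dtype=float)
        return r / r.mean()

    # Invented intensities for three reflections of a textured OAD film
    print(texture_coefficients(i_meas=[950, 120, 80], i_powder=[400, 300, 250]))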
In this section we deal with the fundamentals of OAD by discussing them from an atomistic point of view. Rather than a critical enumeration of previously reported models, we have focused this discussion on a set of new concepts that provide an updated conceptual framework for understanding the growth process by e-beam evaporation and MS at oblique angles. For a better assessment, these concepts have been explained using classical models which, relying mainly on geometrical considerations, have been previously proposed to describe the basic growth mechanisms of this type of film. In this regard, numerous works and review papers have already dealt with both the phenomenology and the key theoretical issues involved in the OAD of thin films by different deposition techniques. The terms 'simulations' and 'experiments', which were intentionally included in the title of this section, underline the importance of a combined approach when dealing with complex atomistic phenomena such as those involved in the OAD of thin films. From a fundamental perspective, a key model in this section is the SZM that was already mentioned in Sections 2 and 3. Despite its phenomenological basis, it provides valuable information on the competition between surface shadowing mechanisms and thermally activated diffusion, which is useful for introducing simplified assumptions in growth models under various conditions. This section is organized into two well differentiated parts. In the first of these we address the problem of e-beam evaporation and consider the deposition of particles through a purely ballistic model; i.e., we assume that there is no significant scattering of particles in the gaseous phase during their flight from the source to the substrate, and that the shadowing mechanism is the predominant nanostructuration process. In the second we explicitly address MS deposition, in which scattering interactions in the gaseous/plasma phase are considered so as to understand how they may affect the microstructure of the film. When dominated by surface shadowing mechanisms, the aggregation of vapor particles onto a surface is a complex, non-local phenomenon. In the literature, there have been many attempts to analyze the growth mechanism by means of pure geometrical considerations; i.e., by assuming that vapor particles arrive at the film surface along a single angular direction. Continuum approaches, which are based on the fact that the geometrical features of the film are much larger than the typical size of an atom, have also been explored. For instance, Poxson et al. developed an analytic model that takes into account geometrical factors as well as surface diffusion. This model accurately predicted the porosity and deposition rate of thin films using a single input parameter related to the cross-sectional area of the nanocolumns, the volume of material and the thickness of the film. Moreover, in Ref.
, an analytical semi-empirical model was presented to quantitatively describe the aggregation of columnar structures by means of a single parameter dubbed the fan angle. This material-dependent quantity can be experimentally obtained by performing deposition at normal incidence on an imprinted groove seeded substrate, and then measuring the increase in column diameter with film thickness. This model was tested under various conditions, which returned good results and an accurate prediction of the relation between the incident angle of the deposition flux and the tilt angle of the columns for several materials. Semi-empirical or analytical approaches have provided relevant information regarding film nanostructuration mechanisms; however, molecular dynamics (MD) and MC methods have provided further insights into the growth dynamics from atomistic and fundamental points of view. The MD approach considers the incorporation of single species onto a film one by one, describing in detail the trajectory of each particle by means of effective particle–surface interaction potentials. Unfortunately, given the computational power presently available, this procedure only allows simulations over time scales in the order of microseconds, even with hyperdynamic techniques. Since real experiments usually involve periods of minutes or even longer, this constraint represents a clear disadvantage when comparing simulations to experimental data. In this way, two-dimensional MD simulations carried out with the intent of investigating the role of substrate temperature, the kinetic energy of deposition particles and the angle of incidence on the film morphology predict that increasing substrate temperature and incident kinetic energy should inhibit the formation of voids within the film and promote the formation of a smooth and dense surface. Moreover, it was also found that increasing angles of incidence promote the appearance of tilted, aligned voids that ultimately result in the development of columnar nanostructures. In contrast to MD techniques, MC models approach the problem from a different perspective by allowing the analysis over longer time and space scales. In this case, MD simulations are employed to describe the efficiency of different single-atom processes using probabilities, which are then subsequently put together. Although this strategy accelerates the simulation time by some orders of magnitude, ad-atom diffusion (except in the case of athermal processes) must be excluded from the calculations to obtain a realistic simulation of the growth of a thick film. This means that MC simulations of the deposition process are suited to conditions within Zone I of the SZM, which is where the preparation of the majority of evaporated OAD thin films takes place. Nevertheless, since thermal activation may also be involved in thin film growth, and may have a certain influence on the nanostructural evolution of the films, this activation has been explicitly considered in some MC simulations of up to a few hundred monolayers of material. For example, a three-dimensional atomistic simulation of film deposition, which included the relevant thermally activated processes, was developed to explain the growth of an aluminum thin film onto trenches. In another work, Yang et al. employed a two-step simulation wherein arriving species are first placed at the landing location point, with a kinetic MC then describing their subsequent diffusion. Wang and Clancy included the dependence of atom sticking probabilities on temperature to describe the deposition process, whereas Karabacak et al. considered the ad-atom thermally activated mobility by introducing a fixed number of ad-atom diffusion jumps onto the surface following deposition.
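To make this type of lattice simulation more tangible, the following self-contained sketch implements the simplest possible two-dimensional ballistic model: particles travel along a fixed oblique direction and stick at the last empty cell they visit before touching the film, so that shadowing, and hence tilted, void-rich columns, emerge naturally. The code and all parameters are our own illustrative choices, not the implementations of the cited works.

    import numpy as np

    rng = np.random.default_rng(0)

    WIDTH, HEIGHT = 200, 300          # lattice size (illustrative)
    ALPHA = np.deg2rad(70.0)          # deposition angle from the substrate normal (assumed)
    N_PARTICLES = 20000

    grid = np.zeros((HEIGHT, WIDTH), dtype=bool)   # grid[y, x]; y = 0 is the substrate
    top = 0                                        # highest occupied row so far

    def deposit_one():
        global top
        # Launch a particle just above the film and advance it in small steps along
        # the oblique direction; it sticks at the last empty cell visited before
        # touching an occupied cell or the substrate.
        x = rng.uniform(0.0, WIDTH)
        y = float(min(HEIGHT - 1, top + 2))
        dx, dy = 0.5 * np.sin(ALPHA), -0.5 * np.cos(ALPHA)
        last_empty = None
        while y >= 0.0:
            cell = (int(y), int(x) % WIDTH)
            if grid[cell]:
                break
            last_empty = cell
            x, y = x + dx, y + dy
        if last_empty is not None:
            grid[last_empty] = True
            top = max(top, last_empty[0])
        # (a fixed number of post-deposition diffusion hops could be added here,
        #  in the spirit of the approach of Karabacak et al.)

    for _ in range(N_PARTICLES):
        deposit_one()

    print("filled fraction of the simulation box:", grid.mean())

Plotting the resulting boolean grid reproduces the familiar picture of columns tilted toward the incoming flux and separated by elongated voids.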
The procedure hitherto described represents a very simple approach to simulating thin film growth under general conditions, a quite complex process in which other mechanisms may also be present. A brief summary of these additional issues is as follows: (i) incoming vapor species may interact with the surface and not follow a straight trajectory, thus other processes resulting from the proximity of the vapor species to the surface should be introduced; as will be described later in this section, this is straightforwardly connected to so-called surface trapping mechanisms. (ii) Although most OAD films analyzed had been synthesized under conditions pertaining to Zone I of the SZM, thermally activated processes may also influence the film nanostructure to some extent. (iii) Vapor species arriving at a landing location may carry enough energy to move or induce additional displacements in the material's network; as we will see later, this is relevant in MS depositions, where the vapor species may possess hyperthermal energies. (iv) When the plasma interacts with the film, there are numerous energetic species that may affect the film nanostructuration, e.g., plasma ions or neutral species in excited states. (v) High growth temperatures may promote the crystallization of the film and the appearance of surface potentials that favor atomic displacements along preferential directions/planes of the network. The full analysis of these processes is an active research area, and the study of their combined effect on the film nanostructuration is an open field of investigation. In the following sections we discuss some of the previous mechanisms by considering the results of some fundamental experiments carried out under simplified conditions to highlight the influence of a particular process. The trapping probability concept should not be confused with the sticking probability, which accounts for the overall probability of a particle to be deposited onto a surface regardless of its particular location. Furthermore, most studies into particle sticking on surfaces have only considered a perpendicular incidence and therefore only accounted for head-on sticking processes. Numerous works in the literature have dealt with the fundamentals of MS deposition at normal incidence. From this, it is known that classical MS depositions, in which the growth surface is parallel to the target, usually yield dense and compact films, thanks mostly to the high energy of the deposition particles and the impingement of plasma ions during growth. In contrast, when deposition is carried out in an OAD configuration, other processes may play important roles in the development of columnar and porous nanostructures. The main challenges facing the MS deposition of thin films center around the control over the chemical composition of the layers, the deposition rate and the film nanostructure. There are many excellent reviews dealing with the first two issues when working under a normal configuration, but the third has only been scarcely addressed, even under simplified conditions. In this section, we will focus on some of
In this section, we will focus on some of the mechanistic aspects that are important to account for in the MS-OAD of thin films in connection with the development of a columnar nanostructure. For this analysis, we have avoided the use of complex models for the transport of sputtered particles inside the plasma, and have instead employed a simplified approach based on effective thermalizing collision (ETC) theory. This has already provided numerous results and straightforwardly applicable mathematical formulae pertaining to the deposition rate and final microstructure of MS thin films. In the following subsection, we analyze the main differences between evaporation and MS in terms of atomistic processes. Following this, we briefly describe the mechanism of sputtering and explain the transport of sputtered particles by means of ETC theory. This theory will then be employed to deduce a formula that describes the deposition rate at oblique angles. Finally, the influence of the deposition conditions on the films' morphology is described.

When comparing evaporation and MS techniques, the following key differences become apparent:

(i) Plasma-generated species in contact with the film during growth: in MS-OAD, the plasma contains numerous energetic species, such as positive or negative ions or highly reactive species, that may impinge on the film during growth and affect its nanostructure and chemical composition.

(ii) Size of the material source: under typical e-beam evaporation conditions, the material is sublimated from pellets situated very far away from the film. In MS deposition, the size of the source is of the same order of magnitude as the target/film distance. Thus, the deposition particles stem from a wide-area racetrack, meaning that the deposition angle is not fixed but rather varies over a relatively large interval.

(iii) Kinetic energy of vapor atoms: in MS, the mean kinetic energy of sputtered particles is on the order of 5–10 eV, whereas under typical evaporation conditions it is on the order of 0.2–0.3 eV. Since the typical energy threshold required to mobilize atoms deposited on the surface is ∼5 eV, the impingement of vapor atoms onto the film may cause the rearrangement of already deposited species.

(iv) Collisional processes in the vapor phase: the working pressures in MS are much higher than in typical e-beam evaporations, where the mean free path of evaporated species is usually greater than the source/substrate distance. In contrast, the pressure in MS can be varied within a relatively broad interval, meaning that sputtered particles can experience a large number of collisions before their deposition. These collisions have a direct impact on the kinetic energy and momentum distribution of the sputtered particles when they eventually reach the film surface.

All these differences are of great relevance for the control of the fundamental atomistic processes involved in the growth of a thin film, and they make the previously introduced surface trapping probability and angular broadening concepts insufficient to describe the nanostructural development of MS-OAD thin films. Even if the sputtered atoms leave the target with energies on the order of 5–10 eV and a high preferential directionality, collisional processes with neutral species of the plasma gas may drastically alter both the energy distribution function and the directionality of the particles when they reach the substrate.
Sputtered particles arriving at the film surface can be broadly separated into three categories depending upon their collisional transport: particles that have not collided with any gas atoms and arrive at the film surface with their original kinetic energy and direction; those that have experienced a large number of collisions with background plasma/gas atoms and therefore possess low energy and an isotropic momentum distribution; and, finally, those that have undergone several collisions but still possess significant kinetic energy and some preferential directionality. This transport has been thoroughly studied in the literature by means of MC models of gas-phase dynamics using different elastic scattering cross-sections to determine the energy and momentum transfer in each collision. However, the complexity of the mechanisms involved makes it difficult to find analytical relations between the quantities of interest and experimentally controllable parameters.

With the aim of simplifying the description and deducing general analytic relations, the effective thermalizing collision approximation has been successfully applied to describe the collisional transport of sputtered particles in a plasma. This theory introduces the concept of an effective thermalizing collision: an effective scattering event between a sputtered atom and gaseous species that results in the former losing its original kinetic energy and initiating a random thermal motion in the gas. As we will see next, this enables the deduction of simple analytical equations that relate the main fundamental quantities, though the connection between the actual collisional quantities in the plasma/gas and this effective mechanism remains the main issue for the practical application of these ideas. A summary of the main concepts and approximations utilized within this theory to assess the division between ballistic and thermalized species is presented next; interested readers may find a more detailed description in Refs. .

With regard to the OAD of thin films, the assessment of the partition that exists between ballistic and thermalized sputtered atoms is of the utmost importance, as the former contribute to the film's nanostructuration through shadowing effects and other hyperthermal phenomena responsible for, among other things, the development of nanocolumns or specific textures. Moreover, in the OAD configuration, basic deposition quantities such as the deposition rate or the final composition and microstructure of complex thin film materials will be drastically affected by the thermalization degree of the sputtered particles. Given its importance to any MS-OAD process, we specifically analyze the relation between the thermalization degree and the deposition rate in the next section.
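As a rough, hedged illustration of the ballistic/thermalized partition, the sketch below estimates the fraction of sputtered atoms that reach the substrate without being thermalized, assuming an exponential attenuation with a characteristic thermalization length that scales inversely with pressure. The reference length used here is a placeholder assumption, not a value taken from the ETC literature.

```python
import numpy as np

# Hedged, order-of-magnitude sketch of the ballistic fraction of sputtered atoms.
# Assumption: survival probability over a target-substrate distance L decays as
# exp(-L / lambda_th), with lambda_th inversely proportional to the working pressure.
LAMBDA_TH_AT_1PA = 0.15  # m, hypothetical thermalization length at 1 Pa (placeholder)

def ballistic_fraction(pressure_pa, distance_m):
    lambda_th = LAMBDA_TH_AT_1PA / pressure_pa
    return np.exp(-distance_m / lambda_th)

if __name__ == "__main__":
    L = 0.08  # m, a typical laboratory target-substrate distance (assumed)
    for p in (0.1, 0.5, 1.0, 2.0):  # Pa
        print(f"p = {p:4.1f} Pa  ->  ballistic fraction ~ {ballistic_fraction(p, L):.2f}")
```

Whatever the exact functional form, this kind of scaling captures why increasing the pressure–distance product progressively suppresses the directional (ballistic) component that drives shadowing at oblique angles.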
An example showing the importance of the thermalization degree of sputtered particles for the control of composition and microstructure in complex MS-OAD thin films can be found in the recent work by Gil-Rostra et al. on the MS-OAD of WxSiyOz electrochromic thin films, in which a net W enrichment of the deposited film with respect to the target and a continuous variation in the tilt angle of the nanocolumns were attributed to the more effective scattering of Si than of W by Ar atoms in the plasma gas. That is, W has a much higher atomic mass than Si, and so its energy and momentum are less affected by binary collisions with the much lighter Ar atoms. Silicon, on the other hand, has an atomic mass similar to that of Ar, and so is more effectively scattered and becomes distributed in all directions within the deposition chamber. Consequently, the films become enriched in tungsten and exhibit a microstructure in which, for any given deposition geometry, the tilt angle of the nanocolumns increases with W content.

Fig. 4.6 shows the formation of these different microstructures on a phase diagram, as well as a series of MC simulations used to visualize their main columnar and pore features. A very good concordance in shape was revealed between the simulated structures and the experimental films. In addition, the MC simulations provided clues to understanding the formation of the different thin film microstructures by assuming a different thermalization degree for the sputtered gold atoms. This quantity identified different surface shadowing processes during the early stages of thin film formation and subsequent growth. A detailed description of the mechanistic effects under each working condition is reported in Ref. .

In addition to the concepts already discussed in this section, a systematic analysis of how the impingement of plasma ions affects the nanostructuration of a film grown at oblique angles is still notably absent. Nevertheless, plasma ions have been widely used for numerous purposes, such as the removal of defects, the densification of materials, the improvement of mechanical properties, the smoothing of surfaces and the crystallization of materials. When dealing with porous materials, however, their energy must be kept low enough to influence the film nanostructuration without producing full densification. Yet it is only recently that this effect has been analyzed by altering the ion energy using different electromagnetic waveforms so as to change the plasma potential.
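The mass argument invoked above for W and Si sputtered in Ar, and more generally the efficiency with which low-energy species couple energy to the growing film, can be made semi-quantitative with the textbook head-on energy-transfer factor for a binary elastic collision, 4·m1·m2/(m1 + m2)². This is standard collision kinematics, not a formula reproduced from the cited works.

```python
# Maximum (head-on) fractional energy transfer in a binary elastic collision,
# gamma = 4*m1*m2 / (m1 + m2)**2, used here only as a rough indicator of how
# efficiently Ar gas atoms degrade the kinetic energy of different sputtered species.

ATOMIC_MASS = {"Ar": 39.95, "Si": 28.09, "W": 183.84}

def energy_transfer_factor(m1, m2):
    return 4.0 * m1 * m2 / (m1 + m2) ** 2

for species in ("Si", "W"):
    g = energy_transfer_factor(ATOMIC_MASS[species], ATOMIC_MASS["Ar"])
    print(f"{species}-Ar head-on collision: up to {100 * g:.0f}% of the energy transferred")
```

The output (roughly 97% for Si–Ar versus about 59% for W–Ar per head-on collision) is consistent with the picture of silicon being thermalized much more effectively than tungsten under the same plasma conditions.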
In Ref. , it is demonstrated that the introduction of a shallow, low-energy ion impingement on the film surface during growth affects the tilt angle of the columns and the overall film porosity. Even though this effect has only been reported for TiO2 thin films, the result suggests that the modulation of low-range ion energies is necessary to tune the morphological features of the columnar structures in plasma-assisted depositions.

The incorporation of OAD thin films in a large variety of advanced devices is a clear indication of their maturity and of their excellent prospects for successful implementation in a wide range of technological fields. Although it is impossible in the space of this review to cover all of the advanced applications and devices that rely on OAD thin films, the following sections present a review of a selected number of works published in the last seven to ten years that, in our opinion, best illustrate the possibilities of this type of material and technology. We have grouped this selection into the following broad subjects: transparent and conductive oxides, energy harvesting, sensors and actuators, optic and photonic devices, wetting and microfluidics, and biomaterials and biosensors.

Of the thin film materials used for the fabrication of photonic or electronic devices, transparent conducting oxides (TCOs) occupy a central position, being indispensable for the fabrication of solar cells, electroluminescent and electrochromic systems and electrochemical devices. Thanks to its low resistivity, high transmittance in the visible range and low deposition temperature, tin-doped indium oxide (ITO) is the most popular TCO used by industry. Indeed, it is only the need to replace the expensive indium component that has prompted the search for alternative TCO formulations, which are briefly summarized at the end of this section. The fabrication of highly porous and/or sculptured ITO thin films has been attempted by e-beam OAD from ITO pellets, both under high-vacuum conditions and in a carrier gas flux.
Fig. 5.1 shows some examples of the thin films and the various microstructures that can be obtained by controlling the deposition conditions. This rich variety of potential morphologies is made possible by strict control of the deposition parameters that are known to have a direct impact on the growth mechanism. The morphologies obtained can be divided into three groups: standard tilted nanocolumnar layers fabricated by OAD at low pressure, nanostructures with a tapered nanocolumn profile over a critical length obtained by deposition with a carrier nitrogen flux, and nanowhiskers and hierarchical, multi-branched tree-shaped nanostructures obtained by the sequential application of VLS-OAD. Selected applications relying on these nanostructured TCO films will be discussed in the sections that follow.

Owing to their open microstructure, controlling the electrical properties of ITO-OAD films has required the development of specific methodologies and the use of relatively sophisticated techniques. These have included axial resistivity measurements with a cross-bridge Kelvin system, terahertz time-domain spectroscopy, and combinations of experimental and theoretical studies. These and other investigations relating the OAD microstructure of ITO nanopillars to the deposition parameters have proved that it is possible to achieve optical transmittances and electrical conductivities comparable to those of compact ITO thin films.

The good electrical performance, high porosity and nanocolumnar structure of ITO-OAD films have fostered their incorporation as active components into liquid crystal displays (LCDs) for the alignment of calamitic liquid crystals. Typically, an LCD integrates different layers of transparent electrodes, LC alignment layers and birefringent compensators, functions that can be provided by just one OAD-ITO nanocolumnar film. Meanwhile, coatings with a graded refractive index, created by combining dense (n = 2.10) and porous (n = 1.33) stacked layers of ITO through changes in the zenithal angle of deposition, have been used for photonic and combined photonic–electronic applications. For example, different color filters designed and fabricated using this approach have been utilized as the bottom electrode of an LCD system. Other photonic components, such as Bragg reflectors, broadband antireflective and conductive materials, and transparent electrodes and antireflection contacts in organic light-emitting devices, have also incorporated similar photonic and conductive layers.

OAD ITO layers have been used as transparent conducting and antireflective coatings in organic photovoltaic solar cells, Si-based solar cells and GaAs photovoltaic devices. The photon absorption capacity of a solar cell is one of the main factors contributing to its global efficiency. Thus, in order to reduce the light reflection coefficient and increase the amount of transmitted light, antireflection (AR) coatings are usually incorporated on the front side of these devices. Because of their high porosity, OAD thin films and multilayer structures have an inherently low refractive index, making them suitable for AR-layer applications. An added benefit of ITO OAD films used for this purpose resides in their high electrical conductivity. From the point of view of cell manufacturing, another advantage is the possibility of using a single material as an AR coating rather than the multilayer structure of alternating layers of different refractive index that is typically used.
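The connection between the porosity reached at high deposition angles and the low refractive indexes quoted above (e.g., n ≈ 1.33 for porous ITO versus ≈ 2.10 for the dense material) can be rationalized with an effective-medium estimate. The sketch below uses the isotropic two-phase Bruggeman mixing rule as a hedged first approximation; it is not the model employed in the cited works, and real OAD films are in fact anisotropic.

```python
import numpy as np

# Hedged sketch: Bruggeman effective-medium estimate of the refractive index of a
# porous OAD layer treated as an isotropic mixture of bulk material and voids.

def bruggeman_index(n_bulk, porosity, n_void=1.0):
    e1, e2 = n_bulk**2, n_void**2                 # permittivities of solid and void
    f1 = 1.0 - porosity                           # solid volume fraction
    b = (3*f1 - 1)*e1 + (3*(1 - f1) - 1)*e2       # quadratic closed-form solution
    e_eff = (b + np.sqrt(b**2 + 8*e1*e2)) / 4
    return np.sqrt(e_eff)

if __name__ == "__main__":
    for porosity in (0.0, 0.3, 0.5, 0.7):
        print(f"porosity {porosity:.1f} -> n_eff ~ {bruggeman_index(2.10, porosity):.2f}")
```

With an assumed bulk index of 2.10, a porosity around 70% already brings the effective index close to 1.3, illustrating why highly porous OAD layers behave as low-index AR materials.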
Going beyond the use of conventional ITO OAD films, Yu et al. have proposed a new AR concept based on the peculiar ITO nanocolumns presented in Fig. 5.1. The characteristic tapered and bent microstructure of these nanocolumns works as a collective graded-index layer offering outstanding omnidirectional and broadband antireflection properties for both s- and p-polarizations. The benefit to a GaAs solar device integrating one of these ITO films, relative to not using an AR layer, is as much as 28%, with nearly 42% of the enhancement in the photocurrent generated through the transparent gap of the window layer.

Nanocolumnar ITO films have also been used as high-surface-area 3D nanoelectrodes in organic solar cells. Fig. 5.4 depicts a device consisting of an OAD-ITO electrode electrochemically modified with nanofibrous PEDOT:PSS (poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate)) to produce a cobweb-like structure, which is then infiltrated by spin-coating with a P3HT:PCBM photoactive layer. The cross-sectional view given in this figure provides evidence that the ITO layer is effectively infiltrated by the photoactive material, a requisite for good performance in this type of solar cell device. The cell is electrically contacted with an evaporated aluminum layer on top, which integrates the insulating electrode formed by OAD-ITO nanopillars covered with OAD-SiO2 caps. This solar cell configuration exhibits better performance than equivalent devices based on compact ITO films. In a very recent collaborative work between Pohang University and the Rensselaer Polytechnic Institute, ITO nanohelix arrays were used as three-dimensional porous electrodes, antireflection coatings and light-scattering layers in bulk-heterojunction solar cells. These ITO arrays provided an enhancement in both the total light absorption and the charge transported from the photoactive layer to the electrode; the combination of these effects resulted in a 10% increase in short-circuit current density and a substantial increase in the power-conversion efficiency.

As a result of their high surface area and outstanding optical/electrical properties, ITO nanopillar layers have been extensively used as biosensors to detect biomolecules immobilized within their pores by means of different spectro-electrochemical methods. One example of this is the rapid adsorption of microperoxidase within the meso- and micro-pores of OAD ITO layers, which has been used for the detection of a series of adsorbed redox biomolecules that were electrochemically monitored by derivative cyclic voltabsorptometry.
Using another sensing approach, Byu et al. demonstrated that the performance of surface plasmon resonance gold biosensors can be enhanced by adding an OAD-ITO layer to increase the surface area available for the target molecules. These decorated substrates present a much higher sensitivity than bare gold films in the analysis of ethanol–water mixtures. The high surface area of ITO-OAD films has also proven decisive in the development of resistive gas sensors for NO2, in which a maximum sensitivity of 50 ppb and a short response time have been achieved with highly porous films prepared using high zenithal angles of deposition. Further improvements were obtained by incorporating a bottom layer of Fe2O3 or SiO2 as seeds to favor the growth of interconnected ITO nanorods, with their inner channels being highly accessible to the surrounding medium.

The high cost and increasing demand of indium over the last few years have prompted the development of TCO films with other compositions; this is also true of TCO thin films prepared under OAD conditions. Al-doped zinc oxide (AZO) thin films prepared by either e-beam or MS OAD have been successfully used in solar cells and LED devices. The synthesis of Sb-doped tin oxide and Nb-doped titanium oxide (TNO) films with a strictly controlled composition, microstructure and porosity, achieving outstanding optical transparency and electrical conductivity, has also been reported recently. In most of these preparations, MS has been the technique of choice for the simple reason that it is better suited to depositions over large areas. Some examples in the literature include the MS-OAD of Ga-doped ZnO and the development of a co-sputtering method for the fabrication of high and low refractive index layers of TNO and AZO, respectively, for use in the preparation of a transparent and conductive distributed Bragg reflector. In this structure, the eight-period stack of AZO/TNO layers produces a reflectivity of ∼90% in a spectral region centered at 550 nm, and a resistivity of less than 2 × 10−3 Ω cm. Because of this outstanding performance, this system has been proposed as a mirror and charge-injection layer to substitute the existing dielectric Bragg reflectors in vertical-cavity surface-emitting lasers.

The last few years have borne witness to a tremendous expansion in the development of efficient renewable energy sources and in the implementation of new, advanced energy storage and saving methods. Among the solutions proposed for the development of these new methods, we have seen that OAD thin films play a special role due to their unique properties. In this section we review some of the recent advances in the development of OAD thin films for renewable energy applications, with a special emphasis on electricity generation by photovoltaic solar cells, solar-driven photocatalytic hydrogen production, fuel cells, electric energy storage with lithium-ion rechargeable batteries, and electrochromic coatings. The emerging topic of piezoelectric generators based on OAD thin films will also be introduced.

OAD nanostructures represent versatile candidates for fabricating the many diverse elements of a photovoltaic solar cell device. Following a simplified description, OAD thin films have been used as: transparent conducting layers, active semiconductor materials for electron–hole production upon absorption of photons, electron- or hole-conducting/blocking layers, sculptured counter electrodes, and antireflection coatings.
In Section 5.1 we reviewed the latest developments regarding the OAD of ITO and other TCOs used as transparent conducting materials and AR coatings in different solar cell architectures. In this section, we summarize the advances that have been made in the fabrication, design and optimization of other OAD nanocolumnar thin films utilized in this type of device/component.

Layers of OAD nanocolumnar materials other than TCOs have been used as AR coatings to optimize the efficiency of various types of solar cell device. Their use is motivated by expected improvements in photon harvesting, carrier separation or carrier collection efficiencies. Photon harvesting in the photoactive solar cell components increases substantially if light is scattered or confined within the system, and so the implementation of an AR coating at the front side of a solar cell is a well-known confinement method that enhances light absorption by the active material. Since sunlight has a broad spectrum and its angle of incidence changes throughout the day, high-performance AR coatings must be efficient over the entire solar spectrum and a wide range of incident angles. A good approach to avoiding Fresnel reflections is to insert an interface with a graded refractive index between the two media. Schubert et al. have shown that the interference coatings formed by stacking OAD films with different refractive indexes may provide the desired broadband and omnidirectional AR characteristics needed for photovoltaic cell applications.

There have been two general approaches adopted for the fabrication of AR OAD structures for solar cell applications: the multilayer stacking of low and high refractive index materials, and the combination of layers of the same material but with different densities. In the case of the former, Schubert et al. have reported the lowest refractive index material known to date, with an n value of ∼1.05 for SiO2. They also succeeded in fabricating a graded-index broadband and omnidirectional AR coating consisting of alternating layers of TiO2 and SiO2 with refractive indexes ranging from 2.7 to 1.3 and from 1.45 to 1.05, respectively.
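To make the graded-index antireflection idea concrete, the sketch below computes the normal-incidence reflectance of a small multilayer with the standard characteristic- (transfer-) matrix method. The layer indexes and thicknesses are illustrative placeholders for a graded quarter-wave stack, not the actual TiO2/SiO2 design reported above.

```python
import numpy as np

# Normal-incidence reflectance of a thin-film stack via the characteristic-matrix
# (transfer-matrix) method. All layer parameters below are hypothetical.

def reflectance(n_layers, d_layers, wavelength, n_in=1.0, n_sub=2.1):
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):        # first layer = adjacent to incident medium
        delta = 2 * np.pi * n * d / wavelength
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

if __name__ == "__main__":
    wl = 550.0                                     # nm, design wavelength
    indexes = [1.1, 1.3, 1.6, 1.9]                 # low index near air, high near substrate
    thicknesses = [wl / (4 * n) for n in indexes]  # quarter-wave optical thicknesses
    print("bare substrate R :", round(reflectance([], [], wl), 3))
    print("graded coating R :", round(reflectance(indexes, thicknesses, wl), 4))
```

At the design wavelength, these four hypothetical quarter-wave layers bring the reflectance of an n ≈ 2.1 substrate from roughly 13% down to the 0.1% range, which is the kind of Fresnel-reflection suppression that the graded OAD stacks described above achieve in a broadband and omnidirectional fashion.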
As reported in Fig. 5.5, such a coating effectively reduces the Fresnel reflections when deposited on an AlN substrate. A typical example of an AR OAD coating used to enhance photon harvesting is one formed by stacked TiO2 and SiO2 layers, which in some cases are prepared through the successive sputtering of the two materials. The extension of this procedure to large-area substrates, including deposition on conformable polymers, has also been reported. Other reported attempts include the fabrication of single-material AR coatings by stacking layers of different refractive index made of ITO, MgF2, SiO2, TiO2, alumina, ZnS or ZnO. Composite and compositionally complex thin film materials have also been used to increase the scattering of light in solar cell devices. Some examples of this approach include: antireflection and self-cleaning coatings for III–V multi-junction solar cells made of OAD layers of gold grown on nanocone arrays fabricated by plasma etching of borosilicate glass, magnetron-sputtered TixSn1−xO2 films prepared on top of a seed layer of polystyrene spheres, HfO2 films combined with amorphous porous silicon with closed porosity, and sculptured TiO2 films.

The implementation of OAD thin films and nanostructures as active photon-absorption components in solar cells is another relevant field of interest for these materials. One approach in this context consists of exploiting the high surface area of columnar OAD films to increase the interface area between p- and n-type semiconductors or, depending on the type of solar cell, between a semiconductor and an electrolyte. In order to reduce the electron–hole recombination probability, OAD nanostructures are also used to diminish the transport distance of the minority carriers from the absorbing semiconductor to the charge-collection electrode. Furthermore, the unique architecture of OAD films at the nanometer scale provides different ways to increase both the interface area and the carrier lifetime in solar cells. Various cases will be presented next to illustrate these possibilities in dye- and quantum-dot-sensitized solar cells, organic and hybrid photovoltaic solar cells and Si-based solar cells.

Titanium dioxide nanomaterials have been extensively used as a wide-band-gap semiconductor in dye-sensitized solar cells (DSSCs). In typical cells of this type, anatase thin films are made from agglomerated nanoparticles, nanotubes or nanorods, while light-absorbing dye molecules are adsorbed onto the oxide surface.
A modified OAD methodology named GLADOX has been reported by Kondo et al. It combines the sputtering of Ti at oblique angles with a subsequent anodization to produce highly porous, hierarchical TiO2 nanostructures. Following annealing to induce crystallization to anatase, these nanoporous photoanodes were incorporated in a DSSC, where they yielded an overall performance comparable to that of a nanoparticulate reference cell of the same thickness. Improved charge transport and the potential to increase the photoelectrode thickness without any significant detriment to the conduction capacity of photogenerated electrons are just some of the advantages cited for this type of nanobrush system.

Generally speaking, the semiconductors used in photovoltaic devices possess a small bandgap and can potentially absorb all photons with energies above this minimum threshold. Unfortunately, owing to the small energy difference between photo-generated electrons and holes in these systems, small-bandgap semiconductors produce lower potentials than wider-bandgap semiconductors. Various approaches have been tried to combine the benefits of both large- and small-bandgap semiconductors in order to expand the range of absorbed wavelengths and use each absorbed photon to its full potential. One such attempt consisted of combining semiconductors and quantum dots (QDs) of progressively decreasing bandgap energies to promote the absorption of radiation across the entire visible spectrum. These QDs were made from materials such as CdS, CdSe and CdTe that absorb strongly in the visible region, and which can inject photoexcited electrons into wide-bandgap materials such as TiO2 and ZnO. Moreover, these QDs display a high extinction coefficient and the capability of tuning their energy absorption band through the control of their size. Utilizing these ideas, Zhang et al. proposed the OAD co-evaporation of TiO2 and CdSe at α ≈ 86° to deposit composite nanorods on ITO. Femtosecond transient absorption spectroscopy analysis of the charge-transfer processes in this system revealed efficient electron transfer from the conduction band of the CdSe QDs to the conduction band of TiO2. This result was attributed to the high interfacial area and a strong electronic coupling between the two materials, and represents a nice example of the possibilities that OAD and co-deposition can bring to engineering compositions and nanostructures at interfaces with strict control over chemical states. Very recently, Schubert et al. reported QD solar cells based on the decoration of three-dimensional TiO2 nanohelixes with CdSe QDs, as illustrated in Fig. 5.7.
The SEM micrograph in Fig. 5.7 shows a characteristic cross section of the TiO2 nanohelixes grown on FTO, clearly showing how their large and open pores can accommodate both the QDs and the electrolyte. As demonstrated by HAADF-STEM, the distribution of QDs along the TiO2 nanohelixes is quite homogeneous and densely packed. The enhanced electron-transport properties reported for this system have been related to the high crystallinity of the nanohelixes, while their three-dimensional geometry seems to enhance light-scattering effects and therefore the cell's absorption capability and efficiency. The results of this work also suggest that one-dimensional nanorods and nanotubes with diameters on the order of a given light wavelength may act as guides for this wavelength rather than as scattering centers, as is the case with nanoparticles in conventional electrodes. This solar cell based on TiO2 nanohelix arrays exhibits a two-fold improvement in solar energy conversion efficiency with respect to QD solar cells based on a nanoparticle-layer electrode.

So far, TiO2 has been the most common material used in the fabrication of hybrid solar cells consisting of an inorganic and an organic component. Here, the TiO2 acts as a good electron conductor and is combined with conjugated polymers such as poly(3-hexylthiophene) (P3HT) or with small-molecule p-channel materials that act as hole conductors. OAD TiO2 thin films have been extensively used for the fabrication of different types of hybrid solar cells. The intended function of the OAD oxide is to provide a well-defined intercolumnar space in which the small size of the infiltrated P3HT domains can limit the distance travelled by photo-generated electron–hole pairs until the electrons are captured by the TiO2. Relying on the same confinement principle, various works have reported the use of other OAD thin film materials in the fabrication of hybrid polymer and small-molecule solar cells. Examples include hybrid solar cells incorporating the semiconductor InN and Pb-phthalocyanine, or Si/P3HT solar cells in which the Si nanowires are fabricated at low temperatures by hot-wire chemical vapor deposition at oblique angles or by typical OAD procedures.

The principle of operation of a fully organic photovoltaic (OPV) solar cell is similar to that of a hybrid solar cell in that solar light is absorbed by a photoactive material, thereby creating electron–hole pairs with a given binding energy that must be overcome for their effective separation. However, the short exciton diffusion lengths characteristic of organic semiconductors tend to limit the thickness of the device and, consequently, the light absorption capacity of the cell. Moreover, for efficient exciton separation and carrier collection at the semiconductors' heterojunction, the two materials must possess the right offset between the highest occupied molecular orbital and the lowest unoccupied molecular orbital. The microstructure needed to successfully meet these operating restrictions is one in which the electron- and hole-conducting materials are distributed in the form of small, interpenetrating nano-domains with a size smaller than the exciton diffusion length. The OAD method is well suited to these morphological requirements and has been widely used to grow organic nanomaterials for OPV devices. The majority of these OAD-OPV devices incorporate nanostructures formed by the sublimation of small-molecule materials, although some works have also reported the use of conjugated polymers or composites formed by small molecules and a conjugated polymer.
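The exciton-diffusion-length constraint mentioned above can be illustrated with the usual estimate L_D = sqrt(D·τ). The diffusivity and lifetime used below are generic, hypothetical values for a small-molecule organic semiconductor, not data taken from the OPV works reviewed here.

```python
import math

# Hedged estimate of the exciton diffusion length, L_D = sqrt(D * tau).
# D and tau are generic placeholder values for an organic semiconductor.

def exciton_diffusion_length_nm(D_cm2_per_s, tau_s):
    return math.sqrt(D_cm2_per_s * tau_s) * 1e7  # cm -> nm

D = 1e-3    # cm^2/s, assumed exciton diffusivity
tau = 1e-9  # s, assumed exciton lifetime
print(f"L_D ~ {exciton_diffusion_length_nm(D, tau):.0f} nm")
```

The resulting length scale of roughly ten nanometers explains why the donor/acceptor domains, and hence the intercolumnar spaces of the OAD nanostructures, must be kept in this size range.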
Fig. 5.8 shows a simplified view of a nanocolumnar OPV device and the molecular structure of some of its components. The most common donor materials in small-molecule OAD-OPVs are metal phthalocyanines (MPcs) such as CuPc, PbPc and ClAlPc. The most common acceptor in these devices is fullerene, either in its sublimable form (C60) or in its soluble form as [6,6]-phenyl-C61-butyric acid methyl ester (PCBM). In the latter case, the solvent choice and solution concentration critically affect the OPV performance because of difficulties in controlling the domain size and the fragility of the donor/acceptor interface. One of the first examples of an OPV formed through the evaporation of both the donor MPc and the acceptor C60 is presented in Fig. 5.8, wherein the device structure was developed by growing a ClAlPc film at oblique angles on ITO, followed by the evaporation of C60 at normal incidence. The authors demonstrated that the increase in contact area at the ClAlPc/C60 heterojunction interface leads to an increase in efficiency of between 2.0% and 2.8% relative to planar heterojunctions. Equivalent experiments with OAD-CuPc/C60 OPV cells also resulted in efficiency improvements ranging between 1.3% and 1.7%. To better control the shape and size of the organic photoactive nanocolumns, Brett et al. thoroughly investigated the evolution of the morphology and porosity of an OAD-CuPc thin film grown on a pre-seeded ITO substrate. Similarly, Taima et al. recently investigated the use of CuI OAD seeds to pattern the growth of ZnPc nanocolumnar arrays. Optimized ZnPc/C60 bilayer cells fabricated following this approach presented a three-fold higher efficiency than an equivalent planar cell. Additional results on nanopatterning with organic seeds can be found elsewhere. The OAD of C60 as an electron-acceptor material has also been tried in C60/P3HT cells and, in combination with another hole-collecting material, for the manufacture of inverted solar cells. Although the efficiencies achieved in this case were rather low, the values obtained were still two to four times greater than those of an equivalent planar cell fabricated by solution processing.

The use of hydrogen as an energy vector in transportation or for the production of electricity has been proposed as a clean alternative to hydrocarbon fuels. At present, hydrogen is extensively used in chemical synthesis; however, its wider use as an energy vector faces serious scientific, technological and economic challenges regarding its production, storage and final combustion. In the sections that follow we summarize some of the recent advances in the use of OAD thin film materials for applications covering the whole chain of hydrogen technology. Considering the most commonly recurring themes addressed in recent publications on the subject, this review will be divided into three main parts: the solar generation of hydrogen, fuel cells and hydrogen storage.

In the four decades since the seminal work of Fujishima and Honda, in which TiO2 was used as a photo-active UV semiconductor, the energetically efficient photocatalytic splitting of water has remained a dream. Since then, TiO2 has been extensively used in UV-driven photocatalytic applications, with much effort being devoted to shifting its spectral response to the visible part of the solar spectrum or to developing alternative semiconductor compounds that are active in the visible spectrum and stable in aqueous media.
The solar-driven generation of hydrogen generally requires a semiconductor, as well as a metal acting as an electron trapper, electrode or co-catalyst. According to a simplified description of the process, the absorption of a photon by the semiconductor creates an electron–hole pair. The hole then migrates to the surface of the oxide, where it becomes trapped by OH− groups adsorbed at the surface and yields oxygen. Meanwhile, the electron reaches the active metal and forms hydrogen by reducing protons. There are two main approaches to achieving solar-light-induced water splitting: the use of a photo-electrochemical cell (PEC), wherein the semiconductor and the metal are connected through an external circuit, and the loading of a powder-based semiconductor oxide with metal catalytic particles, usually Pt, decorating its surface. To efficiently convert water into molecular hydrogen and oxygen, any practical system must also fulfill the following requirements: the semiconductor must possess a narrow band gap to absorb a significant amount of solar light, it should promote both proton reduction and water oxidation, and it must remain stable in the electrolyte during long irradiation periods. Owing to the difficulties in meeting all these requirements, recent approaches have relied on the simultaneous use of two semiconductor materials that independently promote the oxidation and reduction processes of the water-splitting reaction. In this way, n-type semiconductor photoanodes for oxygen evolution and p-type semiconductor photocathodes for hydrogen evolution are combined within the same system. Aside from the intrinsic semiconductor properties required for the intended application, nanostructured materials with a high surface area are always desirable for the photocatalytic splitting of water. Since OAD semiconductor thin films and nanostructures comply with these requirements, they are ideal candidates for PEC devices and applications.
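For reference, the two half-reactions behind the simplified description given above can be written as follows; this is the standard water-splitting scheme (with its 1.23 V thermodynamic minimum) rather than an equation reproduced from a specific cited work.

```latex
% Oxidation by photogenerated holes at the illuminated semiconductor (photoanode):
2\,\mathrm{H_2O} + 4\,h^{+} \;\longrightarrow\; \mathrm{O_2} + 4\,\mathrm{H^{+}}
% Reduction by photogenerated electrons at the metal co-catalyst or photocathode:
4\,\mathrm{H^{+}} + 4\,e^{-} \;\longrightarrow\; 2\,\mathrm{H_2}
% Overall: 2 H2O -> 2 H2 + O2, requiring at least 1.23 eV per transferred electron.
```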
Mullins et al. have extensively investigated the manufacture of OAD materials for PEC and Li-ion battery electrodes, with a concise review from 2012 provided in Ref. . An outstanding result of these works has been the preparation by OAD of photo-active alpha-Fe2O3 photoanodes capable of collecting light in both the UV and visible regions of the solar spectrum. These authors have also reported the controlled doping of these nanostructures with Ti, Sn or Si by co-evaporation at oblique angles or by LPD-OAD. The same co-evaporation method has also been used for the synthesis of BiVO4 and Co-doped BiVO4, two active semiconductor materials capable of photo-splitting water using visible light. More recently, the same group has explored the OAD deposition of tungsten semicarbide which, when deposited on p-Si substrates, proved to be an efficient photo-active support for Pt nanoparticles.

Among the proposed hydrogen storage methods, the trapping of hydrogen by chemical or physical bonding in suitable materials has provided some of the highest volumetric densities reported to date. The conditions required for practical hydrogen storage applications are a high storage capacity, fast kinetics and a high reversibility of both the hydrogenation and dehydrogenation processes. Additional requirements are a low reaction temperature and operating pressure, easy activation, minimal deterioration of the material and an affordable cost. Solid-state hydrogen storage involves either chemisorption or physisorption, the latter mechanism usually involving van der Waals interactions that require cryogenic temperatures. Chemisorption involves the dissociation of hydrogen molecules and the formation of metal hydrides, which usually requires high temperatures and a catalyst. The main problem usually encountered with these hydride compounds, however, is their high sensitivity to oxygen. In principle, OAD nanostructures are good candidates as hydrogen storage materials because their intercolumnar space allows for a reversible volume expansion during the transformation from metal to hydride and vice versa. Moreover, thanks to their high surface area, nanocolumns enable faster hydrogen adsorption/desorption rates than compact materials. A pioneering work exploring this possibility using OAD nanostructures of LaNi5 was published by Jain et al. in 2001. Following this idea, Zhao et al. investigated the synthesis and performance of Mg nanoblades both with and without vanadium decoration. Wang et al. also applied the OAD method to fabricate Mg nanoblades decorated with Pd. To increase the chemical stability of the system, the Pd/Mg nanoblades were protected with a conformal layer of parylene: a polymeric material that is highly permeable to hydrogen but prevents the passage of other gases. Meanwhile, the resistance of Mg nanorods to oxidation under conditions ranging from room temperature to 350 °C was also demonstrated for MS-OAD magnesium prepared under controlled conditions.

Fuel cells convert chemical energy into electrical current through the chemical reaction of a fuel with oxygen or other oxidizing agents. The main components of the cell are the anode, the electrolyte and the cathode. The type of electrolyte, the fuel, the working temperature and the start-up time are the criteria utilized for the classification of fuel cells. In this section, we summarize some of the recent applications of OAD thin films as components in proton exchange membrane or polymer electrolyte fuel cells (PEMFCs) operated at low temperatures, solid oxide fuel cells operated at high or intermediate temperatures, and direct-methanol fuel cells (DMFCs).
Karabacak et al. focused their investigations on the MS deposition of well-crystallized Pt nanorods to improve their efficiency in the oxygen reduction reaction without a carbon support. To achieve this, they studied the formation of well-isolated, single-crystalline and vertically aligned Pt nanorods formed by dynamic OAD at an oblique angle of 85°, finding that the Pt OAD films exhibit a higher specific activity, a higher electron-transfer rate and a comparable activation energy when compared with conventional Pt/C electrodes. This enhanced performance was attributed to the single-crystalline character, larger crystallite size and dominance of Pt facets in these OAD thin films. Though less common, MS-OAD has also been applied to the fabrication of Pt-doped CeO2 anodes for PEMFCs. In this work, carbon nanotubes (CNTs) were used as both support and template to obtain a specific three-dimensional anode morphology. Oxide layers doped with Pt were then deposited onto the tips of the CNTs by co-sputtering the two components, with the resulting configuration exhibiting satisfactory catalytic activity. One of the challenges in this type of cell, which operates at moderate temperatures, is preventing the poisoning of the Pt catalyst by the carbon monoxide formed during the methanol oxidation reaction. Alternating layers of Pt and Ru nanorods deposited by sputtering in an OAD configuration have been proposed as a catalyst for this type of DMFC cell. The aim of this Pt–Ru configuration is that the CO-poisoned platinum is regenerated through its reaction with oxygen species formed on the ruthenium. As a result, when used in an acidic medium, the Pt/Ru OAD multilayers exhibit an electrocatalytic activity for the methanol electro-oxidation reaction that is higher than that of equivalent monometallic Pt nanorods.

Owing to their flexible design and high energy density, Li-ion batteries have become one of the most popular storage systems used in portable electronics. The basic components of these batteries are the cathode, the anode and the electrolyte. As shown schematically in Fig. 5.9, they operate by extracting Li ions from the cathode and inserting them into the anode during charging. This process is reversed during discharging, when Li ions are extracted from the anode and inserted into the cathode. Typical cathode materials are laminar oxides, spinels and transition metal phosphates, while common anode materials include Si, graphite, carbon and Sn. A standard electrolyte consists of a solid lithium salt in an organic solvent. The nature of the insertion and extraction mechanisms varies from the electrochemical intercalation of layered oxides and graphite to alloy formation with Sn or Si. The electrode's performance is typically quantified in terms of its charge capacity per unit weight, a functional parameter that is directly linked to the electrode's porosity. The superior performance of porous materials stems from the fact that a high electrode/electrolyte interfacial area favors a rapid ion-coupled electron-transfer rate and provides direct access to the bulk and surface atomic layers. Furthermore, nanosized and structurally disordered materials can better accommodate the volume changes and lattice stresses caused by structural and phase transformations during lithiation/delithiation. Films deposited by OAD comply with most of these requirements, and so the technique has been applied to the fabrication of both anodes and cathodes.
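The charge capacity per unit weight mentioned above follows directly from Faraday's law. As a short aside, the sketch below reproduces the commonly quoted theoretical values for a graphite anode (LiC6) and a fully lithiated silicon anode (Li15Si4); the stoichiometries are standard literature values, not results from the works reviewed here.

```python
# Theoretical gravimetric capacity (mAh per gram of host) from Faraday's law:
# capacity = x * F / (3.6 * M_host), with x = moles of Li (electrons) stored per mole of host.

F = 96485.0  # C/mol, Faraday constant

def specific_capacity_mAh_per_g(x_li, molar_mass_host):
    return x_li * F / (3.6 * molar_mass_host)

print("graphite (LiC6)  :", round(specific_capacity_mAh_per_g(1 / 6, 12.011)), "mAh/g")
print("silicon (Li15Si4):", round(specific_capacity_mAh_per_g(15 / 4, 28.086)), "mAh/g")
```

The roughly ten-fold higher theoretical capacity of silicon (≈3580 mAh/g versus ≈372 mAh/g for graphite) is precisely what motivates the Si-based OAD anodes discussed next, together with the large volume changes that their porous microstructure must accommodate.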
Mullins et al. proposed an alternative procedure to increase the stability of Si anodes, which consisted of dosing small amounts of oxygen during the growth of the Si OAD films, followed by subsequent annealing at low temperature in air. The formation of these bulk and surface oxides provides a high capacity with virtually no capacity loss during the first 120 cycles, and only a slight capacity fade between 150 and 300 cycles. As a result, the anodes retained up to ∼80% of their original capacity after 300 cycles. These authors have also explored the use of silicon–germanium alloys and pure germanium as anode materials. Germanium is expected to be a suitable material for battery anodes, as its high electronic and ionic conductivity should allow for very high charge/discharge rates. Thus, by systematically changing the composition of the SiGex alloy, it was found that the anode's specific capacity decreased while its electronic conductivity and high-rate performance increased with germanium content. Meanwhile, an outstanding result found when using pure germanium OAD thin films in sodium-ion batteries was a high rate of operation at room temperature with this anode material. For an overall view of this work on ion batteries, readers are redirected to a recent review by this group, where, in addition to summarizing their advances in the fabrication of anodes, they also comment on the use of amorphous OAD TiO2 films as cathodes.

Various OAD nanostructures have recently been used as lithium battery cathodes. For example, needle-like LiFePO4 films deposited by off-axis pulsed laser deposition have been used for this purpose, as well as FeF2 cathodes prepared by dynamic OAD with tailored periodic vertical porosity. Unlike the behavior of dense thin films, the ion- and electron-transport properties of these nanostructured cathodes are independent of their vertical thickness.
As shown in the bottom part of Fig. 5.9, this is because the vertically aligned porous microstructure of these films assures a high accessibility of Li+ ions along the whole electrode, as well as a substantial increase in the area of the substrate–electrolyte–film triple-point interface. Moreover, with this particular morphology, it is possible to achieve high conductivities with cathode thicknesses of up to 850 nm, which is about six times the maximum thickness attainable with dense FeF2 films due to their relatively insulating nature.

In recent years, electrochromic coatings have evolved into a practical solution for indoor energy control, displays and other esthetic applications. Although different oxides and organic compounds are also utilized, the most popular system for fenestration and house-light control is based on tungsten oxide as an active electrochromic layer, which is tunable from a deep blue to a transparent state, and a nickel oxide layer as a counter-electrode. Electrochromic film devices based on tungsten oxide consist of a reducible WO3 layer, a second thin-film electrode and an electrolyte containing an M+ cation that is incorporated into the film during the reduction cycle. Optimal device performance therefore requires the rapid incorporation of M+ ions into the film and their reversible release to the electrolyte during the reduction and oxidation cycles, respectively. Optimizing the electrochromic behavior involves increasing the incorporation capacity and maximizing the diffusion rate of the M+ cations within the film structure. These two requirements are fulfilled if the cathode thin film has a high porosity, making OAD thin films prepared by either thermal evaporation or MS ideal candidates for this purpose. Accordingly, Granqvist et al. reported in 1995 the preparation of WO3 electrochromic thin films by MS-OAD; however, even if the open and highly porous character of these films is very promising for their implementation as fast-switching electrochromic layers, only more recent attempts have incorporated OAD oxide thin films for this purpose. These recent studies report the MS-OAD of WxOzSiy and CoxOzSiy solid-solution thin films and their implementation as electrochromic cathodes. The high porosity of such films makes them very suitable for this application, with the integrated devices presenting fast response times and a high coloration capacity. In addition, these mixed-oxide solid solutions add the possibility of controlling the optical properties in the bleached state by varying the relative amount of silicon with respect to either W or Co.
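The coloration/bleaching chemistry underlying all of these tungsten-oxide-based devices is usually summarized by the standard double-injection reaction written below (with M+ = H+, Li+ or Na+); this is the textbook formulation rather than an equation reproduced from the cited works.

```latex
% Double-injection (coloration/bleaching) reaction of a WO3 electrochromic cathode;
% the forward direction corresponds to the reduction (coloration) cycle:
\mathrm{WO_3}\,(\text{transparent}) \;+\; x\,\mathrm{M^{+}} \;+\; x\,e^{-}
\;\rightleftharpoons\; \mathrm{M}_x\mathrm{WO_3}\,(\text{blue})
```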
One of the most promising options for the development of generators that can be directly implemented in wireless nanosystems is to apply the piezoelectric effect to convert mechanical, vibrational or hydraulic energy into electricity. This topic has experienced ceaseless development in recent years, thanks largely to the works of Wang et al. The piezoelectric effect relies on the generation of an electrical voltage by imposing a mechanical stress/strain on a piezoelectric material, and vice versa. Typical examples of piezoelectric nanogenerators are based on 1D ZnO nanostructures, although other materials such as ZnS, CdS, GaN and InN are expected to show improved performance because of their relatively higher piezoelectric coefficients. Although still at a very early stage, promising results pertaining to the exploitation of OAD piezoelectric materials have recently appeared in the literature. The MS-OAD of ZnO and the orientational control of obliquely aligned epitaxial InN nanorod arrays grown by OAD using a plasma-assisted molecular beam epitaxy system have been described in Refs. . In some of these works, the piezoelectric output voltage was determined by scanning the Pt tip of an atomic force microscope along four different directions with respect to the tilt angle of the ZnO nanowires. These studies revealed an anisotropic generation of electricity as a function of the characteristic geometry of the OAD films. Meanwhile, other authors have reported an increase in the output power generated by growing InN nanorods tilted along the direction of the piezoelectric field, while also applying a mechanical deformation with a force normal to the surface.

Nanosensing is another area of application that has greatly profited from the inherent high surface area and controlled nanoporosity of OAD nanostructures when utilized as transducers for the determination of different chemical analytes. The use of OAD thin films for biosensor applications will be reviewed in Section 5.6, and the use of photonic detection methods in Section 5.5. Most cases discussed here refer to electrical gas sensors, although a couple of examples using acoustic and optofluidic devices for liquid monitoring are also critically described. A short subsection devoted to particular applications in the field of pressure sensors and actuators completes this analysis of sensors.

Typical gas and vapor sensor devices that use OAD thin films as transducer elements rely on changes in resistivity or electrical capacitance upon exposure to the corresponding analyte.
Capacitance humidity sensors consisting of OAD TiO2 nanostructures deposited on interdigitated electrodes were developed some time ago by Brett et al. Although other OAD materials also respond to humidity variations, sensors utilizing TiO2 exhibit the greatest change, showing an exponential increase from ∼1 nF to ∼1 μF when the humidity changes from 2% to 92%. The same group developed room-temperature SiO2 OAD sensors to selectively detect methanol, ethanol, 1-propanol, 2-propanol and 1-butanol by monitoring both the frequency-dependent capacitance and the impedance changes of the system. In this work, it was also determined that for ethanol and 1-butanol the sensor aging is reduced by UV illumination, a treatment that had no effect when detecting the other alcohols. Capacitive humidity sensors based on OAD polyimide have also been reported.

Different conductometric oxide sensors, based on the variation in resistance of a transducer material upon exposure to the analyte, have been prepared by OAD. For example, ZnO nanorods prepared by MS-OAD present a high reproducibility and sensitivity, a fast response and short recovery times in the detection of hydrogen and methane at mild temperatures. Similarly, tungsten oxide nanocolumns have been used as conductometric sensors for NO, NO2 and ethanol. To illustrate the possibilities and layout of a typical OAD conductometric sensor device, Fig. 5.10 summarizes some of the results obtained with one of these WO3 sensors in the detection of NO. This particular system consists of MS-OAD WO3 films grown on interdigitated Pt electrodes. A key feature of the WO3 nanorods is their resemblance to intestinal villi, even after crystallization by annealing at 500 °C. Nitrogen isotherms and BET analysis revealed that the surface area of the nanostructured film was about 30 times greater than that of a flat, compact reference layer. This resistive sensor was tested with different analytes, including NO, as a function of temperature at a relative humidity of 80%. The results obtained confirmed a NO detection limit as low as 88 ppt and an extremely high selectivity to NO under humid conditions approximating human breath, even in the presence of ethanol, acetone, NH3 or CO. These results support the possibility of fabricating high-quality sensor elements for breath analyzers to diagnose asthma, or for the detection of NO in aqueous media.

The magneto-optical detection of minority components is a new and sophisticated detection method based on the coupling between a plasmonic signal and a ferromagnetic layer under the influence of a magnetic field. In a recent work, we proved that the sensitivity of such a device could be enhanced by depositing a thin OAD TiO2 layer onto the magneto-optical layer structure. This transparent layer ensured a significant increase in the surface area available for adsorption without affecting the optical signal coming from the device. One way of avoiding interference problems during the detection of multicomponent analyte mixtures is the incorporation of various sensing elements into the same device, together with the use of a mathematical factor or multicomponent analysis procedure. Electronic noses are a typical example of this type of device.
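A minimal sketch of the multicomponent-analysis idea just mentioned: if each sensing element responds approximately linearly to each gas, the concentration vector can be recovered from the measured response vector by least squares. The calibration matrix below is entirely hypothetical and serves only to illustrate the procedure, not the data treatment used in any cited work.

```python
import numpy as np

# Hedged illustration of multicomponent analysis for an e-nose: the responses r of m
# sensing layers to n gases are modeled as r = S @ c, with S a calibration (sensitivity)
# matrix and c the concentration vector. All numbers are made up for illustration.

S = np.array([          # rows: 4 hypothetical sensing layers; columns: H2, CO, NO2
    [0.80, 0.10, 0.05],
    [0.20, 0.70, 0.10],
    [0.05, 0.20, 0.90],
    [0.40, 0.40, 0.30],
])

c_true = np.array([5.0, 2.0, 0.5])   # ppm, assumed exposure
r = S @ c_true + 0.02 * np.random.default_rng(1).standard_normal(4)  # noisy responses

c_est, *_ = np.linalg.lstsq(S, r, rcond=None)
print("estimated concentrations (ppm):", np.round(c_est, 2))
```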
Recently, Kim et al. developed a multi-sensor e-nose chip incorporating six different nanostructured gas-sensing layers prepared by OAD: TiO2 thin films, TiO2 nanohelices, ITO slanted nanorods, SnO2 thin films, SnO2 nanohelices and WO3 zig-zag nanostructures. These films were monolithically integrated onto a sapphire wafer through a combination of conventional microelectronics processes and OAD. The thin film resistivity was measured through interdigitated electrodes, while the OAD nanostructures were tested in a top–bottom electrode configuration. The prototype e-nose showed specific sensitivities for various gases such as H2, CO and NO2.

Detection in liquid media has also benefitted from the use of OAD thin films incorporated into ultrasonic devices or into complex photonic structures produced by stacking thin film layers with different refractive indices. Sit et al. studied the use of nanostructured OAD SiO2 thin films to enhance the sensitivity of surface acoustic wave (SAW) sensors for liquid monitoring. Here, the SiO2 films were deposited on top of SAW devices operating at 120 MHz and were then implemented in an oscillator circuit. The evolution of the frequency signal was monitored as a function of the relative humidity, as well as for different viscous mixtures of glycerol and deionized water. In an earlier work, a similar approach was used for the in-situ evaluation of the elastic properties of SiO2 nanocolumns deposited on a SAW circuit. Our group has developed a very simple but effective photonic device for the optofluidic determination of the concentration of liquid solutions. This method utilizes a Bragg microcavity consisting of periodic and alternating SiO2 and TiO2 thin films of equivalent thickness, plus a thicker SiO2 layer acting as a defect, all of which are prepared by e-beam OAD. The resonant absorption peak characteristic of this type of photonic structure shifts when the system is infiltrated with a liquid, and the magnitude of this shift can be directly correlated with the concentration of the solution. This system has proven to be very useful in monitoring low and high concentrations of glucose or NaCl in aqueous solutions, or the proportion of components in a glycerol/water mixture.

The study of the mechanical properties of OAD thin films and of three-dimensional nanostructures such as nanohelixes and nanosprings has received continuous attention in reviews and numerous works on the subject. These studies have provided very useful information regarding the elastic and mechanical properties of this type of thin films and nanostructures, knowledge that has been applied to the development of different sensor devices for the monitoring of mechanical forces and pressure.
For example, Gall et al. have reported the fabrication of a nanospring pressure sensor based on zigzag and slanted Cr OAD nanostructures. These nanocolumnar arrays exhibited a reversible change in electrical resistivity upon loading and unloading that amounted to 50% for the zigzag or nanospring structures, but only to 5% for the tilted nanorods. The accompanying change in the resistivity of these sculptured films was attributed to the formation of direct pathways for the electric current when the nanosprings are compressed. Individual metal microsprings have also been used as fluidic sensors and as chemically stimulated actuators by virtue of their reliable superelasticity, the flow rate being calibrated by determining the elongation of Ti- or Cr-decorated polymeric microsprings. Meanwhile, a very sensitive pressure sensor based on the piezoelectric properties of embossed, hollow, asymmetric ZnO hemispheres prepared by OAD has been reported recently.

The analysis and modeling of the optical properties of OAD thin films has been one of the most important areas of their study. Indeed, the optical properties represent one of the most valuable tools for the assessment of the microstructure and composition distribution in this type of film, as they are controlled by the deposition geometry and other experimental conditions. Ongoing advances in this area have been extensively reviewed in excellent reports dealing with OAD thin films, while a specific up-to-date analysis of their optical properties, including aspects such as their design, fabrication, theoretical analysis and practical examples, can be found in Ref. . A comprehensive evaluation of theoretical calculations and simulations of the optical properties of OAD films, multilayers and other more complex devices can be found in the monograph of Lakhtakia and Messier. Taking into account the extensive knowledge available regarding the optical properties of OAD thin films and multilayers, and considering that a clear distinction between optical properties and potential applications is somewhat artificial, the present review is limited to an analysis of aspects closer to specific operating devices and their final applications. Readers more interested in the fundamentals of the optical properties are referred to the publications already mentioned.

As a result of their tilted nanocolumnar structure, OAD thin films are optically anisotropic; as such, transparent dielectric films deposited by OAD are intrinsically birefringent, whereas metallic absorbing films are dichroic. The anisotropic character of OAD thin films is precisely a feature that was first investigated during the earliest stages of research into these materials. The preceding sections have provided ample evidence of the fact that the OAD technique is a straightforward method for the fabrication of transparent dielectric optical films. This procedure has been successfully utilized for the synthesis of single layers, multilayers and graded refractive index films, all of which have been utilized as antireflective coatings, rugate filters, Bragg reflectors, waveguides, etc. Examples of the dielectric materials prepared by OAD include SiO2, TiO2, Ta2O5, MgF2, ZnO, Si, Y2O3, Eu-doped Y2O3, ZrO2, Al2O3, Nb2O5 and ITO. Other transparent conducting oxides have also been prepared by OAD for integration into a large variety of optoelectronic devices. In addition, a considerable number of organic materials have been deposited in the form of OAD optical coatings. Such a wide range of OAD coatings has allowed them to cover the entire spectral range required for their use as interference filters.
Preceding sections have provided ample evidence that the OAD technique is a straightforward method for the fabrication of transparent dielectric optical films. This procedure has been successfully utilized for the synthesis of single layers, multilayers and graded refractive index films, all of which have been employed as antireflective coatings, rugate filters, Bragg reflectors, waveguides, etc. Examples of dielectric materials prepared by OAD include SiO2, TiO2, Ta2O5, MgF2, ZnO, Si, Y2O3, Eu-doped Y2O3, ZrO2, Al2O3, Nb2O5 and ITO. Other transparent conducting oxides have also been prepared by OAD for integration into a large variety of optoelectronic devices. In addition, a considerable number of organic materials have been deposited in the form of OAD optical coatings. Such a wide range of OAD coatings has allowed them to cover the entire spectral range required for their use as interference filters.

A characteristic feature of the OAD technique is the possibility of changing the nanocolumnar direction by modifying the deposition geometry. This can give rise to two- and three-dimensional structures constructed from multisections of oriented nanocolumns, zig-zags, helices or S/C-shapes. According to Lakhtakia and Messier, there are two canonical classes of OAD structures. The first type, sculptured nematic thin films, includes slanted columns, zig-zags, and S- and C-shaped columns. These materials have been extensively used for the fabrication of optical components such as polarizers, retarders or filters. The second class encompasses helicoidal columns and chiral sculptured thin films, which are able to reproduce the behavior of cholesteric liquid crystals by preferentially reflecting circularly polarized light of the same handedness as the film's microstructure. Chiral filters based on this principle and presenting different degrees of complexity have been successfully designed and fabricated by OAD.

As explained in Section 2.2, changing the deposition geometry or using more than one deposition source permits a gradual in-depth change in the composition, density and microstructure of the films. In terms of their optical properties, this means that, either by alternating the type of deposited material and/or by changing the densification, the refractive index profile along the film thickness can be effectively tuned. This possibility has been used to fabricate antireflection coatings and rugate filters, the latter characterized by a sinusoidal index profile. One-dimensional photonic crystals and Bragg microcavities have also been fabricated by successively stacking oxides of different refractive indices, or with porous-graded Si subsequently oxidized by high-temperature annealing. An example of a Bragg microcavity fabricated by stacking different layers of SiO2 and TiO2 prepared by OAD is presented in Fig. 5.11. Such optical structures exhibit a narrow resonant peak that is affected by the infiltration of liquids into their pores, a feature that has been utilized for the development of responsive systems to determine the concentration of solutions. More complex 3D square-spiral photonic crystals with a tetragonal arrangement of elements, exhibiting well-defined band gaps in the visible, NIR or IR spectrum, have also been fabricated by OAD on lithographically pre-patterned substrates.

An emerging topic in the field of electronics and photonics is the development of flexible devices. A flexible optical component combining the intrinsic nanometric order of OAD thin films with an additional micron-level arrangement has recently been developed by our group through the OAD of an oxide thin film on elastomeric PDMS foils. Manually bending this device gives rise to a switchable grating formed by parallel crack micropatterns. An outstanding feature of this type of foldable optical component is that the crack spacing is directly determined by the nanocolumnar structure and material composition of the OAD film, but is independent of both the film thickness and the foil bending curvature. We have attributed this microstructuration effect to the bundling association of the film's nanocolumns. These self-organized patterned foils are transparent in their planar state, yet can work as a customized reversible grating when they are bent to form a concave surface, as shown in Fig. 5.12. The labeling possibilities of this type of optical component are also illustrated in Fig. 5.12.
Within this category we consider OAD thin films and more complex device structures capable of actively responding to excitation from the medium. Luminescent films, optical sensors and plasmonic effects are briefly discussed to illustrate the potential applications in this domain.

A standard approach to the manufacturing of luminescent films is the OAD of intrinsically luminescent materials. Here, the tilted nanocolumnar orientation of conventional OAD films ensures that the luminescent emission of the material is linearly polarized. Similarly, helical structures of luminescent materials produce a circularly polarized emission. In all cases, the polarization of the light seems related to a widely studied filtering effect produced by the particular nanostructure of each thin film or structure. Another possibility in the synthesis of luminescent OAD-based materials relies on the anchoring of luminescent molecules on the internal surface of the thin films, the basic principles of which were discussed in Section 3.6.2. The incorporation into the nanocolumnar film of a luminescent or alternative guest molecule with a specific functionality presents some similarities with the conventional synthesis routes utilized in the sol–gel method and in the wet-chemistry processes used for the fabrication of hybrid luminescent materials. In these hybrid OAD films, however, the functional molecules are anchored to the chemical functionalities of the oxide surfaces either electrostatically or by forming covalent bonds. The luminescence of the resulting hybrid nanocomposite depends on the dye distribution within the porous nanocolumnar structure of the films. Different activation processes, such as thermal treatment or the UV illumination of semiconducting host films, have been used to enhance or modify the luminescence properties of the films. An interesting example of the possibilities offered by this anchoring approach is the energy transfer process, from the visible to the near-IR spectrum, exhibited by rhodamine laser dye pairs adsorbed in nanocolumnar SiO2 OAD thin films. Fig. 5.13 shows that when different rhodamine pairs are adsorbed in the host films, excitation of the visible-absorbing rhodamine produces a very intense luminescence in the infrared. This luminescence cannot be induced by the direct excitation of Rhodamine 800, and has been attributed to an energy transfer induced by the formation of luminescent J-heteroaggregates between the two classes of dye molecules, a phenomenon observed for the first time in these hybrid OAD films. Although still largely unexplored, the possibilities of this type of process for wavelength-selective wireless optical communications are quite promising, as most optical detectors function in the near-IR spectrum.

OAD photonic architectures are ideal for the development of photonic sensors. In these systems, the large pores separating the nanocolumns enable the rapid access and interaction of analyte molecules, either directly with their internal surface or with active molecules anchored on it. Generally, analyte adsorption is the main limiting step of the sensing response in an OAD structure, in stark contrast to the diffusion-limited sensing typical of bulk sensors. Filling of the pores of the OAD films in the final stage of massive infiltration or adsorption can be directly monitored by measuring gas adsorption isotherms. The optical properties of OAD films and multilayer structures also respond to the environment, be it a gas or a liquid that condenses in and/or fills the pores occupying the inner volume of the films. The dependence of the film's refractive index, and of the overall optical response of these systems, on the conditions of the medium has been one of the main motivations for using nanocolumnar films or nanostructures as optical sensors. Some examples of optical sensors based on changes in the optical properties of OAD structures are the use of helical nanocolumns and Bragg stacks for the sensing of infiltrated liquids, or the use of photonic crystals for high-speed water vapor detection and for the colorimetric detection of humidity.

One drawback of direct sensing strategies based on changes in refractive index is the lack of selectivity. That is, selective detection can only be achieved when, by a specific reaction, the analyte modifies the light absorption properties of the nanocolumnar film. Examples of this approach are the optical detection of H2 with Pd/WO3 OAD films, which change from transparent to blue when exposed to this gas, and the color change of cobalt oxide nanocolumnar films when exposed to CO. Another possibility to improve selectivity involves modifying the surface chemistry of the nanocolumns by derivatization with functional molecules. For example, the silane derivatization of porous TiO2 OAD films makes them insensitive to changes in ambient humidity. In hybrid thin films, one way of directly enhancing sensitivity is to incorporate within the OAD films a dye molecule whose light absorption and/or luminescence changes reversibly through a specific reaction with a given analyte. Our group has widely investigated this procedure and developed different composite sensor systems based on this principle. For instance, acid vapors can be detected from the change in optical absorption and fluorescence of OAD TiO2 films with a tetracationic metal-free porphyrin incorporated in their pores. Typically, the selectivity of these hybrid systems is mainly determined by the chemical reactivity of the anchored molecules.
Thus, by combining a free-base porphyrin with ten of its metal derivatives anchored in columnar OAD films, it has proven possible to selectively detect more than ten volatile organic compounds through spectral imaging, as illustrated in Fig. 5.14. This confirms that the selectivity and performance of these hybrid systems are determined by how the sensing molecules are bonded to the porous OAD matrix.

Plasmonics is a rapidly evolving area that explores the interaction between an externally applied electromagnetic field and the free electrons in a metal. These free electrons can be excited in the form of collective oscillations, named plasmons, by the electric component of light. Under certain incidence conditions, namely when the incident light frequency matches the intrinsic frequency of the free electrons, a resonant absorption occurs. This phenomenon is called surface plasmon resonance (SPR) or, in the case of nanometric metallic structures, localized surface plasmon resonance (LSPR).
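As a reminder of why these resonances depend so strongly on particle shape and on the surrounding medium (the textbook quasi-static result for a small sphere, quoted here for orientation rather than as a model of any specific OAD film), the polarizability of a metal sphere of radius R and dielectric function \varepsilon(\omega) embedded in a medium \varepsilon_m is

\alpha(\omega) = 4\pi R^{3}\,\frac{\varepsilon(\omega) - \varepsilon_m}{\varepsilon(\omega) + 2\,\varepsilon_m}

which resonates when \mathrm{Re}[\varepsilon(\omega)] \approx -2\,\varepsilon_m (the Fröhlich condition). For elongated particles the factor 2 is replaced by shape-dependent depolarization factors, one per axis, which is why anisotropic nanostructures display two distinct resonances for orthogonal polarizations, and why the resonance position shifts with the refractive index of the environment (\varepsilon_m = n_m^{2}), two features exploited in the applications discussed below.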
OAD has been utilized by many authors for the controlled fabrication of metallic structures intended for plasmonic applications, a topic that has recently been reviewed by He et al. and by Abdulhalim. The OAD technique presents some competitive advantages with respect to many other fabrication techniques for supported metallic nanostructures, the most significant of these being direct control over the composition and shape of the metallic nanocolumns and the scalability of the method to large areas. Plasmonic structures made by OAD have been successfully employed for the fabrication of dichroic systems and labels, for sensor and biosensor applications through the analysis of LSPR changes, for molecular detection by surface-enhanced Raman scattering (SERS), for metal-enhanced fluorescence and surface-enhanced infrared absorption, and for the development of metamaterials. A brief discussion of the possibilities offered by OAD techniques in relation to selected applications is presented next.

In non-spherical silver or gold nanoparticles, the plasmon absorption resonance is highly dependent on the polarization state of light. In practice, this polarization dependence manifests itself through the appearance of two plasmon resonance absorptions peaking at different wavelengths, depending on the orientation of the electric field vector of the light with respect to the largest axis of the nanostructure. Dichroic silver nanoparticles and aggregates can be formed during the earliest OAD stages on flat and transparent substrates, with the polarization dependence of their SPR being attributed to their in-plane anisotropy. This effect can be greatly enhanced by nanosecond laser treatments under ambient conditions at relatively low powers, but both the anisotropy and the dichroic behavior are completely lost at higher laser powers. Strongly dichroic structures have also been obtained by us through the OAD deposition of silver onto OAD nanocolumnar SiO2 films acting as a substrate. In this case, the in-plane anisotropy characteristic of the SiO2 nanocolumns is used as a template to induce the anisotropic growth of the silver film. The treatment of these composites with an unpolarized nanosecond IR laser induces selective melting of the silver and its subsequent solidification in the form of long nanostripes on the surface, which promote and enhance the dichroic response of the system. This dichroic laser-patterning process has been attributed to a successive metal percolation and dewetting mechanism along the lines defined at the surface by the SiO2 nanocolumnar bundles. Depending on the laser power, zones with a localized increase in dichroism and/or totally deactivated regions can be written on the composite film surface. The use of this writing procedure has been suggested for the optical encryption of information at the micron level.

Nanocolumnar silver films directly prepared by OAD have been tested as substrates for surface-enhanced Raman scattering. Recently, a host–guest strategy based on the formation of anisotropic Au nanodiscs inside the intercolumnar space of bundled OAD SiO2 nanocolumns has been reported by our group. The method relies on the fact that the size, shape and orientation of the gold nanoparticles are defined by the tilt angle of the nanocolumns and the intercolumnar distance, both of which are experimentally determined by the zenithal angle of evaporation. These composite materials have been applied to the development of optical encryption strategies in combination with local laser patterning. The distinct colors of the untreated zones seen in Fig. 5.15 are due to the different anisotropies of the gold nanoparticles, depending on the characteristics of the SiO2 host layer. The effect of the laser treatment is to locally remove the dichroic character of the composite film by inducing melting and resolidification of the anisotropic gold nanoparticles. This treatment can be applied to selected zones with micron-scale control. In addition, this selective writing can be applied to complex dichroic thin-film arrangements in which the gold nanodiscs present different orientations as a result of being prepared on SiO2 film zones with nanocolumn bundles grown after azimuthally turning the substrate by 180°. In this system, retrieving the original information requires interrogating it with an appropriate polarization of light and a given planar orientation of the plate. A deeper discussion of the possibilities of this type of system for optical encryption and for the fabrication of anti-counterfeiting labels can be found in the cited reference.

The characteristics of the LSPR are not only dependent on the size and shape of the metal nanoparticles, but also on their dielectric environment. Typically, most OAD metal nanoparticle films are made of either Ag or Au.
Fu et al. studied the deposition of Au/TiO2 and Au/Ti structures by OAD. In the first case, a red shift of 30 nm in the plasmon resonance was produced when the nanostructure was covered by a TiO2 layer with a nominal thickness of 5 nm, a change that was attributed to a modification of the refractive index of the medium immediately surrounding the Au nanoparticles. In the latter case, the observed red shift was a function of both the coating thickness and the coverage. These results indicate that the OAD technique is very versatile in allowing for the deposition and fine-tuning of LSPR structures, as well as for the development of sensors. Using uniform films with tuned, narrow particle size distributions, a linear relationship between the plasmon resonance wavelength and the refractive index of the surrounding medium was identified for OAD Ag and Au nanocolumns immersed in liquids with different refractive indices. Similarly, a number of biosensors for the highly selective detection of different small molecules have also been developed. Selectivity in this case is mediated by anchoring different receptor molecules on the surface of the Au or Ag, which is made possible thanks to the rich surface chemistry of these metals. Examples of this approach are the detection of anti-rabies immunoglobulin G, neutravidin and streptavidin.
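As a minimal sketch of how such a linear wavelength-versus-index relationship is typically turned into a refractometric sensor, the calibration and read-out step can be written as follows; the peak positions and indices below are illustrative placeholders, not data from the cited studies.

import numpy as np

# Illustrative calibration: LSPR peak wavelength (nm) measured in liquids of
# known refractive index (hypothetical values for the example).
n_calib = np.array([1.333, 1.362, 1.399, 1.443])
lam_calib = np.array([652.0, 657.8, 665.2, 674.0])

# Linear model lambda = m*n + b; the slope m (nm per refractive index unit)
# is the bulk sensitivity of the nanocolumnar film.
m, b = np.polyfit(n_calib, lam_calib, 1)
print(f"bulk sensitivity m = {m:.1f} nm/RIU")

# Invert the calibration to estimate the index of an unknown liquid from its peak.
lam_unknown = 661.0
print(f"estimated refractive index = {(lam_unknown - b) / m:.3f}")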
Surface-enhanced Raman spectroscopy is very sensitive for the detection of minute amounts of molecules in solution upon their adsorption on noble metal substrates, even in the sub-micromolar range. A variety of noble metal nanocolumnar films, most of which have been made of Ag and fabricated by OAD, have demonstrated excellent properties as substrates for SERS chemical and biological sensing. Indeed, the SERS enhancement factor and the reproducibility of the results obtained with OAD Ag nanocolumns were similar, or even superior, to those reported for other Ag nanoparticle systems. Investigations in this field have generally concluded that the efficiency of OAD structures as SERS transducers is tightly linked to their structural and microstructural characteristics, which in turn can be tailored by controlling the experimental conditions of deposition, such as the angle and thickness of the nanocolumns or the deposition temperature. Another possibility offered by OAD to enhance SERS performance is the manufacturing of complex architectures that incorporate intentional "hot spots", i.e., areas where an enhanced electromagnetic field amplifies the Raman scattering signals. OAD Ag films with L-shaped, zig-zag and square helical nanostructures have all been reported to promote the formation of such hot spots and to increase the sensitivity of SERS analyses.

Metal-enhanced fluorescence (MEF) refers to the enhancement in the emission intensity of a fluorophore molecule in the proximity of a metal nanostructure, due to a localized increase in the electromagnetic field associated with the SPR. Dipole–dipole interactions also play a major role in this enhancement mechanism. While MEF enhancement factors of up to 70 relative to dense metal films have been reported for OAD films made of Ag and Al, equivalent films made of Au and Cu have proven to be far less effective. In these studies, the influence of the nanocolumn tilt angle, the film porosity, the nature of the substrate, and the distance between the fluorophores and the metallic structures were all systematically investigated. Various applications of this MEF effect using OAD metallic structures have been reported for biosensing in water and for bioimaging, whereby a specific detection can be enhanced through the immobilization of the fluorescent receptor onto a metal nanostructure.

Mimicking nature to obtain superhydrophobic/superhydrophilic, adhesive/anti-adhesive and, more recently, omniphobic and wetting-anisotropic surfaces and coatings has been a topic of ongoing interest in the field of nanotechnology over the last two decades, a period that has witnessed the convergence of efforts from academia and industry in the search for new surface functionalities. The wetting angle of a liquid drop on a flat surface is determined by Young's law and results from a balance of the forces acting on the line of contact between the drop, the solid surface and the surrounding air or vapor. It is therefore dependent on the interfacial energies between the solid–liquid, solid–vapor and liquid–vapor phases. Put simply, when the solid–vapor interface presents a low surface tension, the water contact angle (WCA) increases. Surfaces with WCAs higher than 90° are usually referred to as hydrophobic, while those with WCAs higher than 150° are considered superhydrophobic. The terms oleophobic and oleophilic designate surfaces with contact angles above and below 90°, respectively, for low-surface-tension liquids such as non-polar alkanes, oils and non-polar solvents. One of the most long-awaited successes in this field has been the development of reliable, simple and low-cost techniques for the fabrication of superomniphobic surfaces, i.e., coatings capable of repelling both water and low-surface-tension liquids. The main factors controlling the contact angle of liquid droplets on a solid are the chemistry and the roughness of the surface, the latter being intimately related to the microstructure of the material. Two classic models, named after Wenzel and Cassie–Baxter, relate the contact angle on actual surfaces, characterized by a specific roughness and microstructure, to the nominal angle on an ideally flat surface of the same material.
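For reference, the classical relations invoked above can be written in their standard textbook forms (quoted here for orientation, without the refinements discussed in the specialized literature):

\cos\theta_Y = \frac{\gamma_{SV} - \gamma_{SL}}{\gamma_{LV}} \quad \text{(Young)}

\cos\theta_W = r\,\cos\theta_Y \quad \text{(Wenzel, liquid fully penetrating the texture)}

\cos\theta_{CB} = f\,(\cos\theta_Y + 1) - 1 \quad \text{(Cassie–Baxter, drop resting partly on trapped air)}

where the \gamma terms are the solid–vapor, solid–liquid and liquid–vapor interfacial energies, r \geq 1 is the ratio of actual to projected surface area, and f is the fraction of the drop base in contact with the solid. Roughness thus amplifies the intrinsic hydrophilic or hydrophobic character of the surface in the Wenzel state, while a small solid fraction f, as found on highly porous nanocolumnar OAD films, drives the apparent contact angle toward superhydrophobic values in the Cassie–Baxter state.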
Oblique angle deposition techniques have been extensively used to control the wettability of materials, as they permit fine control over both the surface chemistry and the roughness. The high versatility of this technique in producing different nanostructures is an additional feature that supports its use for wetting applications. Important achievements have been made in the last few years through the OAD fabrication of surfaces possessing singular adhesive properties, hydrophobicity, superhydrophobicity, superhydrophilicity or superoleophobicity. Initial approaches to the development of highly hydrophobic surfaces by OAD combined the surface nanostructuration capability of this technique with the chemical modification of the surface composition by different methods. For example, the RF sputtering deposition, at oblique and normal incidence, of polytetrafluoroethylene, a hydrophobic material commonly known as Teflon, has been reported to increase the water contact angle of OAD Pt and W nanorods. In this way, hydrophobic and superhydrophobic WCAs as high as 138° and 160°, respectively, were achieved by controlling the deposition angle, the substrate rotation and the reactor pressure. Veinot et al. were responsible for some of the earliest works showing the formation of superhydrophobic surfaces consisting of OAD SiO2 nanocolumns and 3D nanostructures modified by the chemical grafting of siloxane molecules. More recently, a similar approach involving the molecular vapor deposition of silane onto metal OAD nanocolumns has been proposed as a way of fabricating anti-icing surfaces. Moving a step forward, Choi et al. have produced superhydrophobic surfaces with a dual-scale roughness that mimics lotus petal effects. This was achieved through the fluorosilanization of Si nanowires arranged at the micron level via OAD on a pre-patterned substrate decorated with gold nanoparticles. Other pre-patterned metal OAD nanorods have been combined with Teflon deposition to control the roughness, morphology and chemistry of the surface, thereby rendering it superhydrophobic. Habazaki et al. have expanded this hierarchical roughness concept by using sputtered aluminum OAD nanorods as a starting material, followed by their anodization and surface decoration with a fluorinated alkyl phosphate. These surfaces showed an interesting omniphobic behavior characterized by a high repellency of water, oils and hexadecane.

Active surfaces, in which the contact angle can be changed by external stimuli, are an emerging topic of interest in the field of surface wetting. To this end, our group has developed different plasma approaches for the deposition of nanofibers, nanorods and nanowires made of inorganic semiconductors, small molecules, and hybrid and heterostructured nanostructures that can be activated by light illumination. Furthermore, by tailoring the density, morphology and chemical characteristics of these 1D nanostructures, we have been able to fabricate ultrahydrophobic surfaces, i.e., surfaces with an apparent WCA of 180°. In addition, by working with Ag@metal-oxide nanorods and nanowires prepared by PECVD in an oblique configuration, we have taken advantage of the well-known photoactivity of TiO2 and ZnO to reversibly modify the WCA of their surfaces from superhydrophobic to superhydrophilic by irradiating them with UV light and, in some cases, with visible light. This illumination does not alter the nanostructure of the films, but only their surface properties, thus enabling fine control over the final WCA of the system. To gain deeper insight into the surface state of these photoactive ZnO nanorods grown at oblique angles after water dripping, we have also carried out a thorough study of the evolution of the WCA due to the formation of surface nanocarpets. The nanocarpet effect refers to the association of aligned, supported nanorods or nanowires after their immersion in a liquid, and relates to their deformation by capillary forces. This phenomenon has attracted considerable interest since the pioneering works of Nguyen et al., who demonstrated the self-assembly transformation of supported CNTs into cellular foams upon immersion in a liquid and subsequent drying. Although it is still a subject of controversy, it seems that a critical factor controlling nanocarpet formation is the penetration of the liquid into the inter-rod space of the 1D nanostructured surfaces. Although the literature on this subject has mainly focused on CNT arrays, its expected impact in biomedical research, superhydrophobicity and microfluidics has fostered the investigation of other systems such as OAD nanorods of Si, functionalized Si, SiO2, carbon, metals and ZnO. Indeed, the nanocarpet effect has already served to increase the WCA of 1D surfaces through the formation of a double or hierarchical roughness. From a fundamental point of view, it has also been used to provide a fingerprint of liquid droplets deposited on vertically aligned, tilted, zigzag and square-spring conformations of hydrophilic or hydrophobic materials. In earlier work we studied the evolution of the nanocarpet morphology on UV pre-illuminated OAD ZnO surfaces and their transformation from a superhydrophobic state to a superhydrophilic state, passing through a modified Cassie–Baxter state. In addition, as summarized in Fig. 5.16, we have shown the possibility of controlling the nanocarpet microstructure by pre-illuminating the surface for given periods of time, or by using samples consisting of partially hydrophobic tilted nanorods to induce asymmetric wetting after contact with water.

Anisotropic wetting and the development of droplets with asymmetric contact angles have emerged as an appealing area of research, due to the industrial interest in materials capable of inducing a preferentially oriented spreading of a liquid or presenting an asymmetrically adhesive surface. Parylene films deposited by oblique angle polymerization (OAP) are a particularly outstanding example of this singular wetting behavior. Fig. 5.17 summarizes results from the fabrication of anisotropic films with unidirectional adhesive and wetting properties, which clearly demonstrate the possibilities of the OAP technique for the fabrication of PPX nanorods with a pin–release droplet ratchet mechanism derived from their singular microstructure. These nanofilms exhibited a difference of 80 μN in droplet retention force between the pin and release directions, the latter being characterized by a micro-scale smooth surface on which microliter droplets are preferentially transported. These OAP nanostructures and their unique unidirectional properties have recently been used to control the adhesion and deployment of fibroblast cells.

Liquids moving not in the form of droplets but as a continuous flow have also benefited from the application of OAD thin films. Although the number of papers published on this topic is quite limited, a few clearly illustrate the potential of these films to improve the handling of liquids in small channels and devices. For example, OAD has been used to fabricate nanostructures that were subsequently embedded in PDMS microchannels using a sacrificial resist process. These microchannels were filled with different kinds of sculptured SiO2 OAD thin films, and the resulting three-dimensional structure was used as a DNA fractionator capable of a more effective micro-scale separation of these large molecules. Microfluidic systems have also been provided with additional functionalities thanks to OAD films, in order to develop various kinds of responsive systems. Fu et al. recently reported the fabrication of a microfluidic MEF detection system in which tilted Ag nanorod films or SiO2/Ag multilayers were integrated into a capillary electrophoresis microdevice, which was then utilized for the separation and detection of amino acids. This system demonstrated an approximately six-and-a-half-fold enhancement in detection when used in a fluorescence device, thus opening the way for further improvements and functionalities.

The singular surface topography and microstructure of OAD thin films provide unique possibilities for their utilization as biomaterial substrates and for the development of biosensors. In this case, the key investigated feature is the effect of the surface topography on the proliferation of cells and/or on the adsorption of the proteins that mediate cell adherence and proliferation. In a series of works on the OAD of platinum films, deposited either directly onto flat substrates or onto polymer-packed nanospheres to provide a second nanostructuring pattern to the layers, Dolatshahi-Pirouz et al.
showed that the particular surface topography of these latter layers promotes the growth and proliferation of osteoblast cells. For different in-vivo applications it is also critical that certain cells grow preferentially over others, a possibility that has been explored in-vitro by these same authors using OAD Pt films. In particular, they demonstrated that fibrinogen, an important blood protein, preferentially adheres to whisker-like, nano-rough Pt substrates when compared to flat Pt surfaces, and that the proliferation of human fibroblasts is significantly reduced on these nanostructured surfaces. Over the course of these studies, the growth of the film nanocolumns could be described by a power-law relationship between the increments in their length and width. The dependence found between the power-law exponent and the deposition angle was subsequently used to establish indicators that can be employed to predict cell growth on OAD thin films from the characteristics of their surface topography, as determined by this deposition parameter.

Another critical issue affecting the practical implementation of biomaterials in different domains is their capacity to act as a biocide layer, i.e., to prevent the development of bacteria. In this regard, α-Fe2O3 nanocolumnar OAD thin films have been demonstrated to be quite effective both in limiting bacterial growth on their surface and in contributing to the inactivation of Escherichia coli O157:H7 when subjected to visible light irradiation. This light response has been proposed for the development of improved visible-light antimicrobial materials for food products and their processing environments.
A study into the viability of different bacteria on a nanocolumnar thin film of Ti prepared by MS-OAD has shown that, unlike that of Staphylococcus aureus, the growth of E. coli is significantly reduced on the nanostructured film, and that this is accompanied by an irregular morphology and cell wall deformation. In a very recent publication on nanostructured Ti surfaces prepared by MS-OAD, we have claimed that while the specific topography produced by the vertical nanorods of the layers is effective in stimulating the growth of osteoblasts on their surface, it simultaneously hinders the development of different types of bacteria. This behavior makes these substrates ideal for implants and other applications in which osteointegration must be accompanied by an efficient biocide activity.

The high surface area of OAD thin films compared to that of a compact film has been a key argument for their use in biosensor applications. However, given that biosensing needs to be carried out at room temperature in liquid media, while still requiring a high sensitivity, alternative transduction methodologies are normally used. An overview of instances in which these procedures have been used in conjunction with OAD thin films is the focus of this subsection. One example is the use of electrochemical transduction with MS-OAD NiO thin films for the enzymatic detection of urea in biological fluids. On the nanostructured surface of such films the urease enzyme is easily grafted, while the urease–NiO system promotes a high electro-catalytic activity. This provides detection limits as low as 48.0 μA/ and a good linearity over a wide range of concentrations. Monitoring H2O2 is of strategic importance in many applications, not just because it is a byproduct of a wide range of biological processes, but also because it is a mediator in food, pharmaceutical, clinical, industrial and environmental analysis. Consequently, TiN OAD nanocolumns have recently been proposed as an electrochemical sensor for H2O2, accompanied by a thorough study of the relation between sensitivity and catalytic activity on the one hand and the deposition angle on the other. A double detection method using UV–visible absorption spectroscopy coupled with cyclic voltammetry was employed by Schaming et al. with 3D nanostructured OAD ITO electrodes for the characterization of cytochrome c and neuroglobin, two proteins that act in-vivo to prevent apoptosis.

The photonic detection of analytes has also benefitted from OAD thin films, with Zhang et al. reporting that the sensitivity of a photonic crystal consisting of linearly etched strips on silicon can be enhanced by the OAD of an 80 nm layer of TiO2 on its surface. This system demonstrated an up to four-fold enhancement in sensitivity toward polymer films, large proteins and different small molecules. Another transduction concept, based on the plasmonic detection of analytes, was developed by Zhang et al. These authors proposed the use of gold nanoparticles with a controlled size and homogeneous distribution, prepared via OAD on a substrate previously covered with a close-packed layer of polystyrene spheres acting as a template. This template layer promotes the development of well-defined gold nanoparticles with a high plasmon resonance activity, which turned out to be very sensitive in detecting biotin–streptavidin molecules. A detection limit of 10 nM was achieved by following both the position of the plasmon resonance band and the variation of its intensity upon adsorption. A quite different approach to evaluating the concentration of H2O2 has been proposed by Zhao et al., who developed an original protocol for the fabrication of catalytic nanomotors using dynamic OAD. These nanomotors consist of asymmetrically Pt-decorated TiO2 nanoarms grown on silica microbeads, and their sensing mechanism relies on the fact that the asymmetric distribution of Pt in the nanoarms induces their rotation in the presence of H2O2, at a rate that varies by 0.15 Hz for each percent of this compound in the medium. As with conventional gas sensors, multisensing is also a challenge in the field of biosensor devices. In this context, Sutherland et al. have reported a new method that combines protein adsorption on the surface of a nanostructured quartz crystal microbalance with optical and electrochemical detection. The procedure allows for the quantification of both the amount and the activity of the bound proteins.

The previous sections have clearly shown that OAD procedures constitute a mature technology, with a clear understanding of their physical basis and a wide range of potential applications in different domains. Yet despite this, the incorporation of OAD thin films in real industrial applications is still limited, with different factors seemingly hindering the successful transfer of this technology from research laboratories to industrial production. The following features have been identified as the major shortfalls when it comes to the successful up-scaling of the OAD methodology: the fabrication process is too complicated when used at an industrial scale; productivity is low because of the angular dependence of the deposition rate; it is difficult to develop a general methodology usable for the large-scale production of different nanostructures and materials in a single experimental set-up; for the most sophisticated nanostructures, there are limitations associated with the complex movements needed for a large number of substrates and/or large substrate areas at an industrial level; and the cost of OAD thin films is increased when compared with other methods of nanostructuration that do not require vacuum conditions.

Both electron beam evaporation and magnetron sputtering are widely used at an industrial scale for the deposition of thin films in normal geometry, with a large variety of products manufactured by these methods, either on large-area surfaces or for the production of high-tech commodities. Moving large substrates in front of large magnetron targets or using roll-to-roll methods are just some of the approaches that have made these techniques cost-effective and highly competitive at an industrial scale. The limited industrial implementation of OAD procedures is therefore quite striking given that, as indicated by Suzuki, large dome-shaped substrate holders are currently used in batch-type coaters for the e-beam deposition of optical films, and these would require only slight modification to mount substrates obliquely for up-scaled production. As pointed out by Zhao, a general concern when using an oblique geometric configuration is the inherent decrease in the deposition rate; however, this limitation can easily be overcome by compensating with an increase in the evaporation power. The large-scale industrial deployment of e-beam or MS methods in an OAD configuration would certainly be cost-effective if innovative and reliable procedures to handle the substrates were implemented. Some proposals in this regard were made in the 1980s by Motohiro et al., who utilized the ion-beam sputtering technique with an ion gun in an OAD configuration and a roll-to-roll procedure for the preparation of CdTe thin films intended for photovoltaic applications. Other authors have made alternative proposals for using the OAD methodology in large-scale production. For example, Sugita et al. suggested the use of a continuously varying incidence method, which consists of a rolling system placed in front of an electron beam evaporator with a shadow shutter, ensuring that only species arriving at a given oblique angle reach the rolling surface. This method was used for the fabrication of tape-recording ribbons consisting of Co–Cr films. As already reported in Section 2.4, other authors have used a similar procedure for the fabrication of porous thin films for thermal isolation. In this and other similar rotating configurations, the incoming angle of the species relative to the rotating surface is systematically changed, so that even if the growth of the film is dominated by shadowing processes that would produce porosity, it does not maintain a fixed and well-defined microstructure, owing to the continuously varying zenithal deposition angle.

Although the mass production of OAD films has not been extended to the large-scale manufacture of final products, various niche applications have already benefitted from this technology in recent years. For example, metal wire-grid polarizers are already mass produced by the OAD deposition of antireflective FeSi and SiO2 layers on previously deposited aluminum columnar arrays. Magnetic recording/video tapes are another example where OAD techniques have been successfully employed in large-scale production. Very recently, Krause et al. used a simplified roll-to-roll system to analyze the geometrical conditions and to translate the deposition recipes of typical OAD films and architectures from a flat moving substrate to a roll configuration. The representative structures obtained prove that a successful translation from the laboratory to mass production is possible, even for thin films with a complex shape. In summary, although the general principles for the successful implementation of OAD methodologies at an industrial scale are already available, and some successful attempts have been made to fabricate different serial products, large-scale engineering of the process and a reduction in cost are needed to make OAD thin films competitive with alternative technologies. We believe that the time is ripe to achieve this goal and that the near future should see new developments and applications in the market based on the outstanding properties and performance of thin films prepared by deposition at oblique angles.
The oblique angle configuration has emerged as an invaluable tool for the deposition of nanostructured thin films. This review develops an up to date description of its principles, including the atomistic mechanisms governing film growth and nanostructuration possibilities, as well as a comprehensive description of the applications benefiting from its incorporation in actual devices. In contrast with other reviews on the subject, the electron beam assisted evaporation technique is analyzed along with other methods operating at oblique angles, including, among others, magnetron sputtering and pulsed laser or ion beam-assisted deposition techniques. To account for the existing differences between deposition in vacuum or in the presence of a plasma, mechanistic simulations are critically revised, discussing well-established paradigms such as the tangent or cosine rules, and proposing new models that explain the growth of tilted porous nanostructures. In the second part, we present an extensive description of applications wherein oblique-angle-deposited thin films are of relevance. From there, we proceed by considering the requirements of a large number of functional devices in which these films are currently being utilized (e.g., solar cells, Li batteries, electrochromic glasses, biomaterials, sensors, etc.), and subsequently describe how and why these nanostructured materials meet with these needs.
Heat protection behaviors and positive affect about heat during the 2013 heat wave in the United Kingdom
In July 2013, the UK experienced a heat wave in which maximum temperatures exceeded 30 °C for seven consecutive days, from 13 to 19 July, and exceeded 28 °C on nineteen consecutive days, from 6 to 24 July. This heat wave was the most significant since July 2006, the summers of 2007–2012 having mostly been cool and wet compared to the long-term average. Heat waves are projected to become more frequent, longer lasting, and more intense as climate change unfolds. Heat protection behaviors will, therefore, become increasingly important for UK residents.

Daily mortality rates tend to rise as temperatures move above the long-term local average. In the temperate climate of the UK, individuals can experience thermal discomfort when outside temperatures reach 22 °C (71.6 °F). Prolonged exposure to high temperatures during heat waves is associated with excess deaths, primarily in older age groups. The 2003 European heat wave caused around 35,000 deaths, including about 2,000 in England (NHS, Public Health England, and Met Office, 2013). Heat waves have also been associated with increased hospitalizations and emergency-room visits. Summer heat can have rapid health consequences, including heat stroke, which can be fatal or cause neurological sequelae. Although the 2013 heatwave was more notable for its length than for its intensity, initial syndromic surveillance data suggest minor but significant increases in heat-related illness that are in line with previous hot periods. Morbidity and mortality statistics have not yet been published.

Because adverse health outcomes from heat are more likely among older people and those in poor health, heat protection messages often target these groups. However, heat protection messages are also relevant to healthy individuals of younger ages, who can experience heat illness as a result of prolonged exposure to high temperatures or vigorous outdoor physical activity in hot weather. Reaching young people is also important because once risk protection behaviors are learned they are more likely to be continued. In England, the National Health Service, Public Health England and the Met Office publish an annual Heatwave Plan with warnings about the dangers of heat and guidance on which heat protection behaviors to implement during heatwaves. Despite being moderate in intensity, the prolonged heat in July 2013 reached sufficient levels to trigger health warnings from the 13th to the 23rd. Hence, the release of these warnings provided the opportunity to test people's responses, given the conditions of the 2013 heatwave.

As described below, research on risk perception and communication has identified several factors that may motivate risk protection behavior. The present study examines the relationship of hearing heat protection messages with three of those factors: perceiving the recommended protection behaviors as more effective, feeling less positive about the risk, and having more trust in those issuing the recommendations.

Although hot weather poses potential health threats, many UK adults seek the outdoors during hot weather without protecting themselves from heat. When going on holiday, tourists from the UK deliberately spend many hours in the sun, including during the hottest time of day. Interviews with UK migrants to Spain suggest that they are less likely than local residents to implement behaviors that protect them against heat, because they question the effectiveness of doing so. Even vulnerable older adults in the UK perceive heat protection behaviors as ineffective and unnecessary. Taken together, these findings suggest that UK residents who do perceive heat protection behaviors as more effective are more likely to implement them.

Risk researchers increasingly recognize the importance of feelings in shaping risk perceptions and responses to risk communications. Classic studies have suggested that affective responses to experiences are automatic and serve as cues for subsequent perceptions of risk. According to research on the affect heuristic, potentially risky experiences that evoke negative feelings will fuel concerns about risk protection, whereas potentially risky experiences that evoke positive feelings will soothe such concerns. Indeed, some risks are unique in the sense that they tend to evoke positive affect among specific audiences, including wood-burning fireplaces, risky driving, and sunbathing. In line with research on the affect heuristic, people who report more positive affect for these experiences tend to judge the need for risk protection behaviors to be lower. In the UK, thoughts of hot summers often evoke positive affect. Many UK residents respond positively to the prospect of warmer summers, in contrast with Americans' negative responses. Older UK residents, who are especially vulnerable to heat, still describe heat as enjoyable. Accordingly, it is possible that messages about the risks of hot weather inadvertently evoke positive feelings about heat among UK recipients, thus reducing the perceived need for risk protection. Indeed, messages that evoke positive moods may decrease perceptions of risk. Taken together, these findings suggest that UK recipients who report less positive affect about heat after hearing heat communications will be more likely to protect themselves against heat.

According to the risk perception and communication literature, trust in the communicating organizations is essential for effective risk communication, because people are more likely to listen to the organizations they trust. Especially when people know relatively little about a risk, their decisions about whether to follow a recommendation may depend on how much they trust the communicating institutions. During UK heat waves, recommendations to protect against heat are released by the National Health Service, Public Health England, and the Met Office. Overall, these findings suggest that people who report greater trust in those agencies are more likely to implement heat protection behaviors.

The July 2013 UK heat wave provided a unique opportunity to examine public responses to heat protection messages, including the role of perceived effectiveness, positive affect about heat, and trust. In a UK-wide survey conducted in October 2013, we assessed four specific research questions: (1) Who heard heat protection recommendations? (2) Was hearing heat protection recommendations associated with perceived effectiveness of behaviors, positive affect about heat, and trust in communicating organizations? (3) Was hearing heat protection recommendations related to heat protection behaviors during the 2013 heatwave, and, if so, what was the role of perceived effectiveness, positive affect about heat, and trust? (4) Were hearing recommendations, perceiving effectiveness, having positive affect about heat, and reporting trust related to intentions to implement heat protection behaviors in the future?

A total of 762 UK participants took part in an online survey conducted by the survey research company Research Now. Participants were recruited through email invitations that especially targeted older adults.
We excluded 61 participants because they had missing responses for key variables and hence could not be included in all analyses. The remaining participants had complete data. Sample characteristics are presented in Table 1, with a comparison to the UK population appearing in Table S1 of the Electronic Supplemental Materials. Our sample had more males, was less ethnically diverse, and had completed higher levels of education compared to the overall population. Additionally, our sample was markedly older than the general population, t = 12.45, p < .001, reflecting our strategy of oversampling older adults. Our analyses tested our research questions while taking these demographic variables into account.

Participants received an email invitation to an online survey about 'weather' and were paid £1 for completion. They were part of the no-intervention control group in a larger study that tested strategies for influencing feelings about hot weather and intentions to protect against heat. Therefore, our participants only received the following instructions: "We are interested in your thoughts about the weather. We will also ask questions about your health and other background information." No additional information regarding the survey was provided. Questions relevant to our analyses are described below.

Participants reported whether, during the summer of 2013, they had heard specific public recommendations about how to protect themselves from heat. Possible answers were 'yes' and 'no.' Those answering 'yes' were asked where they had heard these recommendations, with the options being Heatwave Plan, Public Health England, NHS, Met Office, Internet, doctor's practice or hospital, flyer, TV, radio, word of mouth, 'I can't remember,' and 'Other'.

Participants rated ten heat protection behaviors for their effectiveness on a 5-point scale ranging from 1 to 5, in response to the question "How effective do you think the following strategies are to protect yourself from heat in the summer?" Specifically, they rated the effectiveness of keeping out of the sun between 11:00 am and 3:00 pm, staying in the shade, applying sun screen, avoiding extreme physical exertion, having plenty of cold drinks, avoiding excess alcohol, keeping windows that were exposed to the sun closed during the day, opening windows at night when the temperature has dropped, closing curtains that received morning or afternoon sun, and using electric fans. All except 'applying sun screen' were taken directly from the Heatwave Plan published by the National Health Service, Public Health England, and the Met Office. Reliability across the ten items was sufficient to warrant computing each participant's mean rating, reflecting overall perception of heat-protection effectiveness.

Participants rated their positive affect about hot weather on six items, using a scale anchored at 1 to 5. They considered 'I love hot weather,' 'I want to get tanned,' 'I spend time in the sun when I can,' 'I am concerned about skin cancer,' 'A positive impact of climate change is that summers will get hotter,' and 'I go on holiday to seek out warm or hot weather.' Reliability across the six items was sufficient to warrant computing each participant's mean rating, reflecting overall positive affect about heat.

Participants rated how much they trusted the three agencies that collaborated on the Heatwave Plan (the National Health Service, Public Health England, and the Met Office) on a scale from 1 to 5. Reliability across the three items was sufficient to warrant computing participants' average trust ratings.
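As an illustration of this scale-scoring step (a generic sketch written in Python with hypothetical file and column names; the original analyses were run in SPSS), internal reliability can be checked and the per-participant mean score computed as follows.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

df = pd.read_csv("survey.csv")  # hypothetical file holding the item responses
trust_items = df[["trust_nhs", "trust_phe", "trust_met"]]  # hypothetical column names

print(f"Cronbach's alpha (trust) = {cronbach_alpha(trust_items):.2f}")
df["trust_mean"] = trust_items.mean(axis=1)  # per-participant scale score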
Participants rated how often they engaged in the ten heat protection behaviors referred to in Section 2.2.2 "during the heat wave of July 2013." They used a scale ranging from 1 to 5. Reliability across the ten items was sufficient to warrant computing each participant's mean rating of 2013 heat protection behavior.

Participants rated how often they would engage in each of the ten heat protection behaviors referred to in Section 2.2.2 "next summer during very hot days." Ratings could range from 1 to 5. Reliability was sufficient to warrant computing each participant's mean rating of intended future heat protection behavior.

Participants reported their age, gender, ethnicity, highest level of education completed, yearly household income before tax, and whether they looked after young children or elderly dependents. To avoid small response categories, we dichotomized ethnicity, education, and income for our analyses. Table S1 of the Electronic Supplemental Materials shows a more detailed demographic breakdown.

Participants self-rated their health as 'excellent,' 'good,' 'fair,' or 'poor,' as in previous work. They also received the question "In your life, have you experienced the following outcomes as a result of heat?" They then answered 'yes' or 'no' for thirteen health effects, including dehydration, heat stroke, headaches, dizziness, nausea or vomiting, confusion, aggression, convulsions, loss of consciousness, tiredness, sunburn, skin cancer, and missed work. Reliability across the thirteen items was sufficient to warrant calculating each participant's percentage of reported adverse health effects. Because the distribution was significantly different from normal, we performed a dichotomizing median split: participants with above-median adverse experiences were classified as relatively more 'prone to adverse experiences.' We also assessed whether participants experienced these adverse outcomes during the heat wave of July 2013, and found that those who reported above-median adverse heat outcomes in the past were more likely to report above-median adverse heat outcomes in 2013.

Using chi-square tests and logistic regression, we assessed the relationships of having heard heat protection recommendations with perceived effectiveness, positive affect about heat, trust, and demographic variables. Relationships with implementing protection behaviors during the heat wave of July 2013 were assessed using t-tests and linear regression models. We used linear regression because averaged responses across multiple ratings should be treated as interval rather than ordinal data. We used the Preacher and Hayes bootstrapping procedure to test whether the relationship between having heard heat protection recommendations and having implemented heat protection behaviors was mediated by perceived effectiveness of heat protection behaviors, positive affect about heat, and trust. Finally, using t-tests and linear regression models, we assessed predictors of intended future heat protection behaviors. All analyses were performed in SPSS 21.
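To make the mediation test concrete, the sketch below estimates a percentile-bootstrap confidence interval for an indirect effect in the spirit of the Preacher and Hayes procedure; it is simplified to a single mediator with hypothetical variable names and written in Python for illustration, whereas the reported analyses were run in SPSS with multiple mediators and covariates.

import numpy as np
import pandas as pd

def indirect_effect(d: pd.DataFrame) -> float:
    """Indirect effect a*b for heard -> perceived effectiveness -> behavior."""
    X = d["heard"].to_numpy(float)           # assumed 0/1: heard recommendations
    M = d["effectiveness"].to_numpy(float)   # assumed mediator: perceived effectiveness
    Y = d["behavior"].to_numpy(float)        # assumed outcome: 2013 protection behavior
    ones = np.ones_like(X)
    a = np.linalg.lstsq(np.column_stack([ones, X]), M, rcond=None)[0][1]     # M ~ X
    b = np.linalg.lstsq(np.column_stack([ones, X, M]), Y, rcond=None)[0][2]  # Y ~ X + M
    return a * b

df = pd.read_csv("survey.csv")  # hypothetical file with the scored variables

rng = np.random.default_rng(42)
boot = np.array([
    indirect_effect(df.iloc[rng.integers(0, len(df), len(df))])
    for _ in range(5000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
# The indirect effect is treated as significant when the CI excludes zero.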
common among participants who experienced more adverse heat effects, both in their lifetime and in 2013, and among those who took care of isolated, elderly, or ill individuals."A logistic regression including all demographic variables found that reporting adverse heat effects over one's lifetime was the only significant predictor of hearing heat protection recommendations.Participants who had heard heat protection recommendations rated heat protection recommendations as significantly more effective = −2.19 p < .05), reported significantly more positive affect about heat = −2.70 p < .01), and had greater trust in organizations making these recommendations = −6.61, p < .001).Each of these three relationships held in linear regressions that included the demographic variables.In addition, there were significant associations with demographic variables in each model: First, perceived effectiveness of heat protection behaviors was significantly higher among participants who were older, female, white, and more prone to adverse heat effects.Second, positive affect about heat was significantly lower among participants who were older, reported lower income levels, rated their health as poor, and reported being more prone to adverse heat effects.Third, trust in organizations was lower among older participants.Participants who reported hearing heat protection recommendations indicated having implemented protection behaviors more often during the July 2013 UK heat wave, as reflected in higher average frequency ratings across the ten behaviors = 3.37, p = .001).As seen in Fig. 1, significant relationships also emerged for five of the ten individual behaviors: applying sun-screen, using an electric fan, closing curtains during the day, keeping windows exposed to the sun closed during the day, and having plenty of cold drinks.As seen in Table 2, having heard heat protection recommendations was associated with more frequent implementation of heat protection behaviors, even when including the demographic variables.Significant independent contributions emerged for being older, female, and prone to adverse heat effects.Adding perceived effectiveness of the heat protection behaviors and positive affect about heat increased the predictive ability of the model.Auxiliary analyses examined whether the relationship between hearing heat protection recommendations and implementing behaviors may vary between demographic groups.We found one significant interaction such that younger participants who heard recommendations implemented heat protection behaviors more frequently.A multi-mediation analysis that included demographic variables listed in Table 1, found that the relationship between having heard heat protection recommendations and having implemented heat protection behaviors was mediated by perceived effectiveness of the heat protection behaviors, and suppressed by positive affect about heat, with no independent mediation role for trust."Fig. 
2 shows the significant steps of the multi-mediation analysis, including that having heard heat protection recommendations was associated with stronger perceptions of the behaviors' effectiveness as well as more positive affect about heat; perceptions of the behaviors' effectiveness were positively associated with implementing the heat protection behaviors, whereas positive affect about heat was negatively associated with implementing them; the positive association between having heard heat protection recommendations and implementing heat protection behaviors was reduced after taking into account perceived effectiveness of behaviors and positive affect about heat. Of the demographic variables in this model, only being female and being prone to adverse health effects had significant additional effects, although excluding these variables did not affect overall conclusions. In addition, we found that this multi-mediation model was significant for each protection behavior. Participants who had heard heat protection recommendations reported stronger intentions for implementing heat protection behaviors in the future than those who had not (t = 2.41, p = .02). Significant differences emerged for two specific behaviors: applying sunscreen (t = 2.37, p < .02) and using electric fans (t = 2.75, p < .01). Table 3 shows that having heard heat protection recommendations was no longer related to future intentions after accounting for the demographic variables, with significant relationships for age, being female, being in poor health, and having experienced more adverse heat effects. Model 2 improved predictions of future intentions by including their positive relationship with perceived effectiveness and their negative relationship with positive affect about heat, even after controlling for past behavior. Public health concerns are expected to become more serious as heat waves increase in frequency, intensity and duration over future decades. Policy makers therefore recognize the importance of promoting heat protection behaviors. In a UK sample, we examined the role of having heard heat protection messages, perceived effectiveness of recommended behaviors, trust in communicating organizations, and positive affect about heat, in reported behaviors during the July 2013 UK heat wave and intentions for future behavior. Below, we discuss findings associated with our four main research questions. Our first research question focused on who heard heat protection recommendations during the 2013 heatwave in the UK. We found that more than half of our participants had heard recommendations about how to protect themselves during the July 2013 heat wave. Having heard recommendations was more likely among those who indicated being in poor health, having experienced more adverse heat effects, and taking care of the elderly. However, age was not a predictor of having heard heat protection messages, even though older people are intended targets because they are at greater risk for adverse heat effects. Our subsequent two research questions examined psychological mechanisms that may affect people's behavior in response to heat protection messages. With regard to our second research question, we found that having heard heat protection recommendations was associated with perceiving the recommended behaviors as more effective, feeling more positive about heat, and having greater trust in the organizations behind the Heatwave Plan. More importantly, findings associated with our third research question suggest that hearing recommendations
increased the likelihood of implementing heat protection behaviors.A subsequent multi-mediation analysis pertaining to our third research question found that the relationship between hearing heat protection messages and implementing the recommended behaviors was explained by perceiving the behaviors as more effective.However, positive affect about heat suppressed the relationship.The mediation results suggest that heat protection messages may have successfully promoted heat protection behavior during the 2013 UK heatwave, by making the recommended behaviors seem more effective.The mediation results also suggest that the impact of heat protection messages on reported behaviors was undermined by evoking positive affect about heat, consistent with the affect heuristic.Thus, our findings suggest that warnings about impending heat may inadvertently make UK residents feel good, because they tend to have positive memories of hot summers.This is in line with the affect heuristic, which posits that potentially risky experiences automatically trigger positive emotions, which are then used as cues to become relatively unconcerned about risk protection.While it may be rare for potential risks to evoke positive feelings, those that do tend to be perceived as requiring less risk protection.The hot weather of 2013 may have especially triggered positive affect among UK residents because of the cool and wet summers in preceding years.If positive feelings about heat are used as cues to reduce the perceived need for heat protection behaviors, then messages may be more effective if they are designed to evoke less positive or more negative feelings about heat.One possibility for achieving such messages might be to remind people of any negative feelings they experienced when they felt uncomfortably hot in the past.Finally, our fourth research question examined predictors of intentions for heat protection during future hot summers.Patterns were similar as seen for reported behaviors during the 2013 heatwave, such that reported intentions about future heat protection behaviors were similarly related to perceptions of their effectiveness and feelings about heat.Individual differences in trust seemed to play a minor role in promoting heat protection behaviors, perhaps because trust in the National Health Service, Public Health England, and the Met Office is generally sufficient.One limitation to these results is that all analyses are correlational, limiting causal inferences.Although our mediation analysis supported a model motivated by theories of risk perception and communication, alternative relationships are feasible.For example, people who feel positive about heat may have paid greater attention to heat warnings, or people who protect themselves more may have developed more positive feelings about heat due to their safe enjoyment of hot weather.Experiments with random assignment to hearing heat protection messages could inform these questions.A second limitation is relying on retrospective self-reports, in which people may misremember or misreport their summer feelings and behaviors.Third, our online sample may have underrepresented people vulnerable to heat due to poor health, even though we intentionally oversampled older adults.Finally, while the July 2013 heatwave was relatively long, maximum temperatures were not as high as in the 2006 heat wave.It is possible that heat protection warnings would evoke stronger negative affect and increased willingness to implement of heat protection behaviors during more 
severe heatwaves. Within these constraints, our findings suggest that once heat protection messages reach their intended audiences, they convey the effectiveness of the recommended behaviors. However, these communications might have greater impact if they also induced unpleasant feelings about heat before the heat actually has unpleasant effects.
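To make the mediation logic described above concrete, the sketch below illustrates a percentile-bootstrap test of an indirect effect in the spirit of the Preacher and Hayes procedure. The study's analyses were run in SPSS 21 on a multi-mediator model; this Python version is a deliberately simplified, single-mediator analogue, and the variable names (heard, effectiveness, behavior) and simulated data are hypothetical rather than the study's actual dataset.

```python
# Minimal sketch of a percentile-bootstrap test of an indirect (mediated) effect.
# Single mediator only; the study itself used a multi-mediator model in SPSS.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def indirect_effect(heard, mediator, outcome):
    """a*b estimate: heard -> mediator (path a), mediator -> outcome given heard (path b)."""
    a = sm.OLS(mediator, sm.add_constant(heard)).fit().params[1]
    X = sm.add_constant(np.column_stack([heard, mediator]))
    b = sm.OLS(outcome, X).fit().params[2]
    return a * b

def bootstrap_ci(heard, mediator, outcome, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for the indirect effect; 'significant' if it excludes 0."""
    n = len(heard)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample respondents with replacement
        estimates[i] = indirect_effect(heard[idx], mediator[idx], outcome[idx])
    return tuple(np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)]))

# Hypothetical simulated data, for illustration only
n = 300
heard = rng.integers(0, 2, n).astype(float)           # heard recommendations (0/1)
effectiveness = 0.4 * heard + rng.normal(size=n)      # mediator: perceived effectiveness
behavior = 0.5 * effectiveness + 0.1 * heard + rng.normal(size=n)

print("indirect effect:", indirect_effect(heard, effectiveness, behavior))
print("95% bootstrap CI:", bootstrap_ci(heard, effectiveness, behavior))
```

A suppressor such as positive affect about heat would enter the same way as a second mediator, with its indirect effect carrying the opposite sign to the direct path.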
Heat waves pose serious health risks, and are expected to become more frequent, longer lasting, and more intense in the future under a changing climate. Yet, people in the UK seem to feel positive when thinking about hot weather. According to research on the affect heuristic, any positive or negative emotions evoked by potentially risky experiences may be used as cues to inform concerns about risk protection. If so, then their positive feelings toward hot weather might lead UK residents to lower intentions to adopt heat protection behaviors. Here, we examine the relationships between heat protection behaviors during the July 2013 UK heat wave and self-reports of having heard heat protection recommendations, feeling positive affect about heat, seeing heat protection measures as effective, and trusting the organizations making those recommendations. Responses to a national survey revealed that 55.1% of participants had heard heat protection recommendations during the 2013 UK heat wave. Those who reported having heard recommendations also indicated having implemented more heat protection behaviors, perceiving heat protection behaviors as more effective, feeling more positive about heat, and intending to implement more protection behaviors in future hot summers. Mediation analyses suggested that heat protection recommendations may motivate heat protection behaviors by increasing their perceived effectiveness, but undermine their implementation by evoking positive affect about hot weather. We discuss our findings in the context of the affect heuristic and its implications for heat protection communications.
150
Determination of pore structures and dynamics of fluids in hydrated cements and natural shales by various 1H and 129Xe NMR methods
In cement-based materials, pore networks influence the strength, durability and permeability. The pore sizes range from nanometers to millimeters. Natural shales are porous materials that act as hydrocarbon reservoirs and are composed of several clay minerals. The heterogeneous pore structures of shales have a strong influence on the accumulation, migration and uses of shale gas. Therefore, detailed knowledge of pore structures as well as dynamics of fluids in cements and shales is of paramount importance, particularly for the safety of oil well completions. Several techniques, such as mercury intrusion porosimetry, nitrogen adsorption/desorption, scanning electron microscopy, transmission electron microscopy, and small angle X-ray scattering, have been exploited to study porous materials, and they have their own strengths and limitations. Nuclear magnetic resonance (NMR) spectroscopy has proven to be an excellent tool for gaining molecular-level chemical, dynamic and spatial information. NMR allows investigation of fluids deep inside optically opaque materials, which is a significant strength as compared to many other methods listed above. NMR relaxation, diffusion, cryoporometry and magnetic resonance imaging measurements of fluids contained in the pores provide versatile information about the porosity and pore size distribution as well as fluid transport and chemical exchange phenomena. There is a long history of investigations of the porous structure of cements and shales with 1H NMR relaxometry. Typically, fluid molecules are characterized by a distribution of relaxation times due to the heterogeneous environments existing inside porous materials. The relaxation time distribution can be extracted from the experimental data by the inverse Laplace transform and, in the simplest case, it reflects the pore size distribution. T2-T2 relaxation exchange experiments allow one to quantify the exchange of fluid molecules between pores of different sizes by exploiting T2 relaxation contrast. This is not possible using the traditional exchange spectroscopy (EXSY) method, because typically the fluid molecules in different pores have the same chemical shift. The off-diagonal cross peaks in the T2-T2 maps reveal unambiguously the exchange between the pores due to diffusion. The method has been used to study the exchange of water in cements, wood, soil, rocks, articular cartilage, and silica, borosilicate and soda lime glass particles, as well as the proton exchange in a water/urea system. NMR cryoporometry is a technique for the determination of the pore size distribution via the observation of the solid-liquid phase transition temperature of a medium confined in the pores. According to the Gibbs-Thomson equation, the melting point depression ΔTmp in a cylindrical pore is inversely proportional to the pore radius Rp: ΔTmp = Tmp,bulk − Tmp,conf = 2 σsl Tmp,bulk / (ΔHf ρs Rp) = kp / Rp. Here, Tmp,bulk and Tmp,conf are the melting temperatures of the bulk and confined liquid, σsl is the surface energy of the solid-liquid interface, ΔHf is the specific bulk enthalpy of fusion, and ρs is the density of the solid. The constant kp is characteristic of each medium. Because the T2 relaxation time of frozen liquid is usually much shorter than that of unfrozen liquid, by using a proper echo time in a spin-echo pulse sequence one can measure solely the signal of the unfrozen substance, with negligible contribution from the frozen component. The melting point distribution of the substance is determined by measuring the amplitude of the signal of the unfrozen liquid
component as a function of temperature.This is converted into pore size distribution by using the Gibbs-Thomson equation.NMR cryoporometry has been used for the characterization of various materials , including cements and shales .Xenon is an inert gas, which has a129Xe isotope with a spin-½ nucleus and high natural abundance.Because of its easily polarizable electron cloud, the chemical shift of 129Xe is extremely sensitive to its local environment .Therefore, it is an ideal probe for the investigation of porous host media .The chemical shift and shielding anisotropy of xenon provide information about the pore size and geometry, as well as surface interactions.Xenon has been exploited in the investigation of many media, including zeolites and clathrates , as well as silica , alumina and carbonaceous materials, membranes , and cages .Recently we exploited, for the first time, 129Xe NMR for the characterization of cement and shale materials .It is interesting to compare those materials because, on the one hand, there are similarities in their chemical and mineral composition as well as texture but, on the other hand, shales are more heterogeneous.From the BASF hydrated cement samples, we observed one 129Xe resonance implying the presence of nanopores and another from larger pores or free gas.In contrast, the shale samples received from China showed an extremely broad 129Xe signal, covering a range of about 600 ppm, due to paramagnetic substances present in the sample.In this work, we extend the previous study by comparing hydrated cements from two different manufactures, BASF and Portland, as well as shales received from China and USA.Xenon is a very interesting fluid for investigating the shale samples, because its size is similar to that of methane, and therefore it can be expected to probe similar environments and have similar dynamics as natural hydrocarbons in shales .In addition to 129Xe NMR, we exploit 1H NMR cryoporometry and T2-T2 exchange measurements to estimate the pore size distribution and dynamics of acetonitrile in the shale samples.The NMR measurements are complemented with field-emission scanning electron microscope and X-ray diffraction analysis.Commercial cement samples were purchased from BASF and Portland companies.Two samples were hydrated for four months with an initial water/cement ratio of 0.3 and 0.5.During the hydration period, the samples were stored in a closed chamber at a constant temperature.The BASF and Portland cement samples with initial water-to-cement ratios of 0.3 and 0.5, are referred to as B0.3, B0.5 and P0.3, P0.5, respectively.We investigated also two siliceous, oil-bearing shale samples from the northern Rocky Mountains and one carbonate-rich shale sample from the Eagle Ford Formation in southern Texas, USA.The samples were labeled as Sil3-14, Sil4-34 and EF1-223, and they were drilled from the following depths: 3594, 3612 and 3461 m, respectively.Detailed characterization of the porosity, pore size distribution and chemical composition of the same samples can be found from Ref. 
.We compared the results with our previous experiments carried out for the black shale core samples that were collected from the Cambrian-Silurian strata at the Low Yangzi Plateau in China .This source is well known for the sedimentary facies with abundant organic materials.One of the shale samples was from Hubei Province of China, while two other samples were from Chongqing.The three shale samples were labeled as HU234, CH2175 and CH3634, where HU and CH refers to Hubei and Chongqing, respectively, and the number corresponds to the depth in meter, from which the sample was drilled.All the cement and shale samples were ground into a fine powder.The shale samples were also sieved by passing through a 63 μm sieve.The sieving was not done for the cement samples.The ground cement and shale samples were added to 10 mm medium-wall NMR tubes.The samples were subsequently dried overnight at 70 °C in a vacuum line.Thereafter, 129Xe isotope-enriched xenon gas was condensed in the sample using liquid N2, and the samples were flame-sealed.The xenon pressure inside the glass tube was about 4 atm.Variable-temperature 129Xe NMR spectra were measured from 225 to 290 K in steps of 10 K using a Bruker Avance III 300 spectrometer with a magnetic field strength of 7.1 T and a 10-mm BBFO probe.The temperature stabilization time was 30 min.The spectra were measured with a spin-echo pulse sequence to avoid background distortions, which were present in the spectra measured using only a 90° excitation pulse.The lengths of 90° and 180° pulses were 26.5 and 53 μs.The number of accumulated scans was 512 and 4096 for the cement and shale samples, respectively.The τ delays between the pulses in the spin echo experiment were 1 and 15 μs for the cement and shale samples, respectively.For the cement samples, the relaxation delay was 8 s and the total experiment time about 1 h. 
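As background for the cryoporometry analysis of the shale samples described below, the Gibbs-Thomson relation introduced above converts a measured melting-point depression directly into a pore size. A minimal numerical sketch, assuming the acetonitrile calibration constant kp = 545 KÅ and the 0.6 nm non-freezing-layer correction used in this work; the depression value itself is invented for illustration:

```python
# Converting an NMR-cryoporometry melting-point depression into a pore diameter
# via the Gibbs-Thomson relation, R_p = k_p / dT_mp.
# k_p = 545 K*Angstrom (acetonitrile) and the 0.6 nm non-freezing layer are the
# values quoted in the text; the 10 K depression below is a made-up example.
K_P = 545.0          # Gibbs-Thomson constant for acetonitrile, K*Angstrom
NON_FREEZING = 6.0   # non-freezing surface layer added to the diameter, Angstrom

def pore_diameter_nm(delta_T_mp_K):
    """Pore diameter (nm) from melting-point depression (K), cylindrical pore."""
    radius = K_P / delta_T_mp_K               # R_p in Angstrom
    return (2.0 * radius + NON_FREEZING) / 10.0

print(pore_diameter_nm(10.0))   # acetonitrile melting 10 K below bulk -> ~11.5 nm
```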
For the shale samples, the relaxation delay was 3 s and experiment time approximately 3.5 h.The chemical shift scale of the 129Xe spectra was referenced so that the chemical shift of free 129Xe at zero pressure is 0 ppm.Two-dimensional 129Xe EXSY spectra of the cement samples were measured using Bruker Avance III 400 spectrometer at a magnetic field of 9.4 T and a 10-mm BBFO probe.The length of the 90° pulse was 36 μs and the number of accumulated scans was 128.The relaxation delay was 5 s and the total experiment time about 6 h.The mixing time τm was varied from 1 to 90 ms.Some ground and dried shale samples were immersed in an excess of acetonitrile in a 10 mm medium-wall sample tube to carry out NMR cryoporometry analysis.1H NMR cryoporometry experiments were performed on a Bruker Avance III 300 spectrometer using the 10-mm BBFO probe.Variable temperature NMR spectra were measured using a CPMG pulse sequence with an echo time of 0.3 ms and 2048 echoes measured in a single scan.The measurement temperatures varied from 180 to 240 K.The temperature step was 5 K from 180 to 210 K, 2 K from 212 to 224 K, 1 K from 225 to 236 K and then 2 K from 238 to 240 K.The heating rate in the experiments was 1 K per 7.5 min.The number of accumulated scans was 64 and the experiment time approximately 4 min.The integrals of signal for each temperature were multiplied by a factor of T/T0 to exclude the temperature dependence of magnetization given by the Curie law.This correction makes the integrals of signals proportional to the true amount of the unfrozen acetonitrile liquid.The pore size distribution was extracted by a least-squares fit of a model function to the integrals as described by Aksnes et al. .Hansen et al. demonstrated the presence of a non-freezing layer with thickness of about 0.3–0.8 nm.Therefore, 0.6 nm was added to the diameters for the calculation of the true pore size.The value of constant kp = 545 KÅ in Equation for acetonitrile, was experimentally determined by Aksnes et al. by using silica materials.We have tested with known silica materials that this value of the constant gives reliable results .Because silica is one of the main components of the cement and shale materials studied here, it is justified to use this value in the analysis.1H T2 relaxation time distributions of acetonitrile in the shale samples were determined by performing Laplace inversion of the CPMG data.The Laplace inversion algorithm was received from the research group of late prof. Callaghan.1H T2-T2 relaxation exchange experiments for acetonitrile in the shale samples were carried out at room temperature using the Bruker Avance III 300 spectrometer and 10-mm BBFO probe.The relaxation delay was 3 s and the echo time 2 ms. The number of echoes in the direct and indirect dimensions were 32, and altogether 8 scans were acquired.The experiment time was 8 h for τm = 10 ms mixing-time experiment.The mixing time was varied from 10 ms to 1.5 s.The T2-T2 relaxation exchange maps of acetonitrile in the shale samples were determined by the Matlab-based Laplace inversion algorithm received from the research group of late prof. 
Callaghan.The chemical composition of the cement samples, measured by XRD, is listed in Table S1.The compositions of the BASF and Portland cements were very similar.As explained in the SI, the main hydrated phases were calcium silicate hydrate and calcium hydroxide .No paramagnetic minerals were observed in the cement samples.The results of XRD analysis of the shale samples from USA can be found from Tables S2 and S3 .The samples consisted mainly of clay minerals, calcite as well as fine-grained quartz.The samples included also 1–5% of pyrite and traces of ankerite/siderite that contain the Fe ion with the d6 configuration and, thereby, may in principle be magnetic.Both pyrite and the other polymorph of FeS2, marcasite, have been reported to possess non-magnetic ground states .On the other hand, the shale samples from China displayed clear signature of paramagnetism , with siderite as the most likely cause.Indeed, in normal pressure conditions, the Fe centers of this mineral are found in the high-spin, magnetic state .The FESEM images of cement and shale samples at two different magnification scales are shown in Fig. 1.As expected, the cement samples appear more homogenous than shale samples.There are porous structures visible from nanometer to micrometer scale in both the cement and shale samples.129Xe spin-echo spectra of the BASF and Portland cement samples measured at 225 and 290 K are shown in Fig. 2a and b, respectively.The spectra show two major signals.The stronger signal centered around 0–20 ppm is interpreted to arise from xenon gas in the larger pores visible in the FESEM images as well as inter-particle voids, and it is called the free gas signal.The other, smaller signal centered around 20–50 ppm at 290 K and 50–130 ppm at 225 K highlights the xenon in mesopores.Terskikh et al. 
defined the correlation between the chemical shift values of 129Xe and pore size for silica-based materials.By applying the same correlation to the cement samples, the size of the mesopores was estimated to be in the 10–50 nm range, which is a typical size for capillary pores in hardened cement .The chemical shift values of the mesopore signals of the Portland cement samples are higher than those of the BASF cement samples, suggesting that the capillary pores are smaller in the former samples.In the case of Portland cement, the chemical shift of the mesopore signal is significantly higher for the P0.5 sample than for P0.3 sample, indicating that the higher initial water-to-cement ratio resulted in the smaller capillary pores.The P0.5 cement sample showed another well-resolved, small mesopore peak centered around 130 ppm at 225 K and 80 ppm at 290 K.This implies the simultaneous presence of much smaller mesopores, with pore size of about 6 nm .This is a typical size of gel pores in hardened cement.We note that, in addition to the pore size, there are many other factors contributing to chemical shift of 129Xe .For example, the cement samples have Ca, Fe, Al and Si in their structure that can strongly affects the nature of -OH groups.Moreover, Fe ions can also affect the chemical shift.Here, δS is the chemical shift of 129Xe adsorbed on the surface of the pore, D is the mean pore size, η is the pore geometry factor, R is the universal gas constant, T is the temperature, K0 is a pre-exponential factor, and Q is the effective heat of adsorption.The model explains that the chemical shift values increase with decreasing temperature because the population of xenon adsorbed on the pore surface increases.The fits of Equation to the capillary mesopore chemical shift values of cement samples are shown in Fig. 2c.The parameters obtained from fits are listed in Table 1.The values of the heat of adsorption are quite similar for different samples, ranging from 12.1 to 14.9 kJ/mol and being comparable to the heats of adsorption of the silica gels .The chemical shift values of the 129Xe adsorbed on the surface of pores are also the same results within the experimental error, i.e., about 180 ppm, which is larger than for silica, alumina and MCM-41 materials .The difference in the chemical structure of the pore surface and heterogeneity of pore structure of the cement samples could be a reason behind these differences.The BASF cement samples have higher D/ƞK0R values as compared to Portland samples, reflecting the bigger capillary pore size.2D 129Xe EXSY spectra of the P0.5 cement sample measured at room temperature with varying mixing time, are shown in Fig. 3.The off-diagonal cross peaks become clearly visible at the longer mixing time, showing a diffusion-driven exchange process between the free gas and capillary mesopore sites.The fits of two-site exchange model to the experimentally determined amplitudes of the diagonal and cross peaks are shown in Figs. 3d and S8, and the resulting parameters in Table 2.The exchange rates of the BASF samples are higher than those of the Portland samples.The exchange rates follow inversely the chemical shifts of the capillary micropore signals; the larger chemical shift, the smaller exchange rate.This implies that the diffusion from the smaller pores to the free gas site is slower than from the larger pores.129Xe spin-echo spectra of the shale samples from USA measured at 225 and 290 K are shown in Fig. 
4a and b, respectively.Also in this case the spectra show two dominant signals: one from the free gas around 0 ppm and another, very broad signal from the pores.The chemical shift range of the pore signals is about 70–300 ppm at 290 K and 90–330 ppm at 225K.The relationships between the chemical shift and pore size determined by Demarquay et al. and Terskikh et al. indicate that xenon gas explores a broad range of micropores and mesopores with different sizes in the shale samples, the pore sizes ranging at least from 1 to 10 nm.The nitrogen sorption isotherms of the same samples shown in Ref. show a bimodal pore size distribution with a narrow peak centered around 1–2 nm and a broad peak extending from 3 to 200 nm; the siliceous samples had mainly clay-associated porosity with a peak around 2 nm while the Eagle Ford samples had a broader pore size distribution with less of clay peak.In fact, similar bimodal features are visible also in the 129Xe NMR signal observed from the shale pores.The pore signal is slightly more intense for the siliceous samples than for the carbonate-rich sample, which may reflect their slightly higher porosity observed by 1H NMR .Restricted motion accompanied by anisotropic interactions may have also some effect on the line shape.For the comparison, the 129Xe spin-echo spectra of the Hubei and Chongqing shale samples from various reservoir depths measured at 225 and 290 K are shown in Fig. 4c and d, respectively .Because those samples included significant amount of paramagnetic Fe-bearing minerals, i.e., siderite, they showed extremely broad signal extending from −300 to 300 ppm.The large negative chemical shifts proved that the paramagnetic interactions had a dominant effect on the spectral features in those samples.Because no such negative shifts with respect to the gaseous 129Xe reference were observed from the new samples from USA analyzed in this work, the effect of paramagnetic substances is significantly smaller in the shales from USA.One may expect as a consequence of the smaller content and/or different chemical structure of the paramagnetic substances.Indeed, the predominant impurity component of the shale samples from USA, pyrite, with the Fe centers in the low-spin configuration, is not expected to show any paramagnetic effects at low temperature.However, Fe systems are known to possess spin-crossover effects between the low-spin and high-spin paramagnetic states upon applying temperature or pressure .In the case of the present samples, extension of the measurements to significantly higher temperatures than presently used, may in principle lead to population of an excited S = 2 spin multiplet and, thereby, thermal onset of paramagnetism.This would show up as appearance of negative shifts, as well as an additional source of non-monotonic temperature dependence of the shifts, in the USA-based shales, too.Variable-temperature 129Xe NMR spectra of the shale samples are shown in Figs. S5–S7.The chemical shift values of the pore gas signals slightly increase with decreasing temperature, but the change is not as significant as in the case of the cement samples.It turned out that Equation was not able to explain the observed chemical shift behavior, implying that the simplified model is not valid for the shale samples because of the small pore sizes, complex pore structure, and heterogeneous pore surface chemistry.The first echo signal amplitude of the 1H CPMG data of acetonitrile confined in the shale samples as a function of temperature is shown in Fig. 
5a.As explained in the Introduction, the amplitude is proportional to the amount of liquid acetonitrile.There is a gradual increase in the amplitude of the 1H signal with increasing temperature in the lower temperature region due to the melting of the liquid confined to the pores.The steep increase in the amplitude above 223 K indicates melting of acetonitrile in larger pores and spaces in between the particles.The pore size distributions resulted from the NMR cryoporometry analysis are shown in Fig. 5b.The pore sizes observed by NMR cryoporometry range from 10 to over 100 nm, and the amount and size of mesopores seem to be larger for the carbonate-rich sample than for the siliceous samples.However, there may be some relaxation weighting in the distributions, because in the very small pores in shales, 1H T2 relaxation time may be so short that the nuclei may not be observed in the experiment .Therefore, some bias in the pore size distributions can remain.On the other hand, the distributions are in good agreement with the nitrogen sorption results, which also showed broad pore size distributions .The 1H T2 relaxation time distributions of acetonitrile in the shale samples as a function of temperature are shown in Figs. 5c, 5d, S9 and S10.There is a broad distribution of relaxation times in the whole temperature range.The distributions seem to be divided into multiple discrete components, but most probably this is a result of the so-called pearling artefact typical for Laplace inversion , and the true distribution is more continuous.Below the melting point of bulk acetonitrile, the observed T2 values range from 0.1 to 100 ms for the carbonate-rich sample and from 0.5 to 100 ms for the siliceous sample, while above that temperature they range from 1 ms to 1 s for all the samples.The diffusion-driven exchange process between the bulk and confined acetonitrile in the shale samples was observed by using T2-T2 relaxation exchange experiment.It is not possible to observe the exchange process with the traditional EXSY method, because both sites have the same 1H chemical shift.The T2-T2 maps are shown in Fig. 6a, S11a and S12a.The maps include from three to five diagonal peaks, labeled alphabetically in the figures, as well as some cross peaks.Again, at least some of the peak splitting is most probably due to the pearling artefact.For the carbonate-rich EF1-223 shale sample, we identified two groups of cross-peaks, labeled by F and S, which reflect slower and faster exchange rates.As the S cross peaks connect the intermediate and longest T2 values, we interpreted that it reflects the exchange between bulk acetonitrile and acetonitrile in the larger pores.The F cross peaks, in turn, connect the intermediate and small T2 values, and therefore may reflect the exchange between macro- and mesopores.The fits of two-site exchange model to the amplitudes of diagonal and cross peaks as a function of mixing time are shown in Fig. 
6b and c for slow and fast exchange processes, respectively.These fits resulted in the exchange rate of 6 ± 2 s−1 for the faster exchange process and 1.1 ± 0.3 s−1 for the slower exchange process.Contrary to the carbonate-rich sample, the T2-T2 maps for the siliceous shale samples showed only the slower exchange process.The exchange rates for that process were similar to the carbonate-rich sample.We note that the exchange rates were much smaller than the R2 = 1/T2 relaxation rates, and therefore it is not needed to consider relaxation during the evolution period in the analysis .According to the NMR cryoporometry analysis, the amount of mesopores is much larger in the carbonate-rich sample than in the siliceous samples.This may explain why the exchange between macro- and mesopores was separately observed in the carbonate-rich sample but not in the siliceous samples.On the other hand, the Sil3-14 and Sil4-34 samples include 39% and 50% quartz in contrast to 13% in EF1-223.At the same time, the amount of calcite is 60% in EF1-223, whereas in Sil3-14 and Sil4-34 samples it is less than 3%.These differences may affect the substitution of bound water, which was not removed from the sample during the drying process, by acetonitrile molecules.The nature of functional groups affects the adsorption-desorption processes, and it may be anticipated that water substitution might be lower in EF1-223 than that in Sil3-14 and Sil4-34 samples .This might lower the relative proportion of small pores in the carbonate-rich sample observed by acetonitrile in relaxation and NMR cryoporometry experiments.This article described a versatile 1H and 129Xe NMR analysis the cement and shale samples.The 129Xe analysis showed that the size of the capillary mesopores in both BASF and Portland cement samples vary in between 10 and 50 nm.The pore size is smaller for the Portland samples.The EXSY data implies that the diffusion between the free gas and capillary mesopores is slower in the Portland than BASF samples due to the smaller pore size.129Xe spin-echo spectra of the shale samples from USA indicates the presence of micropores and mesopores with the pore sizes ranging from 1 to 10 nm.Contrary to the shale samples from China, the spectra in the shale samples from USA do not include signal at negative chemical shifts, and thereby lack the explicit fingerprint of paramagnetic effects.1H T2 distributions of acetonitrile in the shale samples are very broad due to the heterogeneity of the material.NMR cryoporometry analysis showed also the presence of mesopores in the shale samples, but there may be some relaxation weighting in the distributions.Using T2-T2 relaxation exchange experiment, the diffusion-driven exchange process was observed between the bulk and confined acetonitrile in the pores of shale samples and the exchange rate constants were quantified.Overall, the combined 1H and 129Xe NMR analysis reveal very versatile information about the structure of the cement and shale samples as well as dynamics of absorbed fluids that may be of interest also in the commercial applications of these materials.
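As an illustration of how exchange rates such as those quoted above can be extracted from the mixing-time dependence of cross-peak amplitudes, the sketch below fits a single-exponential build-up to synthetic data. This neglects relaxation during mixing and unequal site populations, so it is a simplification of the two-site exchange model used in the actual analysis; all numbers are invented.

```python
# Simplified extraction of an apparent exchange rate from the build-up of a
# T2-T2 (or EXSY) cross peak with mixing time. Synthetic amplitudes only.
import numpy as np
from scipy.optimize import curve_fit

def cross_peak_buildup(t_mix, amplitude, k_ex):
    """Single-exponential cross-peak build-up for a two-site exchange process."""
    return amplitude * (1.0 - np.exp(-k_ex * t_mix))

t_mix = np.array([0.01, 0.05, 0.1, 0.25, 0.5, 1.0, 1.5])      # mixing times, s
cross = np.array([0.01, 0.05, 0.10, 0.21, 0.33, 0.42, 0.45])  # amplitudes, a.u.

(p_amp, p_k), cov = curve_fit(cross_peak_buildup, t_mix, cross, p0=(0.5, 1.0))
print(f"apparent exchange rate: {p_k:.2f} 1/s (+/- {np.sqrt(cov[1, 1]):.2f})")
```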
Cements and shales play a vital role in the construction and energy sectors. Here, we use a set of advanced NMR methods to characterize the porous networks and dynamics of fluids in hydrated cement and shale samples. We compare the properties of cements from two different manufacturers, BASF and Portland, as well as shales brought from USA and China. 129Xe NMR spectra of xenon gas adsorbed in the samples indicate that the capillary mesopores are smaller and the exchange between free and confined gas is slower in the Portland than in the BASF cement samples. The pores probed by xenon in the shale samples from USA are significantly smaller than in the cement samples, partially in the micropore region. There is a substantial difference in between the 129Xe spectra of shales from USA and China. Whereas the latter show a clear signature of paramagnetic impurities by exhibiting large negative 129Xe chemical shifts (referenced to the free gas), the samples from USA lack the negative chemical shifts but feature large positive shift values, which may indicate the presence of micropores and/or paramagnetic defects. 1H NMR cryoporometry measurements using acetonitrile as probe liquid allowed the observation of mesopores in the shale samples as well, and T2-T2 relaxation exchange experiment enabled the quantification of the exchange rates between free and confined acetonitrile.
151
Non-Eulerian behavior of graphitic materials under compression
Due to its exceptional mechanical properties, graphene holds great promise as a reinforcing agent in polymer nanocomposites. In order to produce nanocomposites that can compete with conventional long fibre composites, few-layer graphene inclusions rather than monolayer graphene are preferable, since high volume fractions can thus be attained. Previous studies showed that the interlayer bonding of few-layer graphenes somewhat affects their overall mechanical performance, mainly at high deformations, but the high stiffness of 1 TPa is still retained, and fracture strengths of 126 GPa and 101 GPa for bilayer and trilayer, respectively, were obtained from nano-indentation experiments. Moreover, promising results for multi-layer graphenes as reinforcements of polymeric materials in tension have been obtained by either mechanical experiments or atomistic simulations. However, the compression behavior, which is very important for structural applications, is as yet largely unexplored. Another important issue regarding the mechanics of graphene and 2D materials in general is the applicability of continuum mechanics at the nanoscale, and particularly of the plate idealization. The origin of bending rigidity in single layer graphene differs from that of a continuum plate, and other methods need to be employed for the estimation of the bending rigidity of 2D materials. Although there are several studies focusing on this subject for single layer graphene, there is no analogous work for the case of thicker graphenes, which is crucial for the compression behavior of these flakes as well as their mode of failure. Graphenes of various thicknesses have been examined under uniaxial or biaxial tension using Raman spectroscopy as a tool for monitoring the local strain. The graphene flakes are either supported or fully embedded in polymer matrices, and by bending the polymers the flakes are strained while the mechanical response is monitored mainly by the shift of the position of the 2D and G peaks with the applied strain. This is now a well-established technique for studying graphene under axial deformations at moderate strain levels, and various aspects can be studied, such as the stress transfer mechanism in graphene/polymer systems. In compression, the achievable range of strain using the bending beam technique is sufficient to capture the mechanical behavior of single layer graphene up to failure, which occurs by buckling at ∼−0.60% and −0.30% strain for the cases of fully embedded and simply supported 1LG, respectively. As mentioned above, there are limited studies of multi-layer graphenes under compression, at least experimentally, while there have been compression studies of other, less ordered graphitic structures such as aerographite and 3D carbon nanotube assemblies. In the present study we examine in detail the compression behavior of simply supported and fully embedded graphene in polymers, with thicknesses ranging from bi-layer to multi-layer graphene, using Raman spectroscopy and applying continuum theory to acquire an in-depth understanding of the failure mechanisms. A comparative reference is also made to the compressive behavior of monolayer graphene that has been examined in our earlier work. Graphene flakes were prepared by mechanical exfoliation of graphite using the scotch tape method. The exfoliated graphitic materials were deposited directly on the surface of the PMMA/SU-8 polymer substrate. The SU-8 photoresist was spin coated at a speed of ∼4000 rpm, resulting in a very thin layer of thickness ∼180 nm. Appropriate few-layer
flakes located with an optical microscope and the number of layers was identified from the 2D Raman peak.In order to create fully embedded flakes another layer of PMMA was spin coated on the top of the flakes with thickness of ∼180 nm.A four-point-bending apparatus was used to subject the samples to compressive strain, which was adjusted under the Raman microscope for simultaneously loading the sample and recording spectra.A schematic of the experimental setup is presented in Fig. S1e.All the experiments performed using a laser line of 785 nm.Strain applied in a stepwise manner and Raman measurements were taken for the 2D and G peaks in situ for every strain level.Several points were measured close to the geometric centre of the flakes.Here we present experimental results on the compressive behavior of graphene flakes of various thicknesses fully embedded in polymer matrices.Following the setup of previous studies , graphene flakes deposited on a PMMA/SU-8 substrate and another layer of thin PMMA was spin coated on the top in order to fully embed the graphene in matrices.Using a four-point-bending jig under a Raman microscope we examined embedded few-layer graphene flakes under compression.The graphene/polymer specimens were subjected to incrementally applied compressive strain while the Raman spectra were recorded at every level of loading.It has been explained in detail in previous works , that under compression the frequency of the 2D and G phonons shifts to higher wavenumber until reaches a peak value, after which downshift of the frequency follows.The maximum strain that corresponds to the peak value of the phonon hardening is the critical strain to failure since the graphene no longer sustain the compressive strain.For every examined specimen several Raman measurements were taken close to the geometric centre of the flakes to avoid edge effects .Fig. 1 shows the position of the 2D peak versus the applied compressive strain.We examined bi-layer, tri-layer and few layer specimens in order to assess the compression behavior of multi-layer graphene similar to those employed in graphene nanocomposites .The critical strain to failure for the embedded bilayer is −0.26%.This value was consistent for all the examined flakes.As mentioned earlier, the 2D slope with strain for the bilayer is ∼41.4 cm−1/%, which is similar to what has been obtained by other workers in the field .The slope for the 2LG is lower than that obtained in the case of 1LG and this indicates that the carbon bonds are not as highly stressed per increment of strain as for 1LG .We note that all the examined flakes have length of over ∼15 microns and thus, there are no size effects that might compromise the stress transfer efficiency and the lower shift rate is due to the layered nature of multilayer flakes as has been discussed in detail previously .In Fig. 
1c and d the results for trilayer and few layer graphene specimens are presented.The corresponding critical compressive strain to failure is −0.20% for the trilayer and −0.10% for the few layer, respectively.The slope for both the trilayer and few-layer graphenes is similar to the bilayer and in agreement with results obtained under tension for graphenes with the same thicknesses .We note that the stacking of the few-layers has no effect on the critical strain to failure as revealed by examining flakes of different interlayer configurations.Moreover, the critical strain to failure is stable for every graphene of same thickness as confirmed by examining more specimens.The results can be found in the SI.The reason for the decrease of the critical compressive strain with the increase of flake thickness must be sought in the way that axial stress is transmitted to multi-layer graphenes.For the embedded flakes the axial stress is transferred from the polymer to the outer layer by shear which is then is transmitted to the inner layers.However, as the inner vdW bonding is much weaker than the polymer/graphene bonding the interlayer stress transfer is less effective.Thus, a smaller fraction of the total stress is transferred to the inner layers and an internal shear field is present.As a result, the overall ‘structure’ is much weaker in compression and fails in shear like a pack of cards under axial compression.This explains why the critical strain to compression failure is significantly smaller than the corresponding value of −0.60% measured previously for monolayer graphene .Further quantitative explanation for this effect is given below.The post failure behavior is also very interesting.In the case of the trilayer and few layer graphenes the characteristic “slip-stick” behavior is observed in the post failure regime.This is a common occurrence in graphitic materials and has been observed in several studies of nanoscale friction of graphene .Another remarkable feature is that after the position of the peaks reaches the zero value, further compression causes downshift of the frequency and clearly passes in the tensile regime.The position of the G peak versus the compressive strain for the bilayer flake is plotted in Fig. 1b.The peak can be fitted by two Lorentzians curves due to the splitting at higher strain level.For compressive strain of ∼ −0.98% the frequency confirms that the bilayer graphene is in fact under tension, and the position of the sub-peaks is ∼1575.6 cm−1 and ∼1565.5 cm−1, providing solid evidence that the graphene is under uniaxial tension.In Fig. 2 the spectra of the G peak for various levels of compressive strain are presented showing the clear splitting in the tensile regime.In this context the graphene appears to behave as a mechanical metamaterial with negative stiffness.We note that in the work of Tsoukleri et al. 
, the behavior was different in the post failure regime and no such phenomenon was observed, as well as in other samples examined herein and can be found in SI.The precise mechanism governing this behavior is not perfectly clear, but based on our initial observations we assume that there is a geometric effect for such behavior to occur.We speculate that a mechanism similar to the incremental negative stiffness behavior observed in carbon nanotubes in the post buckling regime is also the case here for which when the curvature increases tension begins to develop at the curved centre.This is supported by the results of the simply supported flakes, for which the AFM images show buckling failure with localized buckling waves similar to the monolayer while the Raman response has entered the tensile regime.Further examination and simulations are currently under way in order to fully uncover the physical mechanism of this behavior and will be presented in a future study.At any rate, the results here clearly show that graphene exhibits negative stiffness under compression in the post-buckling regime.In order to evaluate theoretically the critical strain for Euler buckling instability under compression of embedded graphene flakes that are comprised of more than one layer, the bending and tension rigidities of multi-layer must be known.For the bending rigidity we use low-bound experimentally derived values that are available in the literature .Higher values have also been reported by other workers in the field but as argued below, values of D higher than a certain threshold, lead to larger deviation between standard theoretical approaches and experiment.Furthermore, the same problem holds if we resort to the use of plate mechanics) for the calculation of the bending rigidity of multi-layer graphene.The tension rigidity is given by C = Eh where E is the stiffness and h the thickness."We assume that the Winkler's modulus does not change with thickness since the foundation support is the same.Regarding the values of length and width of the flakes, we use representative values that refer to our experimental specimens bearing in mind that the dimensions do not affect the critical strain to failure ."Here, we will examine three cases; the Winkler's model as presented and applied in our previous works , using the corresponding bending stiffness values of few-layer flakes that can be found in the literature, and this will be referred to as Euler buckling; a spring-in-series model to account for the van der Waals bonds that bind the individual single layers that form the multilayer flakes, and this we will be referred to as Linear Spring Model; and finally, a Modified Winkler model that retains the linear elastic springs as above and taking the bending rigidity of the multi-layer flake as the product of the bending rigidity of a single layer multiplied with the number of layers.The results of the three cases as above for graphenes with thicknesses of two to six layers are summarized in Table 1.For this case the KW parameter of reference is employed, since as mentioned above it is unchanged for all cases examined.We observe that this approach yields constant critical strains to buckling with values close to those of the monolayer.This is shown in Fig. 
3 for fully embedded graphenes under compression and for a wide range of thicknesses.Thus, the predictions for Euler buckling type of failure deviate from the experimental results which clearly indicates the compression behavior of multi-layer and graphitic materials cannot be considered as an elastic buckling phenomenon.It seems therefore that the failure of the embedded few layer graphenes in compression is cohesive and originates from the inability of the weak van der Waals bonds to withstand the developed shear stress.It is known that graphite is a solid lubricant with low interlayer shear strength .Loss of stacking has also been observed in embedded few layer graphenes under tension due to relative sliding between individual layers and at relatively small strain , which facilitates the cohesive failure of the flakes.Previous studies on the stress transfer mechanism in graphene/polymer systems showed that the shear stress at the interface reached a value of 0.4 MPa at small strain .The shear strength of graphite has been found to be in a range of 0.25–2.5 MPa and a value as small as 40 KPa reported for the interlayer shear stress in a bilayer graphene .Thus, the few layer graphenes fail in shear prior to the critical Euler buckling failure.The mono-layer graphene does not suffer from such a structural “disadvantage” and its failure under compression is Euler buckling, as has been discussed in detail previously .More importantly, it possesses the highest resistance to compression among graphitic materials when embedded in polymers despite being the thinnest family member and having the lowest bending rigidity.Using appropriate values for the interlayer distance and the adhesion energy per unit area, we can estimate the Kgr in a range between 87.5 and 198.3 GPa/nm.Although the range of the values is large, the resulting overall KW is marginally affected, its value remains almost constant for the range of Kgr and consequently the bending rigidity is the crucial parameter for the estimation of the critical strain to failure.Using the KW estimated considering spring in series and again using the bending rigidities from Ref. , the obtained critical strains are closer to the experimental findings but still, the εcr seems fixed at ∼−0.25% in contrast to the decreasing trend with the increase in thickness and cannot account for the much lower values obtained for the few-layer graphenes.The spectroscopic data can be converted to values of axial stress using the value of 5.5 cm−1/GPa for a laser line of 785 nm .Applying the procedure reported previously to the present results, we obtain compressive stress-strain curves for the embedded few layer graphenes.The stress-strain curves are represented in Fig. 
5 along with results for mono-layer taken from the reference for comparison.A dramatic decrease is observed in the compressive strength with the increase in thickness of the graphenes.The mono-layer has significantly larger compressive strength than thicker graphenes and thus turns up as the most efficient reinforcing filler for composite materials under compressive loadings.Moreover, the change in the sign of stress when the negative stiffness regime is entered is captured.In summary, we examined in depth the effect of thickness upon the compressive behavior of fully embedded graphenes.The most significant finding is that the critical strain to compression failure decreases with the increase in thickness of graphene as a result of low resistance to shear and consequent cohesive failure.The threshold of cohesive failure occurs at a much lower strain than the calculated critical strain for Euler buckling.The graphene/graphene interactions were treated as springs acting in series, and the overall bending rigidity of a multi-layer graphenes were scaled to the number of layers.This assumption stems from the fact sliding is more favorable failure mechanism than buckling.Finally, the spectroscopic data were converted to stress-strain curves, which showed clearly that the monolayer graphene has a far superior compression behavior than multi-layer graphenes.These counter-intuitive results are useful for the efficient design of composites that incorporate graphene inclusions as reinforcing agents and provide significant insight for the mechanical behavior of few-layer graphenes.
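For readers who want to reproduce the order of magnitude of the Euler-type estimates discussed above, the sketch below evaluates the critical compressive strain for sinusoidal buckling of a wide plate resting on a linear elastic (Winkler) foundation, εcr = 2√(KW D)/C with C = Eh. This closed form and all numerical inputs are assumptions for illustration (the paper's own expressions and fitted values are given in its cited earlier works and Table 1), not the authors' reported parameters.

```python
# Illustrative Euler-type estimate: buckling of a wide plate on a Winkler
# foundation, eps_cr = 2 * sqrt(K_W * D) / C, with C = E * h the tension rigidity.
# The formula is the standard plate-on-elastic-foundation result assumed here;
# all numbers below are illustrative, not the paper's fitted values.
import math

def critical_strain(D, C, K_W):
    """Critical compressive strain for buckling on a Winkler foundation.

    D   : bending rigidity, N*m
    C   : tension rigidity E*h, N/m
    K_W : foundation (Winkler) modulus, N/m^3
    """
    return 2.0 * math.sqrt(K_W * D) / C

# Monolayer illustration: E = 1 TPa, h = 0.335 nm -> C ~ 335 N/m; D ~ 2.4e-19 N*m;
# K_W chosen here so eps_cr lands near the ~0.6 % reported for embedded monolayer.
C_1L = 1.0e12 * 0.335e-9
D_1L = 2.4e-19
K_W = 4.2e18
print(f"monolayer eps_cr ~ {critical_strain(D_1L, C_1L, K_W) * 100:.2f} %")

# For n-layer flakes, the three cases compared in Table 1 differ only in how
# D and K_W are taken to scale with n (literature D values, springs-in-series
# K_W, or D proportional to n); each is a one-line change to the inputs above.
```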
The mechanical behavior of graphitic materials is greatly affected by the weak interlayer bonding with van der Waals forces for a range of thickness from nano to macroscale. Herein, we present a comprehensive study of the effect of layer thickness on the compression behavior of graphitic materials such as graphene which are fully embedded in polymer matrices. Raman Spectroscopy was employed to identify experimentally the critical strain to failure of the graphitic specimens. The most striking finding is that, contrary to what would be expected from Eulerian mechanics, the critical compressive strain to failure decreases with increase of flake thickness. This is due to the layered structure of the material and in particular the weak cohesive forces that hold the layers together. The plate phenomenology breaks down for the case of multi-layer graphene, which can be approached as discrete single layers weakly bonded to each other. This behavior is modelled here by considering the interlayer bonding (van der Waals forces) as springs in series, and very good agreement was found between theory and experiment. Finally, it will be shown that in the post failure regime multi-layer graphenes exhibit negative stiffness and thus behave as mechanical metamaterials.
152
Genetically engineered crops help support conservation biological control
Biological control is a cornerstone of Integrated Pest Management and plays an important role in the sustainable and economic suppression of arthropod pest populations.The global value of biological control has been estimated at $617/ha.For pest control provided by natural enemies in the USA alone, a value of $5.9 billion has been estimated, a value that is regarded as extremely conservative.The importance of biological control for sustainable agricultural production is widely recognized and biological control is regarded as an important regulating service in the Millennium Ecosystem Assessment.Biological pest control comprises different “tactics” including augmentative or inundative control, which requires an initial or repeated release of natural enemies, and classical biological control in which exotic natural enemies are introduced, mainly to manage invasive pests.Conservation biological control, in contrast, takes advantage of resident natural enemies and involves management strategies to conserve their populations and the services they provide.Two general approaches are followed.One involves habitat manipulations to increase the abundance and activity of natural enemies, because natural enemies have been shown to benefit from increased landscape complexity.The second focuses on reducing use of control tactics, such as insecticides, that may harm natural enemies.New molecular tools provide opportunities for the development of genetically engineered pest-resistant crops that control key pests and require less input of foliar and soil insecticides.The first GE crops developed in the late 1980s expressed insecticidal proteins from the bacterium Bacillus thuringiensis Berliner because of their known specificity and the excellent safety record of microbial Bt formulations.Pest-resistant Bt plants are now widely used on a global scale.There is evidence that Bt crops can reduce target pest populations over broad scales resulting in reduced damage on both GE and non-GE crops in the region.In addition, they have been shown to promote biological pest control in the system, if foliar insecticides are reduced.Host-plant resistance, whether developed through traditional breeding practices or genetic engineering, is an important tactic to protect crops against arthropod pests.Together with biological control, host plant resistance forms the foundation of sustainable IPM programs.Mechanisms of resistance can be categorized as constitutive or inducible and direct or indirect defenses.Direct defenses can be chemical or physical attributes and are defined by having a direct impact on the herbivore by negatively affecting its important life-history parameters or by deterring adult oviposition.In contrast, indirect defenses act by enhancing the effectiveness of natural enemies of the attacking herbivore.Examples are the emission of volatiles that are used by natural enemies to find their hosts/prey and the provision of food.Plant characteristics that affect herbivores may also directly or indirectly affect their natural enemies.Studies on plant-herbivore-natural enemy interactions reveal that plant defense traits can have negative, positive, or neutral effects on natural enemies.The tools of genetic engineering have provided a novel and powerful means of transferring insect-resistance genes to crops, and there is evidence that those resistance traits have similar effects on natural enemies than resistance achieved by conventional breeding.GE insect resistant crops have been grown on a large scale for 
more than 20 years, and there is considerable experience and knowledge on how they can affect natural enemies and how their risks can be assessed prior to commercialization.As a highly effective form of host plant resistance, insecticidal GE crops are a foundational tactic in IPM.They work synergistically with other tactics such as conservation biological control to achieve more sustainable pest control.This review will present basic information on the adoption and use of GE crops, discuss the impact of GE crops on natural enemies through the lens of risk assessment and provide evidence on how GE crops can enable biological control to become a more effective component of IPM.Since the first GE plant was commercialized in 1996, the area grown with GE varieties has steadily increased.The two major traits that are deployed are herbicide-tolerance and resistance to insects.Here, we will focus primarily on insect-resistant GE crops.In 2017, GE varieties expressing one or several insecticidal genes from Bt were grown on a total of 101 million hectares worldwide, reaching adoption levels above 80% in some regions.Thus, Bt plants have turned what was once a minor foliar insecticide into a major control strategy and their role in IPM has received considerable attention.The majority of today’s insect-resistant GE plants produce crystalized proteins from Bt.However, this bacterium possesses another class of insecticidal proteins, the vegetative insecticidal proteins, which are synthesized during the vegetative growth phase and have a different mode of action than Cry proteins.Vips are already deployed in some commercial maize hybrids and cotton.While the early generation of Bt crops expressed single cry genes, current varieties typically express two or more insecticidal genes.These so-called pyramid events are more effective in controlling the target pests and help to slow down the evolution of resistance.Currently, SmartStax® maize produces the most combined GE traits of any currently commercially cultivated GE crop, i.e., six different cry genes to control lepidopteran and coleopteran pests and two genes for herbicide tolerance.Herbicide-tolerant GE crops and biological control.Tolerance to broad-spectrum herbicides such as glyphosate, glufosinate or dicamba is the most widely deployed trait in GE crops.In 2017, GE varieties carrying the herbicide tolerance trait either alone or stacked with insecticidal traits were grown on a total of 166.4 million hectares worldwide.The benefits of this technology include highly effective weed control, greater flexibility in applying the herbicide, reduced phytotoxicity to the crop, and savings in time and costs.Herbicides generally have low toxicity to arthropods and this is evaluated during the approval process for each new product.However, changes in weed management affect weed diversity and abundance and also might indirectly affect the abundance, diversity and effectiveness of biological control.It is well established that weeds interact with both arthropod pests and their natural enemies.Weeds can provide food such as pollen and nectar, harbor prey/hosts, provide shelter and refuge, alter the microclimate and structure in the field, and interact with the crop affecting its morphology, phenology and physiology with consequences for natural enemies.Because these interactions are very complex and our understanding remains incomplete, making predictions on how changes in weed abundance and diversity affect arthropods is extremely difficult.Several studies in 
Europe have addressed the impact of HT crops and their management on arthropod biodiversity.The most publicized project was the UK Farm Scale Evaluations conducted in different crops.The project used split fields where one half was sown with a conventional crop variety and managed conventionally, while the other half was grown with a HT variety and only the associated herbicide was applied.As expected, the change in the weed management scheme affected both the composition of weed species and the invertebrate taxa in the field.Most importantly, however, crop type and sowing seasons had a far larger impact on the weed and invertebrate composition than the herbicide regime.Subsequent field studies conducted with HT maize in both Spain and the Czech Republic and with HT cotton in Spain have shown that the response of arthropods to altered weed abundance and diversity was variable and differed among taxa.For example, Albajes et al. compared the weed and arthropod abundance in HT maize treated with glyphosate to untreated maize plots.Both the abundance and composition of weeds differed significantly between the treatments.Among the herbivores collected, aphids and leafhoppers were more abundant in the glyphosate-treated HT plots, while the opposite was observed for thrips.In the case of predators for example, Orius spp. and spiders were more abundant in the glyphosate treated plots, while the opposite was observed for Nabis spp. and Carabidae.A follow-up study indicated that the differences in Orius spp. densities were more linked to prey availability than weed abundance per se.One of the biggest changes with HT weed management is flexibility in the timing of herbicide application, which has a marked effect on the population dynamics of weeds.This has been demonstrated for HT sugar beet, which provides opportunities to alter weed management, including enhancing weed biomass while protecting the crop from pests.Additionally, it is possible to enhance arthropod biomass and weed seed banks to provide food for farmland birds.In addition to changes in weed management, the use of HT varieties has also been found to have impacts on tillage practice.No-tillage and conservation tillage regimes have become more widely adopted with the introduction of HT crops.Reduced tillage or no-tillage minimizes the disruption of the soil structure, composition and biodiversity with positive impacts on arthropods and biological control.Furthermore, reduced tillage and fewer tillage passes contribute significantly to carbon sequestration and reduce the amount of greenhouse gas emissions, which in turn could help mitigate climate change.Overall, the experience available to date shows that the effects caused by a shift from a conventional weed management scheme to a HT crop system on arthropods and biological control are difficult to predict.Depending on the crop, the arthropod taxa and the actual changes in crop management effects can be positive or negative.Because of this complexity, assessing potential risk of HT technology compared to conventional cropping systems is difficult.Such an assessment, however, is a regulatory requirement in the European Union.The European Food Safety Authority assessed the environmental impact of HT maize and soybean and concluded that their cultivation is unlikely to raise additional environmental safety concerns compared to conventional maize or soybean in most conditions.The application of the Bt technology, however, is currently largely limited to the three field crops maize, cotton, 
and soybean.Most of the Bt varieties target lepidopteran pests.These include stem borers, such as Ostrinia nubilalis in maize, the pink bollworm Pectinophora gossypiella in cotton and the budworm/bollworm complex in cotton and soybean, including Helicoverpa/Heliothis spp. and other caterpillar pests.In the case of maize, traits are available that target the larvae of corn rootworms, Diabrotica spp.Recently, the technology has been applied to eggplant for protection against the eggplant fruit and shoot borer.Following years of field trials in Bangladesh, Bt eggplant was grown by 20 farmers in 2014 and over 27,000 in 2018.Bt-transgenic poplar trees producing lepidopteran-active Cry proteins have been grown in China since 2002, covering 450 ha in 2011.The adoption of the Bt technology differs among continents.While Bt-transgenic varieties are widely used in the Americas and in Asia, only a few countries in Europe and Africa grow these crops.Bt maize is very popular in the Americas, often reaching >80% adoption.Bt cotton is also widely grown in the USA and Mexico, while Bt soybean remains at relatively low adoption levels in South America with the exception of Brazil.In Chile, stacked Bt/HT maize and in Costa Rica, stacked Bt/HT cotton have been grown for seed export only.In several Asian countries and Australia, the technology is used to control lepidopteran pests in cotton with adoption levels >90%.Bt maize is grown at a significant level in the Philippines to control the Asian corn borer, Ostrinia furnacalis, while Vietnam introduced Bt-transgenic varieties only in 2015 and their use is still limited.In Europe, the only product currently approved for cultivation is the Bt maize event MON810 that produces the Cry1Ab protein and protects the plants from corn borers.The largest cultivation area is in Spain with an overall adoption level of 36% in 2017.In Africa, Bt crops are currently cultivated in only two countries.South Africa grows Bt maize to control stem borers and Bt cotton to control Helicoverpa armigera.Sudan deploys Bt cotton targeting the same pest.Use of Bt cotton has been temporarily halted in Burkina Faso after eight years.With the recent invasion of the fall armyworm in Africa, there is increased interest in using Bt maize as part of a management program.Worldwide, GE plants are subject to an environmental risk assessment before being released for cultivation.The ecosystem service of biological control is an important protection goal to be addressed in the ERA.Growing insecticidal GE plants could harm natural enemies and biological control in three ways.First, the plant transformation process could have introduced potentially harmful unintended changes.In the ERA, this risk is typically addressed by a weight-of-evidence approach considering information from the molecular characterization of the particular GE events and from a comparison of the composition and agronomic and phenotypic characteristics of the GE plant with its conventional counterpart.There is increasing evidence that the process of genetic engineering generally has fewer effects on crop composition compared with traditional breeding methods.The current approach is conservative, in particular because off-types are typically eliminated over the many years of breeding and selection that happen in the process of developing a new GE variety.Second, the plant-produced insecticidal protein could directly affect natural enemies.Such potential toxicity is tested on a number of non-target species and these data are an important
part of the regulatory dossier.Third, indirect effects could occur as a consequence of changes in crop management or arthropod food-webs.Such effects are addressed in the pre-market ERA but, because of the complexity of agro-ecosystems, potential impacts might only be visible once plants are grown in farmers' fields.For insecticidal proteins in GE plants to directly affect a natural enemy, the organism has to ingest the toxin and be susceptible to it.Toxicity of the insecticidal protein to natural enemies is typically evaluated in a tiered risk assessment approach that is conceptually similar to that used for pesticides.Testing starts with laboratory studies representing highly controlled, worst-case exposure conditions and progresses to bioassays with more realistic exposure to the toxin and semi-field or open field studies carried out under less controlled conditions.From a practical standpoint, because not all natural enemies potentially at risk can be tested, a representative subset of species is selected for assessment.First, the species must be amenable and available for testing.This means that suitable life-stages of the test species must be obtainable in sufficient quantity and quality, and validated test protocols should be available that allow consistent detection of adverse effects on ecologically relevant parameters.Second, what is known about the spectrum of activity of the insecticidal protein and its mode of action should be taken into account to identify the species or taxa that are most likely to be sensitive.In the case of Bt proteins, the phylogenetic relatedness of the natural enemy to the target pest species is of importance.Third, the species tested should be representative of taxa or functional groups that contribute to biological control and that are most likely to be exposed to the insecticidal compound in the field.Knowledge of the natural enemies present in a particular crop, their biological control activity, and their biology and ecology is used to select representative test species.Databases containing this information have been established for various field crops in Europe and for rice in China.The manner by which this information can be used to support the species selection process has been demonstrated for Bt maize in Europe and for Bt rice in China.Attempts to construct arthropod food webs and use this information to select the most appropriate surrogate species for testing have also been developed for Bt cowpea in West Africa, Bt sweet potato in Uganda, and Bt pine trees in New Zealand.When Bt genes are incorporated into crops, they are usually combined with constitutive promoters, such as CaMV 35S or the maize ubiquitin promoter, which are active in all tissues.Consequently, Bt proteins in current crops can be found in the whole plant including roots, stems, leaves, pollen, and fruits.However, concentrations can vary considerably in different plant tissues, across different developmental stages, and among different Bt proteins and transformation events.One example is the pollen of Bt maize producing Cry1Ab.Early cultivars with the transformation event 176 had high concentrations of Cry1Ab in pollen, which led to concerns that valued butterfly populations may be affected when inadvertently ingesting insecticidal pollen deposited on their host plants.Modern Bt maize varieties based on other transformation events express very low levels of Cry1Ab in the pollen.In contrast to sprayed insecticides, which are applied at distinct time points, plant-produced Bt
proteins are present constantly.Exposure of the pest and non-target organisms is therefore longer than it would be with most insecticides.Bt protein concentrations in younger tissue, however, are often higher than in mature tissue, which can lead to lower Bt protein concentrations towards the end of the growing season.This has, for example, been reported for Cry3Bb1 in maize event MON88017, but not for Cry1Ab in maize event MON810.In the case of Bt cotton, the Cry1Ac concentration typically declines when plants get older, while the Cry2Ab protein remains relatively stable.Bt plant material entering the decomposition process in the soil is degraded rapidly.When litter bags filled with senescent Bt maize leaves were buried in a maize field in autumn, almost no Bt protein was detectable eight months later.Similarly, residual root stalks collected eight months after harvest contained 100-fold less Cry1Ab than fresh root samples.Because Bt proteins are gut-active, they need to be ingested to exert their insecticidal effects.Natural enemies can be exposed to plant-produced Bt proteins when feeding directly on plant tissue, or via prey or host species that have consumed Bt plant material.Plant material is mainly consumed by herbivores, which include major pest species that are the targets of the Bt crop, but also a range of non-target species from different taxonomic orders that are not susceptible to the produced Bt proteins.Many predators are also facultative herbivores, which feed on pollen and other plant tissue when prey is scarce.Pollen feeding has, for example, been reported for predatory bugs, such as Orius spp., for ladybird beetles, such as Coleomegilla maculata or Harmonia axyridis, for spiders, for ground beetles, and for predatory mites.Field studies with Bt maize, which sheds large amounts of pollen, revealed that Orius spp. and ladybeetles contained higher levels of Bt protein during anthesis than before or after, indicating pollen consumption.Green lacewings, Chrysoperla carnea, feed exclusively on pollen and nectar in the adult stage, while larvae are predators which can supplement their diet with pollen.Predators may actively seek pollen as a food source.They may, however, also ingest it passively, e.g. when it sticks to their prey or, in the case of spiders, when they clean or recycle their web.In Carabidae, some species are mainly predators, some are considered omnivores, feeding on prey and plant tissue, and others live mainly as herbivores, e.g. on plant roots or seeds.Predatory bugs, such as Geocoris spp.
and Nabis spp., have also been reported to feed directly on green leaf tissue.Soil inhabiting natural enemies may feed on roots or on decaying plant or arthropod material occasionally, which might expose them to Bt protein.They may also encounter root exudates that contain Bt protein.Nectar is an important source of carbohydrates for adult parasitoids and some predators, although there is no evidence that nectar contains Cry proteins.Parasitoids commonly don’t consume plant tissue and adult parasitoids collected in Bt maize and Bt rice fields did not contain measurable Cry protein concentrations.There is evidence, however, for direct plant feeding by Pseudogonatopus flavifemur, a parasitoid of planthoppers, that contained Cry protein when caged with Bt rice plants devoid of hosts.While exposure through direct plant feeding might be a significant route of exposure for some natural enemy species or in particular situations, the more common route of exposure to Bt proteins is through consumption of prey or hosts.Herbivores feeding on Bt plants may ingest the insecticidal protein and expose their antagonists to these proteins.Feeding studies with sensitive insects have shown that Bt protein measured in herbivores immunologically by ELISA is still biologically active, which indicates that ELISA data can be used to estimate levels of exposure to active Bt protein.When consuming prey or hosts, bioactive Bt protein is thus transferred from herbivores to natural enemies.For arthropods consuming Bt protein-containing food, the protein becomes undetectable after a few days when switched to non-Bt diet.This indicates that most of the ingested Bt protein is digested in the gut or excreted.However, Cry1Ac was also found in the body tissue outside the gut in cotton bollworms, H. armigera.It has been claimed that Bt proteins may accumulate in a ladybird in a system using aphids and purified Bt proteins in an artificial diet.However, the body of literature from more realistic laboratory and field experiments does not provide any evidence for such an accumulation.Many natural enemies use aphids as prey or hosts because aphids are abundant in most crops worldwide.Bt proteins, however, do not seem to enter the phloem sap, which is the food for aphids.Consequently, aphids contain, at best, trace amounts of Bt protein several orders of magnitude lower than the concentrations in green tissue.Natural enemies consuming mainly aphids are thus generally not exposed to significant concentrations of Bt protein.Consequently, aphid honeydew, which is an important source of energy for both predators and parasitoids, is a negligible route of exposure to plant-produced Bt proteins.The same appears to be true for the honeydew produced by other sap-feeders.Only trace amounts of Cry proteins were detected in the honeydew produced by the brown planthopper on different Bt rice lines.However, other transgenic compounds have been found in aphid honeydew.Consequently, this route of exposure could be important for insecticidal non-Bt plants.Herbivores feeding on green plant tissue ingest relatively high amounts of Bt protein.Those include species with chewing mouthparts, e.g. 
caterpillars, and species with piercing-sucking mouthparts, such as bugs, thrips, or spider mites.Spider mites have been found to be among the herbivores with the highest concentrations of Bt protein because they suck out the contents of mesophyll cells, where the Bt protein is concentrated.Concentrations are of the same order of magnitude as those found in the leaf tissue.Tritrophic studies with Bt plants, herbivorous prey, and predators have shown that ladybeetles ingest relatively high amounts of Bt protein, while concentrations in lacewings, predatory bugs, and spiders were lower.Ground beetle larvae that live below-ground and feed mainly on other soil-inhabiting species, including decomposers, might contain Bt protein.Adults of most carabid species are ground-dwelling predators, omnivores, or herbivores and are thus exposed to Bt proteins via plant tissue or prey.Field collections of predators have shown that Bt protein concentrations can also vary considerably among species of the same taxonomic group, such as spiders, carabids, or ladybird beetles, which can be explained by differences in feeding habits.Parasitoids are potentially exposed to Bt proteins when feeding on their hosts.Similar to predators, the Bt protein concentration in the host, as well as the feeding habit of the parasitoid, influences exposure.In general, parasitoids that consume the gut of their host, where most of the Bt protein is located, are expected to experience higher exposure than those leaving the host without consuming the gut.In some species, adults also feed on the host, which might lead to exposure.For most parasitoid species, however, adults feed on nectar or honeydew and consequently do not ingest significant amounts of Bt protein.In conclusion, Bt proteins are generally transferred from plants to herbivores to natural enemies.But the amount of Bt protein ingested by natural enemies is highly variable and depends on the concentration of the Bt protein in the plant, the stability of the Bt protein, the time of the last meal, the mode of feeding of the herbivore and the natural enemy, and behavior.Furthermore, excretion and digestion at each trophic level leads to a dilution effect when Bt proteins move along the food chain.This is supported by evidence from ELISA measurements of field-collected arthropods from Bt maize, cotton, soybean, and rice.Arthropods inhabiting or visiting Bt crop fields may be exposed to plant-produced Bt proteins.However, arthropods living in the field margins or other elements of the surrounding landscape may also encounter Bt proteins from fields where Bt plants are grown.The most prominent example is pollen from Bt maize that is deposited on food plants of butterflies in the field margins.During the period of pollen shed, butterfly larvae are likely to ingest certain amounts of pollen grains together with their food plant.This is also likely for other herbivores and potentially their natural enemies.Maize pollen is relatively heavy and deposited mainly within or in close proximity to the maize field, which limits exposure of arthropods off-crop, although certain wind conditions may lead to pollen drift over several kilometers.During harvest, in particular when only cobs are harvested and the remaining plant material is shredded and left on the field, parts of the plant debris might drift to neighboring habitats and expose decomposers and their natural enemies.Pollen, plant debris, and also exudates from living roots or exudates from decaying plant material might enter small
streams that often run close to agricultural fields.Those are potential routes of exposure for aquatic organisms, such as shredders, filter feeders, and their natural enemies.Bt protein concentrations in aquatic systems, however, are expected to be very low due to the large dilution effect of the running water.Finally, herbivores and other arthropods that have ingested Bt protein from the Bt crop may leave the field and expose natural enemies off-crop.Because of the rapid excretion and digestion, however, this route of exposure is temporally very limited.Studies to investigate the toxicity of the insecticidal compounds produced by Bt plants to natural enemies include direct feeding studies in which the natural enemies are fed artificial diet containing purified Bt protein, bitrophic studies where natural enemies are fed Bt plant tissue, or tritrophic studies using a herbivore to expose the natural enemy to the plant-produced toxin.Numerous such studies have been conducted on a large number of Bt proteins, Bt crops and transformation events.In summary, the available body of literature provides evidence that insecticidal proteins used in commercialized Bt crops cause no direct adverse effects on non-target species outside the order or the family of the target pest.This also holds true for Bt plants that produce two or more different insecticidal proteins.The available data indicate that these pyramided insecticidal proteins typically act additively in sensitive species and cause no unexpected effects in species that are not sensitive to the individual toxins.Recent studies have demonstrated that this is also true for a combination of Cry proteins and dsRNA.While a few studies claim to have revealed unexpected non-target effects, none of those claims has been verified, i.e., confirmed in follow-up studies conducted by other research groups.It is thus likely that those results are artifacts, probably resulting from problems in study design.This emphasizes the need for risk assessment studies to be carefully designed to avoid erroneous results that include false negatives and positives.To support the regulatory risk assessment, non-target studies with natural enemies are typically conducted under worst-case exposure conditions in the laboratory.Recombinant insecticidal proteins produced in microorganisms are usually used as the test substance.It is often not feasible to use plant-expressed protein because sufficient mass cannot be reasonably purified from the plant source.As a consequence, those proteins must be well characterized to demonstrate a functional and biochemical equivalence with the plant-produced protein.In general, studies with purified Bt proteins have not indicated any adverse effects on the tested non-target organisms.Reviews are available for a number of Bt proteins including the Coleoptera-active Cry34/35Ab1 and Cry3Bb1 and the Lepidoptera-active Cry1Ab, Cry1Ac, Cry2Ab, Cry1F, and Vip3Aa.As noted above, more realistic routes of exposure for natural enemies include feeding directly on the plant or indirectly through their prey or hosts feeding on the plants.The following sections will focus on these types of studies.To our knowledge, bitrophic studies, where natural enemies were directly fed with Bt plant material, have been conducted on a total of 20 species from 6 orders and 12 families.The majority of studies tested material from Bt-transgenic maize, followed by rice, potato, and cotton.The most commonly used test substance was pollen.The studies recorded survival, but also
sublethal parameters, e.g., developmental time or body mass.With two exceptions, exposure of the natural enemies to the plant-produced Cry proteins has been confirmed or can be expected given the test system and the feeding mode of the test organism.The exceptions are studies conducted with adult egg parasitoids belonging to the genus Trichogramma which, due to their minute size, are not able to feed on maize pollen grains.Studies conducted with Bt maize pollen from events MON810 and Bt11 also lacked exposure given the very low concentrations of Cry proteins in the pollen of these events.Thus, valid conclusions about Cry1Ab toxicity are not possible from those studies.With the exception of four studies, none of the bitrophic studies has reported putative adverse effects of the Bt plant on the natural enemies when compared with the respective control plant.The first study concerns the impact of Bt rice pollen on Propylea japonica.Out of several life-table parameters that were measured, the longevity of females was reduced compared to the control in the KMD1 treatment, but not in the KMD2 treatment, despite similar exposure.In the second study, the impact of Bt maize pollen on the predatory mite Amblyseius cucumeris was tested and the authors reported a significant increase in female development time and a significant decrease in fecundity in the Bt treatment.The authors suggest that the observed effects were not related to the Cry1Ab protein since in a parallel study no effects were observed when the predator was fed with spider mites that contained much higher amounts of Cry1Ab compared to Bt maize pollen.Similarly, in a third study, Mason et al. observed reduced fecundity in lacewings fed pollen from Bt maize MON810, but not for pollen from event 176, which contains much higher concentrations of Cry1Ab.Adverse effects were reported in a fourth study where larvae of C. maculata were fed seedlings of Bt maize.In this study, however, a non-related non-Bt maize variety was used as the control.In summary, it is apparent that the unexpected effects observed in these four studies were not caused by the expressed Cry protein but by some unidentified plant-related characteristics.Because several breeding steps are necessary to generate a stable GE line from the parental line, differences in the composition of plant tissues exist even between a GE line and the respective near-isoline.These differences are likely to increase when the transgenic event is conventionally crossed into a range of different genetic backgrounds to generate commercial varieties.Studies that have examined potential impacts of Bt plants on natural enemies in tritrophic test systems have deployed a variety of prey and host species as the Cry protein carrier.This has included prey or host species that are: 1) susceptible to the Cry proteins, 2) species that are not susceptible to the Cry proteins because of their taxonomic affiliation, and 3) target herbivores that have developed resistance to the Cry proteins.One challenge with tritrophic studies is that they can lead to erroneous results when sublethally affected Cry-sensitive herbivores are used as prey or hosts.This can lead to adverse effects on the natural enemy that are related to the reduced quality of the prey/host rather than to the insecticidal protein itself.The importance of such prey/host-quality effects has been demonstrated experimentally for the parasitoids Diadegma insulare and Macrocentrus cingulum and for the predators C. carnea and C.
maculata.Ignorance of prey/host-quality effects has led to erroneous claims that lepidopteran-active Cry proteins cause direct toxic effects on natural enemies.One way of overcoming the effects of host/prey quality is to use non-susceptible or resistant herbivores that can consume the Cry protein without being compromised and serve as prey or host for the predator or parasitoid.Through a literature review, we have retrieved 68 publications presenting the results from such tritrophic studies using Bt plant material as the test substance.This list includes phloem-feeding insects like aphids, but there is increasing evidence in the literature that phloem feeders have extremely low or non-existent titers of Cry proteins in their bodies after feeding on Bt plants.While these studies offer realistic trophic scenarios, because aphids are common prey and hosts in crop fields, they are not suitable for testing the direct effects of Cry proteins on natural enemies.The same holds true for studies that have offered eggs to natural enemies from herbivores that developed on Bt-transgenic plants.We have thus separated the tritrophic studies into those where exposure to the plant-produced Cry proteins was confirmed or expected, and those where exposure was absent or shown to be very low and where consequently no conclusions about the toxicity of the Cry proteins could be drawn.Tritrophic studies where natural enemies were exposed to plant-produced Cry proteins were conducted with 6 hymenopteran parasitoids from 4 families, 32 predators from 12 families in 5 orders, and one entomopathogenic nematode.Studies with no or negligible exposure were conducted with 7 hymenopteran parasitoids from 4 families and 12 predators from 6 families in 5 orders.Relevant data were extracted from the identified studies for various life history traits that have a bearing on population dynamics and biological control function.These data are summarized using meta-analysis.Care was taken to preserve independence in observations from any one study and to use metrics that reflected the longest exposure to the Cry protein.For example, if individual stage development time and total immature development time were measured for a natural enemy species, only total development time was retained.Likewise, if both fecundity and fertility were measured, only fecundity was retained because the former was generally measured over the life of the adult but the latter was often measured for only a brief period.A similar strategy was used for all studies so that only a single independent metric of a given life history trait was retained for each species studied.More detail on general screening methods can be found in Naranjo.We further retained only data from studies in which the plant was used as the source of the Cry protein, although this plant material could have been incorporated into an artificial diet.For studies that cumulatively exposed the natural enemy over multiple generations we used the results from the final exposed generation based on the rationale that this would represent the most extreme exposure to Cry proteins.The non-Bt plants used were generally isolines or near-isolines of the Bt plants; the remaining studies did not provide sufficient information.We used Hedges' d as the effect size estimator.This metric measures the difference between respective means from each treatment divided by a pooled variance and further corrected for small sample size.A random effects model was used for analyses to enable a broad inference of
effects and bias-corrected, bootstrapped 95% confidence intervals were used to determine if the effect size differed from zero.The effect size was calculated such that a positive value indicates a more favorable response from the Bt compared with the non-Bt treatment.All analyses were conducted using MetaWin v2.1; a minimal numerical sketch of the effect-size calculation is given after the main text below.Results mirror those found in prior meta-analyses based on fewer studies, showing that a variety of Bt plants and Cry proteins have no negative effects on a broad range of natural enemy species when the non-target species were exposed in an ecologically realistic manner.Effect sizes were generally larger for parasitoids and analyses indicated that reproduction was actually higher when their hosts had fed on Bt plants compared with non-Bt plants.This result was driven by a single study where parasitoids were offered a choice between Bt-resistant Plutella xylostella caterpillars on Bt compared with non-Bt oilseed rape in field simulators in the laboratory.Eliminating this study reduced the effect size to a non-significant 0.0633.For predators, the majority of studies used non-susceptible prey and the results were exactly the same whether using non-susceptible or Bt-resistant prey.For parasitoids, studies tended to use Bt-resistant hosts more, but again the results were the same regardless of the type of host.We re-ran the analyses eliminating all studies that used herbivores as host or prey that did not contain Cry proteins.The results were similar.The analyses of the tritrophic studies provide further substantiation of the lack of effects of Bt plants and different Cry proteins on the biology or function of natural enemies.This, together with the results from the bitrophic studies, also confirms that transformation-related, unintended effects do not appear to impair natural enemy performance.Thus, the data available do not support the proposal by some scientists and the European Food Safety Authority that in-planta studies are needed to fully assess the Bt-plant effects on natural enemies.Two tritrophic laboratory studies compared non-target effects of Bt plants to those of conventional insecticides.Herbivore strains were deployed that were non-susceptible to either a particular Bt Cry protein or insecticides.The first study used a strain of Cry1C-resistant diamondback moth or strains that were resistant to four different insecticides.Caterpillars were treated with their respective toxins by feeding on leaf disks from Bt broccoli or disks treated with the insecticides and then exposed to the parasitoid D. insulare.Adult parasitoids only emerged from the Cry1C-resistant larvae.This provided clear evidence that the commonly used insecticides harmed the internal parasitoid while Cry1C did not.Similar results were reported in a second study where non-susceptible strains of aphids were used in tritrophic studies with Bt broccoli or pyrethroid-treated broccoli and the predators C.
maculata and Eupeodes americanus or the parasitoid Aphidius colemani.Again, adverse effects on the natural enemies were observed in the pyrethroid treatment but not in the case of Bt broccoli.As noted, there has been considerable laboratory research demonstrating the safety of Bt proteins to a suite of important natural enemies.Further, it has been suggested that such early-tier laboratory studies can conservatively predict non-target effects expected in the field.Thus, Bt crops represent a highly selective control tactic that should conserve natural enemies and contribute to enhanced management of pests, especially if Bt crops replace the application of broad-spectrum insecticides for control of Bt-targeted pests.Bt maize and Bt cotton have been grown commercially for more than 20 years and provide an opportunity to assess their role in conservation biological control.As of late 2008, over 63 field studies had been conducted to assess the potential impacts of Bt crops on non-target arthropods encompassing six classes, >21 orders and >185 species, with the vast majority of these being natural enemies important to providing biological control services.Dozens of studies have since been added, especially in the rice and soybean systems, but also with continued focus on cotton and maize.These studies have been discussed and summarized in narrative reviews and several quantitative syntheses.Overall, these studies have collectively concluded that non-target effects of Bt crops are minimal or negligible, especially in comparison to the negative effects of the use of insecticides for control of the Bt-targeted pest.A notable exception is the abundance of parasitoids for Bt maize.Many studies in this crop have been dominated by Macrocentrus grandii, an exotic parasitoid introduced to the USA for control of O.
nubilalis, which is in turn the main target of Bt maize.Not surprisingly, the abundance of such specialist parasitoids and the biological control services they provide may decline in Bt maize once their host insects are effectively controlled.However, reductions in target host abundance do not always lead to reductions in biological control function.In contrast, the use of insecticides for Bt-targeted pests in non-Bt crops can significantly reduce biological control function.The impact of Bt crops on the biological control services supplied by generalist arthropod predators has been uniformly neutral in Bt maize and Bt cotton.Only one study observed small reductions in several arthropod predator taxa in Bt cotton in long-term field studies in Arizona that were likely associated with reductions in caterpillar prey.However, using predator:prey ratios, sentinel prey and life tables of natural populations of Bemisia tabaci, it was shown that these small reductions in predator abundance were not associated with any change in the overall biological control services provided by the natural enemy community.Overall, such changes in the target herbivore community are not unique to Bt crops, but would arise from the deployment of any effective pest management tactic or overall IPM strategy.Nonetheless, extant data suggest that Bt crops do not alter the function of the natural enemy community and may provide for enhanced biological control services if they prevent or reduce the alternative use of broader-spectrum insecticides for control of the Bt-targeted pest.Several case studies in cotton and maize are presented below that demonstrate the potential role of Bt crops in conservation biological control.The compatibility of Bt crops and biological control has been well documented with Bt cotton in Arizona as part of the state's overall IPM program.In 1996, Cry1Ac-cotton was introduced into Arizona to control the pink bollworm, P. gossypiella, a notorious pest of cotton in the southwestern US and northern regions of Mexico, as well as many other parts of the world including India.In Arizona, Bt cotton led to dramatic reductions in the use of foliar insecticides for the target pest, all of them broad-spectrum in nature.The rapid increase in the adoption of Bt cotton led to broad, areawide control of the pest and opened the door for an opportunity to eradicate this invasive pest.Bt cotton became a cornerstone element in the pink bollworm eradication program initiated in 2006 in Arizona, and insecticide use for this pest ceased entirely by 2008.Concurrently in 1996, a new IPM program was introduced for B.
tabaci, another invasive pest that had quickly developed resistance to pyrethroids by 1995.Several new selective insect growth regulators were introduced leading to further reductions in broad-spectrum insecticide use.With the introduction in 2006 of a selective insecticide for Lygus hesperus, the package was complete and overall insecticide use statewide for cotton was dramatically reduced.This pattern was associated with a disproportionately larger reduction in broad-spectrum insecticides resulting in a situation where most of the few insecticides now applied are those that more selectively target the pests and conserve natural enemies.These progressive reductions in insecticide use provided an environment that allowed biological control by a diverse community of native natural enemies to flourish.Extensive experimental work documented the role of natural enemies generally and their conservation specifically in the suppression and economic management of B. tabaci.Overall, the Arizona cotton IPM strategy has cumulatively saved growers over $500 million since 1996 in yield protection and control costs, while preventing over 25 million pounds of active ingredient from being used in the environment.While many components contributed to this transformative change that allowed conservation biological control to function at a high capacity in Arizona cotton production, Bt cotton was a keystone technology that eliminated the early-season use of broad-spectrum insecticides for pink bollworm.Without this capstone event, it is unlikely this success would have been possible.In China, a large-scale study demonstrated that the decline in insecticide sprays in Bt cotton resulted in an increased abundance of important natural enemies and an associated decline in aphid populations.More importantly, these effects were not only observed in the Bt crop itself but also in other crops within the region.Overall, Brookes and Barfoot estimate massive reductions in foliar insecticide use in Bt cotton production globally, pointing strongly to the potential for conservation biological control to play an important and ever-increasing role in IPM more broadly in this crop system.The use of seed treated with various neonicotinoids has become pervasive in several field crops in the USA and potentially negates to some degree the reduction in insecticides possible through the deployment of Bt crops.In the USA, neonicotinoid seed treatments for cotton are common in some production regions, where they can provide economic control of thrips during the seedling establishment period.The impacts of such usage on arthropod natural enemies are not well understood in cotton, but some data suggest minimal effects at recommended doses.Unlike most of the cotton production region in the US, the use of treated seed in Arizona is relatively rare, mainly because plants in this production environment can quickly outgrow any minor thrips damage and some species such as Frankliniella occidentalis are actually considered beneficial.As for cotton, studies have shown that using Bt maize has resulted in large global reductions in the use of foliar insecticides for control of Lepidoptera.Studies on the widespread adoption of Bt maize in the Midwestern USA corn belt have demonstrated a dramatic decline in populations of O.
nubilalis, and thus the need for insecticide treatments for this key lepidopteran pest.Furthermore, this decline occurred not only for those who adopted Bt maize, but also for surrounding maize farmers that did not.A similar ‘halo’ effect of lepidopteran suppression by the widespread adoption of Bt maize in the eastern USA has also been documented, as well as the benefits of pest declines in surrounding vegetable fields.While these studies document lower pest pressure because of widespread adoption of Bt maize and less need for insecticidal sprays, by implication they also suggest that widespread conservation of natural enemies may be occurring.However, as noted, there has been a trend in the USA to add neonicotinoid seed treatments and to date virtually all maize seeds sold are treated.This insurance approach targets a number of early-season pests that occur only sporadically but for some of which rescue treatments are not available.Recent work suggests that seed treatments in maize can negatively impact some natural enemy populations early in the season even though there is recovery later on.Thus, such treatments have the potential to erase some of the very positive gains in foliar insecticide reduction in maize.Studies in sweet corn, which is routinely treated with foliar insecticides far more than field corn, have been able to document that the conservation of natural enemies with Bt plants results in enhanced biological control.In the northeastern US where a considerable amount of sweet corn is grown, studies have shown that Bt sweet corn is far less toxic to the major predators in the system than the commonly used insecticides lambda-cyhalothrin (a pyrethroid), spinosad, and indoxacarb.Furthermore, this study demonstrated that Bt sweet corn provided better control of lepidopteran pests, and did not negatively affect the predation rates of sentinel egg masses of the European corn borer, as did lambda-cyhalothrin and indoxacarb.A follow-up study proposed a model that integrated biological and chemical control into a decision-making tool and highlighted the benefit of conserving natural enemies so they could play a role in suppressing not only the lepidopteran pests but secondary pests such as aphids that infest the ears and affect marketability.Work by Stern and colleagues in California in the 1950s demonstrated that selective insecticides could be used to control the spotted alfalfa aphid without disrupting an important parasitoid that helped keep it in check.They noted that when biological control was disrupted, it often led to an ‘insecticide treadmill’ for the pests which, in turn, led to their eventual resistance to the insecticides.This key finding on the importance of conserving biological control agents was instrumental in the development of the Integrated Control Concept, the precursor of the IPM concept.As described previously, multiple studies have shown that Cry1 proteins expressed in plants control targeted Lepidoptera but do not harm important natural enemies, thus conserving them to function as biological control agents.With the threat of targeted pests evolving resistance to Bt proteins expressed in plants, investigations have been undertaken to determine whether natural enemies may help delay resistance to Bt proteins in the targeted pest.Using a system composed of Bt broccoli, the diamondback moth, the predator, C. maculata, and the parasitoid, D.
insulare, the interaction of resistance evolution and biological control was explored.In a greenhouse study over multiple generations, use of C. maculata and Bt broccoli provided excellent control of P. xylostella while delaying resistance in P. xylostella to Bt broccoli.Using this same system, a model was created to study the influence of D. insulare on the long-term pest management and evolution of resistance in P. xylostella.Simulations demonstrated that parasitism by D. insulare provided the most reliable long-term control of P. xylostella within this system and always delayed the evolution of resistance to Bt broccoli.This latter finding agrees with previous studies using this experimental system that demonstrated the lack of harm to the parasitoid by Cry1Ac, compared to other commonly used insecticides for control of P. xylostella.These findings suggest that biological control, in addition to other factors including refuges and gene expression, may play a significant role in limiting the number of cases of resistance to Bt plants to date, especially compared to the ever-increasing cases of resistance to broad-spectrum insecticides.In the near future, we are likely to see currently used as well as new Bt proteins deployed in additional crops.For example, in China dozens of rice lines with resistance to various lepidopteran pests have already been developed, and these are highly resistant to stem borers such as Chilo suppressalis.While two lines expressing a cry1Ab/Ac fusion gene received biosafety certificates from the Ministry of Agriculture as early as 2009, no Bt rice has been commercialized yet.Another example is that of cowpea that contains Cry1Ab to protect the plant from damage by Maruca testulalis.While the plant has not yet been approved, it has the potential to significantly reduce the yield loss caused by this major pest in sub-Saharan Africa, where cowpea is the most important grain legume.In addition to cowpea, field experiments with various Bt crops are ongoing in different countries in Africa.Genes for new Bt proteins may include modifications to improve efficacy or to facilitate expression in plants.An example is the modified Cry51Aa2 protein that protects cotton against feeding damage caused by hemipteran and thysanopteran pests.Furthermore, we can expect to see novel combinations of Cry and Vip proteins in pyramided GE crops.In today’s Bt-transgenic plants, the expression of the insecticidal genes is driven by constitutive promoters and the proteins are constantly produced in most plant tissues.Scientists thus search for effective wound-inducible promoters that ensure that the insecticidal compound is only produced when and where it is required.The feasibility of this approach has been documented in the glasshouse and in the field for rice where cry genes were driven by the wound-inducible mpi promoter from maize.Another example is the successful use of the wound-inducible AoPR1 promoter isolated from Asparagus officinalis in cotton and potato.Other examples of non-constitutive promoters include tissue-specific and inducible promoters that may not only help limit exposure of natural enemies but can also be used for resistance management.In addition to Bt, effective toxins have also been isolated from other bacteria including species of Pseudomonas and Chromobacterium that might be expressed in future insect-resistant GE plants.Much research has also been devoted to protease and alpha-amylase inhibitors and lectins to target lepidopteran, coleopteran, and hemipteran pests.A compound that
is of particular interest is the alpha-amylase inhibitor αAI-1 from the green bean that has been introduced into various other legumes and shown to provide very high levels of protection from certain bruchid species.Despite the fact that the alpha-amylases of hymenopteran parasitoids of bruchids are also susceptible to this particular inhibitor, tritrophic studies have shown that the αAI-1-containing GE seeds cause no harm to their parasitoids.In any case, to our knowledge, none of those insecticidal compounds is close to reaching the market stage anytime soon.Another promising new development is the use of RNA interference to control arthropod pests by developing plants to produce double-stranded RNA that silences an essential gene in the target species after ingestion.RNAi effects caused by ingested dsRNA have been shown in various insect orders but with highly variable success rates in the downregulation of the target genes.In general, dietary RNAi works very well in Coleoptera but less so in Lepidoptera.What makes the technology interesting is the fact that one can also target hemipteran pests that have not yet been targeted by Bt proteins.The potential of RNAi for pest control was first demonstrated in 2007 for H. armigera and Diabrotica virgifera virgifera.Later, Zhang et al. reported control of the Colorado potato beetle Leptinotarsa decemlineata by expressing dsRNA in chloroplasts of potato.The first insect-resistant dsRNA-expressing GE crop was registered by the US Environmental Protection Agency in June 2017.This GE maize event produces a dsRNA targeting the Snf7 protein in D. v. virgifera, which is crucial for the transport of transmembrane proteins.Suppression of Snf7 has been reported to cause increased D. v. virgifera larval mortality leading to reduced root damage.Because the RNAi effect is sequence-specific, the dsRNA can be designed to specifically target the gene in the target pest insect.Studies on numerous non-target species using the dsRNA targeting Snf7 in D. v.
virgifera have demonstrated this specificity.Combining Bt Cry proteins with RNAi has great potential to delay resistance development.As expected, development of resistance will also be a concern with respect to RNAi-based GE crops and thus needs to be managed.A recent study demonstrated that insects can develop resistance to dsRNA.Interestingly, resistance was not sequence-specific but caused by an impaired luminal uptake, indicated by cross-resistance to other dsRNAs tested.New plant breeding techniques, such as genome editing approaches that are protein-mediated or based on sequence-specific nucleases, are continuously being developed.These techniques allow the knockout of a specific gene.Of those, CRISPR-Cas9 has gained the highest importance.The technique has already been successfully applied to crop plants to alter agriculturally important traits such as disease resistance and drought tolerance.To our knowledge, there is only one example where the technology was used to develop an insect-resistant plant.By knocking out the cytochrome P450 gene CYP71A1, rice plants became resistant to the rice brown planthopper and the striped stem borer.The gene encodes an enzyme that catalyzes the conversion of tryptamine to serotonin.The suppression of serotonin biosynthesis resulted in enhanced insect resistance.As these new technologies develop, it will be important that research be conducted to ensure that any unacceptable non-target effects be identified and mitigated before commercialization so that GE crops will continue to be useful tools in the context of IPM and sustainable pest control.The efficacy of Bt-transgenic crops in controlling important target pests has been very high.Furthermore, the large-scale adoption of Bt crops in some parts of the world has led to area-wide suppression of target pest populations, benefitting both farmers that adopted the technology and those that did not.As expected and intended, the insecticidal proteins deployed today have a narrow spectrum of activity and cause no detrimental unintended effects on natural enemies.The use of Bt crops typically replaces chemical broad-spectrum insecticides.However, in the USA, and possibly other parts of the world, this benefit is to some extent counteracted by the increasing application of insecticidal seed treatments for the management of early-season pests and as insurance against sporadic pests.Overall, the change in insecticide use has benefitted non-target species in general and biological control in particular.With respect to Bt-transgenic crops, the National Academies of Sciences, Engineering, and Medicine recently concluded: “On the basis of the available data, the committee found that planting of Bt crops has tended to result in higher insect biodiversity on farms than planting similar varieties without the Bt trait that were treated with synthetic insecticides.”Earlier, the European Academies stated that “There is compelling evidence that GM crops can contribute to sustainable development goals with benefits to farmers, consumers, the environment and the economy.”Consequently, such insect-resistant GE varieties can not only help to increase yields and provide economic benefits to farmers but also improve environmental and human health.The large body of evidence supporting such outcomes should be considered when developing and introducing new insecticidal GE plants in new countries and cropping systems.All authors compiled, wrote and approved this review article.
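For readers who want to see the effect-size arithmetic from the meta-analysis passage spelled out, the sketch below illustrates Hedges' d with the small-sample correction and a bootstrapped confidence interval for the mean effect size. It is a minimal, hypothetical illustration rather than the authors' analysis: the original work was run in MetaWin v2.1 with a random-effects model, inverse-variance weighting is omitted here for brevity, and every study value in the example is invented for demonstration.

```python
# Minimal sketch (not the authors' code): Hedges' d with the small-sample
# correction and a simple percentile bootstrap of the mean effect size.
# All study values below are hypothetical.
import numpy as np

def hedges_d(mean_bt, sd_bt, n_bt, mean_ctl, sd_ctl, n_ctl):
    """Bias-corrected standardized mean difference; positive = more favorable on Bt."""
    df = n_bt + n_ctl - 2
    s_pooled = np.sqrt(((n_bt - 1) * sd_bt**2 + (n_ctl - 1) * sd_ctl**2) / df)
    j = 1.0 - 3.0 / (4.0 * df - 1.0)   # small-sample correction factor
    return j * (mean_bt - mean_ctl) / s_pooled

# Hypothetical per-study values for one life-history trait (e.g., fecundity),
# one independent metric per natural-enemy species and study.
effects = np.array([
    hedges_d(102.0, 15.0, 20, 98.0, 14.0, 20),
    hedges_d(55.0, 9.0, 15, 57.0, 10.0, 15),
    hedges_d(210.0, 30.0, 12, 205.0, 28.0, 12),
])

# Percentile bootstrap of the mean effect size across studies.
rng = np.random.default_rng(1)
boot_means = np.array([rng.choice(effects, size=effects.size, replace=True).mean()
                       for _ in range(5000)])
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean d = {effects.mean():.3f}, bootstrap 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
# The mean effect size is judged different from zero only if this interval excludes zero.
```

As in the text, the sign convention is that a positive d indicates a more favorable response in the Bt treatment; for traits where a smaller value is better (e.g., development time), the sign would be flipped before pooling.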
Genetically engineered (GE) crops producing insecticidal proteins from Bacillus thuringiensis (Bt) (mainly Cry proteins) have become a major control tactic for a number of key lepidopteran and coleopteran pests, mainly in maize, cotton, and soybean. As with any management tactic, there is concern that using GE crops might cause adverse effects on valued non-target species, including arthropod predators and parasitoids that contribute to biological control. Such potential risks are addressed prior to the commercial release of any new GE plant. Over the past 20+ years, extensive experience and insight have been gained through laboratory and field-based studies of the non-target effects of crops producing Cry proteins. Overall, the vast majority of studies demonstrates that the insecticidal proteins deployed today cause no unintended adverse effects to natural enemies. Furthermore, when Bt crops replace synthetic chemical insecticides for target pest control, this creates an environment supportive of the conservation of natural enemies. As part of an overall integrated pest management (IPM) strategy, Bt crops can contribute to more effective biological control of both target and non-target pests. The growing use of insecticidal seed treatments in major field crops (Bt or not) may dampen the positive gains realized through reductions in foliar and soil insecticides. Nonetheless, Bt technology represents a powerful tool for IPM.
153
Bond-selective photoacoustic imaging by converting molecular vibration into acoustic waves
Molecular vibration is the basis of numerous microscopy approaches and enables the detection of specific molecules within cells and tissues.These approaches include Raman scattering, infrared absorption, and near-infrared absorption, which have been widely used for chemical imaging in biomedicine.Similarly, nonlinear vibrational methods, such as coherent anti-Stokes Raman scattering and stimulated Raman scattering microscopies, have enabled new discoveries in biology on account of their high sensitivity and 3D spatial resolution.However, all these approaches have limited imaging depth on the order of a few hundred micrometers due to significant optical scattering in biological tissue.Thus, their potential applications at the organ level in vivo and in clinical settings are restricted.A deep-tissue imaging modality able to maintain both high chemical selectivity and spatial resolution would certainly satisfy the functional requirements for many diagnostic applications in biomedicine.A promising approach is the development of photoacoustic (PA) imaging platforms, which combine optical excitation with acoustic detection.With this approach, the imaging depth is significantly improved, as acoustic scattering by biological tissue is more than three orders of magnitude weaker than optical scattering.Unlike nonlinear optical microscopy, which relies on tightly focused ballistic photons, PA imaging benefits from diffused photons that contribute equally to signal generation and thus further enhance the penetration depth.Over the past decade, researchers have developed various PA imaging platforms, including photoacoustic microscopy, photoacoustic tomography, photoacoustic endoscopy, and intravascular photoacoustic imaging.Many excellent review articles provide comprehensive insight into different aspects of the imaging technology, applicable contrast agents, and a variety of biomedical applications.In most of the aforementioned applications, the PA signal comes from the electronic absorption of endogenous tissue pigments, such as hemoglobin and melanin, or from exogenous contrast agents, such as nanoparticles and dyes.Molecular vibrational transitions in biological tissue have recently been demonstrated as a novel contrast mechanism for PA imaging.Molecular vibration describes the periodic motion of atoms in a molecule, with typical frequencies ranging from 10¹² to 10¹⁴ Hz.The molecular population in the ith vibrationally excited state relative to the ground state follows Boltzmann's distribution law, Ni/N0 = exp(−ΔE/kT), where ΔE is the energy gap between the two states, T is the temperature, and k is the Boltzmann constant.Thus, the Boltzmann distribution describes how the thermal energy is stored in molecules.When the incident photon energy matches the transition frequency between the ground state and a vibrationally excited state, the molecule absorbs the photon and jumps to the excited state.During subsequent relaxation of the excited molecule to the ground state, the thermal energy is converted into acoustic waves detectable by an ultrasound transducer.The fundamental vibrational transitions in the mid-infrared wavelength region have been previously exploited for PA detection of glucose in tissues.Nevertheless, this approach is limited to detecting molecules only tens of micrometers under the skin, where strong water absorption in the mid-infrared region predominates.Vibrational absorption with minimal water absorption can occur in two ways.One is through the stimulated Raman process and the other is through overtone transition.In stimulated Raman
scattering, the energy difference between the visible or NIR pump and Stokes fields is transferred to the molecule to induce a fundamental vibrational transition.The concept of stimulated Raman-based PA imaging has been previously demonstrated.However, because stimulated Raman scattering is a nonlinear optical process relying on ballistic photons under a tight focusing condition, this approach is not suitable for deep-tissue imaging.The overtone transition is based on the anharmonicity of chemical bond vibrations.Taking the CH bond as an example, the first, second, and third overtone transitions occur at around 1.7 μm, 1.2 μm, and 920 nm, respectively, where water absorption is locally minimized.Since CH bonds are one of the most abundant chemical bonds in biological molecules including lipids and proteins, photoacoustic detection of CH bond overtone absorption offers an elegant platform for mapping the chemical content of tissue with penetration depths up to a few centimeters.In the following sections, we introduce the mechanism for vibration-based PA signal generation.Then, applications of vibration-based PA imaging in the forms of microscopy, tomography, and intravascular catheters will be reviewed, followed by a discussion of the improvements needed to overcome technical challenges that limit translation of these imaging modalities to the clinic.Vibration-based PA signals arise from the molecular overtone transitions and combinational band absorptions, which are allowed by the anharmonicity of chemical bond vibration.According to anharmonicity theory, the transition frequency of an overtone band is related to the fundamental frequency by Ωn = nΩ0 − χΩ0(n² + n), where Ω0 is the transition frequency of the fundamental vibration, χ is the anharmonicity constant, and n = 2, 3, … denotes the first, second, and subsequent overtones.When the frequency of an incident pulsed laser matches the transition frequency of an overtone, the energy of the incident photons is absorbed and then induces a local rise in temperature.When both thermal and stress confinements are satisfied, the accumulated heat is subsequently released through a thermal-elastic expansion in tissue, which generates acoustic waves detectable by an ultrasound transducer.Fig.
1 depicts this process for PA signal generation based on first and second overtone transitions.The generated signal contains depth-resolved information of absorbers, on which the image reconstruction is grounded.Compared to diffuse optical tomography, the integration of NIR spectroscopy with ultrasound detection eliminates the scattering background.Through conversion of molecular vibration into acoustic waves, vibration-based PA imaging enables the visualization of different molecules and chemical components in biological tissue.Thus far, CH2-rich lipids, CH3-rich collagen, OH bond-rich water, nerve, intramuscular fat, and neural white matter have all been investigated.Particularly, the detection of overtone absorption of CH bonds has recently drawn attention, since CH bonds are highly concentrated in certain types of biological components, such as lipid and collagen.The presence of these molecules or components is directly related to several clinically relevant diseases, including atherosclerosis and cancers.Using vibrational absorption, researchers have conducted PA spectroscopic studies of various molecules in biological specimens.These efforts were aimed at identifying suitable spectral windows to visualize different biological components, as well as to differentiate them based on their vibrational spectral signatures.As shown in Fig. 2a, two new optical windows have been identified for bond-selective photoacoustic imaging, where the absorption coefficient of CH bond-rich specimens is maximized and water absorption is locally minimized.The electronic absorption of hemoglobin is dominant in the visible to NIR wavelength range and it overwhelms the third- and higher-order CH overtone transitions in the same range.For longer wavelengths in the range of 1.1–2.0 μm, the optical absorption from hemoglobin is significantly reduced.In particular, in the first optical window, the hemoglobin absorption is close to one order of magnitude smaller than lipid absorption.Whole blood in the second optical window exhibits almost the same spectrum as pure water, the major content of blood.Although the absorption coefficient of lipid is only 1–2 times larger than that of water in both optical windows, the fat constituent in tissue provides much higher contrast than water in vibration-based PA imaging.This observed phenomenon in PA imaging experiments can be explained by the following theoretical prediction and quantitative analysis.Theoretically, the initial PA signal amplitude is described by p0 = ξΓμaF, where ξ is a constant related to the imaging system, Γ is the Gruneisen parameter of tissue, μa is the absorption coefficient of tissue, and F is the local light fluence.The Gruneisen parameter can be further expressed as Γ = βνs²/Cp, where β is the isobaric volume expansion coefficient, νs is the acoustic speed, and Cp is the specific heat.In the equation, only Γ and μa depend on the absorbers in tissue.Thus, the vibration-based PA contrast of fat versus water can be expressed as p0_fat/p0_water = (Γμa)fat/(Γμa)water.Based on the Gruneisen parameter and absorption coefficient of fat and water listed in Table 1, the PA contrast of fat versus water is 9.6–12.4 and 10.9–14.0 at 1210 and 1730 nm, respectively.These parameters make vibration-based PA imaging a valid platform for selective mapping of fat or lipids in a complex tissue environment.Based on the same parameters, the fat signal amplitude at 1730 nm is 6.4 times that at 1210 nm, largely due to the stronger absorption of lipid at 1730 nm.Detailed
analysis of the PA spectra of CH, OH, and OD bonds further verified these two optical windows .Fig. 2b shows the PA spectra of polyethylene film, trimethylpentane, water, and deuterium oxide.These spectra have contributions from the absorption profiles of methylene groups, methyl groups, OH, and OD bonds, respectively.According to the spectrum of polyethylene film, the peak at ∼1210 nm comes from the second overtone transition of the symmetric stretching of CH2 .The broad peak located from 1350 to 1500 nm is attributed to the combinational band of symmetric stretching and bending of CH2.The two primary peaks at ∼1.7 μm are thought to be the first overtone of CH2 , which are caused by the anti-symmetric stretching and symmetric stretching, respectively .For trimethylpentane, the 1195 nm peak corresponds to the second overtone transition of CH3 symmetric stretching .The combinational band has a main peak at ∼1380 nm .The primary peak at ∼1700 nm is thought to be the first overtone of anti-symmetric stretching of CH3 .Although OH bonds have combinational bands at ∼1450 and ∼1950 nm, respectively, its absorption is locally minimal in the first and second overtone windows of CH bonds.Due to the heavier mass of deuterium, the prominent overtone and combinational bands of D2O have their corresponding peaks at longer wavelengths.Thus, it has been widely used as an acoustic coupling medium for vibration-based PA imaging .As shown in Fig. 2c, a PA spectroscopic study of polyethylene film with a varying water layer thickness suggests that the second overtone of CH bonds is peaked at ∼1.2 μm, while the first overtone corresponds to the peak at ∼1.7 μm .Compared with 1.2 μm excitation, 1.7 μm excitation produces a ∼6.3 times stronger PA signal in the absence of water , which is consistent with aforementioned theoretical calculation.The signal amplitude drops with the thickness of the water layer and has the same level as 1.2 μm excitation when the water layer thickness reaches 3–4 mm.Thus, a 1.7 μm wavelength is favorable for intravascular photoacoustic imaging considering the relatively large absorption coefficient of the first overtone and the diminished optical scattering caused by blood at longer wavelengths .The second overtone however is suitable for a tomographic configuration that requires larger penetration depths due to smaller water absorption at 1.2 μm .These spectral signatures were utilized for different biomedical applications, as reviewed below.Based on its high spatial resolution, deep penetration depth, and rich optical absorption contrast, PAM has been used extensively and enabled new discoveries in biology and medicine.Using vibrational absorption, new applications are explored through PAM in the relevant optical windows.In a typical PAM setup, an inverted microscope is employed to direct the excitation light which can be generated by Nd:YAG pumped optical parametric oscillator or a Raman laser .An achromatic doublet lens or objective is applied to focus the laser light into a sample.A focused ultrasonic transducer records the time-resolved PA signal from the acoustic focal zone.According to the time of flight, each laser pulse can be used to generate an A-line.By raster scanning the sample in the X–Y direction, a three-dimensional image can be acquired.One important application for PAM in the new optical windows is to map the lipid bodies in Drosophila 3rd-instar Larva.Drosophila melanogaster is one of the genetically best-known and widely used model organisms for genetic, 
behavioral, metabolic, and autophagic studies.Since lipids have strong optical absorption due to the second overtone transition of the CH bond, Wang et al. performed 3D imaging of the lipid body of a whole 3rd-instar larva in vivo.The imaging result shows that lipid storage is mainly distributed along the anterior-posterior and the ventral-dorsal axis.This demonstrated capability of label-free visualization of adipose tissues in Drosophila is important for the rapid determination of phenotype, which will decrease the time required to conduct genetic screens for targets of fat metabolism and autophagy in this model organism.Intramuscular lipids are associated with insulin resistance, which is related to a range of metabolic disorders including type 2 diabetes, obesity, and cardiovascular diseases.However, the assessment of intramuscular fat is difficult since current deep-tissue imaging modalities cannot provide chemical contrast.Li et al. reported the feasibility of performing intramuscular fat mapping with a Ba(NO3)2 crystal-based Raman laser.The Raman laser provided an output at a wavelength of 1197 nm.The signal from fat at 1197 nm was strong and the contrast nearly disappeared at 1064 nm, which indicates a strong absorption at 1197 nm due to the second overtone transition of the CH bond.The muscle sample was also imaged in three dimensions with an imaging depth of 3 mm, where the fat structure was clearly revealed.This result shows the promise of using this technique for quantitative measurement of intramuscular fat accumulation in metabolic disorders.Each year, approximately 12,000 new cases of spinal cord injury are diagnosed in the U.S., causing tetraplegia or paraplegia.White matter loss is thought to be a critical event after spinal cord injury.Traditionally, such degeneration is measured by histological and histochemical approaches.However, real-time imaging is not feasible and artifacts are often introduced during histological processing.Wu et al. used PAM with 1730 nm excitation to assess white matter loss after a contusive spinal cord injury in adult rats.Owing to the abundance of CH2 groups in the myelin sheath, white matter in the spinal cord can be easily visualized.From the cross-sectional image, contrast from white matter is ∼2.5 times higher compared with grey matter.The absorption difference can be used to examine the morphology of white matter and changes in injured spinal cords.This study suggests that PAM based on the first overtone transition of the CH bond could potentially be used to assess white matter loss during spinal cord injury and repair.By taking advantage of signal generation from diffused photons, PAT penetrates deeper than PAM and expands the imaging scale from the cell and tissue level to the whole organ level.The high scalability of PAT is achieved through a trade-off in spatial resolution for improved imaging depth.Moreover, the imaging scale can vary with the specific needs of PAT applications.Current applications for PAT include lymphatic and sentinel lymph node mapping, superficial and deep vessel mapping, and tumor imaging.The key advantages of this technique are noninvasiveness, superior depth penetration, and chemical selectivity without the need for exogenous agents.For superior penetration depth, the experimental set-up requires integration of a high power laser with a low-frequency ultrasound array.Fig.
4a shows a typical PAT system .Briefly, a customized OPO laser generating a 10 Hz, 5 ns pulse train with wavelength tunable from 670 to 2300 nm was used as the light excitation source.An optical fiber bundle delivers the light to tissue through two rectangular distal terminals adjacent to an arrayed ultrasound transducer with center frequency of 21 MHz.The generated PA signal is then acquired and reconstructed as two-dimensional or three-dimensional tomographic images using the ultrasound system.Below we describe a range of applications using molecular overtone absorption for tomographic imaging of lipid-associated diseases.Carotid artery atherosclerosis is a common underlying cause of ischemic stroke .Noninvasive imaging and quantification of the compositional changes within the arterial wall is essential for disease diagnosis.Current imaging modalities are limited by the lack of compositional contrast, inability to detect of non-flow-limiting lesions, and inadequate accessibility to patients.However, modified multispectral PAT has great potential for serving as a point-of-care device for early diagnosis of carotid artery disease in the clinic.Hui et al. tested this system to image ex vivo atherosclerotic human femoral arteries and tissue-mimicking phantoms .We placed a 45-degree polished fiber-optic probe and a 21 MHz linear array transducer with 256 elements on opposite sides of the sample with a thick piece of chicken breast in order to mimic the in vivo conditions of carotid artery imaging through transesophageal excitation and external acoustic detection.Chemical maps of the blood and lipid in the lipid-laden vessel and fatty chicken breast were generated as shown in Fig. 4b.Furthermore, for the tissue-mimicking phantom experiment, a piece of chicken breast was added between the excitation source and a polyethylene tube in order to analyze the signal-to-noise ratio and imaging depth in this set-up.An imaging depth of about 2 cm was achievable in this scenario while retaining chemical selectivity around 1210 nm and spectral discrimination between the intramuscular fat in the chicken breast and the polyethylene tube.These results collectively show that this prototype fiber-optic probe design enables deep-tissue multispectral imaging and has translational potential as a minimally invasive diagnostic tool.In a surgical procedure, iatrogenic damage to peripheral nerves is a leading cause of morbidity .The resultant complications include temporary numbness, loss of sensation, and peripheral neuropathy .The accurate noninvasive visualization of nerve tissues relative to adjacent structures is of vital importance yet remains technically challenging.As myelin sheaths surrounding axons are abundant in lipids, there is an opportunity to apply PA imaging technique to discriminate nerves from adjacent tissues using lipid and blood as two different contrasts.A preliminary feasibility study of nerve imaging was performed in a PAM configuration .Clinical translation of this technique however is impeded by millimeter-scale imaging depth and slow imaging speed.Li et al. 
recently demonstrated the label-free in vivo tomographic visualization of mouse nerves through PAT based on second overtone absorption of CH bonds with an imaging depth of at least 2 mm .Spectroscopic imaging was performed in the optical window of 1100 to 1250 nm to discriminate lipid from blood.An algorithm called multivariate curve resolution alternating least squares was then applied to the spectroscopic image stack to resolve chemical maps from lipid and blood.As shown in Fig. 4c, the femoral nerve fiber was clearly resolved and distinguished from the adjacent femoral artery.Although this application does not require a greater imaging depth, it demonstrates chemical selectivity and sufficient spatial resolution to discriminate adjacent structures with a large imaging field of view.It has the potential for label-free imaging of peripheral nerves in patients undergoing surgery.Breast-conserving surgery, or lumpectomy, is a common procedure for breast cancer treatment .To prevent local cancer recurrence after lumpectomy, histology is performed to check whether the excised tumor specimen is surrounded by a sufficient amount of normal tissue .A re-operation is needed if a positive margin is identified.Currently, the re-operation rate ranges from 20% to 70% .This high re-operation rate highlights a pressing need for the development of an intraoperative device that is rapid, sensitive, label-free, and able to scan the entire tissue surface for accurate breast cancer margin assessment.Recently developed multispectral PAT combining lipid with blood contrast provides a compelling opportunity to meet this need .Specific to breast cancer, multispectral PAT, as shown in Fig. 4a, was applied for margin detection in the optical window from 1100 to 1250 nm.In this window, the distribution of fat and blood were visualized after acquisition of a multispectral image stack.The image stack was processed through the MCR-ALS algorithm, generating chemical maps for those two major components.Based on the imaging results and a comparison with histology results, the area with fat and lacking hemoglobin contrast was assigned to be normal tissue with fat and scattered fibrous tissue.The area with hemoglobin contrast and fat indicated angiogenesis and invasive tumor with scattered fat tissue.The area without fat contrast indicated tumor tissue with dense fibrous tissue.These results collectively demonstrated the capacity of tumor margin assessment based on the contrast of hemoglobin and fat.This imaging configuration maintains an imaging depth of up to 3 mm, which is sufficient for determining breast tumor margins.With 100% sensitivity, the system can successfully detect breast tumor margin and opens a new way for clinical intraoperative breast tumor margin assessment.A similar approach using blood and lipid as two complementary contrasts has recently been demonstrated to visualize the vasculature and external boundaries of healthy lymph nodes across their depth .Future studies will be useful for “indirect detection” of cancerous nodes in which the structure and composition are expected to change.Beyond microscopic studies of lipid-laden plaques inside an atherosclerotic artery, IVPA has been intensively investigated over the past few years.As is widely known, cardiovascular disease is the number one cause of death in the United States.The majority of acute fatal incidents with cardiovascular disease are due to vulnerable plaques, which are at a high risk for rupture and thrombosis .Pathophysiological findings 
suggest that these vulnerable lesions contain a large lipid core covered by a thin fibrous cap and are located in areas of high shear stress within the coronary arterial wall .Current imaging modalities either lack compositional contrast or sufficient imaging depth for this application.Furthermore, no existing imaging tools can reliably and accurately diagnose vulnerable plaques in live patients .However, IVPA maintains high resolution, chemical selectivity, and sufficient imaging depth to characterize vulnerable plaques.It has great potential to be developed as a life-saving device for diagnosis of vulnerable plaques.The current translational goal for IVPA imaging is to detect lipid-laden plaques with high accuracy and specificity.Thus, the selection of an optimal wavelength to excite lipids becomes the primary objective.PA imaging of lipid-rich plaques has been demonstrated using different wavelengths .However, the PA signal from lipids between 400 and 1100 nm is greatly overwhelmed by hemoglobin absorption, and is not suitable for in vivo applications.When the wavelength exceeds 1.1 μm, hemoglobin absorption is minimal, but significant water absorption due to vibrational transition of OH bonds attenuates light intensity inside the biological tissue.Nevertheless, two optical windows have been revealed for PA detection of overtone absorption of CH bonds, where lipids can be imaged at ∼1.2 μm and ∼1.7 μm as discussed before .Grounded on these two lipid-specific optical windows, a series of IVPA imaging developments has been reported.Jansen et al. demonstrated the first IVPA imaging result of human artery with 1210 nm excitation of the second overtone of CH bond .As seen in Fig. 5a, the histology shows a large eccentric lipid-rich lesion, as well as a calcified area and regions of peri-adventitial fat, which is confirmed by IVUS image.The IVPA image at 1210 nm exhibits a bright signal along the intimal border, and also from deeper tissue layers in the eccentric plaque and the peri-adventitial fat in the bottom right corner, compared with IVPA image at 1230 nm.Wang et al. tested the feasibility of IVPA imaging of atherosclerotic rabbit aorta at 1.7 μm in the presence of luminal blood .A preliminary study of in vivo IVPA imaging was also performed at 1720 nm in a rabbit model .These results suggest that in vivo IVPA imaging is possible even without flushing luminal blood with saline, a necessary step for imaging coronary arteries with optical coherence tomography.Recently, with the availability of high-repetition-rate laser sources, a few research groups have developed high-speed IVPA imaging at ∼1.7 μm .Hui et al. demonstrated the high-speed IVPA imaging of human femoral artery ex vivo at 1 frame per sec as shown in Fig. 5b .The IVPA and IVUS images of the atherosclerotic artery reveal complementary information in the artery wall."The lipid deposition in the arterial wall indicated by white arrows at 2 and 3 o'clock directions, which is not seen in the IVUS image, shows clear contrast in the IVPA image.IVPA imaging has been widely considered a promising technique for the diagnosis of vulnerable plaque in the arterial wall of live patients.However, the translation of the imaging technology from bench to bedside has been stifled by its slow imaging speed, mainly due to lack of suitable laser sources to excite the molecular overtone transitions at a high repetition rate.Wang et al. 
recently demonstrated a 2-kHz master oscillator power amplifier (MOPA)-pumped barium nitrate (Ba(NO3)2) Raman laser, which enabled high-speed IVPA imaging.In the laser system, a 1064 nm pulsed laser at a repetition rate of 2 kHz generated by the MOPA laser was used to pump the Ba(NO3)2-based Raman shifter.Through a stimulated Raman scattering process, the 1064 nm pump laser is converted to a 1197 nm output, which matches the second overtone vibration of the CH bond and thus can be used to excite the lipid-rich plaques.The high-speed IVPA imaging with this laser was validated using the iliac artery from a pig model at a speed of 1 frame per sec, which is nearly two orders of magnitude faster than previously reported systems.Since the 1.7 μm optical window is better suited for in vivo IVPA imaging than the 1.2 μm window, the development of lasers with output at ∼1.7 μm, high pulse energy, and a high repetition rate is of great importance for in vivo applications of IVPA imaging.More recently, several research groups have developed such laser sources based on different technologies, with repetition rates at the kHz level, and applied them to high-speed IVPA imaging.As one example, shown in Fig. 6b, a potassium titanyl phosphate (KTP)-based OPO laser has an output at 1724 nm with pulse energy up to 2 mJ and a pulse repetition rate of 500 Hz.In order to obtain high pulse energy at a high repetition rate, two KTP crystals were cut at a special angle and placed in opposing orientations in the OPO, which effectively minimized the walk-off effect.This laser enabled imaging of a human artery at a speed of 1 frame per sec with a cross-sectional IVPA image composed of 500 A-lines.This speed greatly reduces the ambiguity caused by slow imaging and can be further used for preclinical in vivo imaging.However, in order to make IVPA competitive in the clinic, the repetition rate needs to be further improved to the order of 10 kHz.Harnessing its high depth scalability and endogenous contrast mechanism, vibration-based PA imaging has opened up a range of biomedical applications in the forms of microscopy, tomography, and intravascular imaging, as well as technical challenges for their translation to the clinic.PAM with overtone absorption of CH bonds as the contrast can achieve ultrasonic resolution in the deep-tissue regime.As lipids are rich in CH2 groups, PAM offers opportunities for lipid imaging, which is often related to disease severity, including type 2 diabetes and white matter loss and regeneration.Using an OPO as the light excitation source, multispectral PAM can be achieved with spectral signatures of molecules.Currently, with a ∼20 MHz ultrasound transducer, the lateral resolution of vibration-based PAM is ∼70 μm.However, it could potentially be improved roughly tenfold with an optical-resolution PAM configuration using a ∼75 MHz transducer and an objective lens.Because of the light-focusing and sample-scanning scheme, the speed of PAM is limited, which hinders its translation from bench to bedside.However, imaging speed may be improved by translating the transducer instead of the sample stage.
clinical setting, whereas multispectral tomography is helpful for analyzing spectral differences in more complex tissue samples.The most salient advantage and future direction for vibration-based PAT, though, is the enabling of molecular-specific deep-tissue imaging applications.Vessels, nerves, and organs that accumulate pathological levels of lipid are of primary interest in this regard.Currently, magnetic resonance imaging, X-ray computed tomography, and ultrasound are used in the clinic to aid in the diagnosis and treatment monitoring of lesions, such as those in atherosclerosis and fatty liver disease.While magnetic resonance imaging provides excellent soft tissue contrast and adequate resolution for these applications, its cost and availability make it an impractical option in many cases.Ultrasound and PA imaging are well suited to clinical applications requiring functional and chemical-selective characterization while providing greater imaging depth than purely optical tomographic techniques.Furthermore, different configurations involving excitation or detection outside or within the body expand the capabilities and potential applications of vibration-based PAT.Challenges for tomographic imaging, including vibration-based PAT, remain and require refinements in the engineering of these systems.Two significant challenges for in vivo imaging are the presence of clutter, or unwanted superficial signal enhancement, and volumetric imaging without motion artifacts.The first is mainly governed by the distance between the illumination source and the imaging surface.The geometry of the illumination source and detector may also help to alleviate this effect, which degrades the signal-to-background ratio.Volumetric imaging provides significantly more morphological and compositional information on large and complex regions.With handheld two-dimensional array transducers, it is important that the probe is stabilized in order to collect three-dimensional images.Furthermore, gating techniques can significantly reduce motion artifacts from breathing and vessel motion in small animals, which have high heart and respiration rates.The integration of gating into PAT will be critical for future experiments examining complex lesions and areas of the body where motion is significant.IVPA imaging has become one of the hot topics in the field of biomedical imaging.So far, it has shown great potential for clinical applications and is undergoing rapid development.To translate the imaging tool from bench to bedside, several significant challenges need to be resolved.The first and most pressing challenge is to develop a high-pulse-energy laser with a lipid-specific wavelength at 1.7 μm and a pulse repetition rate at the 10 kHz level.The pulse energy would need to be large enough to ensure effective PA signal generation even through a thin layer of luminal blood, while remaining under the ANSI safety limits.The high repetition rate would enable IVPA imaging at high speeds, comparable with the speed of many IVUS systems.The second challenge is to design a clinically relevant IVPA catheter.The catheter should be further miniaturized to ∼1 mm or less in diameter for clinical practice, while maintaining excellent detection sensitivity.Better sensitivity could help reduce the required laser pulse energy, thus further reducing the challenge of developing a high-pulse-energy laser.In addition, a clinically relevant large animal model of human atherosclerosis is also essential for the validation of IVPA imaging
technology.With this model, the in vivo IVPA imaging procedure, the detection of lipid-laden plaques, and the clinical requirements of catheters in pressurized vessels and blood can all be tested and validated.Ultimately, IVPA imaging of living patients could have a profound impact on both the diagnosis and treatment of vulnerable plaques.Moreover, it has the potential to guide coronary stenting during percutaneous coronary intervention, as well as to stimulate the development of new cholesterol-lowering and anti-inflammatory therapeutics for atherosclerosis.Indeed, it has the potential to be a life-saving technology when used in clinical settings.The authors declare that there are no conflicts of interest.
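To make the quantitative reasoning in this section easy to reproduce, the short Python sketch below evaluates the Boltzmann population ratio Ni/N0 = exp(−ΔE/kT) and the fat-versus-water PA contrast (Γμa)fat/(Γμa)water implied by p0 = ξΓμaF. It is a minimal illustration under assumed inputs: the numerical values are placeholders rather than the parameters in the review's Table 1, so the printed contrast should not be read as the reported 9.6–14.0 range.

    from math import exp

    K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

    def boltzmann_ratio(delta_e_joule, temperature_kelvin):
        # Population of a vibrationally excited state relative to the ground state: Ni/N0 = exp(-dE/kT)
        return exp(-delta_e_joule / (K_BOLTZMANN * temperature_kelvin))

    def gruneisen(beta, v_s, c_p):
        # Gruneisen parameter, Gamma = beta * v_s**2 / c_p (helper mirroring the expression above; not called below)
        return beta * v_s ** 2 / c_p

    def pa_contrast(gamma_fat, mu_a_fat, gamma_water, mu_a_water):
        # Ratio of initial PA amplitudes p0_fat / p0_water at equal local fluence F
        return (gamma_fat * mu_a_fat) / (gamma_water * mu_a_water)

    # Placeholder inputs, for illustration only (not the review's Table 1 values)
    print(boltzmann_ratio(delta_e_joule=5.6e-20, temperature_kelvin=310.0))  # CH-stretch-scale energy gap at body temperature
    print(pa_contrast(gamma_fat=0.7, mu_a_fat=0.13, gamma_water=0.12, mu_a_water=0.09))

Substituting the actual Gruneisen parameters and absorption coefficients from Table 1 in place of these placeholders reproduces the contrast values quoted above.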
The quantized vibration of chemical bonds provides a way of detecting specific molecules in a complex tissue environment. Unlike pure optical methods, for which imaging depth is limited to a few hundred micrometers by significant optical scattering, photoacoustic detection of vibrational absorption breaks through the optical diffusion limit by taking advantage of diffused photons and weak acoustic scattering. Key features of this method include both high scalability of imaging depth from a few millimeters to a few centimeters and chemical bond selectivity as a novel contrast mechanism for photoacoustic imaging. Its biomedical applications span detection of white matter loss and regeneration, assessment of breast tumor margins, and diagnosis of vulnerable atherosclerotic plaques. This review provides an overview of the recent advances made in vibration-based photoacoustic imaging and various biomedical applications enabled by this new technology.
154
A Large Range Flexure-Based Servo System Supporting Precision Additive Manufacturing
Micro-additive manufacturing is considered to be an effective method to improve the performance of three-dimensional microproducts.Scalable AM is classified as one of the main groups of micro-AM, and includes stereolithography, selective laser sintering, and inkjet printing; these technologies can be employed at both the macroscale and microscale in order to efficiently fabricate complex 3D components.As one of the most popular scalable AM technologies, SL solidifies a liquid polymer by photo-curing with high resolution.Micro-stereolithography is SL at the microscale, and is widely used in many areas, such as microsensors , optical waveguides , 3D photonic band gap structures , and biology analysis .During an MSL process, a two-dimensional microscale pattern is formed by solidifying a liquid photopolymer; next, a 3D structure can be obtained by accumulating the 2D patterns.An MSL system is mainly composed of a liquid resin container and a precision multi-axis motion stage, with which the patterns are accurately located with the laser beam in order to solidify the resin.The accuracy of the finished product is determined by both the solidified area generated by the light beam and the motion quality of the positioning system.In other words, the motion system must be accurate and repeatable enough to reach the right location for every solidification event.A considerable amount of research effort has been devoted to reducing the laser spot size; examples include Refs. and the references therein.However, relatively fewer works emphasize the motion quality of positioning systems, possibly because the required motion quality corresponding to the laser spot size is not particularly stringent.Given the recent decrease of the laser spot size down to 1 μm or less , the corresponding motion quality should be one order of magnitude less than the laser spot size—that is, < 100 nm.This means that a nano precision stage is required in order to achieve nanometric motion quality.In order to achieve such a high-precision motion quality, the choice of the bearings of the motion stages is crucial.Most current multi-axis positioning stages are based on contact bearings, such as linear guideways, which limit the motion quality to the sub-micrometer level; moreover, this type of design requires sophisticated assembly and maintenance.With the goal of providing a nanopositioning system to support MSL systems, this paper discusses the development of a beam flexure-based motion system.To overcome the abovementioned disadvantages of contact bearings, it is desirable to have a motion system that uses frictionless bearings.Flexure bearings provide motion by means of the elastic deformation of flexures, which allows nondeterministic effects such as friction, backlash, and wear to be avoided during the operation; as a result, nanometric motion quality can be achieved in a compact desktop size.Furthermore, beam flexure-based nanopositioning systems are extremely suitable for harsh operation environments, as zero maintenance is required.By combining multiple beam flexures, a flexure mechanism can be constructed to provide millimeter-range motion guidance and load bearing in a compact desktop size .Despite the abovementioned advantages of flexure bearings, some challenges still exist in the design and control of beam flexure-based nanopositioning systems.In research into the development of over-millimeter-range XY micropositioners, the actual motion quality of these systems were not fully satisfactory, with very few 
experimental results showing nanometric tracking accuracy.As one of the aims of this work, we would like to emphasize mechanism design and real-time control strategies in order to show the nanometric tracking accuracy of large range beam flexure-based nanopositioner supporting MSL systems.The remainder of this paper is organized as follows: In Section 2, the design of a large range beam flexure-based nanopositioning stage is discussed, with detailed finite element analysis and verification.In Section 3, a real-time control system is proposed for trajectory tracking of the nanopositioning system, and in Section 4, numerous experiments are conducted on the fabricated prototype system to demonstrate the desired ability of the nano servo system.A schematic design of a beam flexure-based MSL system is shown in Fig. 1.The main components of the MSL system and their features are briefly introduced.A light source and a related optical system are responsible for generating a small laser beam to induce photo-curing.For the sake of compact size, a Blu-ray optical pickup unit can be chosen as the light source .The multi-axis motion system is mainly composed of two positioning stages.One is an XY nanopositioning stage; this is the key to precisely locating the laser beam in an XY plane, such that the XY cross-sections of a 3D micro component can be solidified.When the laser spot size goes down to 1 μm or less, a nanometric motion quality of the XY motion stage is required.The development of such a nanopositioning system in a compact desktop size is challenging and is the main concern of this work.The other positioning stage is a Z-axis translator, which is responsible for providing the required vertical motion of one layer thickness of the sliced 3D component.Since the motion quality of the Z-axis translator is at the micrometer level, this solution is widely available and hence is not under consideration in this study.With the abovementioned motion requirement in mind, we present a compact beam flexure-based nanopositioning stage supporting MSL system.To be specific, an XY nanopositioning stage was designed to locate the laser beam in a range of 3 mm along both X and Y axes.For this range, electromagnetic actuators such as voice coil actuators are usually adopted to provide thrust forces.Instead of obvious but bulky serial kinematic configuration of the XY positioning stage, in which one axis stacks on top of another, we considered a parallel kinematic configuration of the flexure mechanism, in which each axis actuator is grounded mounted.With this type of configuration, a higher bandwidth and precision of the motion system can be achieved due to the lack of moving actuators and disturbance from moving cables, respectively.Furthermore, in order to achieve a millimeter range and nanometric motion quality of the XY parallel mechanism, it is necessary to carefully manage parasitic error motions, which are the motions in any axis that differ from the axis of an applied thrust force.With the increase of the stroke, the in-plane parasitic error motion of the moving stage also increases, including parasitic translational and rotational motions, both of which significantly adversely affect the nanometric motion quality.To reduce the parasitic motions, a mirror-symmetric arrangement is desired, and appropriate planar redundant constraints are usually required in order to reject various disturbances.Fig. 
2 shows a conceptual design, in which the Z-shaped and Π-shaped parallelogram beam flexures provide the required load bearing and kinematic decoupling, and the four-beam flexure serves as the redundant constraints to improve the disturbance-rejection capability of the mechanism.As shown in later sections, the important features of the design are as follows: A mirror-symmetric arrangement is clearly beneficial to reduce parasitic motion; the transverse stiffness of the Z-shaped beam flexure has a good linear behavior in the desired operation range, which is beneficial toward achieving a large workspace; and the planar redundant constraints provide a sufficiently high stiffness, resulting in significantly reduced parasitic error motions.Note that there is a tradeoff in the Z-shaped beam flexure design between desirable and non-desirable characteristics.Regarding desirable characteristics, the Z-shaped beam flexure has a large range of motion, with constant primary stiffness and a very compact structure.Regarding non-desirable characteristics, the Z-shaped beam flexure, as the guiding bearing of the actuator, can also impose non-negligible parasitic transverse motion on the actuator, since it is not a rigorous single-degree-of-freedom compliant prismatic joint.However, this small parasitic motion can be compensated for by the VCA used in this paper.Functionally speaking, the motion stage can still achieve a planar motion without redundant constraints; however, an appropriate redundant constraints module is crucial in order to allow the motion stage to achieve a nanometric quality.Several recent studies have reported on the design of redundant constraints to restrict parasitic error motions; examples include Refs. .To control the abovementioned nanopositioning stage, the dynamics of each axis are needed.From a dynamics perspective, the VCAs and the beam flexure-based mechanism can be treated as a mass-spring system.Furthermore, by using the stiffness model provided above and by neglecting the beam mass, the dynamics can be modeled as a five-mass-spring system that describes the motion of the moving stage m along the X axis, the Y axis, and the rotational axis.An FEA analysis was conducted to simulate the relationship between the displacement and thrust force.To be specific, in order to achieve higher accuracy of the solution, a high mesh density and a large deflection approach were utilized for the Z-shaped module, the parallelogram module, and the redundant constraints module.The material of the stage was aluminium alloy 7075-T6, which has the following properties: Young's modulus, 72 GPa; Poisson's ratio, 0.3; and density, 2.7 × 10⁻⁶ kg·mm⁻³.The FEA result shows that the maximum ranges of the X and Y axes are 1.5003 mm, which agrees with the analytical ranges.In addition, it is seen from Fig. 5 that the transverse stiffness change is about 0.276% when the force varies from 10 N to 60 N, which validates the linear stiffness model in Eq.Modal analysis results by ANSYS are shown in Fig.
6, where the redundant constraints modules mainly correspond to the first two modes.In addition, the rotation corresponds to the third mode, and the vibration of the T-shaped connecting component corresponds to the fourth to sixth modes, which are far beyond the working frequency range.To better illustrate the advantages of the proposed design, we compared the performance of the proposed design with that of the same design in the absence of the redundant constraints module.The following two sets of comparisons were conducted: First, a 1 mm displacement was applied to the primary motion axis; simultaneously, a 0.01 mm off-axis displacement was applied, which was perpendicular to the primary motion axis, in order to simulate disturbances.The resulting error motions of the moving stage reflected the disturbance-rejection capability of the proposed design.Next, modal analysis was conducted to show the difference in the natural frequencies of the two cases.The results are shown in Table 2: The error motion is 87.1 nm versus 193.8 nm, and the natural frequency is 55.0 Hz versus 44.1 Hz.It is clear that the proposed design has a much better disturbance-rejection capability and a higher bandwidth.To precisely control the motion system in an accurate and repeatable fashion, a real-time control system is required, for which advanced feedback control strategies need to be appropriately designed.The overall control system we propose here has the following features: enhanced tracking performance for high-frequency signals with minimized tracking errors; and powerful, user-friendly controllers and drives to enhance the positioning process.To accommodate controller algorithms for real-time implementation, a user-friendly interface was also developed with the following features: output zeroing, linearization, and temperature compensation; sensor calibration and temperature compensation; high sampling frequency feedback control of the firmware architecture; complex trajectory tracking; and user program space and data space through callback functions.Fig. 7 shows the prototype of the large range XY beam flexure-based nanopositioner.The prototype was monolithically fabricated from aluminium alloy 7075-T6 via wire electric discharge machining.Two VCAs were utilized to actuate the servo stage.A Renishaw laser interferometer was used to sense and feed back the displacements of the motion stage over a large range of up to 1.5 mm × 1.5 mm.A feedback control strategy was implemented on a self-designed rapid-prototyping control system, which has an open architecture.Fig. 9 shows the user interface running on the host computer.Due to the symmetrical configuration, the system has an identical model on both actuation axes.The overall dynamics of the motion system are composed of electrical and mechanical subsystems.Since the current amplifier was designed with a bandwidth of 2 kHz, it can be assumed that the dynamics from the controlled voltage input to the amplified current are a constant gain.We tested the stiffness of the motion system from the VCA input voltages to the axial displacement.Fig.
10 shows that the nano servo system has a good linear stiffness in the range .The Y axis was also tested and yielded a similar result, showing that the nano-positioning system has the ability to achieve a ±1.5 mm × 1.5 mm motion range.To test the frequency response functions of the designed beam flexure-based nanopositioning stage, an impact excitation was exerted on the system using an impact hammer.The vibration of the servo stage was detected, and the measured signals were processed and then imported to a computer using Fourier analysis to obtain the FRFs of the designed system.The results for the X axis are shown in Fig. 11, and the results for the Y axis are similar and hence omitted.An obvious resonant peak was observed at 244.2 Hz.Theoretical and FEA results are in very good agreement for the fourth to sixth orders, at 243.1 Hz, 243.6 Hz, and 245.2 Hz; in addition, the first and second orders, at 76.1 Hz and 76.3 Hz, can be associated with the measured resonant peak of 71.2 Hz.Based on the above H∞ optimization, a sixth-order robust stabilizer was obtained; its detailed expression is omitted here.The contour-tracking capability is shown in Fig. 12; the root mean square (RMS) error of the circular trajectory is 79.3 nm.In addition, the repeatability of the contour-tracking performance was tested through multiple trials, and the mean RMS tracking error was found to be 84.1 nm.In this work, a large range beam flexure-based nanopositioning system was developed to support the MSL process.This paper provided the design, modeling, FEA analysis, and real-time control system for the nano servo system.Its static and dynamic models were presented in order to predict the desired system performance.Detailed FEA results showed good agreement between theoretical and simulated calculations.Using the fabricated nano servo stage and its identified dynamic models, real-time advanced control strategies were designed and deployed, showing this system's capability to achieve a millimeter-range workspace and an RMS tracking error of about 80 nm for a circular trajectory.
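Two of the figures of merit quoted in this section lend themselves to a short worked example: the natural frequency of the single-axis mass-spring model and the RMS contour error of a tracked circle. The Python sketch below is illustrative only; the function names and the synthetic data are assumptions made here, not part of the original work, and logged interferometer samples would replace the synthetic arrays in practice.

    import numpy as np

    def natural_frequency_hz(stiffness_n_per_m, mass_kg):
        # Undamped natural frequency of a single-axis mass-spring model: f = sqrt(k/m) / (2*pi)
        # Expects the identified axis stiffness and moving mass of the stage.
        return np.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * np.pi)

    def rms_contour_error(x, y, radius, cx=0.0, cy=0.0):
        # RMS radial deviation of measured (x, y) samples from a commanded circle of the given radius
        r = np.hypot(np.asarray(x) - cx, np.asarray(y) - cy)
        return np.sqrt(np.mean((r - radius) ** 2))

    # Synthetic example: a 1 mm radius circle with ~80 nm of Gaussian measurement noise
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 2.0 * np.pi, 2000)
    x_meas = 1e-3 * np.cos(t) + rng.normal(0.0, 80e-9, t.size)
    y_meas = 1e-3 * np.sin(t) + rng.normal(0.0, 80e-9, t.size)
    print(rms_contour_error(x_meas, y_meas, radius=1e-3))  # on the order of 8e-8 m, i.e. roughly 80 nm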
This paper presents the design, development, and control of a large range beam flexure-based nano servo system for the micro-stereolithography (MSL) process. As a key enabler of high accuracy in this process, a compact desktop-size beam flexure-based nanopositioner was designed with millimeter range and nanometric motion quality. This beam flexure-based motion system is highly suitable for harsh operation conditions, as no assembly or maintenance is required during the operation. From a mechanism design viewpoint, a mirror-symmetric arrangement and appropriate redundant constraints are crucial to reduce undesired parasitic motion. Detailed finite element analysis (FEA) was conducted and showed satisfactory mechanical features. With the identified dynamic models of the nanopositioner, real-time control strategies were designed and implemented into the monolithically fabricated prototype system, demonstrating the enhanced tracking capability of the MSL process. The servo system has both a millimeter operating range and a root mean square (RMS) tracking error of about 80 nm for a circular trajectory.
155
A mechanism to derive more truthful willingness to accept values for renewable energy systems
In 2005 the share of energy from renewable resources in gross final consumption in Cyprus was 2.9%.The European Union Commission Directive 2009/28/EC established a common framework for the use of energy from renewable resources in order to limit greenhouse gas emissions.The EU set an obligatory target for each member state.The EU target for the renewable energy sector's share of gross final energy use in Cyprus is 13% by 2020.This target necessitates developing plans to implement renewable energy technology projects and policies in the electricity sector.The North Cyprus government has also set an incentive strategy to expand renewable energy exploitation in an attempt to support environmental improvements and to achieve greater self-sufficiency in power generation, as a substitute for imported sources, whilst maximizing the efficiency of renewable energy source utilization.Cyprus has 300 days of sunny weather per year.There is thus a high potential for solar energy utilization, particularly micro-generation solar panels.The government is attempting to raise people's awareness of the benefits of energy efficiency, the need for diversification of energy sources, and reduced dependence on imported fossil fuels.The aim is to change people's behaviour towards renewable energy production and consumption, primarily by the use of incentives.The adoption of renewable energy by households is both a private and a public good.The public good aspect of renewable energy adoption is the household's contribution towards reducing carbon emissions and global climate warming.The private good externality element is the reduction in the visual amenity of the property as a result of the installation of renewable energy solar panel systems.The aim of this research is to assess people's willingness-to-pay (WTP) for micro-generation photovoltaic (PV) systems in Northern Cyprus, and their willingness-to-accept (WTA) compensation to forego their right to micro-generation of PV on their property.Contingent valuation (CV) has been used to measure the monetary value of both gains and losses in the quantity of a good.WTA measures the minimum amount that an individual is willing to accept as just compensation for the loss, whilst WTP measures the maximum amount an individual is willing to pay rather than forego the environmental gain.Numerous studies have used CV to measure WTA compensation for the loss, and WTP for the gain, of a ‘public good’, and also private goods, in environmental economics.The loss of an environmental good is typically valued more highly than an equivalent gain in the good.This asymmetry between willingness-to-accept compensation for the loss of a good and WTP for a gain has long been a feature of most CV results.Bishop et al.
explained the discrepancy as an anomaly arising from people's unawareness of the value of environmental assets and non-market values in monetary terms.Explanations for WTP–WTA asymmetry, in terms of economic theory, have emphasised the role of substitution and income effects.Disposable income constrains demand for environmental improvements in terms of WTP, but not WTA compensation, whilst the unique character of some public and private goods implies high compensation to offset utility loss.However, Plott and Zeiler, using a Becker-DeGroot-Marschak (BDM) auction mechanism and respondent (subject) training, found no difference between WTP and WTA across a variety of goods, thus calling into question loss aversion theory in real market conditions.Horowitz and McConnell analysed 45 studies which reported WTA and WTP values, and found the mean WTA/WTP ratio was approximately 7.0, with a higher WTA/WTP ratio for public and non-market goods, a ratio of 2.9:1 for ordinary private goods, and the lowest ratios for experiments involving forms of money.Haab and McConnell suggested that the proportion of the difference between WTA and WTP for private goods, such as pens and mugs, is not the same as for public goods, and this notion challenges Hanemann's assumption based on neoclassical theory.Nevertheless, studies have shown that even for private goods, there is a divergence between WTP and WTA.According to neoclassical theory, the income constraint is a factor limiting the value of WTP.Unlike WTP, WTA is not constrained by income because consumers are able to demand greater monetary amounts.In survey instruments, different methodological biases have been observed by CV practitioners.Sugden suggested the use of a well-designed instrument to facilitate the minimisation of these biases and to elicit true preferences in accordance with an incentive-compatible mechanism.In attempting to weaken the endowment effect, Plott and Zeiler proposed the need to control subjects' misconceptions.This effect can be controlled by using an incentive compatible elicitation mechanism to clarify the minimum WTA and maximum WTP terminologies.Bjornstad et al. proposed a teaching mechanism to simplify the CV technique, on the basis of parametric and non-parametric results suggesting that a “learning design” is highly effective in eliminating hypothetical bias.In addition to teaching and clarification tools, assuring respondents' confidentiality is an effective way to control subjects' misconceptions.Furthermore, research has identified the role of an incentive compatible survey design in eliciting truthful answers, in which respondents must view their responses as having a real influence on actions or decisions.Potential hypothetical bias can be controlled by clarifying for the respondents what is meant by a minimum WTA and a maximum WTP.Indeed, the importance of a market-like environment in reducing the WTA/WTP ratio was recognised nearly thirty years ago by Brookshire and Coursey.Various mechanisms exist to elicit truthful answers directly, such as take-it-or-leave-it offers, Vickrey auctions, nth-price auctions, BDM auctions, and incentive compatible stated preference methods such as contingent valuation and choice experiments.Neill et al.
designed open-ended CVM questions and used two types of survey, hypothetical and second-bid (Vickrey) auction, to value the same good.In the Vickrey auction, individuals were asked to make the real payment from their own pockets, for the good in question, in order to generate true results.The values from both the hypothetical and Vickrey auctions were compared and the findings indicated that an open-ended hypothetical valuation is not always capable of providing unbiased true values.However, the Vickrey auction's WTP values were lower than the hypothetical ones, and the values were closer to the real economic values.This suggests that situating individuals in a real market setting supports the use of incentive compatibility in a survey to elicit truthful answers.Similarly, Berry et al. compared the BDM and take-it-or-leave-it values of WTP for clean drinking water technology in northern Ghana.The take-it-or-leave-it survey results showed a higher WTP compared with the BDM.The gap was explained as possibly the result of strategic behaviour and anchoring effects.The study reported here uses a Becker-DeGroot-Marschak incentive compatible experimental method along with cheap talk to mitigate some of the behavioural and hypothetical anomalies that can potentially affect the use of CV in estimating the benefit of environmental policies.In what follows we present and compare the results of a conventional approach with a BDM experimental approach.The experimental approach shows a lower WTA/WTP ratio than has previously been reported in the environmental economics literature.In addition, the average WTA value was significantly influenced by the incentivised BDM setting, which sharply reduced the WTA values, whereas the average value of WTP was not substantially greater than in the conventional study.The high capital cost of micro-generation solar technology is a barrier to accelerating the distribution and supply of the technology.However, consumers can be influenced by financial incentives to install solar panels on their premises.Previous studies have pointed to the viability of grid-connected micro-generation solar systems in the residential sector.Scarpa and Willis suggested that, in the UK, government grants would need to be increased to attract more households to install micro-generation systems and offset the higher cost of the renewable energy micro-generation systems.However, their results showed that despite households' enthusiasm for investing and their willingness to pay for micro-generation systems, the benefit households received from micro-generation was not sufficiently large to cover the capital cost of micro-generation energy technologies.Claudy et al.
reviewed the Irish WTP for micro-generation technologies, and found that their WTP was considerably lower than the actual market prices.The main obstacle was said to be the initial cost of purchasing or installation, but they also suggested more market based finance options for consumers such as leasing and ‘fee for service.’,An alternative to leasing and fee for service might be a network connection.Grid connection has a number of advantages over a stand-alone or off-grid system, and may increase the number of investors.It offers both reliability and financial benefits for consumers and an unfailing connection to electricity would be guaranteed.Any excess generated electricity can be exported and sold to the grid and electricity outages can be prevented by importing when there is no sun.In addition, it saves the extra cost of installing batteries.However, although the need for financial incentives to induce consumers has been recognised by governments and policy makers, the economic cost and burden of lending support should not be neglected.A cost-benefit analysis based on individuals’ responses provides an insight into the extent of the incentives required.A CV method was used to evaluate WTP and WTA values.The underlying demand function is the individual’s WTP.In addition, policy implications may be drawn from CV responses which can be used to regulate the extent of the subsidies and other types of financial incentive.The conventional CV approach can be administered to respondents in different ways, such as open-ended questions, a payment ladder, or as closed-ended single and double-bounded dichotomous choice questions.An open-ended question asks the respondent directly about the maximum amount they would be willing to pay for good X. Despite the assumption that the close-ended referenda format is more incentive compatible than open-ended in the hypothetical study, lower WTP values have generally been found to be elicited with open-ended questions.Balistreri et al. 
found that both open-ended and dichotomous choice questions overestimated auction values and also the expected value for private goods, although the upward bias in open-ended CV was less than that in dichotomous choice CV questions.List and Gallet suggest that different elicitation mechanisms, including different auction mechanisms, cause disparity in value.Lower WTP can be perceived as being due to the larger non-response proportions in an open-ended format, relative to a dichotomous choice format.Protest bias may affect WTP estimates, but this can differ between markets and referenda, and by type of good, making unambiguous rules to deal with protest bid responses difficult to establish.In addition, sometimes respondents with a lower propensity to meet the expense of the good in question may overstate their WTP in an open-ended question.Carson and Groves indicated the impossibility of formulating a simple open-ended matching question that tactically corresponds to an incentive compatible binary discrete choice question in an assessment setting, unless the respondents are provided either with a specific price or a device that chooses the cost independently of the individual's answer.The use of BDM with the open-ended format is said to facilitate the incentive compatibility of the survey setting.With this technique, individuals have the incentive to state their maximum WTP truthfully, and the approach would be free of behavioural bias.In addition, to control the hypothetical problem of an SP survey, a number of studies have suggested the use of cheap talk to minimise the hypothetical bias effect in either open-ended or close-ended formats.With an open-ended question, the cheap talk script resulted in a decrease in the number of respondents stating a zero WTP; thus hypothetical bias was circumvented, although the average WTP appeared to increase.Cheap talk, explaining solar panels, the energy generated, and benefits, was employed in both the conventional CV treatment and in the experimental treatment.So any difference in results between the two treatments should be attributable to the experimental (BDM) element only.Carson and Groves stated that cheap talk is not a costless technique for non-market valuation if it influences the actions of players in the game.Therefore, the economic value of the difference with and without its use needs to be estimated.The term cheap talk is used in game theory in an attempt to prevent the dominant strategy in such a way that an individual has no incentive to lie in the game, the so-called equilibrium strategy.This strategy occurs when players share information consistently and in balance with incentives.To implement a survey with the objective of gathering truthful responses, it is essential for the survey to be designed in accordance with an incentive compatibility format, owing to the high possibility of an individual over-stating or under-stating the value of the good in question: so-called strategic behaviour.This may happen when respondents think that a decision will be made based on their evaluation, but that they will not be called upon to pay their stated price.To avert or minimise some of the limitations of the CV method, an incentivised mechanism can be incorporated prior to asking key questions.For instance, an incentive compatible survey can be implemented through the following instruments: a voting system, price auction, lottery auction, games, prize draw, and the selling and buying of
items.The information revealed by the respondents’ answers would be the outcome of incentive strategies and the explicit information about the question itself to the respondent.Following studies by Eisenberger and Weber and Plott and Zeiler, and Chilton et al., this study also evaluates WTP and WTA via a conventional and an experimental survey of separate samples of respondents.Conventionally, individuals are asked their maximum WTP and minimum WTA.To help respondents gain a better understanding of minimum WTA and maximum WTP concepts, and the potential consequences of over- and under-stating values, an experimental survey with the incentive compatibility was designed and implemented.In addition, to control for order effects and allow for a between and within subject evaluation, the study was carried out with two groups of respondents with and without the experimental approach.The respondents of one group were individually asked to respond to the open-ended questions without the use of clarification and experimental values.They were required to state their minimum WTA and maximum WTP for solar technology equipment.The other survey was elicited with the same open-ended question, but prior to that we used the experimental approach.Prior to eliciting values for the solar technology intervention, we administered a practice BDM using a familiar good and with cheap talk, before asking the main question about PV solar panels.In so doing, we were relying on rationality spillover: whether rationality that is induced by a market-like discipline spills over into a non-market setting involving hypothetical choices.The results of these two settings were compared to determine the role of the incentivised mechanism.The experimental approach aimed to elicit the truthful minimum WTA and maximum WTP responses, which requires beginning with the respondents’ familiarity with the terminologies prior to asking the main WTA and WTP questions.The protocol included firstly, familiarising respondents with the concepts of minimum WTA and maximum WTP and the consequences of untruthful responses; and secondly, asking respondents to state their minimum WTA and maximum WTP for installation of 1kWp micro-generation solar panels on their premises.Finally, respondents provided some socio-economic and demographic information about themselves.The content of the protocol was supplemented by visual aids, to aid memory and assist the respondents with the questions.The protocol consisted of two sections on WTA and WTP, and the minimum WTA concept was first introduced and practised.Between five to twelve respondents participated in each group session and the participants were incentivised by the opportunity to enter a prize draw for a prize of €10.The practice procedure started with an introductory session on the study’s subject and brief information was given to them about micro-generation solar technology for the residential sector.The group discussion began by introducing the term ‘reserve price’ as a substitute for the term maximum WTA.Based on other studies, respondents are usually more comfortable with ‘reserve price’ as a term, and these participants were familiarised with the term by discussing the process of selling land in an auction.The reserve price was explained as the lowest fixed price, at which the land would be offered at the auction sale.This was followed by introducing the term ‘external sealed bid’, and also to simplify the meaning of minimum WTA.Respondents were divided into two groups and asked to discuss a 
‘reserve price,’ i.e. the minimum price they would accept for a Teddy.Then, the reserve price was compared with a predetermined sealed bid in a second price auction mechanism.After comparison between the respondents’ answers and the sealed bids, the question of ‘why it is always best to be truthful’ was discussed.In particular, the experimenter should clarify the possibility of the undesirable consequences of over- or under-stating, i.e. in the case of over-bidding, there is a danger that the vendor keeps the item rather than selling it.Similarly, this is the case of under-bidding when the item sells for less than it is worth.Respondents were given a ‘memory jogger’ to summarise the key concepts, and their answers were recorded in response books.The subsequent valuation survey was based on individual answers, so it was important that respondents had some experience of deciding their own WTA for an item.Participants were given two tokens for entry to a prize draw.In each of two rounds, participants recorded their ‘reserve price’ or minimum willingness to accept, for selling the token and foregoing entry into the draw.Their reserve price was compared with a sealed bid in an envelope, which had already been randomly selected from a visible box at the front of the room.If their reserve price was lower than, or equal to, this sealed bid they would sell the token, and receive a higher or equivalent sealed bid, but if the reserve price was higher, s/he would not sell the token and be put into the draw.In the WTP process, contributors were given €2 to spend, €1 in each round, to buy two tickets for entry to a prize draw for €10.In each round, participants’ maximum willingness to pay was recorded in order to buy a token to enter into a new prize draw.Then, after participants were shown a box of chocolates and told that it would be sold, they were asked how much they were willing to pay for it.In other words, the respondents were asked to bid their maximum willingness to pay for the box of chocolates.Before respondents had revealed their maximum WTP amount for the box of chocolates, they were sufficiently familiarised with the potential consequences of over- or under-bidding.In the case of under-bidding when the offered price for the item is less than it is worth, there is a danger of the item not being sold to the buyer, if the vendor decides not to sell for the offered value.Based on the predetermined value or sealed bid price, the respondent’s maximum WTP was evaluated.Respondents had the memory jogger in their hands throughout the practice in the form of their response books.The survey and questionnaire were vetted and approved by Newcastle University Ethics Committee.Informed consent was obtained from each participant prior to their participation in the study.At the start of the solar technology evaluation questions, respondents were sufficiently practised and experienced for truthful bidding.In addition, respondents were supported by the memory jogger hand-out throughout the micro-generation solar system evaluation.Then, the respondents’ evaluation of the micro-generation solar technology was carried out using the cheap talk script below:The process of the discussion that we went through was implemented with the intention of eliciting your truthful responses.We tried to clarify what will be the consequences of overestimating a value to incentivise you to state an amount close to your actual valuation.Then, the participants were requested to imagine that the government or private company was 
offering to install micro-generation solar panels on their properties.An area of 8 m2 was considered for the installation of 1 kWp solar panels, including a space allowance for maintenance; with attendant visual amenity impact.Respondents were asked to consider, their minimum willingness to accept compensation, for not being permitted to install 1 kWp solar panels.After the respondents had answered the first question, they were then asked to imagine that a government or private company had offered to install 1 kWp micro-generation solar panels in an area of 8 m2 in their property.Respondents were asked to reveal their maximum willingness to pay.Throughout the evaluation, the respondents were supported with memory joggers and were given sufficient explanations and opportunities to ask questions from the moderator.Finally, participants provided some demographic information, and the session finished with the prize draw.The target population of this study was drawn from a residential sector in Northern Cyprus.The survey was conducted in urban areas including Nicosia, Famagusta and Kyrenia as well as rural regions, including Karpaz and Iskele, Guzelyurt and Lefke.In total, 105 respondents comprised the sample of this study, and they were the decision makers for the household expenditure, regardless of their gender.All the participants were aged above 18 with a mean age of 45.Each experimental session was comprised of five to twelve participants and it ended in one to two hours depending on the size of the group.The sessions were held at different places such as houses, cafes, companies and university.The sample population for the conventional CV study was 50 respondents, who were interviewed individually.These respondents were not provided with any clarification on terminologies of maximum WTP and minimum WTA prior to being asked the CV WTA and WTP questions.On the other hand, the opportunity to clarify terminologies was provided in the experimental survey, and this study was conducted with 55 respondents in groups of five to twelve.In order to compare the WTA/WTP divergences, the WTA/WTP ratios of the conventional and experimental approaches were calculated separately.Table 1 shows the outcome of the conventional approach, where the mean WTA was €15,418 and the mean WTP was €4392.The WTA/WTP ratio was approximately 3.5:1.In addition, to explore the disparity when the highest bids are removed, a sensitivity analysis was carried out.Table 2 shows the results of the truncation analysis for conventional approaches.The top 5% of values were trimmed, which resulted in top values of 50 K and 30 K. 
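The summary statistics behind Tables 1–4 are straightforward to reproduce. The sketch below is a minimal illustration in Python, using made-up bid vectors rather than the survey data, and assuming that the 5% truncation simply drops the highest 5% of bids in each series; it computes the mean WTA/WTP ratio, the trimmed sensitivity check, and a Welch t-test of the kind used for the treatment comparisons reported below. All variable names and values here are illustrative, not taken from the study.

```python
# Minimal sketch of the Table 1-4 style calculations: mean WTA/WTP ratios,
# a top-5% trim as a sensitivity check, and a Welch t-test between the
# conventional (C) and experimental (E) treatments. The bid vectors are
# hypothetical placeholders, not the survey responses.
import numpy as np
from scipy import stats

def mean_ratio(wta_bids, wtp_bids, trim_top=0.0):
    """Mean WTA divided by mean WTP, optionally dropping the top
    `trim_top` fraction of bids from each series (e.g. 0.05)."""
    wta = np.sort(np.asarray(wta_bids, dtype=float))
    wtp = np.sort(np.asarray(wtp_bids, dtype=float))
    if trim_top > 0:
        wta = wta[: int(round(len(wta) * (1 - trim_top)))]
        wtp = wtp[: int(round(len(wtp) * (1 - trim_top)))]
    return wta.mean() / wtp.mean()

rng = np.random.default_rng(1)
wta_c = rng.lognormal(mean=9.2, sigma=0.9, size=50)   # conventional WTA bids (EUR)
wtp_c = rng.lognormal(mean=8.2, sigma=0.6, size=50)   # conventional WTP bids (EUR)
wta_e = rng.lognormal(mean=8.7, sigma=0.5, size=55)   # experimental WTA bids (EUR)
wtp_e = rng.lognormal(mean=8.5, sigma=0.5, size=55)   # experimental WTP bids (EUR)

print("conventional WTA/WTP:", round(mean_ratio(wta_c, wtp_c), 2))
print("conventional WTA/WTP, top 5% trimmed:", round(mean_ratio(wta_c, wtp_c, 0.05), 2))
print("experimental WTA/WTP:", round(mean_ratio(wta_e, wtp_e), 2))

# Welch t-test for the WTA(E) versus WTA(C) comparison.
t_stat, p_val = stats.ttest_ind(wta_e, wta_c, equal_var=False)
print(f"WTA_E vs WTA_C: t = {t_stat:.2f}, p = {p_val:.3f}")
```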
With the top 5% of values removed, the mean ratio decreased from 3.50:1 to 1.343:1.However, a degree of arbitrariness is incorporated into this approach.The result of the experimental mechanism is provided in Table 3.This result explicitly illustrates the function of the experimental mechanism, in that the WTA and WTP values have converged.A significant reduction in WTA values resulted in a mean value of €6390.Therefore, the WTA/WTP ratio converged at 1.08:1.The standard deviation values for WTA and WTP from the experimental mechanism were also more consistent, with a lower ratio between them.As reported in Table 4, participants' WTP increased from 4392 to 5913 Euros, with a WTPE/WTPC ratio equal to 1.34, when they were provided with an intuitive understanding of the terminologies.Similarly, respondents' WTA decreased from 15,418.85 to 6390 Euros, a WTAE/WTAC ratio of 0.41.Additionally, a t-test shows that the difference between WTAE and WTAC is statistically significant at the 0.05 level, whereas the difference between WTPE and WTPC is not statistically significant.Nevertheless, it is noteworthy that the WTAE value was considerably influenced by the impact of the experimental setting compared with the WTPE value.The significant reduction in WTAE values via the experimental setting implies that there is a greater need for clarification of the WTA term than of the WTP term.In other words, it is more important to tackle the elicitation of truthful responses from WTA questions than from WTP questions.Finally, a t-test on the difference between WTAE and WTPE is insignificant, whereas it is significant between WTAC and WTPC.As a result, the experimental approach showed a lower ratio than has previously been reported in the environmental economics literature.The average WTA value was significantly influenced by the incentivised setting and its value sharply decreased, whereas the average value of WTP was not substantially greater than in conventional studies.This study tested the role of incentives on individuals' estimation of WTA and WTP for micro-generation solar panels.The discrepancies between WTA and WTP valuations are recognised as an obvious problem in CV surveys.However, true preferences can be elicited through an incentivised mechanism.The incentive-compatible mechanism provides respondents with an adequate understanding and does not encourage strategic biases.The reduced discrepancy between the conventional and experimental mechanisms agrees with economic theory and the literature findings.The suggested novel experimental approach allowed the convergence of WTA and WTP, when the respondents were sufficiently incentivised to respond.The average WTA/WTP ratio across the 45 studies analysed by Horowitz and McConnell was approximately 7.0, with higher ratios for public and non-market goods and a ratio of 2.9:1 for ordinary private goods.The conventional setting results here, with an average WTA/WTP ratio of 3.5:1, are consistent with the ratios reported in the literature.This ratio substantially decreased to 1.08:1 in the experimental or incentivised setting.Consequently, this finding agrees with the hypothesis that the incentivised setting will perform better than the conventional setting in terms of avoiding strategic and hypothetical biases.The perceived larger sum to compensate in the conventional setting corroborates previous studies.The findings agree with studies by Scarpa and Willis and Claudy et al.
on WTP for micro-generation, in that households are willing to pay for micro-generation systems, but the benefit households receive from micro-generation is not sufficiently large to cover the capital cost of micro-generation energy technologies.Financial incentives are thus required to encourage people to invest in micro-generation technologies, if renewable energy targets are to be met.However, the findings of the suggested novel experimental setting here indicate higher support from respondents for covering the capital costs of micro-generation solar technology.This was achieved when individuals had a better understanding of the WTA and WTP questions, the consequences of overestimating and underestimating, and the good in question.Subsequently, they revealed truthful responses.This paper assesses households' acceptance of, and preferences for, the installation of micro-generation solar panels in the residential sector.Individuals' WTA compensation for the loss of a 1 kWp solar panel, and their WTP for the installation of a 1 kWp solar panel, were tested.The survey was implemented via conventional and incentivised settings.The discrepancy between WTA and WTP within each setting and between the settings was compared.The most obvious findings are: that WTA is statistically different to WTP in the conventional setting, whereas it is equivalent in the experimental setting; and that a smaller WTA compensation value and a larger WTP are observed in the incentivised setting compared with the conventional setting.Conventional CV methods may not derive truthful WTA and WTP responses.The experimental setting results suggest that policy makers could set lower financial incentives to increase solar power installations in Cyprus.Mehrshad Radmehr: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.Ken Willis: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.Hugh Metcalf: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.The authors declare no conflict of interest.This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.No additional information is available for this paper.
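As a footnote to the group-session protocol described above, the Becker-DeGroot-Marschak resolution rule used in the practice rounds can be summarised in a few lines. The sketch below is illustrative only: the function names and the uniform sealed-price draw are assumptions, not part of the study protocol. The key feature it demonstrates is that the transaction price is the sealed draw rather than the stated value, which is what makes truth-telling the best strategy.

```python
# Illustrative sketch of the BDM rule from the practice rounds: the seller
# parts with the token only if the sealed bid is at least the stated reserve,
# and then receives the sealed bid; the buyer obtains the item only if the
# stated maximum covers the sealed price, and then pays the sealed price.
import random

def resolve_wta(reserve_price, sealed_bid):
    sells = sealed_bid >= reserve_price
    return {"sells": sells, "payment_received": sealed_bid if sells else 0.0}

def resolve_wtp(max_bid, sealed_price):
    buys = max_bid >= sealed_price
    return {"buys": buys, "price_paid": sealed_price if buys else 0.0}

# Example: a reserve of EUR 4 and a maximum bid of EUR 6 against one sealed draw.
sealed = round(random.uniform(0.0, 10.0), 2)
print("sealed price:", sealed)
print(resolve_wta(reserve_price=4.0, sealed_bid=sealed))
print(resolve_wtp(max_bid=6.0, sealed_price=sealed))
```

Under this rule, overstating a reserve only risks losing an advantageous sale, and understating a maximum bid only risks losing an advantageous purchase, which is the intuition that the over- and under-bidding discussion conveyed to participants in the group sessions.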
This paper examines and compares households’ willingness to accept (WTA)/willingness to pay (WTP) ratio for solar power equipment on their premises through both a novel experimental approach and conventional techniques. The experimental approach was administered by using a Becker-DeGroot-Marschak method and cheap talk, with open-ended questions of WTA/WTP. The results were quite striking. The ratio for the incentivised approach was 1.08:1; whereas for the conventional approach it was 3.5:1. The findings suggest that the hypothesis that WTP equals WTA cannot be rejected for the incentivised mechanism, and it appears to control for the individual's strategic behaviour bias as a treatment against over-estimating WTA and under-estimating WTP. The findings also provide some policy implications for Northern Cyprus: the government can set lower financial incentives to increase the solar power installed capacity on the island.
156
Understanding the contrasting spatial haplotype patterns of malaria-protective β-globin polymorphisms
The mutations responsible for sickle-cell disease and β0-thalassaemia represent two unequivocal examples of balanced polymorphisms in the human genome.Occurring at high frequencies in many populations indigenous to malaria-endemic regions, these variants are subject to balancing selection due to their protective effect against Plasmodium falciparum malaria in the heterozygous state.Homozygotes suffer severe blood disorders, which, without access to diagnosis and treatment, are often lethal in the first few years of life.In population genetics theory, it is generally accepted that natural selection results in one of two population genetic outcomes: a hard selective sweep, in which a single adaptive allele sweeps rapidly through a population, resulting in the predominance of a single haplotype associated with the adaptive allele in the population, and a soft selective sweep, whereby ancestral genetic variation around the adaptive site is partially preserved owing to multiple alleles at the site being selected.In the context of this study, we define a haplotype as a set of DNA variations, including the variant under selection, that are located on a single chromosome and, by virtue of their close proximity, are inherited together.Both the sickle-cell mutation and β0-thalassaemia appear at first glance to be examples of soft selective sweeps.The former, which always results from the replacement of glutamic acid by valine at position 6 of the β-globin gene, is associated with five “classical” restriction fragment length polymorphism haplotypes.The latter results from any mutation that completely eliminates the production of protein from the β-globin gene.One-hundred and fifty-eight such mutations are currently reported, and many of these can be found on more than one genetic background.The precise spatial patterns exhibited by βS- and β0-thalassaemia haplotypes are markedly different.For βS, current data suggests that the five classical haplotypes predominantly occupy geographically separate regions within Sub-Saharan Africa, the Middle East and India.By contrast, whilst β0-thalassaemia mutations are mostly geographically specific on a cross-continental scale multiple variants can be found in the Mediterranean, the Middle East, and Asia, respectively, with their distributions considerably overlapping.Furthermore, for each β0-thalassaemia variant, various associated genetic backgrounds typically coexist in the population.To illustrate, the β0-thalassaemia mutation IVS-I-1 G → A is found on haplotypes V and II in Sicily, and haplotypes I, III, V, IX and A in Algeria.Similarly, the cd39 C → T mutation is associated with haplotypes I and II in Sicily, Sardinia and Corsica, haplotypes I, II and IX in mainland Italy, and haplotypes I, II and B in Algeria.In the case of β0-thalassaemia, different causal mutations have clearly arisen independently, whilst the occurrence of identical mutations on separate haplotypes is generally ascribed to gene conversion.For βS, however, it is commonly believed that each of the five classical βS-associated haplotypes represents five independent occurrences of the same A → T mutation in codon 6 of β-globin.Yet, as suggested by Livingstone in the 1980s, a single βS mutation, and its subsequent transfer onto different haplotypic backgrounds by gene conversion, could also have generated the same present-day βS pattern.The different patterns exhibited by β0-thalassaemia and βS mutations thus offer a unique opportunity to make a direct comparison between different sub-types 
of soft selective sweep in humans.Here, we identify the demographic and genetic processes that are more likely to give rise to either a sickle-cell-like or a β0-thalassaemia-like spatial distribution of haplotypes.Within the context of our spatial framework, we also specifically address the role that recurrent mutation and gene conversion may have played in the evolution of these polymorphisms.We simulated a meta-population of Ne diploid individuals, divided into d demes of equal size and arranged in a network with a varying degree of randomness in its migration connection structure, controlled by parameter c.Every generation, Ne increases by a percentage drawn from a uniform distribution between zero and a maximum possible population growth rate of g%.Any increase in total population size is spread equally across all demes.We are only interested in the haplotypic diversity of βS- or β0-thalassaemia-bearing chromosomes.The model thus records only mutation in the βA → βX direction, and the transfer of a βX allele onto a new haplotypic background by gene conversion in βAβX heterozygotes.All mutation and gene conversion rates throughout the manuscript refer to the rates at which these particular processes happen.Every time a new βX mutation arises, or an existing βX mutation undergoes gene conversion, the resulting allele is assigned a unique numerical identifier representing a novel haplotypic background.This approach assumes a high diversity of pre-existing β-globin haplotypic backgrounds, such that each time a rare mutational or gene conversion event occurs it involves a different genetic background to any of those of previous mutations/gene conversions.It is important to note that, given that the different haplotypes in our model can only arise through mutation or gene conversion, they are intended as proxies for βS- and β0-thalassaemia haplotypes whose occurrence cannot be accounted for by simple reciprocal recombination.Full details of the assignment of fitness values are provided in the Supplementary material.Crucially, the fittest individuals in our simulated populations were always βAβX heterozygotes, who were assumed to experience malaria protection.Throughout all of our simulations, βXβX homozygotes were assigned a fitness of zero and thus were not represented in the potential offspring pool.It is possible for the haplotypic background of mutations to affect the course of β0-thalassaemia major or sickle-cell anaemia.However, given the absence, historically, of any effective treatment for these disorders, we have assumed that all individuals homozygous for βS- or β0-thalassaemia are likely to have been at a considerable disadvantage relative to the wild-type, regardless of
whether they possessed a mutation with an ameliorating haplotypic background.We do, however, address the possibility of inter-haplotype fitness variation in our simulations, by incorporating a range, f, of possible heterozygote fitnesses into our model.The model was parameterised using value ranges taken from the literature where available.Of particular note, we tested four different allelic mutation rates: 10− 8 events per chromosome per generation, i.e. the average nucleotide substitution rate for the human genome, although this yielded very few instances of a soft selective sweep in our simulations; 10− 7 events per chromosome per generation, accounting for the possibility of a higher-than-average rate of mutation in the β-globin cluster, and 5 × 10− 7 events per chromosome per generation and 10− 6 events per chromosome per generation to reflect the fact that hundreds of different types of mutations can give rise to a β0-thalassaemia allele.Results from the latter three mutation rates are presented here.The ranges of values used for all other parameters are described in the Supplementary material.All simulations shown were run for 500 generations.Assuming a generation time of 15–25 years for humans, this represents 7.5 to 12.5 thousand years of malaria selection, which is consistent with estimates for how long P.
falciparum is likely to have been a significant cause of human mortality.Adaptive alleles arose stochastically throughout each simulation.We chose to analyse a "snapshot" of the genetic variation in the meta-population at the 500 generation time point.Simulations were implemented in Matlab R2012b and performed on a 1728-core 2.0 GHz supercomputer that is part of the Advanced Research Computing resources at the University of Oxford.Based on the reported geographical distributions of βS and β0-thalassaemia variants and their associated haplotypes, we defined a series of possible outcomes within our model.In particular, we sought to distinguish a spatially tessellating, 'patchwork' βS-like pattern and an overlapping β0-thalassaemia-like pattern.For the overlapping β0-thalassaemia-like pattern, at least two different βX-associated haplotypes must be present in the meta-population.In addition, there must be sufficient overlap in the distributions of the different haplotypes that no more than 20% of demes contain a βX-associated haplotype that accounts for ≥ 95% of the haplotypic variation in the deme.We refer to such haplotypes henceforth as "dominating" haplotypes.When assessing the geographical patterns exhibited by β0-thalassaemia, we counted two different β0-thalassaemia mutations occurring on the same genetic background as two different haplotypes: considering the β0-thalassaemia mutation itself to be part of the haplotypic diversity in the population.For the patchwork βS-like pattern, there must be at least two different dominating βX-associated haplotypes in the whole meta-population, and at least 50% of the demes must contain a dominating βX-associated haplotype.Other possible model outcomes include: no malaria-protective variation at the β-globin locus; a hard selective sweep, whereby malaria-protective variation is associated with only a single haplotype in the meta-population; the co-occurrence of malaria-protective variation on multiple haplotypes in the meta-population where haplotypes are completely deme-specific; and the co-occurrence of multiple haplotypes in the meta-population whose distributions are not deme-specific but do not reflect closely enough the overlapping or patchwork spatial patterns defined above.β0-Thalassaemic mutations can result from deletions, insertions and point mutations anywhere in the coding or regulatory region of β-globin.The βS mutation, by contrast, is the result of the replacement of a specific, single nucleotide with another.There are therefore strong biological reasons to suppose that β0-thalassaemic mutations arise much more frequently than βS mutations.In our simulations, increasing the mutation rate did increase the probability of observing the β0-thalassaemia-like pattern, but only if the overall population size was high.If the population was too small and/or too highly structured, the probability of the β0-thalassaemia-like pattern remained low even at the highest tested mutation rate.This was due to there being insufficient genetic variation or population movement to facilitate overlap in the haplotypes' distributions.The probability of a βS-like haplotype pattern, by contrast, was negatively correlated with mutation rate, except in the case of a low initial total population size and minimal population growth.We also observed an interaction between the effects of initial total population size and population growth on the probability of the βS-like pattern at low mutation rates.Population growth had a positive effect on the probability of the βS-like pattern when the initial total population
size was small; such growth increased the overall size of the population, and thus the chances of more than one βS haplotype arising anywhere.However, for a much larger starting population size population growth had a slight negative effect.This is because increasing the size of an already large population led to too many βS haplotypes arising through mutation and intermingling, thereby preventing a patchwork βS-like pattern.At a higher mutation rate, population growth rate had a negative effect on the likelihood of the βS-like pattern across all initial population sizes.As illustrated in Fig. 2A and C, for a given initial total population size and rate of population growth, the β0-thalassaemia-like pattern was more likely under conditions of low subdivision, high migration and when the migration network contained more random connections.By contrast, the probability of obtaining the βS-like pattern was highest when population subdivision was high, the migration network was non-random and migration was low.For both patterns of selective sweep, population subdivision and the degree of randomness in the migration network had a weaker effect at smaller initial population sizes when μ = 10− 7 events per chromosome per generation.Presumably this is because, even if the spread of alleles in the meta-population is slowed by high population subdivision or a highly non-random migration network, this has little impact if opportunities for new copies of the allele to arise are few.The converse is true for βS when μ = 10− 6 events per chromosome per generation; in this case, the effects of population subdivision and connection structure are strongest when the initial total population size is small.Moreover, the effect of population subdivision was greatest when the connectivity network was non-random, indicating that even at high levels of subdivision a more random migration network is sufficient to allow the "small world" phenomenon to occur, minimising the number of migratory steps that it takes for an allele to have access to the entire network.The β-globin cluster incorporates the γ- and δ-genes as well as an extensive locus control region, mutations in any of which are capable of affecting the phenotypic outcome of a mutation in the coding region of β-globin itself.This phenomenon could be deemed epistasis, although the tight linkage between all elements of the β-globin cluster makes it equally acceptable to conceive of different haplotypes as allelic to one another.In either case, it is entirely plausible that βS- and β0-thalassaemia mutations could have different fitnesses according to their haplotypic background.We addressed the possibility of haplotypic fitness variation by randomly assigning a heterozygote fitness value, drawn from a predefined range of width f, to each haplotype as it arose.Including inter-haplotype fitness variation decreased the probability of observing the β0-thalassaemia-like pattern.This was also true for the βS-like pattern when μ = 10− 7, although when μ = 10− 6, inter-haplotype fitness variation increased the probability of observing the βS-like pattern.The rate of gene conversion had almost no effect on the overall probability of either the βS- or β0-thalassaemia-like pattern.However, whenever gene conversion was allowed to occur, inter-haplotype fitness variation increased the proportion of scenarios in which haplotypes resulting from gene conversion formed part of the final haplotypic diversity of the population.For example, for the β0-thalassaemia-like pattern, when the gene conversion rate was 10− 5 events per chromosome per generation and the mutation rate was 10− 7 events per chromosome per generation, gene conversion contributed to the final haplotypic diversity 97% of the time if f > 0, but < 1% of the time if f = 0.There has so far been no attempt to quantify the relative heterozygote fitnesses of different βS- and β0-thalassaemia-associated haplotypes, although clinical evidence does suggest that the severity of sickle-cell anaemia and β0-thalassaemia in homozygotes can vary according to haplotypic background.It is
therefore difficult to know whether the maximum fitness range included in Fig. 3 is plausible.Our model additionally shows that the haplotypes that coexisted in the long-term tended to be relatively similar in their heterozygote fitness, so a study carried out today may not provide a fair picture of what fitness variation could have existed in the past.Amongst all β0-thalassaemia-like results where f > 0, the average within-simulation heterozygote fitness range of haplotypes that coexisted in the meta-population after 500 generations was 0.03, compared to 0.18 for all haplotypes that arose during the 500 generations.For βS-like results, these values were 0.05 and 0.16, respectively.The average fitness range of dominating haplotypes was 0.01 for β0-thalassaemia-like repeats and 0.04 for βS-like repeats.The fittest haplotypes to arise contributed to the final haplotypic diversity in only 15% and 26% of all of the repeats exhibiting the βS- and β0-thalassaemia-like patterns, respectively, when f > 0.Across all of our simulations, once a deme had come to contain a βS- or β0-thalassaemia-associated haplotype at a high frequency, that deme was rarely taken over by another haplotype.This is because alleles that arose later contributed only a very small fraction of the adaptive variation in question and thus were vulnerable to loss by genetic drift.We refer to this phenomenon as “allelic exclusion” to coincide with the terminology used by Ralph and Coop.Importantly, we found that the generation of either βS- or β0-thalassaemia-like patterns requires allelic exclusion to be undermined, but to different spatial extents; the βS-like pattern requires that allelic exclusion be maintained in parts of the meta-population but not the entire network, whilst the near complete avoidance of allelic exclusion is necessary for the β0-thalassaemia-like pattern.Allelic exclusion can be avoided if a pre-existing haplotype has not yet reached a threshold frequency in a deme when subsequent haplotypes arrive.It follows that the timing of mutation, gene conversion and/or migration events within each deme is important in determining the evolutionary trajectory of the meta-population.Alternatively, allelic exclusion is undermined when genetic drift is weakened for incoming alleles, either through population growth or fitness variation between haplotypes.As illustrated in Figs. 
1–3, the balance between all of these factors determines the probability of either a β0-thalassaemia- or βS-like pattern occurring.However, generally speaking, we expect the βS-like pattern to emerge when mutation rate is low and the population is highly subdivided, with low connectivity and little gene flow.The β0-thalassaemia-like pattern is more likely when mutation rate is high and the population is less subdivided, with high connectivity and high gene flow.Several previous theoretical treatments of soft selective sweeps have delivered important insights into the genetic and demographic factors influencing the probability of adaptation by soft selective sweep versus hard selective sweep.In a series of papers, Pennings and Hermisson used coalescent theory to show that soft sweeps are most likely when population size is large and/or allelic mutation rate is high.More recently, Ralph and Coop demonstrated that soft sweeps, specifically of the patchwork type, are likely to be common in species whose distributions are widespread and whose populations are geographically structured.The behaviour of our model is consistent with these previous studies.By modelling a meta-population where we do not assume that different alleles exclude one another when they meet, we are also able to show that a mutation rate that is too high precludes the possibility of a βS-like soft selective sweep pattern, whilst a weaker geographical structure is important for the formation of a β0-thalassaemia-like pattern.As noted in the Introduction, the occurrence of the same β0-thalassaemia variant on multiple haplotypes is generally attributed to gene conversion.Our results imply that gene conversion can contribute to haplotypic diversity only if inter-haplotype fitness is sufficiently variable.β0-Thalassaemia variants certainly vary in their clinical severity, often due to factors such as different levels of expression of foetal haemoglobin.No study has yet compared the relative level of malaria protection that is afforded by heterozygosity for different β0-thalassaemia haplotypes, but it is entirely possible that variable maintenance of foetal haemoglobin might affect malaria susceptibility.Curiously, however, we found that, whilst including fitness variation made it more likely that gene conversion contributes to long-term haplotypic diversity, it simultaneously made the β0-thalassaemia-like pattern less likely.A specific combination of demographic conditions, gene conversion rate and inter-haplotype fitness variation, which increases the probability of observing a β0-thalassaemia-like pattern where the haplotypic diversity is partly derived from gene conversion, may yet be discovered.Present consensus seems to be that gene conversion has had no role in the generation of the classical βS haplotypes.This is despite modelling work by Livingstone in 1989, in which he used a stochastic model of the diffusion of different βS- and βA-associated chromosomes to demonstrate that reciprocal recombination and gene conversion readily give rise to multiple βS haplotypes, with no need for recurrent mutation.Our model demonstrates that a patchwork haplotype pattern that is at least partly derived from gene conversion is difficult to obtain unless inter-haplotype fitness variation exists.It is therefore possible that, until we understand what fitness variation is possible amongst βS haplotypes, we will not be able to judge properly the role of gene conversion in its evolution.However, as indicated by our simulations, it is 
important to bear in mind that the observed present-day inter-haplotype fitness variation for both βS- and β0-thalassaemia may not necessarily reflect the full fitness range of all haplotypes that have arisen over the course of human evolutionary history and may not include the fittest haplotypes to have ever existed.There is good evidence for past gene conversion events in the β-globin cluster.Further and improved sequence data from this region of the genome will continue to provide insight into these processes, and may be able to indicate whether gene conversion has played a role in the generation of the classical βS haplotypes.However, given that gene conversion events can involve a few hundred bases, which for a conversion event involving the βS mutation is likely to include the highly conserved coding region of the β-globin gene, it may not always be possible to distinguish between de novo mutation and gene conversion at the βS locus using sequence data alone.We defined a patchwork βS-like pattern based on the geographical distribution of the classical βS haplotypes.As noted in the Introduction, classical βS haplotypes derive from RFLP analyses, which continue to be used in present-day studies of sickle-cell diversity.Using SNP markers, Hanchard and colleagues showed that the classical Benin and Senegal haplotypes both exhibit a high degree of long-range haplotypic similarity extending across more than 400 kb in three separate populations.Similar results were found in a Ghanaian population.Fine-scale sequence analysis has revealed heterogeneities within the classical Benin and Bantu haplotypes.However, the distribution of the observed polymorphisms suggests that these differences evolved after the emergence of βS on the distinct classical haplotypes, so as such, the broad pattern of the classical RFLP haplotypes remains.β0-Thalassaemia mutations completely eliminate β-globin production from the affected gene.Other mutations exist which reduce but do not eliminate the production of β-globin, designated β+-thalassaemia alleles.Like β0-thalassaemia, β+-thalassaemia mutations are associated with multiple haplotypic backgrounds whose distributions have been found to overlap.To some extent, therefore, the results we present here for β0-thalassaemia also apply to β+-thalassaemia.However, our present assumption of zero fitness for homozygotes for the relevant mutation is less reasonable for certain milder β+-thalassaemic variants.We predict that allowing for milder homozygous, or compound heterozygous, phenotypes will allow an overlapping haplotypic pattern to be obtained over a still wider range of parameter space.We propose to explore this in the future as part of a model that allows a wider range of malaria-protective globin mutations to compete with one another.Our work so far has focused on the β-globin locus.The study of the haplotypic evolution of α-globin will require a different modelling approach, incorporating duplicated α-globin genes in the α-globin cluster; the possible occurrence of the same variants in paralogous genes; and the wide array of α-thalassaemia variants that are observed in human populations, including both single and double gene deletions.In this way, the relative contributions of recurrent mutation, gene conversion and unequal crossover in generating complex genetic variation in the α-globin gene cluster can be explored, along with the roles of malaria selection and demographic factors in shaping the spatial pattern of this diversity.Our theoretical framework can 
be extended in a number of ways.One informative next step will be to allow βS and β0-thalassaemia mutations to compete within the same interconnected network, alongside β+-thalassaemia mutations and other malaria-protective alleles with less severe clinical outcomes, for example HbC in Africa and HbE in Asia.Further investigation into the origin, maintenance and fate of different β0-thalassaemia and βS haplotypes will also need to consider the possible influence of epistasis between mutations at the α- and β-globin loci; as well as interactions with other malaria-protective genetic variants elsewhere in the genome.Finally, Wilson and colleagues recently showed that, depending on their severity and frequency of recurrence, population bottlenecks can cause a soft selective sweep to become hard.It would be interesting to see how their results relate to the specific case of β-globin polymorphisms under malaria selection.Sickle-cell trait and β0-thalassaemia are two of our best examples of recent human evolution.Here we have shown that their differing selective sweep patterns may be just as much a product of different demographic conditions as they are of different mutation rates.Our results also suggest that inter-haplotypic fitness variation – a very real possibility for β-globin variants – both affects the probability of observing specific haplotype patterns and increases the probability of gene conversion having contributed to the variation present today.A better understanding of the fitness variation that is possible amongst β0-thalassaemia- or βS-associated haplotypes, will therefore be critical in determining the role of gene conversion in their evolution.The following are the supplementary data related to this article.Supplementary material: A detailed description of the model and its processes.The effects of population growth and initial total population size on the probability of the βS-like pattern.The total height of each bar indicates the overall probability of observing a βS-like pattern at different levels of maximum population growth rate, and for different initial population sizes.Each bar is based on 100 simulations.The mutation rate was low in panel and high in panel.Other parameter values are fixed as follows: d = 75, c = 7.5, m = 0.5, f = 0 and r = 10− 6.At the lower mutation rate, population growth rate increases the probability of the βS-like pattern when population size is small but decreases it when population size is large.At a higher mutation rate, population growth has a consistent negative effect on the probability of the βS-like pattern.The interaction between the effects of population subdivision and initial total population size on selective sweep outcomes.Each graph indicates how the probability of observing a β0-thalassaemia-like or βS-like pattern changes with different levels of population subdivision.Each data point is based on 100 simulations.Results are shown for two different initial population sizes: Ne = 25000 and Ne = 125000.Two different mutation rates are also shown: μ = 10− 7 and μ = 10− 6.In panels and, m = 2, to maximise the probability of a β0-thalassaemia-like pattern; in panels and m = 0.5, to maximise the probability of a βS-like pattern.Other parameter values are fixed as follows: g = 0, c = 7.5, f = 0 and r = 10− 6.When both population size and mutation rate are low, opportunities for new copies of the allele to arise are few and therefore the speed at which alleles move through the network is less important in determining the patterns that 
emerges.By contrast, when both population size and mutation rate are high, so many haplotypes are generated that the β0-thalassaemia-like pattern is guaranteed whilst the βS-like pattern is precluded, regardless of how easy or difficult it is for alleles to move through the network.The interaction between the effects of network connection structure and initial total population size on selective sweep outcomes.Each graph indicates how the probability of observing a β0-thalassaemia-like or βS-like pattern changes with different degrees of randomness in the migration network connection structure.Each data point is based on 100 simulations.Results are shown for two different initial population sizes: Ne = 25,000 and Ne = 125,000.Two different mutation rates are also shown: μ = 10⁻⁷ and μ = 10⁻⁶.In the panels concerning the β0-thalassaemia-like pattern, m = 2, to maximise the probability of a β0-thalassaemia-like pattern; in the panels concerning the βS-like pattern, m = 0.5, to maximise the probability of a βS-like pattern.Other parameter values are fixed as follows: g = 0, d = 75, f = 0 and r = 10⁻⁶.As for Supplementary Fig. S2, when both population size and mutation rate are low, opportunities for new copies of the allele to arise are few, and therefore the speed at which alleles move through the network is less important in determining the pattern that emerges.By contrast, when both population size and mutation rate are high, so many haplotypes are generated that the β0-thalassaemia-like pattern is guaranteed whilst the βS-like pattern is precluded, regardless of how easy or difficult it is for alleles to move through the network.Supplementary data to this article can be found online at http://dx.doi.org/10.1016/j.meegid.2015.09.018.
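The supplementary figures estimate the probability of each haplotype pattern as the fraction of 100 replicate simulations per parameter combination. The sketch below illustrates only that bookkeeping, not the authors' model: run_metapopulation_sim and classify_pattern are hypothetical placeholders, and the parameter names are assumptions, since the symbols d, c, m, f, g and r are not defined in this excerpt.

```python
# Minimal sketch (not the authors' code) of estimating pattern probabilities from
# repeated stochastic runs over a small parameter grid, as in the supplementary figures.
import random
from itertools import product

def run_metapopulation_sim(n_demes, migration, mutation_rate, pop_size, rng):
    """Placeholder for one stochastic run of the meta-population model.
    Here it simply returns a dummy count of distinct haplotypes carrying the allele."""
    return rng.randint(1, 5)

def classify_pattern(n_haplotypes):
    """Placeholder classification rule (illustrative only): a single dominant haplotype
    is treated as 'betaS-like', several co-occurring haplotypes as 'beta0-thal-like'."""
    return "betaS-like" if n_haplotypes == 1 else "beta0-thal-like"

def pattern_probability(params, n_runs=100, seed=1):
    """Fraction of n_runs simulations yielding each pattern for one parameter set."""
    rng = random.Random(seed)
    counts = {"betaS-like": 0, "beta0-thal-like": 0}
    for _ in range(n_runs):
        n_hap = run_metapopulation_sim(rng=rng, **params)
        counts[classify_pattern(n_hap)] += 1
    return {k: v / n_runs for k, v in counts.items()}

# Sweep two of the factors varied in the figure legends: initial population size and
# mutation rate (values taken from the legends); other parameter names are assumed.
for pop_size, mu in product([25_000, 125_000], [1e-7, 1e-6]):
    probs = pattern_probability({"n_demes": 75, "migration": 0.5,
                                 "mutation_rate": mu, "pop_size": pop_size})
    print(pop_size, mu, probs)
```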
The malaria-protective β-globin polymorphisms, sickle-cell (βS) and β0-thalassaemia, are canonical examples of human adaptation to infectious disease. Occurring on distinct genetic backgrounds, they vary markedly in their patterns of linked genetic variation at the population level, suggesting different evolutionary histories. βS is associated with five classical restriction fragment length polymorphism haplotypes that exhibit remarkable specificity in their geographical distributions; by contrast, β0-thalassaemia mutations are found on haplotypes whose distributions overlap considerably. Here, we explore why these two polymorphisms display contrasting spatial haplotypic distributions, despite having malaria as a common selective pressure. We present a meta-population genetic model, incorporating individual-based processes, which tracks the evolution of β-globin polymorphisms on different haplotypic backgrounds. Our simulations reveal that, depending on the rate of mutation, a large population size and/or high population growth rate are required for both the βS- and the β0-thalassaemia-like patterns. However, whilst the βS-like pattern is more likely when population subdivision is high, migration low and long-distance migration absent, the opposite is true for β0-thalassaemia. Including gene conversion has little effect on the overall probability of each pattern; however, when inter-haplotype fitness variation exists, gene conversion is more likely to have contributed to the diversity of haplotypes actually present in the population. Our findings highlight how the contrasting spatial haplotype patterns exhibited by βS and β0-thalassaemia may provide important indications as to the evolution of these adaptive alleles and the demographic history of the populations in which they have evolved.
157
Phytochemicals, antioxidant and antifungal activities of Allium roseum var. grandiflorum subvar. typicum Regel.
Much research has been performed to investigate the phytotherapeutic properties of the Allium genus.Recent studies have confirmed the antibacterial, antifungal, antiviral, immuno-stimulating, and antioxidant properties and cholesterol lowering effects of Allium species.Due to their beneficial health effects, extensive scientific investigations have been mainly conducted on the phytochemistry and biological properties of Allium sativum L. and Allium cepa L.Allium plants and their extracts contain a wide range of chemical compounds, including an abundance of bioactive constituents, namely organo-sulfur compounds, volatile sulfur compounds and proteins.Prostaglandins, fructan, vitamins, polyphenols, fatty acids and essential oils have also been identified.Because of their secondary metabolite production, in particular their content of sulfur compounds and numerous other phenolic compounds, Allium species are of great interest.This genus is also one of the major sources of polyphenol compounds.Polyphenols are bioactive molecules widely distributed in many plant species, with a great variety of structures, ranging from simple compounds to very complex polymeric substances.Among polyphenols, flavonoids are the best known and best characterized group.This group is further subdivided into classes which include flavones, flavonols, isoflavonoids and proanthocyanidins.The different polyphenol classes share the ability to act as chain-breaking antioxidants, which confers protection against the damage caused by free radicals to DNA, membrane and cell components.The total polyphenol content is a good indicator of the antioxidant capacity and various studies have reported a high correlation between antioxidant capacity and this value.Moreover, polyphenols are also shown to exhibit antibacterial, anti-inflammatory, antiallergenic, anti-arthrogenic and antithrombotic effects.Different flavonoids have shown distinct antioxidant and antibacterial activities.Although antioxidant activity of some Allium species has already been reported elsewhere, little attention was paid to their antifungal potential.Garlic extracts have been demonstrated to have antifungal properties against soil borne fungal pathogens.Allium roseum L. is a bulbous perennial plant native to the Mediterranean.A. roseum var.grandiflorum Briq.has large flowers with large obtuse tepals.Leaves are flattened, 5–10 mm wide and papillose on the edges; the plant grows in undergrowth and on grassy slopes.The subvariety typicum Regel.is characterized by inflorescences without bulblets and with well developed flowers.In Tunisia, this subvariety is widespread in the North East and the North West of the country and in mountainous regions located in the Center, and has also reached the South.Le Floc'h reported that A. roseum has been used since ancient times as a vegetable, spice or herbal remedy to treat headache and rheumatism.Investigations dealing with A. roseum growing wild in Tunisia are scarce.Recently, some studies have described the chemical composition of the essential oil and the antimicrobial and antioxidant activities of A. roseum var.odoratissimum Coss.collected in certain regions of Tunisia.Nevertheless, no information concerning A. roseum var.grandiflorum growing wild in Tunisia has been published, except the work of Ben Jannet et al.
who studied the chemical composition of essential oils of flowers and stems of the same species as reported upon in this study.Therefore, the aim of the present work was to study the phytochemical content of the essential oil and organic extracts of A. roseum var.grandiflorum subvar.typicum Regel.and their antioxidant and antifungal properties, as this plant is used in Tunisian traditional medicine or generally as food.A. roseum var.grandiflorum subvar.typicum was collected in the region of Sousse, during its blooming stage in March 2007 and in March 2011.Identification was performed according to the “Flora of Tunisia”, by the botanist Pr.Fethia Harzallah-Skhiri, and a voucher specimen has been deposited in the laboratory of Genetic Biodiversity and Valorisation of Bioresources, High Institute of Biotechnology of Monastir, Tunisia.Plant samples were separated into three parts: flowers, stems and leaves, and bulbs and bulblets.Fresh flowers and both stems and leaves of A. roseum var.grandiflorum were air-dried for five weeks then ground into fine powder, whereas the bulbs and bulblets were used fresh.One hundred grams of each plant part was extracted separately at room temperature with 200 mL of acetone–H2O mixture.Extraction was performed twice for 5 days at room temperature.The resulting extract was filtered and the solution was evaporated to remove acetone under reduced pressure in a rotary evaporator.The remaining aqueous solution was extracted sequentially with the following solvents: chloroform, ethyl acetate and butanol.The extracts were separately concentrated with a rotary evaporator under reduced pressure and stored at 4 °C until tested.The fresh stems, leaves and flowers were separately submitted to hydrodistillation in a Clevenger-type apparatus for 4 h.The essential oils were collected, dried over sodium sulfate, weighed and stored in sealed glass vials in a refrigerator at 4 °C until use.Gas chromatograph: HP 5890-series II instrument equipped with a flame ionization detector, HP-5 fused silica capillary column, carrier gas nitrogen.The oven temperature was programmed from 50 °C to 280 °C at 5 °C/min.Injector and detector temperatures were 250 °C and 280 °C, respectively.Volume injected: 0.1 μL of 1% hexane solution.The identification of the components was performed by comparison of their retention times with those of pure authentic samples and by means of their linear retention indices relative to the series of n-hydrocarbons.GC–EIMS analyses were performed with a Varian CP-3800 gas chromatograph equipped with an HP-5 capillary column and a Varian Saturn 2000 ion trap mass detector.Analytical conditions: injector and transfer line temperatures 220 and 240 °C respectively; oven temperature programmed from 60 °C to 240 °C at 3 °C/min; carrier gas helium at 1 mL/min; injection of 0.2 μL; split ratio 1:30.Identification of the constituents was based on comparison of the retention times with those of authentic samples, comparing their linear retention indices relative to the series of n-hydrocarbons and on computer matching against commercial and home-made library mass spectra built up from pure substances and components of known oils and MS literature data.Moreover, the molecular weights of all the identified substances were confirmed by GC–CIMS, using MeOH as the CI ionizing gas.The content of total phenolic compounds in organic extracts was measured with a spectrophotometric method based on a colorimetric oxidation/reduction reaction.The oxidizing agent was the Folin–Ciocalteu phenol reagent.50 μL of each diluted organic extract was added to
750 μL of distilled water/Folin–Ciocalteu solution.After 3 min, 200 μL of sodium carbonate solution was added.The reaction mixture was kept in a boiling water bath for 1 min.For the control, 50 μL of methanol was used.The absorbance was measured at 765 nm.Tests were carried out in triplicate.Quantification was obtained by referring the absorbance to the calibration curve prepared with gallic acid, and results are expressed as mg of gallic acid equivalents (GAE) per 100 g Dry Weight (DW).Total flavonoid content in each organic extract was determined using a spectrophotometric method with slight modifications, based on the formation of the flavonoid–aluminum complex.To 0.5 mL diluted organic extract, 0.5 mL of 2% aluminum chloride dissolved in methanol was added.The sample was incubated for 15 min at room temperature and the absorbance of the reaction mixtures was measured at 430 nm.Rutin was used as the standard flavonoid.The total flavonoid content was expressed as mg of rutin equivalents per 100 g DW, by using a standard graph, and the values were presented as means of triplicate analyses.Total flavonol content in each organic extract was determined according to Miliauskas et al. with slight modifications.One milligram of each of the organic extracts was dissolved in 1 mL of methanol.One milliliter of this solution was added to 1 mL of 2% AlCl3 dissolved in methanol and 3 mL of 5% sodium acetate in methanol.After incubation at room temperature for 2 h 30 min, the absorbance was measured at 440 nm.Rutin was used as the standard flavonol.The flavonol content was expressed as mg of rutin equivalents per 100 g DW.The experiments were always performed in triplicate.The antioxidant potential of A. roseum var.grandiflorum organic extracts and of essential oils was determined by two complementary radical scavenging assays: the 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical assay and the 2,2′-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) radical cation assay.A. roseum var.grandiflorum samples were weighed and dissolved in absolute ethanol at concentration ranges of 0.5–5 mg/mL.The hydrogen atom- or electron-donation ability of A. roseum var.grandiflorum organic extracts and essential oils was measured by the bleaching of a purple-colored methanol solution of DPPH.This spectrophotometric assay uses the stable 2,2′-diphenyl-1-picrylhydrazyl radical as a reagent and was adapted from Ramadan et al. with slight modifications.Thus, 950 μL of 10⁻⁴ M DPPH methanol solution was added to 50 μL of A.
roseum var.grandiflorum diluted samples.Each mixture was homogenized and allowed to stand at room temperature in the dark for 30 min.The absorbance was read against a blank at 515 nm.The decrease in absorbance induced by the tested samples was compared to that of the positive control Trolox.The antioxidant activities of samples were expressed by IC50 values which indicate the concentration of samples required to scavenge 50% of the DPPH radical.All the tests were carried out in triplicate.The scavenging activity of the ABTS+ radical was measured as described by Re et al.ABTS+ radical cation was generated by the interaction of ABTS and potassium persulfate.The mixture was allowed to stand at room temperature in dark for 12–16 h to give a dark green solution.Oxidation of ABTS+ begins immediately, but the absorbance was not maximal and stable until more than 6 h had elapsed.The radical cation was stable in this form for more than 2 days when stored in the dark at room temperature.Prior to assay, the dark green solution was diluted in ethanol to give an absorbance at 734 nm of 0.70 ± 0.03 in a 1 cm cuvette.One milliliter of the resulting solution was mixed with 10 μL of A. roseum var.grandiflorum diluted samples.The absorbance was measured exactly 20 min after initial mixing.Trolox was used as an antioxidant standard.All tests were performed in triplicate.The antioxidant activities of samples were expressed by IC50 values, which indicated the concentrations of samples required to scavenge 50% of ABTS+ radical, expressed also as Trolox Equivalent Antioxidant Capacity.The TEAC value of a sample represents the concentration of a Trolox solution that has the same antioxidant capacity as this sample.Fungi were collected from various localities in Tunisia.Table 1 lists each fungus, the host, plant part from which it was isolated, locality and date of collection.All fungal isolates were identified and samples of each fungus were deposited in the collection bank at the Plant Pathology Laboratory.Fungal isolates were maintained on Potato Dextrose Agar, stored at room temperature and sub-cultured once a month.The isolates were dispensed in sterile Petri dishes and grown for 7–10 days before use.The organic extracts and the essential oils of A. roseum var.grandiflorum were dissolved in methanol and dimethyl sulfoxide, respectively, in order to have an initial concentration of 10 mg/mL.The disk qualitative and quantitative antifungal assays of A. roseum var.grandiflorum organic extracts and essential oils were carried out using the disk diffusion method.A conidial suspension of each fungus was added into each Petri dish.Thereafter, 15 mL of Potato Dextrose Agar medium supplemented with streptomycin sulfate was added to the dish containing the spore suspension.Once the substrate solidified a disk of Whatman paper no. 3 was soaked with 20 μL of each A. 
roseum var.grandiflorum sample, allowed to dry and placed on the inoculated Petri dishes.In the case of the control, the disk was moistened with methanol for the organic extract and with DMSO for the essential oil.Then the plates were incubated at 25 °C for 8 days.The antifungal activity was evaluated by calculating the percentage of mycelium growth inhibition, %I = ((C − T) / C) × 100, according to the method of Singh et al., where C is the mean mycelium growth of the controls and T is the mean mycelium growth of those treated with organic extracts or essential oil.All the tests were performed in three replicates.Simple regression analysis was performed to calculate the dose–response relationship of standard solutions used for calibration as well as samples tested.Linear regression analysis was performed.All the results are expressed as mean ± standard deviation of three parallel measurements.The data were processed using Microsoft Excel 2003, then subjected to one-way analysis of variance, and the significance of differences between means was calculated by the Duncan multiple range test using SPSS for Windows; p values < 0.05 were regarded as significant.Table 2 gives the abbreviation and the yield for each organic extract and essential oil from A. roseum var.grandiflorum.Extract yields varied from 0.074% to 1.109%.The yield of the essential oil from stems, leaves and flowers of A. roseum var.grandiflorum was 0.008, 0.010 and 0.026%, respectively.The chemical composition of essential oils of flowers and stems of A. roseum has been studied recently by Ben Jannet et al. while that from the leaves has never been investigated.The essential oil of A. roseum var.grandiflorum leaves was light yellow, liquid at room temperature and its odor was piquant.The composition of this oil was analyzed and reported for the first time in this study.The identified components, their percentages, identification methods, calculated linear retention indices and their comparison according to the LRI-values previously published in the literature are listed in Table 3.A total of 13 constituents accounting for 96.8% of the whole oil were identified.The leaf essential oil was rich in carboxylic acids.Hexadecanoic acid was detected as the major component, with 75.9% of the totality of the oil.It was identified for the first time in the essential oil of this species.Furthermore, tetradecanoic acid and pentadecanoic acid constituted 2.9% of this oil.The three carboxylic acid esters were particularly abundant with 10% of the totality of this oil, of which only methyl hexadecanoate was detected in appreciable amounts.Two carotenoid-derived compounds, hexahydrofarnesyl acetone and β-ionone, were identified.Four sulfur compounds were identified, namely dimethyl tetrasulfide, dipropyl trisulfide, di-2-propenyl trisulfide and di-2-propenyl tetrasulfide, which accounted for 3.8% of the totality of the oil.Hexadecanoic acid was found to be the most abundant component in some essential oils such as those from Oryza sativa L. and from Ardisia brevicaulis Diels leaves.In the case of Pyrenacantha staudtii Engl., hexadecanoic and tetradecanoic acids were the major components in the leaf essential oil.Hexadecanoic acid is also present in some Allium species, where it represented 19.03% and 16.75% of the totality of the essential oil in Allium jesdianum Boiss aerial parts and Allium ursinum L. leaves, respectively.In contrast, only smaller percentages of hexadecanoic acid and tetradecanoic acid were present in the essential oil of A.
roseum var.odoratissimum Coss.flowers.However, in the previous study by Ben Jannet et al., hexadecanoic acid was absent from the essential oils from flowers and stems of A. roseum var.grandiflorum.Hexahydrofarnesyl acetone was detected in very small amounts in the essential oil of Allium sphaerocephalon L. inflorescences compared to the results in this study.Methyl tetradecanoate was also present in lower percentages in flower and stem essential oils in the same species A. roseum var.grandiflorum studied by Ben Jannet et al.The latter authors also noted the presence of methyl pentadecanoate and methyl hexadecanoate in stem oils.Sulfur components were also detected in oils of some Allium species.Dimethyl tetrasulfide was present in small amounts in the essential oils of A. roseum var.odoratissimum flowers.Di-2-propenyl trisulfide was previously detected in small amounts in the leaves' essential oil of three A. ursinum ecotypes.Finally, di-2-propenyl tetrasulfide was present in A. ursinum leaves' essential oil.As shown in Table 4, the total phenol content in the nine organic extracts ranged from 31.83 ± 0.04 to 193.27 ± 0.44 mg GAE/100 g DW.FlB extract contained the highest total phenol content, followed by BbEa extract.BbB, SLCh and SLB extracts had moderate total phenol content compared to the lowest amount in FlEa extract.The highest level of total flavonoid content was measured in the FlB extract, followed by the SLCh extract.Significant amounts were also obtained in SLB and FlCh extracts.Thus, the flower part and both stem and leaf parts have the highest content of total flavonoid in chloroform and butanol extracts, while the ethyl acetate extracts have a moderate content.In contrast, all extracts of bulbs and bulblets contained quite low amounts of these chemicals.The total flavonoid content increased from the ethyl acetate extract to the butanol extract then the chloroform extract for all the plant parts, except in the case of the flowers where the total flavonoid content was higher in butanol than in chloroform extract.The total flavonol content ranged from 35.36 ± 0.19 to 179.72 ± 0.69 mg rutin/100 g DW.The highest level was measured in FlB extract.Interesting results were also observed for chloroform and butanol extracts of stems and leaves.The lowest amount was detected in ethyl acetate extract for all plant parts.The most significant of the observations was in the butanol flower extract of A. roseum var.grandiflorum, which was similar to the results of Dziri et al., where the highest total phenol content and flavonoid content were detected in methanol flower extract of A. roseum var.odoratissimum.Concerning the total flavonol content, interesting results were also achieved in butanol and chloroform bulb extracts of A. roseum var.grandiflorum.Previous studies reported high values for the total flavonol content in bulbs of A. cepa.Analysis of the polyphenol content in A. roseum var.grandiflorum indicated that the flowers' butanol extract contained the highest concentration of total phenol, flavonoid and flavonol.This organ-dependent distribution of total polyphenols has recently been reported for A. roseum var.odoratissimum and for other Allium species, with the flowers and leaves being the richest organs in terms of total polyphenols.From a biological perspective, the observed trend in polyphenol accumulation in some organs suggests that they may have a protective role against some abiotic factors, such as UV-B radiation.On the other hand, the obtained results with A.
roseum var.grandiflorum extracts showed that total polyphenol content was dependent on the solvent nature, because of the different degrees of polarity among different compounds.The lower the IC50 value, the greater the free radical scavenging activity observed.All the organic extracts were able to reduce the stable, purple-colored DPPH radical into the yellow-colored DPPH-H, reaching 50% reduction.As shown in Table 5, IC50 values of the DPPH radical scavenging activity ranged from 0.35 ± 0.01 to 4.58 ± 0.06 mg/mL for the nine organic extracts.Essential oils had IC50 values > 5 mg/mL.Interesting results were shown by the SLEa extract.This extract can be considered to have a high percentage of inhibition of DPPH, because after completing the reaction, the final solution always possessed some yellowish color.FlEa, SLB and SLCh extracts also gave good results with IC50 values of 0.54, 0.65, and 0.96 mg/mL, respectively.The different extracts of A. roseum var.grandiflorum bulbs and bulblets have the lowest DPPH radical scavenging activity, with IC50 values ranging from 3.57 to 4.58 mg/mL.The DPPH radical scavenging capacity of the organic extracts and essential oils has not been previously reported.In earlier studies, the IC50 values for the Italian A. roseum were determined for the flowers and leaves' aqueous extracts and in the leaves' hydroethanolic extract.The same authors reported that the bulbs and bulblets from the same species exhibited a lower effectiveness, in good agreement with results in this study, where the bulbs are the part of the plant presenting the weakest antioxidant activity.In the DPPH assay, the free radical scavenging activities decreased from ethyl acetate extracts to butanol and chloroform extracts for each of the plant parts of A. roseum var.grandiflorum, reflecting differences in secondary metabolite content resulting from the solvent–solvent partitioning processes.As shown in Table 5, the IC50 values of the ABTS+ radical scavenging activity ranged from 0.71 ± 0.01 to 1.49 ± 0.02 mg/mL.However, 50% inhibition was not achieved for the essential oils at concentrations up to 5 mg/mL, mimicking that observed in the DPPH assay.The best results were observed with the SLEa and the SLB extracts, followed by BbEa, SLCh, FlEa and FlB extracts.In contrast, FlCh and BbCh extracts had the lowest ABTS+ radical scavenging activity.The TEAC value is defined as the molar concentration of Trolox solution having an antioxidant capacity equivalent to the sample solution being tested.Table 5 illustrates the TEAC values of A. roseum var.grandiflorum organic extracts, where BbB extract and essential oils did not show any activity.The best values of TEAC were obtained with the SLEa and SLB extracts.Statistically, all extracts were more active than Trolox, except the chloroform extracts of the flowers and of the bulbs and bulblets, which had the lowest TEAC values.If each organ is examined separately, it is remarkable to note that the best antioxidant activity was generally found in the ethyl acetate extracts, followed by the butanol and chloroform extracts, in that order.TEAC values of the extracts decreased in the following order: SLEa > SLB > SLCh; FlEa > FlB > FlCh.Ethyl acetate seems to be the solvent that best concentrates antioxidant substances of intermediate polarity; this is in accordance with the findings of Anagnostopoulou et al.In this study, both leaves' and stems' extracts of A.
roseum var.grandiflorum possessed the higher TEAC values and best antioxidant activity.In the literature, the highest antioxidant activity was observed in the leaves of Allium schoenoprasum L., Allium giganteum L. and A. roseum.The antifungal activity of the tested essential oils and the different organic extracts from A. roseum var.grandiflorum on some fungi varied according to the type of solvent used, the essential oil and the fungus tested.Interesting results were obtained against Fusarium solani f. sp. cucurbitae.In fact, all organic extracts and all essential oils have antifungal activity, except the FlB extract.The percentage of mycelium growth inhibition ranged from 28 to 56% compared to the positive control benomyl.SLEa and BbCh extracts displayed a highly significant growth inhibition of this fungus.BbB, FlEa and FlCh extracts revealed potential to inhibit the mycelium growth of F. solani f. sp. cucurbitae with a value of the inhibition of 36% for each extract.SLCh extract has the lowest inhibition against F. solani f. sp. cucurbitae.As for the essential oils, LEO exhibited the highest inhibition with 39.13%, followed by FlEO then SEO.The antifungal growth inhibition varied from 30.43 to 52.17% for organic extract against Botrytis cinerea, with the exception of butanol extracts which showed no effect against this fungus.In contrast, the most active extracts were the chloroform and ethyl acetate samples of the different parts of A. roseum var.grandiflorum.B. cinerea was also significantly inhibited by the leaf essential oil.An interesting inhibition of Alternaria solani was observed for the stems and leaf extracts, where SLB was as potent as benomyl on mycelium growth, followed by SLCh and SLEa.While a moderate inhibition was reported for FlCh and BbCh.All the other extracts and essential oils did not inhibit this fungus.Pythium ultimum was inhibited only by the butanol extracts of bulbs and bulblets and of stems and leaves.The hyphal growth of Rhizoctonia solani and Fusarium oxysporum f. sp. niveum was only reduced by the two extracts, SLEa and BbCh.In general, organic extracts and essential oils of A. roseum var.grandiflorum showed strong antifungal activity against F. solani f. sp. cucurbitae.This is supported by previous studies on the antifungal activity of A. sativum against F. solani,Sacc.and other fungi, where the A. sativum extract completely inhibited the mycelium growth of F. solani.In another study, the mycelial development of the phytopathogenic fungi including F. solani and R. solani Kühn was strongly inhibited by A. sativum extract.In addition, the growth of P. ultimum var.ultimum was entirely blocked by aqueous extract of A. sativum.Singh and Singh also found that garlic oil suppressed sclerotial formation in R. solani and killed the microorganism when hyphal disks were exposed to the oil in the Petri plate assay.It was concluded that the inhibition of sclerotia was in part due to the result of the inhibition of hyphal growth.Pârvu et al. and Pârvu and Pârvu investigated the antifungal activities of some Allium sp.In fact, Allium obliquum L. hydroethanolic extracts inhibited B. cinerea and F. oxysporum and Allium fistulosum extract had antifungal effects against F. oxysporum.Furthermore, Pârvu et al. mentioned that A. ursinum flower and leaf hydroethanolic extracts had antifungal effects against B. cinerea and F. oxysporum.In another study, the inhibitory effect of the A. 
ursinum flower extract was stronger than that of the leaf extract for all concentrations and for all the tested fungi."In our study, we noted that the inhibition growth of B. cinerea by the flower chloroform extract was higher than that by the stems and leaves' chloroform extract. "In contrast, the inhibition of growth by the flower ethyl acetate extract was lower than stems and leaves' ethyl acetate extract against B. cinerea.Concerning Alternaria sp., Zarei Mahmoudabadi and Gharib Nasery reported a remarkable activity of Allium ascalonicum L. bulbs aqueous extract.Probably, the variations observed between our study and those of Pârvu et al. and those of Pârvu and Pârvu could be due to the different plants and fungi species, the extraction solvents and experimental conditions.In fact, relative variation in the content of polyphenols in A. roseum var.grandiflorum organic extracts was speculated to be one of the reasons for the variation in antifungal activity.Plant extracts have antibacterial, antiviral and fungicidal effects both in vitro and in vivo.Their properties were mainly attributed to alkaloids, several flavonoids and phenolic acids.The antifungal activity of the essential oil of A. roseum var.grandiflorum leaves against F. solani f. sp. cucurbitae and B. cinerea may be explained by the presence of hexadecanoic acid as the major component of the essential oil, where hexadecanoic acid is known to have potential antibacterial and antifungal activity.A number of fatty acids have been shown to inhibit or stimulate the growth and sporulation of pathogenic fungi in plants.Moreover, Liu et al. mentioned that this saturated fatty acid has stronger antifungal activity than unsaturated ones.The latter authors tested the effect of palmitic acid at different concentrations against A. solani, two varieties of F. oxysporum and other fungi and reported that at concentrations of 100, 1000 and 2000 μM of hexadecanoic acid, mycelium growth varied between 39 and 52 mm against tested fungi reflecting a weak inhibition.In contrast, at the highest concentration, palmitic acid showed significant inhibitory effect on the growth of all tested fungi, reducing the growth of A. solani, F. oxysporum f. sp. cucumerinum, and F. oxysporum f. sp. lycopersici by 42%, 40%, and 36%, respectively.This suggests that this fatty acid, the hexadecanoic acid, might be an alternative approach to integrate control of phytopathogens.The polyphenolic constituents, the biological activities and the essential oil profile of A. roseum var.grandiflorum could represent a fingerprint for chemotype differentiation.In conclusion, this plant could be a promising source for health and nutrition due to their powerful antioxidant properties and richness in polyphenols.Additionally, A. roseum and its antimicrobial compounds could also be used in agriculture as an alternative pesticide in the control of plant diseases.The potential for developing preparations of this Allium species for use as an alternative to synthetic fungicides for organic food production has to be considered.
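As a concrete illustration of the two response metrics used above, the sketch below computes the percentage of mycelium growth inhibition, %I = ((C − T)/C) × 100, and reads an IC50 off a dose–response series. The paper reports fitting dose–response relationships by linear regression; the simple interpolation used here, and all numerical values, are illustrative assumptions rather than data from the study.

```python
# Sketch of the antifungal and antioxidant response metrics described in the methods.

def percent_inhibition(control_growth_mm, treated_growth_mm):
    """%I = ((C - T) / C) * 100, with C = mean control growth and T = mean treated growth."""
    return (control_growth_mm - treated_growth_mm) / control_growth_mm * 100.0

def ic50(concentrations_mg_ml, inhibition_pct):
    """Concentration giving 50% radical scavenging, by linear interpolation between the
    two measured points that bracket 50% (returns None if 50% is never reached)."""
    pairs = sorted(zip(concentrations_mg_ml, inhibition_pct))
    for (c_lo, i_lo), (c_hi, i_hi) in zip(pairs, pairs[1:]):
        if i_lo <= 50.0 <= i_hi:
            if i_hi == i_lo:                      # flat segment exactly at 50%
                return c_lo
            frac = (50.0 - i_lo) / (i_hi - i_lo)
            return c_lo + frac * (c_hi - c_lo)
    return None

# Illustrative values only (not data from the study):
print(percent_inhibition(control_growth_mm=46.0, treated_growth_mm=22.0))  # ~52.2 %
print(ic50([0.5, 1.0, 2.5, 5.0], [30.0, 48.0, 66.0, 80.0]))                # ~1.17 mg/mL
```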
The chemical composition of the essential oil hydrodistilled from Allium roseum var. grandiflorum subvar. typicum Regel. leaves was analyzed by GC and GC/MS. Nine extracts obtained from the flowers, the stems and leaves, and the bulbs and bulblets of A. roseum var. grandiflorum were tested for their total phenol, total flavonoid and total flavonol content. All these extracts and the essential oils from fresh stems, leaves and flowers were screened for their possible antioxidant and antifungal properties. The results showed that hexadecanoic acid was detected as the major component of the leaf essential oil (75.9%). The ethyl acetate extract of stems and leaves had the highest antioxidant activity, with a 50% inhibition concentration (IC50) of 0.35 ± 0.01 mg/mL for DPPH and 0.71 ± 0.01 mg/mL for ABTS+. All the extracts appeared to be able to inhibit most of the tested fungi. The essential oil of the leaves had an inhibitory effect on the growth of Fusarium solani f. sp. cucurbitae and Botrytis cinerea (39.13% and 52.50% inhibition, respectively). This could be attributed to the presence of hexadecanoic acid, known for its strong antifungal activity. In conclusion, in addition to the health benefits of A. roseum, it can be used as an alternative pesticide in the control of plant disease and in the protection of agricultural products.
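For the total phenol assay described in the methods above, quantification relies on a gallic acid calibration curve. The sketch below shows one way such a conversion to mg GAE per 100 g dry weight could be implemented; the calibration points, dilution factor, extract volume and sample mass are hypothetical and not taken from the paper.

```python
# Sketch: convert a Folin-Ciocalteu absorbance reading into gallic acid equivalents
# via a linear calibration curve (all numbers below are illustrative assumptions).

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Hypothetical gallic acid standards (mg/mL) and their absorbances at 765 nm.
std_conc = [0.025, 0.05, 0.1, 0.2, 0.4]
std_abs  = [0.11, 0.21, 0.40, 0.79, 1.55]
slope, intercept = linear_fit(std_conc, std_abs)

def gae_mg_per_100g_dw(sample_abs, extract_volume_ml, dilution_factor, sample_mass_g):
    """Back-calculate mg gallic acid equivalents per 100 g dry weight."""
    conc_mg_ml = (sample_abs - intercept) / slope          # mg/mL in the diluted extract
    total_mg = conc_mg_ml * dilution_factor * extract_volume_ml
    return total_mg / sample_mass_g * 100.0

# Example with made-up values: absorbance 0.52, 50 mL of extract, 10-fold dilution,
# from 100 g of plant material -> roughly 65 mg GAE/100 g DW.
print(round(gae_mg_per_100g_dw(0.52, extract_volume_ml=50, dilution_factor=10, sample_mass_g=100), 1))
```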
158
Ni-Cu interdiffusion and its implication for ageing in Ni-coated Cu conductors
There is growing interest in the use of metal-coated conductors , such as Cu coated Al wires and Ni coated Cu wires , for interconnection applications including power transmissions, electrical interconnections and microcomponent connections, because these conductors offer advantages compared to conventional, single-metal conductors.Consisting of a low-resistivity Cu shell and a light weight core of Al, Cu coated Al wires offer a high conductivity to weight ratio to the automobile industry.However, the temperature limit of such wires is about 200 °C, beyond which the growth of intermetallic compounds can lead to embrittlement .In addition, high temperature oxidation of Cu limits the use of Cu coated Al wires in future aerospace applications where electrical motors will be required to operate at elevated temperatures.Since a Ni coating can protect Cu conductors at elevated temperatures, Ni coated Cu wires are preferable for high temperature applications.A primary requirement for any interconnection system to be employed in the next generation of aerospace applications is that it should perform reliably throughout the lifetime of the motor .Therefore, the effects of thermal ageing on the mechanical and electrical properties of these wires are of great concern to the designers of the electrical motors.Kim et al. showed that the development of intermetallic phases during the ageing of Cu coated Al wires could lead to progressive embrittlement of the conductors at high temperatures , and damage to motor components.Unfortunately, the physical processes occurring in composite wire conductors during extended thermal exposure have received comparatively little attention.In the work of Loos and Haar , a thin layer of Ni was applied by electroless plating onto the surface of Cu microstrips in order to prevent polymer dielectrics from being contaminated by Cu during the high temperature curing of the polymer materials at 350 °C; they found a measurable increase in the resistance of the Ni-coated Cu microstrips as a result of Ni diffusing into the Cu.There is also evidence that the effective electrical resistivity of Ni–Cu foil sandwich assemblies increases steadily when subjected to high temperature environments .This is consistent with the known behaviour of Ni–Cu alloys, where effective electrical resistivity increases significantly with Ni content.Ho et al. 
noted that Ni–Cu alloys containing more than 40% Ni exhibit significantly higher resistivity than end member compositions due to short-range clustering.The practical impact of increased electrical resistivity of the wire used in a machine is that the I²R losses may increase with time, degrading the performance of the electrical system.Thus Ni–Cu interdiffusion processes need to be understood in order to predict the long-term behaviour of the conductors and the aerospace or other systems in which they are used.Diffusion in the Ni–Cu binary system has been extensively studied using a variety of geometries and temperature ranges.Whilst there is broad agreement between the published data, there are clear differences relating to materials used and experimental configurations.Hart suggested that microstructural factors including grain size and morphological parameters could govern diffusion processes at low and intermediate temperatures.In view of the variation in published Ni–Cu diffusivity values, it is essential to take into account microstructural and related factors to ensure the diffusion data are appropriate for individual materials and specific engineering applications.The objectives of the present study are to investigate the effective resistivity of typical Ni coated Cu conductors at elevated temperature, Ni–Cu interdiffusion processes and their behaviour under conditions relevant to Cu conductor performance, and to develop a diffusion-based model to predict the long-term electrical properties of Ni-coated conductors at high temperatures.The wires selected for investigation were obtained from Compagnie Générale des Plastiques.The wires are typical AWG20-CLASS3 and AWG18-Class27 Ni coated Cu wires in accordance with ASTM standard B355; details of the wires are presented in Table 1.Typically, very thin coatings of Ni are applied to the surface of the Cu conductor of AWG20-Class3 wires by conventional electroplating methods, whilst hot-cladding methods are often used to apply thicker Ni coatings on AWG18-Class27 wires.Small fractions of as-received AWG20-Class3 and AWG18-Class27 wires and samples thermally aged at 400 °C for periods of 600 h or 1200 h were cut and mounted in epoxy resin.The cross-sections of the wires were ground on SiC papers to 1200 grade and polished down to 1 μm diamond paste and finally with colloidal silica suspension.After carbon coating, the morphologies were examined in detail using Philips XL30 and Zeiss EVO 60 Scanning Electron Microscopes; compositional variation across the Ni–Cu surface region was investigated by Energy Dispersive Spectroscopy techniques.To aid microstructural investigations, selected samples were etched for 5 s in an ethanol based solution containing 20 vol% HCl and 5 wt% FeCl3; the average grain sizes of the wires were determined by the linear intercept method.Ni–Cu interdiffusion couples were prepared as Ni–Cu–Ni assemblies using high purity Ni and Cu foils.The foils were cut into squares of side 10 mm, polished down to 0.25 μm diamond paste, cleaned in acetone and dried.The Ni–Cu–Ni foils were stacked together and placed between a pair of alumina plates and held in close contact by alumina screws.The diffusion couples were annealed at temperatures in the range 400–600 °C for periods of 48–192 h.
All experiments were performed in flowing argon in a Vecstar VTF-7 tube furnace, using a heating rate and cooling rate of 5 °C/min.After cooling, metallurgical cross-sections of the annealed Ni–Cu diffusion couples were prepared.The variation in Ni and Cu concentrations across the interfaces was determined by use of a Philips XL30 SEM equipped with EDS; Ni and Cu concentrations were obtained at 2 μm intervals.At least five compositional profiles were collected for each sample interface.Optical micrographs for cross-sections of etched, as-received AWG20-Class3 and AWG18-Class27 Ni-coated Cu wires are presented in Fig. 1.From the micrographs, the Ni coating thickness was found to be 6 μm for AWG20-Class3 and 75 μm for AWG18-Class27 wire, consistent with ASTM standard B355 .The grain morphology of the Cu core of the AWG20-Class3 wire is characterized by small uniform grains, and a mixture of hexagonal and elongated grains; with average grain size of 7 μm.In contrast there were large, elongated 100 μm grains in the Cu core of the AWG18-Class27 wires.Fig. 2 shows the effective electrical resistivity at 400 °C, as a function of ageing time, for AWG20-Class3 and AWG18-Class27 Ni coated Cu wires.The effective electrical resistivity of the AWG20-Class3 Ni-coated Cu wire annealed at 400 °C gradually increased from 4.26 × 10−8 to 4.60 × 10−8 Ω m after 5500 h; in contrast the effective electrical resistivity of the AWG18-Class27 Ni-coated Cu wire increased from 5.58 × 10−8 to 5.70 × 10−8 Ω m after 5500 h.This indicates that effective electrical resistivities of AWG20-Class3 and AWG18-Class27 annealed at 400 °C, increased by 6.9% and 2.3%, respectively.Similarly, when the AWG20-Class3 wire was annealed at 500 °C for 1200 h its effective resistivity increased by 13.5%.Such increases are of potential concern to designers of high temperature electrical systems as it implies that with increasing temperature of operation, there will be an increase in the overall wire resistance, and therefore greater power demands.Understanding the mechanisms of the increase in effective resistivity with ageing is important as it could help to identify routes to minimize its effect.EDS compositional analysis of cross-sections of as-received and heat-treated Ni-coated wires showed marked differences.In the AWG20-Class3 as-received wires there was a clear Ni region, approximately 6 μm thick, on the surface of the Cu core, in accordance with the ASTM standard .Analysis of the heat-treated wires indicated that the Ni-rich coatings had reduced in thickness and there was evidence of interdiffusion between Ni and Cu; this increased with annealing time.Even with the thicker coatings on the AWG18-Class27 wire, the short diffusion profiles and curved sample geometry made it impossible to gather statistically reliable compositional profiles for quantitative analysis.These observations indicate that diffusion processes occur upon extended heat treatment.Kim et al. and Gueydan et al. showed that interdiffusion and the development of intermetallic compounds can give rise to changes in both mechanical and electrical properties.Indeed, Ho et al. compiled electrical resistivity data for the Ni–Cu binary alloy system as functions of alloy composition and temperature and found that the resistivity of a bulk Ni–Cu alloy varies with its composition in a grossly non-linear way; for example 50–50%, Ni–Cu compositions are significantly more resistive than either of the two end members.According to Ho et al. 
, Ni–Cu alloys do not form complete solid solutions with truly random atomic arrangements; at certain alloy compositions, Ni tends to segregate from Cu and form short-range clusters, thereby causing the resistivity of the alloy to be higher than either of the two end members.In the annealed Ni-coated Cu wires, the migration of Ni into the Cu core led to the formation of a Ni–Cu alloy zone at the Ni–Cu interface with composition gradually changing; this alloy zone should have a much higher resistivity than either pure Ni or Cu.It is then plausible that this alloy zone controlled the overall/effective resistivity of the wire, thus causing an increase in the effective resistivity of the annealed Ni-coated Cu wires.Fig. 3 shows an optical micrograph of a cross-section of an as-fabricated Ni–Cu diffusion couple used for interdiffusion experiments.By chemically etching, the morphology was revealed: uniform and hexagonal-shaped Cu grains were found in the Cu foil used for diffusion experiments.Using the linear intercept method, the average grain size of the Cu was found to be 18 μm.After high temperature annealing, EDS compositional profiles were collected for both elements across the metallic interfaces using methods similar to those employed by Arnould and Hild .Although there are significant uncertainties in compositional analyses for very short diffusion lengths, the relative errors decrease and vanish for long diffusion profiles .In the current Ni–Cu diffusion experiments, diffusion lengths of 18–50 μm were obtained in all annealed couples.For such long diffusion lengths, a resolution of ±2% can be achieved using EDS techniques .A typical Ni concentration profile is shown in Fig. 4.The central part of the profile, representing the interdiffusion region, is approximately 24 μm long.Whilst the activation energy varies slightly with composition from 79 to 90 kJ mol−1, the values fall well within the range of published data for temperatures of 200–1050 °C.At the highest temperatures, working with single crystals Fisher and Rudman reported a high activation energy of 217.6 kJ mol−1.Schwarz et al. found a similar value at temperatures of 500–650 °C and attributed the high activation energy to a vacancy/lattice diffusion mechanism.In the lower temperature range 200–550 °C, activation energies for Ni–Cu diffusion vary from 41 to 131.8 kJ mol−1 .Schwarz et al. attributed such low activation energies to a grain boundary diffusion mechanism.It is therefore expected that the low activation energies obtained in the present diffusion study at low-to-intermediate temperatures reflect a dominance of grain boundary diffusion, with similar behaviour in the heat-treated Ni–Cu wires.Of the few investigations to address the compositional dependence of interdiffusion in the Ni–Cu system, Ijima et al. noted that below 60 at.% Ni, activation energies tend to fall with decreasing Ni concentration.In addition, Ijima et al. argued that at low Ni concentrations, there would be increased quantities of voids, which would enhance interdiffusion, leading to lower activation energies.This trend is consistent with the present data.In spite of the compositional dependence of Ni–Cu interdiffusion, it is useful to adopt a mid-range composition to explore broad trends in the diffusion data.Fig. 6 therefore shows an Arrhenius plot for an alloy of the mid-range composition, based on the data presented in Fig. 
6.Although the present interdiffusion coefficients are consistent with literature data, there are differences between individual studies.A number of investigators explored the effect of microstructure on interdiffusion in binary metallic systems and suggested grain size has major impact on mass transport .To assess the influence of grain size on Ni–Cu interdiffusion, relevant published data along with data from this study are presented in Fig. 7.It is clear that Ni–Cu interdiffusion rates depend critically on the microstructure of the constituent materials.In order to quantitatively evaluate the impact of Ni–Cu interdiffusion on the ageing of Ni coated Cu wires reliable and relevant diffusion data are required.Hence, the present interdiffusion data from the Ni–Cu foils needs to be corrected by the geometric factor which is relevant to the microstructure of the wires used in the resistivity investigations.The grain sizes of the polycrystalline materials used in the diffusion couples employed by Zhao et al. and Schwarz et al. were 30 μm and 10 μm, respectively; the average grain size of the Cu foil in the present study was 18 μm.The grain boundary width was assumed to be 0.5 nm for the face centre cubic-structured Cu .Since the shape of the grains in the microstructure of the Cu foil was regular hexagonal, the grain shape factor in Eq. was assumed to be equal to 3.On this basis, the geometric factor was determined for the Cu–Ni diffusivity data of Zhao et al. , Schwarz et al. and this study using Eq., and effective interdiffusion coefficients calculated by use of Eq.As 400 °C is the temperature of primary interest, the data from the literature were extrapolated to that temperature by relevant Arrhenius relationships.Fig. 8 shows the resulting plot of effective Ni–Cu interdiffusion coefficients at 400 °C as a function of the geometric factor.The limited dataset suggests a simple linear relationship.From Fig. 8 it is possible to obtain, by interpolation, effective interdiffusion data relevant to the Ni-coated Cu wires used in the resistivity investigations.The Cu core of the AWG20-Class3 wires had an average grain size of 7 μm, with grain shapes being a mixture of both hexagonal and parallel forms.The grain shape factor for this wire was taken to be 2.In contrast, the Cu core of the AWG18-Class27 wire consisted of grains which were elongated with parallel sides, having an average grain size of 100 μm; this gave a grain shape factor of 1.For both wire samples the grain boundary width was assumed to be 0.5 nm .These data enabled the geometric factors to be determined for the two types of wire: 1.43 × 10−4 and 5 × 10−6 respectively.Interpolating the Ni–Cu interdiffusion data in Fig. 
8 for these geometrical factors yielded effective interdiffusion coefficients of 1.4 × 10⁻¹⁷ m² s⁻¹ for the AWG20-Class3 wire and 5.1 × 10⁻¹⁹ m² s⁻¹ for the AWG18-Class27 wire at 400 °C.A similar correction procedure can be applied to all the interdiffusion data obtained in the present work.In addition to the microstructural factors discussed above, the movement of mobile boundaries may cause further changes in the diffusion profiles under certain conditions.Diffusion induced grain boundary migration occurs in the Ni–Cu system at low temperatures when grain boundary diffusion is most effective.It is therefore useful to examine if DIGM would have an impact on the diffusion profiles of aged Ni-coated Cu conductors.For DIGM to operate, volume diffusion in the system needs to be ‘frozen’ out; this is denoted by the condition that volume diffusion divided by the velocity of grain boundary migration must be less than λ, i.e. Dv/v < λ.For the Ni–Cu system, volume diffusion data are available for a range of temperatures, grain boundary migration velocity data are reported by Ma et al., and λ ∼ 3.61 Å for Cu, enabling calculation of Dv/v.At a temperature of 400 °C, Dv/v equals 4.5 Å for the Ni–Cu system; this value is already larger than the lattice parameter of Cu, and so fails the necessary condition for DIGM.Thus at 400 °C and above, DIGM would not have a significant impact on Ni–Cu interdiffusion profiles, and it is not necessary to apply further corrections to the diffusion data.Using Eq. and corrected diffusion data for Ni–Cu interdiffusion, sets of concentration-distance profiles were calculated for Ni-coated Cu wires thermally aged at different temperatures.Fig. 9 shows simulated concentration-distance profiles in an AWG18-Class27 wire, thermally aged at 400 °C, for 1 × 10⁵ and 1.2 × 10⁷ h.It was noted that the surface concentration of the wire would reduce to less than 30 at.% Ni after a time of 1.2 × 10⁷ h.In contrast, for the AWG20-Class3 wire, it would take a much shorter time of 3200 h for the same reduction in surface concentration at the same temperature.Using the criteria of Castle and Nasserian-Riabi, these times can be regarded as the minimum at which surface oxidation will start.The growth of an oxide film on the surface of the wire will reduce the strength of the bond between the insulation and the conductor, thus degrading the insulation.Therefore, the time at which the surface concentration of the wire decreases below 30 at.% Ni will help define the effective life-time of a Ni-coated Cu conductor.Using these criteria, the lifetime of a Ni-coated Cu wire was calculated as a function of coating thickness.In the case of an AWG18-Class27 wire annealed at 400 °C, the lifetime increases with Ni coating thickness in a non-linear way; for example a wire with a coating 160 μm thick is predicted to have a lifetime of 5 × 10⁷ h.Similarly, as grain size can have a significant impact on diffusion processes and thereby the ageing behaviour of the bimetallic wires, Fig.
10 shows the lifetime of a Ni-coated Cu wire as a function of grain size.A simple linear increase in lifetime with grain size is predicted.With an increase in grain size from 20 to 40 μm, the lifetime of the Ni-coated Cu wire would increase from 2.2 × 10⁶ to 4.4 × 10⁶ h.Changes in the effective resistivity of a Ni-coated Cu wire can be simulated by adopting concentric circle geometry, where the composition of each ring is constant, and different from that of each neighbouring ring.In order to model the resistivity behaviour of the wires, the simulated concentration-distance profile of Fig. 9 was then sub-divided into 50 segments; for each, the alloy composition was assumed to be homogeneous.Based on the sub-divisional thickness, the resulting area of each respective region in Fig. 11 could be calculated as a function of thermal ageing time.The effective resistivity of the wire was then calculated as a function of ageing time using the effective interdiffusion coefficients from the foil experiments.Fig. 12 shows the calculated effective resistivity for the AWG18-Class27 wire as a function of ageing time at 400 °C.The importance of employing microstructure-corrected diffusion data for calculations of resistivity and ageing effects in Ni-coated conductors is exemplified in the figure.Whilst there is an excellent agreement between the experimental resistivity data and the simulated data based on diffusion data corrected for geometrical effects, it is clear that the original, uncorrected, Ni–Cu interdiffusion data significantly overestimates the increase in resistivity and thus the rate of ageing.This demonstrates the importance of using diffusion data which are appropriate for the samples and materials under consideration.Using the model, it was possible to predict changes in the effective resistivity of a Ni-coated Cu wire when subject to long-term high temperature annealing.Fig.
13 shows the effective resistivity of an AWG18-Class27 wire as a function of ageing time at temperatures of 400 and 500 °C.As anticipated, the rate of change in the effective resistivity of the wire annealed at 500 °C is much faster than that of the same wire annealed at the lower temperature of 400 °C.The enhanced diffusion rate at the higher temperatures causes Ni to penetrate more quickly into the Cu core, thus forming more resistive alloying regions at the Ni–Cu interface.This leads to a higher rate of change in the effective wire resistivity at higher temperatures.Although the initial electrical resistivity of the AWG20-Class3 wire was smaller than that of AWG18-Class27, the former exhibited a greater rate of change after 5500 h of ageing at 400 °C.The model also indicated that at 400 °C it would take 4.8 × 10⁴ h for a 10% increase in the resistivity of the AWG20-Class3 wire, but that it would take 1.4 × 10⁵ h for a similar change in the AWG18-Class27 wire, confirming the superior ageing behaviour of the AWG18-Class27 Ni-coated Cu conductors.A key element in machine design involves detailed thermal modelling.The limiting design factor is the absolute temperature of the stator conductor insulation.Machine designers endeavour to predict the rated machine losses and hence temperature of the stator conductors to a high degree of accuracy, so that the design margins on temperature can be minimized and the motor volume and weight reduced.The dominant electrical losses in a machine are the winding copper losses and the iron losses in the magnetic circuit; the relative magnitudes of these vary, depending on the machine power rating.Increasing resistivity of the copper windings due to Ni–Cu interdiffusion is clearly a serious consideration for the overall machine design because the stator copper losses and temperature will increase at full rated current.This would result in a reduction of full load operating motor efficiencies but would also impact the rating of the machine.The design process uses an estimated motor lifetime, and with knowledge of the increased copper resistance over that lifetime, the designer has two choices: either to de-rate the machine to compensate for the increased winding resistance over its lifetime; or increase the copper area of the windings to reduce the resistance, leading to an increase in the motor size for the same rating.The smaller grain size of AWG20-Class3 wires appears to be responsible for more accelerated ageing.As suggested by Hart's equation, a smaller grain size, and generally a larger geometric factor, leads to a much higher effective rate of interdiffusion, and thereby more accelerated ageing behaviour in terms of the wire resistivity.It is implied that a Cu conductor wire having a preferred microstructure, of large and parallel grains, may exhibit superior ageing behaviour in terms of changes in the effective wire resistivity.However, the lifetime of the Ni coated Cu wire is defined as the time at which the surface concentration decreases below 30 at.% Ni; therefore, the lifetime depends critically on the thickness of the Ni coating.Although a thicker Ni coating indeed gave rise to a higher effective resistivity than a thinner Ni coating, the model predicts that the lifetime of a bimetallic wire is significantly extended by thicker Ni coatings.Heat treating typical AWG20-CLASS3 and AWG18-Class27 Ni coated Cu wires at 400 °C for times up to 5500 h showed that the electrical resistivity increased by 6.9% and 2.3%, respectively.Microstructural analysis
Heat treating typical AWG20-Class3 and AWG18-Class27 Ni-coated Cu wires at 400 °C for times up to 5500 h showed that the electrical resistivity increased by 6.9% and 2.3%, respectively. Microstructural analysis revealed evidence of Ni–Cu interdiffusion, implying that it was responsible for the change in resistivity. The data were consistent with known changes in the resistivity of Ni–Cu alloys. Ni–Cu interdiffusion experiments at 400–600 °C using metal foils enabled composition-dependent interdiffusion coefficients to be determined for the system. Calculated activation energies were in the range 79–90 kJ mol−1, consistent with a grain boundary diffusion mechanism. Detailed analysis of the available Ni–Cu interdiffusion data suggested that the interdiffusion rate depends upon microstructural effects controlled by grain size, grain boundary width and grain shape, reflecting the contribution of grain boundary diffusion to the effective interdiffusion rates. A concentric circle-type model was developed to simulate the changes in composition in Ni-coated conductors as a function of ageing time at elevated temperatures. The model was used to predict the lifetime of the wire, which was defined as the time at which the surface concentration decreased to 30 at.% Ni and thus surface oxidation would commence. A thicker Ni coating layer reduces the rate of Cu transport to the surface of the wire, thereby extending the lifetime of the wires. The model was also used to predict the resistivity of the wire after high-temperature annealing. Good agreement between simulated and experimental data was only achieved by employing microstructure-corrected diffusion data in the model, indicating the impact of microstructural factors on diffusion processes.
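The activation energies quoted above follow from an Arrhenius analysis of the effective interdiffusion coefficients measured in the foil couples. A minimal sketch of such a fit is given below; the diffusion coefficients are illustrative placeholders chosen only to yield an activation energy in the reported range, not the measured values.

```python
import numpy as np

# Placeholder effective interdiffusion coefficients at the foil-couple
# temperatures (m^2/s); the real values come from analysis of the measured
# concentration profiles, not from this script.
T_C = np.array([400.0, 500.0, 600.0])
D = np.array([2.0e-19, 1.5e-18, 7.0e-18])

R = 8.314                      # gas constant, J mol^-1 K^-1
T = T_C + 273.15

# Arrhenius law: D = D0 * exp(-Q / (R T))  =>  ln D = ln D0 - (Q/R) * (1/T)
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Q = -slope * R                 # activation energy, J/mol
D0 = np.exp(intercept)         # pre-exponential factor, m^2/s
print(f"Q  ~ {Q / 1e3:.0f} kJ/mol")
print(f"D0 ~ {D0:.2e} m^2/s")
```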
Abstract After heat treatment at 400 °C the effective resistivities of typical Ni-coated Cu conductor wires increased by up to 6.9% as a result of Ni-Cu interdiffusion. Direct Ni-Cu interdiffusion experiments were performed between metal foils at temperatures of 400-600 °C for times up to 192 h. Calculated activation energies were in the range 80-90 kJ mol<sup>-1</sup>, consistent with a grain boundary diffusion mechanism. Analysis of published Ni-Cu interdiffusion coefficients suggested a clear dependence on grain size and grain shape. A concentric circle model was developed to simulate changes in composition and effective resistivity in the Ni-Cu wires as a function of time. It was predicted that it would take 1.4 × 10<sup>5</sup> h at 400 °C for a 10% increase in the effective resistivity of an AWG18-Class27 conductor wire. Good agreement between simulated and experimental data for effective resistivity was only achieved by employing effective diffusion coefficients corrected for microstructural effects.
159
The worm has turned: Behavioural drivers of reproductive isolation between cryptic lineages
An important question in the understanding of speciation is: what mechanisms are involved in the origin and maintenance of reproductive isolation between populations?,Behavioural processes such as mate choice potentially play an important role in pre-copulatory reproductive isolation, leading to genetic divergence between the isolated lineages and, ultimately, to speciation.Divergence in mate choice preferences can occur between populations that become geographically isolated, leading to the persistence of reproductive isolation following secondary contact between the populations.Examples of signals that have contributed to behavioural isolation involve visual signals observed in butterflies, damselflies, fish and frogs, acoustic signals observed in insects, frogs and bats, and chemical signals observed in moths, spiders, and flies.Variation in these signals between lineages can result in pre-mating reproductive isolation, providing a behavioural mechanism driving speciation.“Cryptic species” are morphologically indistinguishable, but genetically distinct taxa.The advent of molecular techniques has led to many such cryptic species being identified among many animal taxa in recent years, and they are particularly common among soil dwellers, such as earthworms.Morphological stasis despite genetic divergence is commonly manifest in non-visually guided invertebrates that live in opaque media such as soil or turbid waters, where chemical signalling may play a more important role than morphology in sexual selection.Soil-dwelling cryptic species therefore provide an ideal model for investigating veiled processes of behavioural isolation and mate recognition systems.Lumbricus rubellus Hoffmeister, 1843 is a lumbricid earthworm species that comprises two cryptic lineages living in sympatry in the UK.These lineages are genetically differentiated deeply enough to warrant the status of cryptic species.According to Sechi, speciation within the L. rubellus taxon may have first arisen through allopatric speciation during the last European glaciation event whereby lineages were geographically isolated into separate refugia during the glacial period, followed by secondary contact between lineages in the post-glacial period.The two lineages have remained genetically distinct despite living in sympatry, but differ in subtle aspects of phenotypic expression, such as disparate responses to high levels of arsenic exposure or minor morphological traits.Chemical communication through pheromones has been described in earthworms, serving as alarm systems, or as signals inducing migration or egg-laying.Attractin, Temptin, Enticin and Seductin are small molecules that act as water-borne sex pheromones promoting mate attraction.These pheromones were first described in the marine mollusc Aplysia, and in terrestrial snails where they are implicated in trail-following.The expression of Attractin and Temptin has been detected in the transcriptomes of earthworm tissues, including epidermis and digestive tract, forming physical and functional interfaces with the environment, thus suggesting that the behaviour-modulating molecules could be released through the mucus or faeces to create a trail analogous to those in snails for the attraction of potential mates.Given the known existence of cryptic, sympatric, earthworm lineages and the identification of sex pheromones in L. 
rubellus, our aim was to test the hypothesis that chemical cues play a role in reproductive isolation, through pre-copulatory assortative mate choice between cryptic species of earthworms.This hypothesis was tested using a behavioural assay on genotyped individual L. rubellus derived from a site where preliminary observations indicated that the two lineages co-exist in approximately equal numbers.A further experiment tested whether the behaviour-modulating chemical substances released by the two lineages are water-soluble and retain their specific activities on worm behaviour when presented as soil water-extracts.Earthworms of the species L. rubellus were collected by manual digging and hand sorting from a single site in Rudry, South Wales and were transported back to the laboratory in their native soil.The site was a lowland dry acid grassland, not polluted by heavy metals.Only mature worms were used in the experiments.On return to the laboratory, the worms were individually weighed, and placed into numbered containers filled with native soil from the extraction site.Throughout the duration of the experiments the worms were maintained in an unlit climate chamber of 13 °C.Posterior segments of approximately ≤ 1 cm were amputated from each individual and preserved in Eppendorf tubes of absolute ethanol prior to DNA extraction for genotyping.Experiments were started two weeks after caudal amputation.In total, 134 earthworms were genotyped; including 5 individuals of the species Lumbricus castaneus and a single individual of Aporrectodea longa, in order to be used as outgroups for the phylogenetic tree."The remaining 128 earthworms were of L. rubellus lineages A or B. Genomic DNA was extracted from 25 mg of tissue with the Qiagen DNeasy Blood and Tissue kit following the manufacturer's instructions.A 407 bp fragment of the mitochondrial gene cytochrome oxidase subunit II was amplified using specific primers for L. 
rubellus.PCR reactions had a final volume of 20 μl, with 1 μl of DNA template, 0.5 μM of each primer, 0.25 mM dNTPs and 1.25 units of GoTaq® DNA polymerase buffered with 1.3× GoTaq® reaction buffer and supplemented with 2.5 mM MgCl2.The PCR reaction included a denaturation step of 95 °C for 5 min and then cycled 35 times, at 95 °C for 30 s, 55 °C for 30 s and 72 °C for 1 min.This was followed by a 10 min final extension at 70 °C.PCR products were purified and sequenced by Eurofins.Sequences were aligned and cut using Mega 6.06 and the ClustalW option.Genetic variability was measured in DNAsp v 5.10.1 and a parsimony network with a 95% connection limit was built in TCS v1.21.Unique haplotypes were retrieved with DNAcollapser in FaBox and were then used for maximum likelihood tree construction in Mega with 1000 bootstrap repetitions under the model GTR + I + G.The primary experiment was carried out in the form of a classical animal behaviour side-choice experiment, similar to that carried out on earthworms by Lukkari and Haimi.Experiments were conducted in food-quality plastic container mesocosms.A vertical PVC divider was placed in the middle of each container.A well-mixed ‘standard soil’ was made in a separate container for each mesocosm in turn, totalling 871 g, and was made up of 732 g Boughton Kettering Loam, 39 g of Organic Farmyard Manure Gro-sureManure, and 100 g of water.From this mixture, 400 g was placed in each compartment of the mesocosm, in order to ensure homogeneity within each replicate.In each mesocosm, one worm from lineage A was placed in one compartment, and a worm from lineage B placed in the other compartment.Pairs of worms from the two lineages were chosen according to similarity in weight in order to account for the possibilities of larger earthworms secreting more chemical signals, and thus mate selection decisions being made on the basis of size.The worms were then left in the mesocosms for a period of 31 days, in an unlit climate chamber at 13 °C, in order for them to secrete potential chemical signals into the surrounding soil.After the elapsed conditioning period both worms were carefully removed from the mesocosms, the central divider was removed so that the soils in both halves of the mesocosm were in contact, and a third worm was placed on the soil surface in the middle of the mesocosm.Due to stock limitations, earthworms used for working the soils were used afterwards for the test.A small number of the mesocosms were discarded from the subsequent behavioural tests because of the death of one of the pre-test worms.The containers were left in a climate chamber at 13 °C and after 48 h the side choice of the worm was recorded, either as that of the same lineage, or that of the opposite lineage.If there was a part of the worm still in the central area, the side chosen was considered to be the side where the anterior part of the worm lay.For a subset of worms, the distance that the worm had burrowed into the soil was also recorded, as a measure of the extent to which a worm had exhibited a preferred direction of movement.Genotyped earthworms were placed into plastic mesocosms with 400 g of soil and left for one month at 13 °C.Due to earthworm stock limitations, 22 lineage A worms and only 6 lineage B worms were available for the second experiment.In order to observe whether the pheromones involved in mate attraction are water-soluble and if they have the same effect on the worms as in the primary experiment, 4 g of soil from each container was mixed with10 ml of 
de-ionized water and then vigorously shaken overnight. The solutions were then centrifuged at 6000 rpm for 4 min. After centrifugation, 10 ml of the supernatant was transferred to new test tubes. Control samples were also created, for comparison with the lineage-specific samples. Filter papers were placed inside Petri dishes, and these were used as mesocosms for Experiment 2. A line was drawn down the centre of the filter paper using a wax DakoPen, creating a non-permeable barrier between the two solutions and preventing any spill-over or absorption between the two halves of the filter paper. Then, 320 μl of each solution was pipetted onto each side of the filter paper. Next, a worm of known lineage was placed in the middle of the filter paper, aligned with the wax barrier, and the side choice of the worm was recorded every 3 min for the first 15 min and then noted every 15 min, for a total of 3 h. In total, 12 lineage A worms and 6 lineage B worms were used for the A/B selection, and a further 10 lineage A worms were used for the A/control selection. Extracts came from different worms and separate control samples for each replicate, and no worms used to create the soil solutions were used in the same choice trials. Statistical analyses were conducted using the software R. Chi-squared tests were used to examine deviation of worm head orientation from the expected orientation. The data for Experiment 1 were also analysed using a Generalised Linear Model (GLM) with a binomial error structure and logit and cauchit link functions. The dependent variable was whether the worm turned towards the side of the mesocosm that had previously housed a worm of the same lineage, or towards the side that had previously housed a worm of the other lineage. Independent variables were: lineage, body mass, experimental round (block), and the distance that the worm had burrowed into the soil. Model selection was based on AIC comparisons, to identify a minimal adequate model. Biologically relevant two-way interactions were examined, and retained in the minimal model on the basis of AIC comparisons and significance tests. Model validation was based on examination of residual plots, following Thomas et al. The data for Experiment 2 were analysed using a Generalised Linear Mixed Model with a binomial error structure and complementary log–log link function. The dependent variable was whether the worm turned towards the side of the Petri dish containing extract from soil that had previously housed the same lineage. The repeated measurements of the same individuals over time were represented in the model by including individual identity as a random variable. The fixed terms were: time, body mass of the focal worm, and treatment. Model selection and validation procedures were as for Experiment 1, above. The null hypothesis for both of these experiments was that there would be no significant attraction to a specific side, i.e. the number of worms showing a preference for soil previously housing the same lineage would be the same as the number of worms showing a preference for soil previously housing the opposite lineage, or no worm. Conversely, our expectation was that worms would be significantly more likely to turn towards the side of the mesocosm or Petri dish containing soil or soil water-extract, respectively, derived from their own lineage. Generated sequences have been deposited in GenBank.
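The analyses described above were carried out in R; the fragment below is a rough Python analogue (scipy/statsmodels) included only to make the model structure of Experiment 1 concrete. The data frame, its column names and the toy values are hypothetical, an exact binomial test is used here in place of the chi-squared test described in the text, and the counts merely approximate the reported outcome (about 75% of 45 worms choosing their own lineage). The repeated-measures analysis of Experiment 2 would additionally need a random intercept per individual, e.g. lme4::glmer in R.

```python
import pandas as pd
import scipy.stats as st
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Experiment 1, overall side choice: exact binomial test against p = 0.5
# (34/45 approximates the reported ~75% "own lineage" choices).
print(st.binomtest(34, n=45, p=0.5, alternative="two-sided"))

# Hypothetical toy data; columns mirror the predictors described in the text.
df = pd.DataFrame({
    "chose_same":     [1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0],   # 1 = own-lineage side
    "lineage":        list("AAAAAABBBBBB"),
    "body_mass_g":    [0.8, 0.9, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.4, 1.5],
    "burrow_dist_cm": [3.5, 1.0, 4.2, 3.0, 2.0, 2.8, 3.9, 2.0, 2.2, 3.0, 1.5, 2.5],
})

# Binomial GLM with a logit link, analogous to the R model described above.
# The full model in the text also included experimental round (block) and was
# reduced by AIC-based selection to a minimal adequate model.
glm = smf.glm(
    "chose_same ~ lineage + body_mass_g + burrow_dist_cm",
    data=df,
    family=sm.families.Binomial(link=sm.families.links.Logit()),
).fit()
print(glm.summary())
```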
The 128 genotyped individuals of L. rubellus belonged to two distinct lineages; 56 individuals belonging to lineage A and 72 individuals belonging to lineage B were found. The total number of haplotypes was 20, with 6 of those haplotypes unique to lineage A and 14 unique to lineage B. Sixty segregating sites were found within the 407 bp sequence. Networks from both lineages were separated according to the parsimony limit, and each showed a more abundant haplotype, considered ancestral by the program, and less abundant derived ones, with a star-like network shape in the case of lineage B. The mean uncorrected p-distance between the two lineages was 11.52%, whereas the within-lineage distances ranged from 0.25% to 1.47% for lineage A and from 0.25% to 0.98% for lineage B. The phylogenetic tree presented in Fig. 1B clearly shows the phylogenetic divergence between the two cryptic lineages. Earthworms from lineage B were substantially heavier than earthworms from lineage A. A GLM explaining body mass demonstrated a significant difference in mass between the two lineages. Earthworms showed a tendency to move into the side of the mesocosms containing soil that had previously been occupied by worms from the same lineage. This was true for both lineages separately, with 75% of worms from lineage A choosing the side of their own lineage and 76% of the worms from lineage B moving into the side previously occupied by worms of the same lineage. A 2 × 2 contingency table test showed that there was no significant difference between lineages in this regard and that, therefore, they could be analysed together in order to improve statistical power. The combined results showed that worms exhibited a significant preference to move into soils previously occupied by worms of the same lineage as themselves. A GLM to explain whether worms moved into soils occupied by worms of either the same or a different lineage showed that there was no significant effect of lineage, body mass or experimental round on the outcome. The minimal adequate GLM also revealed a significant association between the outcome and the distance that the worm had burrowed into the soil, with worms that had burrowed further into the soil being more likely to have moved towards soil that had previously contained the same genotype. The results of Experiment 2 mirrored those from Experiment 1, with the difference being that each earthworm was presented with a choice between two halves of a Petri dish, containing filter paper wetted on one half with water-soluble chemicals extracted from soil previously occupied by worms of the same lineage as themselves, and on the other half with water extracted from soil worked by individual worms belonging to the other lineage, or a control solution. Overall, there was a high proportion of ‘correct side’ choice exhibited by worms of both lineages across the 3-h test period. A GLM revealed that there was no significant variation in the orientation behaviour of the earthworms across the observation period. There was also no difference in the ability of the two lineages to orientate towards the side containing extract of the same lineage. Nevertheless, the behaviour of the control group was significantly different from that of the earthworms in treatments A and B, with worms in the control group showing a stronger preference for the ‘correct side’. The same GLM revealed a significant positive association between the body mass of the worm and its preference, with heavier worms showing a stronger preference for the extract from the same lineage. Our results provide evidence that
water-soluble molecules mediate the attraction of individual L. rubellus to individuals of the same genetic lineage.This behavioural response to chemical signals released by conspecifics into the soil is a candidate mechanism to explain the maintenance of pre-copulatory reproductive isolation between cryptic lineages of earthworms.Our study examined the direction of movement towards or away from soil- and water-borne extracts from different lineages as a measure of attraction; future studies could examine whether these behavioural responses do indeed lead to assortative mating between the two lineages.Genetic characterization through the mitochondrial gene COII and phylogenetic analyses, confirmed the presence of two cryptic lineages within L. rubellus collected from the field site.The subtle intraspecific genetic variation shown within this species has been previously documented by King et al. and Andre et al. and our sequences clustered together with the described lineages A and B in these previous studies.Donnelly et al. confirmed the lack of gene flow between these two lineages using microsatellite markers.Although RAD-Seq analyses revealed that certain European L. rubellus lineages may not be reproductively isolated, a similar genetic analysis observed that no hybridization occurs between A and B lineages in a number of UK locations where the lineages co-existed.According to Sechi the two sympatric lineages appear to have evolved in allopatry during the last glaciation, thus suggesting that pre-copulatory isolation mechanisms may have developed during allopatry before their secondary contact.However, as stated above, post-copulatory isolation mechanisms cannot be dismissed and experiments addressing this point are worth considering.This is an example of cryptic speciation, which appears to be relatively common within soil invertebrate taxa and perhaps especially in earthworms.The named L. rubellus cryptic lineages were found in a 44/56 A:B abundance, and their mean genetic uncorrected inter-lineage divergence was 11.5%, at the field site chosen for the present study.In contrast, the intraspecific divergence for lineage A and lineage B are much lower and may reflect reproductive isolation between lineages.Constructed haplotype networks suggest that Haplotypes 1 and 2 are ancestral in lineages B and A, respectively, and may therefore represent the genotype of the founders of the lineages in the studied population.The data gathered in Experiment 1 support the hypothesis that chemicals released into their surrounding environment by L. rubellus have properties involved in mate attraction.Worms showed a significant preference to move towards the mesocosm side that had previously been occupied by a worm of the same lineage.This suggests that the worms are able to detect the specific chemical signals secreted by conspecifics into soil in the relatively confined space in our laboratory-based mesocosms.Whether this phenomenon is operative under field conditions remains an open question, as is the persistence of the molecules involved.The fact that the choice tests were conducted in mesocosms using single worms eliminates non-chemical modes of driving directional movement, for example direct tactile contact or locomotion-mediated vibrations.L. 
rubellus has already been shown to be able to actively avoid soils laced with Cu and Zn, supporting the conclusion that chemoreception is an effective means of detecting abiotic chemical stimuli within their environment. The behaviour-modifying role of biogenic compounds secreted and released by conspecific earthworms adds considerably to the present knowledge of the behavioural and population ecology of this taxon of soil-dwelling ecosystem engineers. The relationship between the distance moved from the centre of the mesocosm and the apparent preference recorded suggests a methodological improvement for such behavioural choice assays; the further the worms moved into the mesocosm, the more likely the choice was to be “correct”. Therefore, subsequent analyses of this type should record the preferences of worms once they have moved more than 2 cm in either direction from the centre line. With mate-attraction pheromones having previously been identified in L. rubellus, we hypothesize that the release of lineage-specific pheromones acts as a means of lineage-specific mate choice. These pheromones are water-borne molecules, analogous to those originally identified in the marine mollusc Aplysia, that function as mate attractants. The data generated over the 3 h in Experiment 2 indicate that more worms of lineages A and B were attracted to the filter paper side that was soaked with a soil solution derived from soils occupied by worms of the same lineage. The same was true for the trials comparing lineage A solutes versus controls, which showed an even stronger effect, since there was no confounding stimulus on the other side of the test arena. The results demonstrate that the worms of both lineages of L. rubellus specifically detected water-borne chemicals and altered their movements accordingly. Further studies are clearly needed to identify the molecular attractants secreted by the two L. rubellus lineages, to characterise their individual and combined effects, and to evaluate their persistence under a range of realistic environmental conditions. The mean body mass of lineage B worms was shown to be significantly greater than that of lineage A worms in the study site, indicating possible size dimorphism between the two cryptic lineages, something previously reported for other cryptic species of Lumbricus. This may be a contributing factor in their reproductive isolation, as size-assortative mating is known to occur in other earthworm species such as Eisenia fetida and has been documented to occur to a degree within Lumbricus terrestris. The difference in average weights could also suggest niche partitioning, whereby the two lineages use different strategies for the exploitation of available resources. Kille et al. reported distinct adaptive responses to soil contamination in the two lineages, which also exhibit differing environmental preferences. These studies could suggest the commencement of niche specialization. Klok et al. found that frequent flooding caused reproductively mature L. rubellus to be half the weight of those from non-flooded sites. Further work on several mixed lineage A + B populations is warranted to determine if mature lineage B individuals are consistently larger than their lineage A counterparts, and to establish whether the size difference, if it exists, plays a role in assortative mating. The results show that two cryptic species of the earthworm L.
rubellus, which evolved initially as allopatric lineages, live in sympatry and are found in a similar proportion within the studied area, where they maintain their genetic diversity and differentiation by means of reproductive isolation.This study has provided evidence for pre-copulatory sexual selection mechanisms driven by the release of lineage-specific chemical signals, which act as a recognition flag for worms of the same lineage to aggregate towards.An experiment involving soil extracts indicated that this attraction is mediated by water-borne molecules.Further studies would shed light on the exact nature and blend of molecules exerting this effect and their genetic basis.
Behavioural processes such as species recognition and mate attraction signals enforce and reinforce the reproductive isolation required for speciation. The earthworm Lumbricus rubellus in the UK is deeply differentiated into two major genetic lineages, 'A' and 'B'. These are often sympatric at certain sites, but it is not known whether they are to some extent reproductively isolated. Behavioural tests were performed, in which individually genotyped worms were able to choose between soils previously worked either by genetically similar or dissimilar individuals (N = 45). We found that individuals (75%) were significantly (P < 0.05) more likely to orientate towards the soil conditioned by worms of their own lineage. Further testing involved a choice design based on filter papers wetted with water extracts of soils worked by a different genotype on each side (N = 18) or extracts from worked soil vs. un-worked control soil (N = 10). Again, earthworms orientated towards the extract from their kindred genotype (P < 0.05). These findings indicate that genotype-specific water-soluble chemicals are released by L. rubellus; furthermore, they are behaviour-modifying, and play a role in reproductive isolation between sympatric earthworm lineages of cryptic sibling species, through pre-copulatory assortative mate choice.
160
Stakeholder empowerment through participatory planning practices: The case of electricity transmission lines in France and Norway
Grid extension has always been an essential topic for the electricity sector, as electricity consumption increased in the last decades .Today, new challenges related to grid extension are emerging: the goals of the European Union for an almost completely decarbonized electricity sector by 2050 are changing today’s patterns of electricity production, consumption and transport .While grid extension is needed today, citizens’ opposition to new electricity corridors is slowing down planning processes for new power lines and power lines upgrades as well, thus decelerating the energy transition for the European electricity sector .The reasons of opposition are manifold and include the intrusive nature of transmission lines in the landscape, the fear of health consequences due to population exposure to electromagnetic fields and the decrease of property values nearby new corridors .Opposition to transmission lines is not new in itself, as documented in several cases from the 1930s in the United States .However, while in the past power lines have been considered as a symbol of progress, today some stakeholders consider them as a threat .Stakeholder participation is seen as a way to smooth planning processes, decrease opposition, diffuse conflicts, and develop the grid by addressing stakeholders’ heterogeneous concerns and needs .While formal stakeholders’ participation is already today included in planning processes for transmission lines, several scholars and organizations claim that it should be carried out in a different and better way .Yet, it is assumed that enhanced stakeholder participation is a condition for an increased acceptance of power line projects .However, while there are no universal metrics to evaluate stakeholder participation , empowerment is a concept that can be used to evaluate qualitatively the levels of participation in a decision-making process .In this paper, we evaluate the level of stakeholder empowerment in the planning processes of two European countries: France and Norway.In order to do so, we divide the planning processes in three main phases: need definition, spatial planning and permitting .Based on a documentary analysis, we evaluate for each phase the degree of stakeholder empowerment operationalized as information, consultation and cooperation .In order to better understand future trends, we also describe and evaluate recent innovative projects adopting participatory methods for stakeholder engagement.In these projects, the transmission system operators voluntarily improved the planning process and engaged stakeholders by using innovative tools or procedural measures.Finally, we compare and contrast the experiences in the two countries in order to highlight similarities and differences.Stakeholder engagement in power line planning is a relatively new research topic compared to other fields like environmental conservation , water management or sustainable urban development .Since more than one decade, grid development has faced rising public opposition.Stakeholder participation is considered as a way to reduce conflict, foster acceptance and legitimize decisions related to power line projects .Public opposition does not only affect grid extension projects.Wind, solar and biogas energy facilities are also depending on stakeholder acceptance .However, while wind turbines, hydroelectric power plants or biogas plants produce energy locally, thus creating an added value to the area, transmission lines do not directly add value to the land they affect.Moreover, the 
incentives for grid extension are usually linked to additional installed energy production capacity, which itself also depends on grid availability, causing a chicken-and-egg problem. Nevertheless, stakeholder engagement in the planning process for power lines and other infrastructures related to renewable energy is similar, due to their impacts on landscape and property value. Today, transmission system operators and regulators carry out planning processes for power lines in a top-down fashion, by providing information or asking stakeholders for feedback – e.g. on grid positioning – during the different phases of the planning process. Many scholars consider this involvement insufficient and the root of opposition. Therefore, it is assumed that enhanced stakeholder participation would ease planning processes for power lines. The premise of this assumption rests on the so-called ‘crisis of representative democracy’: stakeholder participation is seen as a way to revitalize a stiff representative democracy and to address the lack of trust in responsible authorities. Although participation has inherent advantages, it also has limits. This is a highly debated topic in the academic literature. Pellizzoni and Vannini proposed an ‘ascending’ reading of participation-related literature, carried by optimism in the 1980s, and a ‘descending’ reading of participation literature later in the 2000s, where the optimism faded in favour of a less optimistic but more realistic approach. In the case of power lines, today’s planning processes already engage stakeholders at specific points in time and with specific aims. Economic and social actors and citizens are informed and consulted in the planning process, and these interactions are embedded in the current legislative procedures to build the grid. Nevertheless, this engagement is not always considered sufficient or appropriate. More precisely, stakeholder engagement is often reduced to one-way information activities that do not serve the purposes of participation, such as enhancing the buy-in of heterogeneous stakeholders’ perspectives or addressing conflicts in an open democratic debate. Stakeholders have very different reasons to oppose power lines. These reasons may be individual, for instance related to health risks due to electromagnetic fields, visual disruption or property value loss. However, they can also be of a social nature, for instance the disruption of a sense of place, or of a political nature, for instance the influence of the national political context or the trust stakeholders have in existing institutions. Stakeholders may have very different perceptions of the issues at stake, depending highly on the context of the project and on their needs, interests and values. Nevertheless, most of these stakeholder needs are formally taken into account in current planning processes, which are accurately designed. While there is a large body of literature that isolates and explains the public’s reasons for opposition to and acceptance of transmission lines, the same is not true for stakeholder participation in the planning processes. Stakeholder participation is subject to different interpretations and academics frame it in different ways. Some describe the attributes that define stakeholder participation and propose outcome evaluation criteria. Other scholars focus on the aims of participation and maintain that participation should reach certain social, democratic or interactional goals. The gaps in the literature and research on participatory processes are numerous. So far,
little attention has been devoted, for instance to the comparison of methodological approaches used to engage with stakeholders; the methods and tools to co-produce knowledge that is useful and usable to inform decisions; the relationship between process and outcome; the evaluation of the quality of participation .In this paper we focus on a research gap that is particularly relevant for stakeholder participation in power line planning processes, i.e. their level of empowerment and its evaluation methods.Taking stakeholder empowerment as a criterion for classifying stakeholder engagement practices, Arnstein developed a ladder with eight rungs, from manipulation to citizen control, divided in three groups: nonparticipation, degrees of tokenism and degrees of citizen power.Although most scholars use the word ‘participation’ as a generic term for stakeholder involvement, Arnstein maintains that the word participation can be used only if stakeholders have a real say, thus power, in the process.Nevertheless, the empowerment levels of stakeholders in a process, although mostly not at the highest rungs as described by Arnstein, can still be evaluated.Therefore, an empowerment scale is appropriated to evaluate the way stakeholders are embedded in a process, in our case power line planning.While in the case of planning processes for power lines, the procedures are often described accurately in the regulation , it is possible to evaluate the extent to which the stakeholders are empowered in the process.Without going into detail on the intrinsic nature of the power relation between actors involved in the process , the way stakeholders are formally embedded in the planning process makes it possible to use a relatively simple empowerment scale like the one formulated by Arnstein .For the purpose of this paper, participation and empowerment of stakeholder starts as soon as stakeholders are engaged in the process.Komendantova et al. already used the scale provided by Arnstein to evaluate stakeholder engagement in power line planning.However, in their research, the authors only focused on new participatory practices of some TSOs across Europe and did not focus on the regular planning processes.This leaves a gap that we aim to address in this paper through an evaluation of the empowerment of stakeholder as a result of the formal process carried out for power line planning.Drawing on the seminal work of Arnstein’s ladder of citizen participation, several scholars developed other scales of stakeholder empowerment.Instead of eight rungs, Lüttringhaus and Rau et al. described a simpler scale with a split between the process owner and the participants where interactions can be classified in four main levels: i. 
information: stakeholders only receive information provided by the process owner; ii.consultation: stakeholders’ perspectives are elicited by the process owner; iii.cooperation: stakeholders’ perspectives are explicitly taken into account and decisions are co-produced with the process owner; and iv.delegation: stakeholders take over a task and the process owner accepts their decision.Originally, Lüttringhaus added an additional level on the stakeholder side as ‘self-reliance’, where citizens have the power to initiate a process.In the case of grid extension, the initiation of the process is usually expert-driven.TSOs identify bottlenecks, future needs and then start a planning process for a line upgrade or a new line .Therefore, as the process is usually initiated as a response to a technical need assessed by the TSOs, we do not consider the additional level of self-reliance as appropriate for power grids and, in this paper, we rely to the scale described by Rau et al. .The aim of the research was to evaluate, compare and contrast the degree of stakeholder empowerment in power grid planning processes in France and Norway.In order to do so we took a qualitative approach and performed a documentary analysis of planning processes for very-high voltage power lines.For this research, we used two types of data: official documents and TSOs-documents.We used official documents, mainly in form of laws and regulative guidelines provided by state organisms, for instance the Norwegian Water Resources and Energy Directorate in Norway or the National Commission of Public Debate in France.Although no documents can be taken as describing accurately the reality , we consider documents tightly related to regulations and their application appropriated to evaluate the way stakeholders are formally involved and thus empowered in the process.Additionally, we use documentation generated by the project owners, in our case the TSOs.We considered the following criteria for assessing the documentary sources: authenticity, credibility, salience, legitimacy, representativeness and meaning .More precisely, we paid attention to the subjective judgments and biases in TSOs documents and we took these aspects carefully into account when drawing our conclusions.The first methodological challenge was the cross-country comparison of the power line planning processes.Indeed each country has its own process, entailing different ways to involve stakeholders and to make decisions.In the attempt to compare planning processes in the European Union, Berger proposed six steps as a common denominator: determinations of needs, project preparation, spatial planning, permitting, construction, and operation.However, for the purpose of this research, we reduced the process to three main planning phases, as proposed also by Renewable Grid Initiative in their European Grid Report : need definition phase, spatial planning phase, and permitting phase.We use these three phases as a common denominator to compare the processes.The second methodological challenge consisted in the definition of stakeholder empowerment levels.Arnstein stated that participation, thus ‘real’ empowerment, only happens at the highest rungs of her ladder, i.e. 
partnership, delegated power and citizen control, the other levels are only forms of non-participation and tokenism.However, this perspective is questionable, as empowerment may start when a process owner interacts with potentially affected stakeholders .Therefore, we use a notion of gradual empowerment in the sense of increasing stakeholder participation possibilities.We used the first three levels of the participation pyramid provided by Rau et al. : information, consultation and cooperation and we left the highest level of stakeholder participation out because of the nature of power lines.Indeed, planning processes are embedded in existing legal frames where experts play a major role, making delegation not achievable from a project owner perspective.Aiming to evaluate the empowerment levels of affected stakeholders gives us an appreciation of how participation is carried out for power line planning.However, as the planning procedures may greatly vary across countries , we focus on two distinct European countries that have very different procedures and stakeholder involvement cultures: Norway and France.More details of the two planning processes are provided in section 4.Here we point out only some of the key differences.While in France the TSO plays the main role as owner of the project , in Norway once the application for a project is submitted, the process is taken over by the Norwegian Water Resources and Energy Directorate .Additionally, although it is not the direct focus of this research, there are also local differences in the culture of participation, mainly due to the legal frame, the local topology and the different roles of the involved actors in both countries.Acknowledging the limits of current planning processes from a stakeholder participation perspective, the TSOs, Rte and Statnett, engaged with stakeholders in innovative ways in some recent projects.Therefore, we added also an evaluation of these new projects to better understand what is the trend in stakeholder empowerment for power line planning.Finally, we compared and contrasted the planning procedures in the two countries to highlight general tendencies.In France, the administrative process applicable to power grid projects can be schematically divided into the three main phases: the need definition, the spatial planning, and the permitting.Each phase is divided in different steps and it involves different categories of stakeholders at different scales: State’s representatives, regulatory bodies, TSOs, local authorities, NGOs, residents and the general public.The need definition phase aims to identify and justify the needs for future grid projects and to collect stakeholders’ opinions about them.This is the general purpose of the French TYNDP, which describes on the basis of several scenarios the evolution of electricity production, consumption and exchanges at the European level, and how the national power grid will evolve, with regional focuses .This document is published every year on the institutional website of Rte, the French TSO, so that all interested stakeholders, Rte’s customers as well as NGOs and citizens, can comment on it.These comments as well as answers provided by the TSOs are then integrated into the final report to be sent to the regulator before the official publication .For each project, solutions to answer the needs identified in the French TYNDP must be justified by Rte.Therefore, at the beginning of each project, a technical-economic justification is carried out, whose validation is provided 
either by the Ministry in charge of Energy or by the regional State’s representative .From an empowerment scale provided by Rau et al. , this step ranges in the level of consultation.However, only stakeholders actively interested into the topic of grid expansion consult the published TYNDP and potentially affected stakeholders like citizens or local associations are likely to not provide any feedback to it.Therefore, this first step may be considered at the margin of consultation, as the TSO does not proactively ask all potentially affected stakeholders to take position on their development plans.Besides the general development plans, the TSO provides a technical justification for each project, an exercise that we consider as information to potential stakeholders.The purpose of the spatial planning phase is to define a study area and to address the environmental and economic aspects, including landscape impacts, of possible corridors in the most suitable area to select the corridor of least impact.This formal step called concertation is split in two main phases.The first aims to delimit the study area, which is large enough to include all possible power line alternatives.The second step aims to collect all territorial characteristics related to the study area in order to define a pathway that causes the least impact to the environment of the affected regions .The organization of a public debate for 400 kV lines with a length over 10 km, under the supervision of an independent administrative body, the National Commission for Public Debate, was mandatory until 2015.Today, stakeholders like the TSO, parliamentarians, councils at regional and municipal levels, and agreed environment protection associations may voluntarily ask for the involvement of the CNDP .At the end of the process, the National Commission for Public Debate produces a report with recommendations on the basis of which Rte must declare whether it is willing to continue the project and, if yes, how it will integrate the CNDP recommendations .Local inhabitants also constitute relevant stakeholders at the spatial planning phase.As the characteristics of the project at this stage are more precisely defined than during the previous phase, this makes it possible to discuss precise points of the project with the affected stakeholders.There is a possibility to involve a neutral third party, named guarantor.His or her nomination by the National Commission of Public Debate can be voluntarily asked by the TSO to ensure transparency and fairness during the participatory process .This is also a way to enable the integration of stakeholders’ expectations and concerns before the official public inquiry of the permitting phase.During the spatial planning phase, the TSO involves institutional stakeholders like municipalities to gain a better knowledge of local issues related to the development of the line.In this phase, local NGOs and diverse institutions are involved.Because of the integration of different stakeholders’ expectations and concerns that is not necessarily binding, we range this step as being at the empowerment level of marginal consultation.The permitting phase begins with the request for a declaration of public utility.The purpose of this declaration is to make some future utility easements or propriety transfers legally possible if no amicable agreement is found with landowners .To that end, a two-month consultation is organized with many different stakeholders, for example State services, regional authorities, local 
representatives and protected area managers.Remarks are made concerning both the demand for the declaration of public utility and the Environmental Impact Assessment, which is mandatory for new overhead lines over 15 km .Following this consultation, a minimum of one-month public inquiry is opened to all citizens living in the local communities concerned by the projects.The public inquiry is managed by an investigating commissioner who is appointed by the administrative court related to the area of the project .At the end of the process the commissioner proposes a report bringing together the various positions held by participants, adding conclusions and recommendations.Due to the active character of these inquiries, this step may be considered as consultation.Finally, the decision of delivering the declaration of public utility is taken in light of the public inquiry by either the Ministry of energy for 225–400 kV projects, or by regional State services for lower voltages .Once the declaration of public utility is delivered, the last details of the project and its precise localization are determined through bilateral meetings with the relevant authorities and any stakeholders whose interests may be directly impacted by the project .This process is supervised by the Regional Direction of the Environment, Landplaning and Housing and the Departmental Direction of the Territories to ensure that the procedure is carried out according to the law .Specific agreements, for instance on building licenses, exemptions related to protected species, and compensatory measures, in particular in matters of landscape impact, are then discussed.The Project Accompanying Plan, which is financed by Rte to cover a set of environmental measures for the visual integration of the structures into the surrounding landscape, is also discussed at that time .Due to the exchanges and discussion between the TSO and the affected stakeholders, the character of these last meetings can be ranked in a level of cooperation.However, due to the declaration of public utility in terms of means to enforce the application of the project, we consider this step as a level of consultation, as the enforcement means of the declaration of public utility makes a sharing of power between stakeholders and project owner impossible.The French TSO, Rte, adopted innovative ways to involve stakeholders in three of their projects.We provide below a short description of the innovations and the related empowerment of stakeholders.In the project Lonny-Vesle, an upgrade of a 400 kV power line , the innovation consisted mainly in an early landscape ‘diagnosis’, i.e. 
an integrated analysis which takes into account economic, social and environmental aspects .Additionally, the TSO mandated an external company to perform a socio-environmental inquiry through workshops with local stakeholders and citizens affected by the power line path .This approach provided an overview about the effects of the line on the landscape and the possibility to tailor the line to the needs and future plans of local stakeholders.These additional steps in the project are beyond a regular consultation and go in line with cooperation.However, as these steps happened beside the formal process, and not in form of binding-steps added to the traditional project, this cooperation could not be fully considered as such.Therefore, from a whole-project perspective, we considered the empowerment level during these steps as marginal to cooperation.From a time-perspective, most of the additional work involving stakeholders has been carried out early in the process .However, the scope of evaluation was already defined by the path of the line to be upgraded.Although the content could also be related to the spatial planning phase, the emphasis in the additional stakeholder engagement has been on needs at a very local scale .Therefore we considered this engagement mainly as part of the need definition phase.This step showed that consultation could go far beyond the usual way to involve stakeholders, providing insights that make possible a more constructive integration of the power line in the territory.In another project, Avelin-Gavrelle, and upgrade of a 225 kV and 400 kV line, the TSO organized five ‘thematic commissions’ during the early steps of the spatial planning phase.These commissions dealt with issues related to power lines like health, agriculture, environment, landscape and energy-economy .In these commissions, independent external experts explained their views and discussed the needs and implications of the power line with representatives of NGOs, socio-economic actors, citizens, and representatives of the state services and of local authorities .Additionally, Rte organized local workshops with citizens affected by the power line to gather local insights on the affected areas at a small scale .We classify this project as genuine consultation because the stakeholders’ perspectives were taken into account explicitly in the further steps of the process.The project of the line between France and Spain, Baixas-Santa Llogaia, has been documented as an example where opposition grew so high that the TSOs, in this case the French Rte and the Spanish REE regrouped in a partnership for the project, required mediation for the project at the European level .Discrepancies appeared between the TSOs and the opposing citizens’ groups on the rationales behind the project, on the layout of the line, and on the environmental implications of the construction, causing delays in the process .This led to an abandonment of the regular process, and to the organization of specific workshops to detail the technical specificities of the line according to the views of the population.Under a European coordination, stakeholders like local governments, environmental and opposition groups have been involved through workshops .These stakeholder involvements lead to a consensus on a final layout for the power line, which entailed a dedicated eight-kilometer long tunnel for an underground cable under the most sensitive area .This is a clear example of cooperation because stakeholders’ perspectives have been explicitly 
taken into account and decisions have been co-produced with the involved TSOs.Moreover, the project showed how an increased degree of stakeholder empowerment led to a compromise solution to build the line.The Norwegian planning procedure for power lines shares the same fundamental phases as the French process: the need definition, the spatial planning, and the permitting.However, the phases can be divided in different steps.Moreover, a different organization has a critical role: the Norwegian Water Resources and Energy Directorate, whose role is to ensure a fair use of resources, especially in the interest of the affected communities .The Norwegian development plan follows the aims stated in the European Ten-Year Network Development Plan .The plan describes trends and scenarios, and projects the evolution of electricity production, consumption and exchanges at the Norwegian-European level .It also describes how the national power grid should evolve, with a regional focus.This document is published every second year, and broadly discussed with politicians and in energy-experts fora.From 2015 there is, in addition, a public hearing on the Norwegian Grid Development Plan .The discussion on the need of the project starts at the early concept evaluation of each project and may recommend several projects for a studied region.Although this first dialog formally takes the form of a consultation, the TSO mainly involves established stakeholders like public authorities and NGOs, but not the wider public, as at this point there is still no clear concept of a path for the line .For large projects, longer than 20 km and with tensions over 300 kV, the TSO carries out an external audit on the concept evaluation for grid development by external consulting companies before sending the justification report to the Ministry of Oil and Energy .Additionally, discussions about the need rarely end up justifying only one power line, but they are valuable to identify interested stakeholders, their perspectives, alternative solutions and to receive inputs about future needs.This dialog is the first formal discussion between the TSOs, the public authorities and NGOs.During this phase, Statnett presents the need for a new power line, the issues at stake and what needs to be taken into account before the ‘spatial planning’ phase .Therefore, mainly due to the explorative character of this first stakeholder engagement and the restricted consultation scope for large projects, we categorize it as marginal to consultation.During the spatial planning phase, Statnett sends a notification to the regulator, the Norwegian Water Resources and Energy Directorate.The organization of the public involvement is then shifted to the regulator who coordinates and organizes the hearings with affected stakeholders .In these hearings, information about the different routes and a proposal for the environmental impact assessment are provided.The consultation starts with local authorities, and the public is invited immediately afterward.The TSO and the authorities gather various interests, ideas, and remarks from stakeholders.The notification has usually alternative routes.During the process, some of them can be excluded or others can be added.This part of consultation with the regulator lasts for 8 weeks .After the hearings, NVE gives Statnett a program of an EIA, including which topics shall be included and the alternative routes .The boundaries of the EIA, a result of stakeholder consultation during the hearings, are binding; therefore we 
can consider this step as a consultation engagement. After this period, the dialogue with stakeholders often continues into the next phase, and then bilaterally between the TSO in charge of building the line and the affected stakeholders. Contact and dialogue with the county officials as well as the officials in the municipalities are maintained in order to give Statnett the possibility to react and take additional local constraints into consideration. After the first round of public meetings, Statnett adapts the alternatives and external consultants carry out the Environmental Impact Assessment. The EIA and a formal application are sent to the regulator, which organizes a second round of public hearings. The input of this second round is compiled by NVE in a form that is binding for the TSO. At this stage the precise route for the power transmission line is usually decided. Indeed, a different alternative route would require an additional EIA, which may delay the entire process. In this phase there is a constant dialogue with the landowners and other stakeholders to get into the details of the planning. This process also results in changes. Moreover, the process includes gathering comments from county and municipality officials, landowners and NGOs through hearings. These officials provide feedback on the EIA and formulate demands for additional information, for instance about biodiversity loss. From an empowerment perspective, the permitting phase fulfills the conditions of a consultation through the binding results of the public hearings. For small projects, NVE makes the final decision, unless an appeal is made. Once the regulator gives its approval to the selected route, it sends an information letter including the assessment paper of the project, which is the basis of its decision. This letter is sent to the stakeholders who contributed or were involved, the landowners, and the local and regional authorities. Stakeholders are given the possibility to object to the decision within three weeks. We consider this sub-step as marginal to consultation, because the input from stakeholders can only take the form of an objection. If there is an objection to the decision, NVE takes it into consideration and either changes the decision in accordance with the objection, or overrules the objection. If overruled, the original decision is maintained and NVE forwards the conclusions and recommendations to a higher level, i.e.
the Ministry of Oil and Energy, which evaluates the case and makes a final decision that can no longer be objected to. For larger projects, the final decision is made by the Ministry of Oil and Energy based on NVE’s recommendations. The stakeholders are informed about the decision made by the Ministry of Oil and Energy, but they cannot object to it. The variation of the process between large and small projects is the result of a White Paper introduced in 2012 in response to the ‘Hardanger line’, whose development was heavily undermined by stakeholder opposition. More precisely, stakeholders raised opposition because they wanted subsea cables instead of overhead lines and because of a perceived ambiguity about the need for the power line. Finally, the negotiations with the landowners about compensation for land loss related to the new line or corridor begin after the application hearing. The compensation only covers the economic losses. Common interests with landowners and municipalities may be included, often resulting in the provision of local benefits. The TSO collaborates in parallel with the affected municipalities, NGOs and landowners. Collaboration is project-specific and the wider public is not involved anymore. As a result of facing opposition, the Norwegian TSO Statnett enhanced its planning processes with additional hearings. In addition to the mandatory public hearings related to the notification and the application for power line projects, Statnett organized meetings with stakeholders, among them potentially affected residents and landowners, earlier in the planning process. For instance, in the project Bamble-Rød, a new 420 kV line, the TSO organized meetings with the population before the official hearings organized by the regulator. These meetings made it possible to discuss issues such as the rationale for grid extension and for the project, possible cabling solutions, their price and their impact on the landscape, and finally to develop additional path alternatives, taking stakeholder input into account in a very early phase; therefore, we considered this a consultation. Additionally, as a result of the meetings, a new process to remove power lines with lower voltage began. These additional meetings gave affected stakeholders more opportunity to exchange views on projects affecting them, to provide input to the TSO, to help select technical alternatives, and to reduce tensions between the TSO and the stakeholders. The same procedure has been applied to the general upgrade of the existing power line network around Oslo. The project Nettplan Stor-Oslo covers the complete upgrade of the electricity grid around the capital, where most lines were built between the 1950s and the 1980s and will not be able to satisfy the city’s consumption patterns up to 2050. In the early meetings organized by the TSO, stakeholders could provide direct input, especially consisting of requirements regarding the visibility of the line, to be then considered in the framing of later stages of the planning process. Therefore, we consider this involvement as a form of consultation. Although the planning phases of the project Stor-Oslo are not finished yet, collecting stakeholder views early in the process is gaining momentum in Norway. Streamlining the process in this way may therefore reduce the risk of time-costly appeals at the end of planning procedures. The planning processes for transmission lines differ between France and Norway in several respects. The most salient one is the involvement
of stakeholders, mainly in the form of hearings. Statnett in Norway does not have control over the stakeholder hearings in the formal process, as these are carried out entirely by the Norwegian Water Resources and Energy Directorate. In France, Rte plays a crucial role in the involvement of stakeholders and, although large projects are monitored by the National Commission for Public Debate, Rte is the key actor in charge of stakeholder involvement. The evaluation of regular planning processes in both France and Norway revealed similarities in the involvement of stakeholders between the two countries. First, in both countries there is a common trend of adopting higher levels of stakeholder empowerment in the need definition phase, in addition to the formal requirements of the planning regulations, as shown by the innovative projects. However, the two countries have different ways to empower stakeholders in the early phase of the project, mainly using citizens’ workshops in France and additional stakeholder hearings in Norway. Second, both countries show higher levels of stakeholder empowerment in the spatial planning phase. Third, neither country increased stakeholder empowerment in the permitting phase. It is difficult to provide an explanation for these trends. We may hypothesize that this is due to an increased TSO awareness that taking stakeholders’ perspectives into account in the early stages of the planning process can avoid later bottlenecks and conflicts. This mechanism is also known as the ‘Participation Paradox’: over time the interests of stakeholders grow, while the possibility to influence the project decreases. Additionally, recent research showed that stakeholders and citizens are indeed willing to participate in planning processes. In this paper, we describe, evaluate and compare the planning processes for very high-voltage transmission lines in France and Norway by means of a document analysis. Building on previous research, we operationalize the degree of empowerment in three levels: information, consultation and cooperation. Our analysis of traditional electricity grid planning procedures shows lower levels of stakeholder empowerment in the early phase of the planning process than in the final one. This emerges as a common trend in both countries under study, France and Norway. The results for the innovative projects also reveal a common trend, but it goes in the opposite direction: innovative projects show higher levels of stakeholder empowerment in the need definition and spatial planning phases than in the permitting phase. These results open up several questions: why could we not observe very high levels of empowerment in the traditional processes? Why could we not observe innovations in terms of enhanced stakeholder engagement in the permitting phase? Why is there a tendency to increase empowerment in the early phases? In principle, citizens could be highly empowered in order to cooperate on equal terms on power line issues. However, as decisions about power line planning are usually started at the national or European level, it is unclear how to really empower citizens and how to reconcile their local interests with the national ones. We may hypothesize that the lack of enhanced levels of empowerment in the last critical phase, the permitting phase, reflects the difficulties in effectively addressing the conflicts between national strategic decisions and local protests. At the same time, our results show that, in their innovative projects, the TSOs tend to increase dialogue, engage with
stakeholders and address disagreement as early as possible in the procedure. Therefore, they adopt a ‘precautionary approach’ to anticipate local protests. The specific characteristics of power lines are another aspect affecting the reconciliation of local and national interests. Due to their linear structure, power lines have disadvantages compared to other infrastructures like wind turbines. In the case of wind energy, affected communities can directly benefit from the additional energy production in their area, in some cases stimulating grassroots initiatives to build wind turbines. In the case of power lines, the question of the added value at the local level remains open because the line usually goes through the land and the affected community does not directly benefit from it. Therefore, it is unclear how a bottom-up approach could actually reduce local protest in planning processes for power lines. However, what a bottom-up approach can do is to open up critical issues and make conflicts visible from the very early stages of the decision-making process. While traditional top-down approaches show their limits in terms of acceptance of power lines, some authors argue that stakeholder engagement should be tailored to the process, with the right level of empowerment for each phase of the planning process. Our results clearly go in the same direction, showing that there is no ‘one solution’ for the level of stakeholder empowerment that fits all the phases of the planning process. Another result that deserves further discussion is that France and Norway use different ways to empower stakeholders in the early phases of the project, mainly using citizens’ workshops for the former and additional stakeholder hearings for the latter. Considering the large number of engagement methods available, it also matters who organizes the process: in Norway, the formal hearings are carried out by the regulator NVE and are thus out of the direct control of the TSO Statnett. In France, Rte organizes the formal stakeholder participation, often under the monitoring of the National Commission for Public Debate. Although Rte emphasizes its role as a servant of the legislator and of its political vision for the future of the energy and electricity system, this creates a potential bias in the process, where Rte can be perceived as conducting the participatory process to legitimize a decision that has already been made. Nevertheless, a process owner, in this case Rte, can still allow independence of the process by defining how the conclusions of the participatory process will influence decision-making. On a different register, Statnett clearly states its stake in the future energy landscape, claiming as its guiding vision that the “future is electric”. Therefore, co-organizing the planning process while also being an interested stakeholder does not preclude a fair amount of independence, as long as interested stakeholders do not exclusively carry out the process. Therefore, if in the future TSOs need to play a greater role in advocating the development of electricity transmission lines over other forms of energy supply or distribution, a participatory process owned by a neutrally perceived body, as is the case in Norway with NVE, seems more appropriate. This has substantial implications for the existing legal frameworks for power line planning. As TSOs act more like private companies defending financial and technical interests, too deep an involvement as process owner may compromise the neutrality of the process and increase criticism from stakeholders. Therefore, our results suggest that a separation between the process owner and the
TSO may have beneficial effects on the process. Finally, although further empowerment of stakeholders and citizens may be considered a way to revitalize a stiff representative democracy, the question of the limits of stakeholder empowerment remains open. On the one hand, from a political perspective, a power line project might be perceived by some stakeholders as a form of tyranny of an expanding system in which they are treated as social outcasts. On the other hand, power line planning might be considered the result of a democratic process and of energy politics aiming at expanding the electricity transmission infrastructure. If these two conflicting perspectives are not addressed, stakeholders who feel neglected will continue to protect themselves and their assets through legal appeals, thus delaying the processes or even causing their failure. Increased empowerment is therefore a way to give stakeholders a voice in order to avoid decision deadlocks, blockades and legal conflicts. Avoiding these delays is crucial to speed up the grid development that is necessary for an energy transition toward a decarbonized European electricity sector.
The importance of grid extension in Europe has risen in the last decade as a result of an aging grid and the energy transition toward a decarbonized electricity sector. While grid extension is claimed as necessary, stakeholder opposition has slowed down this process. To alleviate this tension, increased stakeholder participation is considered as a solution to increase acceptance. The question of stakeholder empowerment is central to participation and it is assumed that higher levels of empowerment improve planning processes. In this paper, we describe, evaluate and compare the planning processes for very high-voltage transmission lines in France and Norway by means of a document analysis. We operationalize the degree of empowerment in three levels: information, consultation and cooperation. The results reveal low stakeholder empowerment that barely rises above the level of consultation. The evaluation of recent projects entailing innovations to enhance stakeholder participation reveals a trend of increasing empowerment levels, especially in the early phases of the planning procedure, i.e. the discussion about the needs for new lines and about the needs of the affected stakeholders. The results suggest that current planning regulations can benefit from high levels of stakeholder empowerment, especially in the early phases of the planning process.
161
Stories of the future: Personal mobility innovation in the United Kingdom
The future of surface transport is a much debated topic. Specifically, the current modes and volume of personal mobility are considered environmentally unsustainable, primarily because of greenhouse gas emissions from cars. In the UK, as elsewhere, there have been growing concerns about the implications of transport for climate change, but also for energy security, social exclusion, and public health and wellbeing. Personal surface transport in the UK has for decades been dominated by a system of automobility in which privately owned cars are seen as a right and a necessity; car-based mobility is linked to economic development; and norms, practices and institutions reinforce the role of cars in society. This makes the system resilient and resistant to change, as path dependencies and lock-in make shifting the transport system towards sustainability difficult. However, there are many innovations which could potentially make personal mobility more sustainable, from technical improvements to engines and new fuels, to new models of transport behaviour and car ownership. Creating future visions is considered part of the innovation process: it can raise expectations, lend legitimacy to an innovation, and generate support from stakeholders. This paper reports on a project on future visions of personal mobility in the UK, in the context of sustainability and emissions reduction, and the dominance of the private vehicle. Roughly 25% of UK CO2 emissions come from transport, nearly two thirds of which come from cars and vans. Two relevant innovations were considered in the study: electric vehicles, which offer a technological reduction in emissions but potentially keep other parts of automobility in place, and car clubs, which offer cultural and behavioural shifts, including severing the link between car use and ownership. There are other innovations of interest, with autonomous vehicles receiving attention recently. However, this study focuses on two innovations which are already well established in the transport system and therefore in policy-relevant literature. A collection of 20 documents looking at the future was analysed; these include forecasts, roadmaps, pathways, and more. We consider each document to be a future exploration. The explorations were created by various stakeholders in the UK transport sector, including government, industry, consultancies and transport coalitions. The research focused mostly on EVs, as they feature more prominently in most documents, with car clubs offering a counterpoint useful for examining underlying assumptions in the narratives. Another paper from this study focuses on how aspects of the future were imagined and how the visions served the agendas of their authors. This paper considers how the framing of the innovations plays an important role, with frames acting as building blocks in constructing stories told about the future, and how narratives, which weave together different frames, reflect and strengthen dominant worldviews and agendas. It concludes that analysing the frames underlying the stories helps unpick and challenge unrealistic expectations that might leave us unprepared for the future. Section 2 offers a theoretical background on the importance of visions in innovation, and describes the concepts of frames and narratives and their use in this paper. Section 3 outlines the methodology, including selection of documents and coding procedure. Section 4 reviews some of the common frames in the explorations, while Section 5 covers some demonstrative narratives found implicitly or
explicitly in the visions.Section 6 offers some conclusions and observations.Internal combustion engine vehicles are ‘regular’ vehicles powered by combustion of petrol or diesel.In contrast, electric vehicles are powered using electrical energy, most commonly stored in plug-in rechargeable batteries.The term EV sometimes also includes plug-in hybrids, which have both an electric motor and an internal combustion engine.Ultra-low-emission vehicles refer to any motor vehicle with very low emissions, including EVs, biofuel and hydrogen powered vehicles, and other technologies.Finally, car clubs are a form of shared mobility where members pre-book cars for short periods, often paying by the hour.Cars are typically picked up and dropped off at the same on-street location.Stories about the future are often articulated as visions.Visions can be powerful tools in public discourse and policymaking, because those that become widely accepted can shape expectations about the future, and therefore motivate actions in the present towards such a future .Such visions can be considered ‘shared imaginaries’, which have been defined as “collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology”.In the innovation context, successful vision creation can generate more support from a greater range of stakeholders, for example when rapid technological change is required, “technology promoters have much to gain by having ‘the public’ on-side rather than resistant to innovation”.More broadly, in order to be successful and effective, visions must attract credibility through realistic strategies and tactics, achieving the right balance between utopia and realism, and also have the potential to be open to new entrants .This paper takes the position that imagined futures are always normative, as they inevitably make assumptions about behaviour, economics, technological development, and more.Assumptions about innovations could include imagining the nature and behaviour of adopters and users, which could significantly shape innovation trajectories in envisioned futures.The visions literature can therefore be useful even for future explorations which do not intentionally pursue a political or other agenda.A variety of research points to the importance of creating expectations about the future of innovations, partly through the use of visions.Expectations can motivate and stimulate action on technological innovation among engineers, designers and managers, as well as among sponsors, investors and politicians .It has been further argued that expectations and visions are not separate from the technological innovation process, but a formative element of it.Ruef and Markard argue that actors sometimes strategically inflate expectations of new technologies to attract resources and attention.Once expectations are shared, they effectively act as requirements that cannot be ignored by other innovating actors, which can lead to a period of hype, during which media attention and expectations peak.A ‘herd effect’ is also possible as technological ‘solutions’ become more and less fashionable .Periods of hype are inevitably followed by a decline in expectations and attention, potentially leading to disappointment.While details of this ‘hype cycle’ model have been criticised, the concept of hype and disappointment is well established.For any 
innovation to be widely taken up, it must gain legitimacy, that is, it must be accepted by a consensus of a social group as matching their norms, values, practices and procedures .Sources of legitimacy are varied, and can include narratives, discourses, verbal accounts, and traditional and social media .Legitimacy can be crucial early in an ‘innovation journey’ for securing investments, ventures and policy support for new technologies, while later in the journey, legitimacy can maintain public and political support .Maintaining positive expectations after a hype and disillusionment cycle can be crucial to keep legitimacy intact .The next section turns to frames and narratives, which underpin the stories that help create and maintain expectations and legitimacy.This paper focuses on some of the building blocks of stories of the future – frames and narratives – and how they make these stories influential.Frames can be described as conceptual models that help us make sense of the world, or “basic cognitive structures which guide the perception and representation of reality” .This approach follows the cognitive sciences, which suggest we think in terms of unconscious structures – frames, with even our everyday thinking using metaphorical concepts that are deeply entrenched and culturally reinforced .Our knowledge makes use of frames, and our neural circuits mean repetition of frames makes them more ‘hardwired’ in our brains, connected to emotions and ideologies more than reason .Frames can structure “which parts of reality become noticed” .Lakoff suggests that facts must make sense in terms of the system of frames of the person hearing them, or they will likely be ignored.In this way frames might limit our understanding, e.g., we might have the ‘wrong’ environmental frame to understand the ‘real crisis’ .Public discourse can be thought of as which frames are being activated, and therefore strengthened.In sum, frames “are principles of selection, emphasis and presentation composed of little tacit theories about what exists, what happens, and what matters”.A more constructionist approach suggests that people such as journalists cannot tell stories effectively without preconceived notions .Van Gorp reviews how journalists need myths, archetypes and narratives to cover news events; culturally constructed and embedded frames are part of the journalist’s toolkit, as their symbolic meaning evokes other stories the audience is familiar with .Culturally embedded frames are ‘universally understood codes’ .In this approach, the storyteller chooses their frames, and in fact, Van Gorp states that ‘individuals can mediate the persuasive power of frames by using them’.The documents studied here are reports and reviews, but nonetheless the authors do have an audience in mind.This paper draws on both approaches.Frames found in the documents studied could be Lakoff’s cognitive tools which shape how we see the world, often unconsciously.However, they could also be Van Gorp’s rhetorical devices, with authors choosing to tell a story whilst invoking certain culturally understood codes that match their agendas.These do not contradict each other, as the authors, in Lakoff’s terms, seek to activate, and thereby strengthen, their preferred frames.Frames are not fixed, but can change over time, and be replaced in the long term as new frames are born and gain popularity.They can become ‘reified’ in various institutions and cultural practices; they will then not disappear until the institutions and practices change 
. König suggests two criteria for the viability of frames, that is, how likely they are to become culturally resonant: narrative fidelity, i.e., how congruent the frame is with the personal experiences of its audience; and empirical credibility, i.e., how well the frame fits with real-world events. In the context of new vehicle technologies, Ruef and Markard see frames as “overarching expectations which place the technology in the context of generic societal problems or visions”. Such expectations can direct technological innovation and create hype, as our stories affect reality. However, if they fail to deliver, the post-hype disappointment will lead to a loss of empirical credibility as the frame loses viability. In this study, frames were chosen for their salience in telling stories of the future, the assumptions or agendas they highlighted, or the difference between stories that resulted from choice of frame. Frames were chosen from economic, technological and other contexts, and included things like consumer choice, technology as progress, and sustainable transport. The methodology and chosen frames are detailed in Section 3.2. Beyond individual frames, this paper considers how frame combinations are used to tell stories of the future, and what part these stories play in policy-related discourses. A narrative can be described as a script structure which shows development in phases from emergence to problem to resolution, and could be constructed from a repertoire of frames. Narratives are a mechanism by which frames are circulated and reproduced, making them a rhetorical tool in the construction of group identity or in building powerful stories about the future. The focus in this paper is on how the narratives told make use of different frames, how they serve different agendas, and how they might alter expectations or lend legitimacy to different innovations or other ideas. Two broad narratives are apparent from the choice of study: the first is the emergence of climate change and the problem of surface transport CO2 emissions, which will be resolved through improvement of standard vehicles and the introduction of electric vehicles. This mainstream story is not surprising, given the role of automobility in our culture. The second defines the emergence of a broader sustainability problem, in which cars are implicated, and there is a shift to integrated transport and away from privately owned vehicles. This includes ‘car club narratives’, where clubs grow to fulfil their presumed potential, thereby maintaining access to mobility. The reduction in personally owned vehicles saves people money, reduces CO2 emissions, and also benefits local communities through reduced congestion and air pollution. Other, more specific narratives will also be highlighted, as will interpretations of narratives as ‘hero stories’, in considering how frames and narratives found in the collected future explorations of personal transport tell stories of the future. In this project, documents about the future of transport and mobility in the UK were analysed. The documents include reports and reviews prepared by, and on behalf of, a range of stakeholders in the UK transport sector, including government, industry, consultancies and transport coalitions. Over 40 relevant documents were found through web searches, journal articles, transport reports, and suggestions from colleagues. Documents were selected according to several criteria: They consider EVs’ or car clubs’ role in the future of personal mobility in the UK. They contain projections for the medium
term future, i.e., 2020s–2050s, a period long enough for a systemic shift, or ‘socio-technical transition’, in the transport sector ;,They were published in 2002–2015; this is a period during which hype over EVs increased to the point that some believe the automotive industry has chosen EVs as the ‘winner’ among low-carbon technologies , while car clubs grew from a few thousand members to nearly 200,000 in the UK, mostly in London , and there was a general increase in public discourse about low carbon transport .They were suited to in-depth textual analysis.The study ultimately focused on the 20 documents listed in Table 1.While not exhaustive, these come from a variety of different bodies, and were chosen to cover a wide range of political, technological, economic and behavioural assumptions and perspectives without too much repetition.Only a few of the documents focus exclusively on car clubs or EVs in the UK.However, the documents that focus on low emission vehicles, road transport or the UK economy as a whole all discuss EVs as the main route to transport emissions reduction, while only a few mention car clubs, usually grouped with local action or behaviour change as complementary action.The main focus of the research was therefore on EVs, with car clubs offering a counterpoint that helped highlight agendas and assumptions.Coding for frames in the documents was a mixed process of searching for pre-defined frames and identifying frames by recurring themes in the texts.The goal was to create a list of frames that matched salient themes in the explorations, helped identify agendas or prevailing assumptions, or differentiated between narratives.When choosing or defining frames, it is important to find the appropriate level of abstraction.Van Gorp suggests frames should be applicable to other issues beyond the specific topic.König suggests three well established ‘master frames’: the ethno-nationalist frame, which considers an ontology based on religion, culture etc.; the liberal-individualist frame, which considers individual freedom and equality; and the harmony with nature frame, which attributes intrinsic value to nature.Alongside them, König offers three ‘generic frames’: conflict, human interest and economic consequences.These are all too general for this study.In technology focused studies, Dijk uses more specific attribute framing in a study comparing perceptions of diesel, hybrid-electric and battery-electric car engines.These include technological attributes such as capacity, noise and efficiency, and other parameters such as social connotation and tax benefit.These are not sufficiently abstract, and deviate from the definition of frames used here.Ruef and Markard’s work on stationary fuel cells shows how frames refer to the role of the technology in society, and offers a level of specificity and abstraction appropriate to this study.König suggests there is a tendency in frames analyses to produce new, unique sets of frames, where almost anything can pass as a frame, meaning creating new frames is better avoided.This paper follows König’s advice and makes use of Ruef and Markard’s work in order to reduce creating new frames.Ruef and Markard list four contexts: society, environment, policy and economy.The emphases in this paper lead to a partially matching list of contexts: economics, technology, policy, transport and environment.Further, several of Ruef and Markard’s frames were used with slight variation as the basis of the frames in this study: ‘technological progress’, ‘market 
potential’, and ‘nations’, while others were less relevant due to differences in focus, such as ‘hydrogen economy’ and ‘decentralisation’. Where new frames were defined, they were limited to those which represent common themes in the texts, were relevant to transport and discourses of the future, and were useful for analysis. Van Gorp advises using frames that are mutually exclusive to improve reliability of analysis. However, this study found that using overlapping frames offered a more nuanced analysis, and notes that Ruef and Markard use their listed frames in combination with each other. It is acknowledged that the choice of frames is inevitably arbitrary and subjective to some extent. Ultimately, frames were chosen that either contrast different narratives or highlight salient perspectives in the documents. The list of frames in their contexts, along with an indication of the narratives in which they are used, is in Table 2. Finally, in order to define identified frames more rigorously, a thematic coding method based on Boyatzis was used, as demonstrated for the identified frame of technological breakthroughs, see Fig. 1. There is a large variety of frames in the documents, with economic and technological frames the most prominent. This section shows how each identified frame is used. Economic frames are prevalent in the explorations, with personal transport seen as tightly coupled with the economy. These frames are mostly rooted in free-market liberalism, and are uncritical of it, e.g., assuming economic growth is good and market mechanisms find optimal solutions. One of the most common frames is economic growth, often used to justify or support an innovation by predictions or assumptions that it will deliver economic growth, new jobs, or other tangible economic benefits. This frame might be used to highlight the challenges of continuing economic growth while reducing emissions, as the Department for Energy and Climate Change suggests in a report to Parliament: By 2050 the transport system will need to emit significantly less carbon than today, while continuing to play its vital role in enabling economic growth. It can also be used to promise that low-carbon vehicles offer economic opportunities, as in this report from the independent Energy Saving Trust: Establish the UK as a base for the engineering and manufacturing of low carbon vehicle technologies which could support the creation of new job opportunities. While the framing of car clubs as providing economic benefits is less common, car club focused visions arguably follow a similar pattern, with innovation leading to economic growth. However, as car clubs are less established than technological innovations in transport, more justification is needed, as demonstrated in this consultancy firm report on car clubs: The typical London round-trip car-sharing member that disposes/defers car ownership saves £3,000 per year… This cost saving releases more disposable income that can be used more productively in the local London economy. Another common frame can be called markets, a framing that assumes market activity will find the ‘best’ solution or alternative among several options. It follows that creating markets is a vital step, as expressed by the Government-commissioned ‘King Review’ of low-carbon cars: “bringing existing low emission technologies from ‘the shelf to the showroom’ as quickly as possible” and “ensuring a market for these low emission vehicles”. Further, markets must be maintained through avoiding uncertainty and reassuring consumers, as a report
from the thinktank Institute for Public Policy Research highlights:This creates uncertainty for the ULEV market, as potential consumers need to know that their purchase is contributing to the decarbonisation of the economy.Finally, consumer choice is a frame which presents choice as an inviolable right, tied to notions of freedom; however, many explorations effectively limit consumer choice to choice of vehicle purchased.This frame is often used in modelling, where consumer choice is a proxy for behaviour, and used to calculate future projections or results of proposed interventions.In these models consumers are often framed as rational economic actors, as in this modelling exercise from the EST:It is then assumed that each consumer makes an assessment of the various attributes of the vehicles, to define an overall score for each vehicle.The consumer chooses the vehicle with the maximum utility.Choice is sometimes invoked as a reason for intervention, e.g., enabling new technologies to reach markets so consumers can choose between them, as in the King Review quote above.On the other hand, it is sometimes invoked as a reason for lack of intervention.For example, regarding car clubs, the King Review argues for ‘promotion’, but not market creation:Promotion of car clubs by central government could increase awareness and enable people to make informed decisions over whether car clubs could be appropriate for them personally.This suggests an underlying agenda of techno-optimism, where technological innovations are considered central to reducing emissions, but other shifts to more sustainable modes of behaviour are marginal.The seeming frustration at lack of EV uptake strengthens the idea that low carbon vehicles are seen as the ‘correct’ choice for consumers.Technology as progress is a frame in which technological development, and especially innovation, is seen as synonymous with progress.Ruef and Markard have a similar frame ‘technological progress’, which also suggests a ‘better future world’, as progress is assumed to be purely positive.One of the uses of this frame is the assumption that challenges can and will be solved with technological solutions.This focus on technological solutions marginalises other policy options, such as behaviour change or cultural shift, as irrelevant, unnecessary or extremely difficult.For example, a modelling exercise commissioned by the former Department for Business, Enterprise and Regulatory Reform and the Department for Transport projects emission futures as EVs and hybrids penetrate the market, but the projections do not change car demand, mileage or journey profiles, despite discussing the geography of EV spread.Similarly, the Energy Saving Trust model the uptake of different low-carbon vehicles under different policy scenarios; despite a wide range of parameters, consumer behaviour is modelled only in choice of car purchase.While these explorations acknowledge the limitations of their models, the framing of emissions reduction through market transformation to low-carbon vehicles, with no social or cultural changes, ignores the co-evolution of norms and practices with technology .Technology as progress reassures us that even if one technology fails, another will succeed.For example, the Committee on Climate Change analyses EVs’ potential, but then suggests that if EVs prove unsuccessful through 2030, hydrogen vehicles will penetrate the market more.These ideas support – and are supported by – a narrow definition of sustainable transport as reduction in 
carbon emissions. A different frame could be called technological breakthrough. This is a frame which casts doubt on the ability of technology to deliver until some – usually unlikely – breakthrough is achieved, i.e., current technologies are insufficient or insufficiently developed for the predicted or desired vision of the future. This frame applies specifically to plausible but uncertain futures, highlighting the non-trivial advances that assumptions about the future might imply. This might seem incongruous with the general techno-optimism of many visions, but this frame is applied to specific technologies, used to cast doubt on their ability to deliver, but not on the ability of technology as a whole to succeed. Nonetheless, there is a certain irony in invoking notions of technological failure, or the endless wait for a breakthrough, in techno-optimistic explorations. This frame is sometimes used to stress uncertainty or cast doubt on a given innovation’s potential to become widespread and mainstream. Doubt about EVs’ potential is present in some of the earlier explorations in this study, e.g.: Despite decades of battery development… Outside niche markets, the future of the electric car does not appear optimistic… Struggles with the development of dedicated battery cars have led car manufacturers to divert their efforts… Some explorations suggest shorter term emission targets can be fully met through improvements to internal combustion engine vehicles, using the uncertainty around ultra-low emission vehicles to strengthen the message. In the longer term, ULEVs might be inevitable, and the same framing of technological breakthrough is sometimes used to stress great potential and great challenges ahead, as the King Review demonstrates: In the longer term, possibly by 2050, almost complete decarbonisation of road transport is possible. This will require breakthroughs in battery and/or hydrogen technology and a zero-carbon power source for these vehicles. Bucking the trend of technological optimism is the Foresight work, in which drivers and trends of transport futures were developed in workshops with “Experts from the research community, business and the public sector”. Scenarios were based on two key uncertainties: whether low-environment-impact transport systems would be developed and whether people would accept intelligent infrastructure, which could respond ‘autonomously and intelligently’. This outlier contradicts assumptions made in most of the explorations: that technological progress will necessarily prevail, and that people will accept it. These stories dare to present technological failure and even dystopian futures, in sharp contrast to all other explorations. A technological narrative implicitly drawn on but not discussed is the linear model of technological innovation and diffusion. This model suggests that innovation follows a linear route, beginning with basic research, followed by applied research and development, and ending with production and diffusion. This narrative is a powerful rhetorical tool, connecting several themes together. For example, in the studied explorations, development of EVs is a preliminary requirement before marketisation, with barriers to uptake, or demand more broadly, addressed last. The linear model has been critiqued for, among other things, drawing a strong distinction between the social and the technical, and thereby lacking the socio-technical context in which technological innovation happens, but is nonetheless widely used. While it is not universally
understood, and too specific for this paper’s definition of a frame, in some contexts it might act as one, as “it is a thought figure that simplifies and affords administrators and agencies a sense of orientation when it comes to thinking about allocation of funding to R&D”. Two related political frames are identified. The first is the greater good, which frames action as something that ‘should’ be done. It is marked by an appeal to national or local pride, such as London’s transport reputation: London’s transportation network is regularly benchmarked globally, having consistently invested in and promoted sustainable mobility initiatives that have achieved modal shift away from the private car to sustainable travel, public transport in particular. It can also be a call to duty or a call for leadership, as when the King Review tells us that: The UK can and should lead by example…. Ruef and Markard have a similar frame, ‘nations’, which focuses on the opportunity for a nation to stay competitive. Beyond this general call for action, there is a more specific frame of responsibility, which allocates roles to actors from different sectors, makes specific policy recommendations, or simply calls for actors to ‘do their part’. This frame appears most commonly in reports for the government, such as the EST and the King Review: Policy recommendations aimed at ensuring that government, industry, the research community and consumers all contribute to reducing carbon emissions from cars… The majority of the explorations frame transport as continued automobility, seeing the future very much as the present in terms of mobility dominated by privately owned cars, with high transport demand. Explorations which use this frame analyse the future in terms of the changes in vehicle technology or fuel, with projections of future emissions calculated from uptake of EVs or other low-carbon vehicles and improvements to ICEVs. Other changes, such as modal shifts or reduced travel, are considered complementary or marginal in effect. Sustainability is portrayed as predominantly about reducing greenhouse gas emissions, allowing high demand to continue, as long as cars can be made ‘low-carbon’. A competing frame might be called sustainable transport, framing current transport as unsustainable beyond the need for emissions reduction – not surprisingly used in car club related explorations. In this frame, the current transport regime and trends are framed as problematic, due to congestion, air pollution, equity and accessibility, future population growth and more. This implies that reducing car numbers and usage is desirable or even necessary, as this report from Transform Scotland puts it: The essential prerequisites for expanding car club membership in Scotland … addressing the need for changes in attitudes in our “car culture”. However, the explorations that use this frame stop short of calling for a car-free culture, as the London-based Car Club Coalition shows: Car clubs will play an important role in reducing the need to have a car because they offer an alternative to conventional car use models and can reduce habitual car use while still enabling access to a car for essential journeys. This section showed how individual frames reflect worldviews, assumptions and agendas, and how these come into play in building the stories told in the future visions. The next section considers putting frames together in those stories. This section looks beyond individual frames to consider how narratives weave frames together into powerful discourse. The aim is to
show how they are used by different actors, and to analyse how different frames are used in constructing them.Three examples of narratives are highlighted, which might be called the good, the bad and the neutral.A recurrent theme in the explorations is the story of success.The scenarios and projections almost always detail the ‘good news’, i.e., how targets and challenges can be met, or how innovations will succeed.The narratives are often excused as heuristic tools and not predictions, but nonetheless there is a noticeable lack of scenarios that detail missing emission reduction targets, and how such situations might be dealt with.Reducing emissions is often framed as a difficult challenge, which can be met through strong action.For example, the Committee on Climate Change reports to Parliament that:The fact that electric vehicles play a potentially major role in moving to a low-carbon economy therefore poses a challenge … However, there are a number of promising developments which provide confidence that this challenge can be addressed.These narratives invoke political frames such as the greater good and responsibility, but as always, economics are important, with consumer choice and economic growth frames used, for example:The key challenge in transport is decarbonising travel in a way that is both cost effective and acceptable to consumers.One of the challenges these stories face is the electricity grid itself, and its carbon emissions, as in this consultancy report on the future of low-carbon cars and fuels:And then there is their future greenhouse gas reduction potential, which relies largely on decarbonisation of the grid.Although this is strongly implied by the Climate Change Act 2008, it cannot be accepted as a given …,While these narratives do not promise success, in not discussing the possibility of failure, they seem to suggest that a shift away from fossil fuel vehicles is inevitably on the way, despite presenting it as a challenging task.This could be explained by the need to raise expectations and gain legitimacy and support for innovations in low-carbon vehicle technology, or more broadly to create shared visions of the future in which these vehicles succeed both in reducing emissions from surface transport and in maintaining the automobility system.The success stories could be classified as hero stories in which society is saved by new technologies; the technologies themselves are the hero .Janda and Topouzi draw on Vogler’s modernised map of a hero’s journey.An early stage in the journey is ‘crossing the threshold’, when the hero commits to the challenge, leaving the ‘ordinary world’ for a ‘special world’ with unfamiliar rules.Janda and Topouzi interpret this as the ‘imaginary world’ of technical potential.The success narratives, which draw on technological promise and pay less attention to social context and behaviour, match this interpretation well as ‘there is no need for people to change because the technology will make the necessary changes for us’.A notable exception to the success stories comes from the National Grid exploration, which focuses on electricity use throughout the UK economy.It provides four scenarios according to higher or lower future prosperity and higher or lower social ambition to decarbonise the economy.Only one scenario, where society is environmentally engaged with higher prosperity, leads to meeting the UK’s emission reduction targets on time.The IPPR report, which uses the National Grid’s scenarios, is more candid than most about the 
difficulties in meeting the targets: Representatives from the automotive industry expressed frustration at the rate of progress of the energy industry in preparing the UK for a transition towards ultra-low emission vehicles… expressed frustration that ULEVs are not designed with the UK electricity network in mind. These setback stories could be seen as learning stories: these take place in the messy real world, with protagonists who are normal people rising to a challenge. In the energy context, the learning story happens in the gap between technical potential and what is actually achieved. Car club explorations also use narratives of success. First, there is an optimistic portrayal that car clubs will grow and prosper, even without policy support, and more so with it. There appears to be some hype in the predictions, as one exploration looks at scenarios for 500,000 members in Greater London by 2020, and a ‘more ambitious’ 1 million users by 2025. In Scotland, projections for 2014/15 of up to 13,000 members without policy support proved overly optimistic, with actual numbers around 7,600. Second, these narratives build on the sustainable transport frame in emphasising how car club success could deliver sustainability – a future with fewer cars but high access to multi-modal mobility, with diverse social, environmental and economic benefits. Again, this optimism could be explained by the need to create expectations and gain support for car clubs. The marginal role car clubs play in most of the explorations, despite having more users than there are EVs on the roads, suggests they have less legitimacy as a ‘proper’ form of transport that can play a part in a shift to sustainability. An implicit theme emerging in some of the more recent explorations is addressing low sales numbers of EVs, with narratives suggesting this is a ‘demand side problem’ and attempting to understand the lack of consumer uptake. An exploration from consultancy Ecolane expresses this most clearly, claiming that car manufacturers are rising to the challenge of delivering low-carbon cars for the future, and that consumer demand must now be addressed. This logic limits behaviour to choice of car purchased, and is in line with the linear diffusion model, portraying the technological innovation process as separate from the socio-cultural process of diffusion, with the need to tackle ‘barriers’ to uptake – both financial and non-financial. Some attempts to understand low uptake use the economic frames of market solutions and consumer choice, suggesting failure to purchase EVs is due to their high upfront cost and consumers having ‘high discount rates’. There seems to be – in explorations written by or for government – frustration that consumers are not behaving as rational actors: they fail to recognise future savings from EVs’ low running costs. This behaviour is described as ‘sub-optimal’ or ‘myopic’, as purchasing an EV would seem to be ‘in their own interests’. On the other hand, several explorations highlight the fact that consumer purchase of vehicles does not follow the rational economic model, as buyers value certain vehicle features even if these do not give significant savings. In addition, brand loyalty and symbolism such as status and identity play a role in car choice. More recent explorations problematise users in another way. While the range of EVs is portrayed as a barrier throughout, only a few recent explorations use the phrase ‘range anxiety’, portraying this squarely as a user issue rather than a technical challenge. This
dichotomy is also found in how users are framed in a recent Norwegian study on stakeholders’ imagining of EV users : on the one hand, users were portrayed as rational actors concerned with cost, but on the other, range anxiety was believed to be a major concern – and was seen as an irrational fear, a psychological barrier that would disappear with experience of EV use.This dual framing of the public is discussed by Barnett et al. , who contrast public concerns that could be addressed, which were seen as understandable or rational, such as cost and performance, with situations where the public was seen as ‘blinkered’ or having ‘misconceptions’, of which range anxiety could be an example.The inconsistent framing of people indicates clinging to certain frames or narratives that have lost empirical credibility – a sign of not accepting facts that do not match their system of frames .There also appears to be an element of the automotive industry shifting risk and responsibility by portraying the lack of uptake of low emission vehicles as a problem for the state or civil society, whilst still portraying behaviour narrowly as choice of vehicle.The EV hype from 2005 onwards might have met disillusionment due to low sales in following years, so shifting perception to present the technologies and industry as successful might help maintain the positive frames and legitimacy needed for ongoing political and stakeholder support.Going back to the hero’s journey : in the final stages of the story, the hero returns to the ‘ordinary world’ and faces last minute dangers.The frustration at lack of demand could be interpreted as a final test after the demons were thought to be conquered.The problematisation of users after manufacturers rose to the challenge could even be seen as the ‘purification’ of the hero: the technology is flawless and must not be tainted by the users.Alternatively, the frustration could be seen as part of a learning story , as the ideal technologies encounter the real world.Several documents refer to the importance of government, or government policy, being ‘technology neutral’.Technology neutrality is a narrative in which policies and regulations should not choose winners among technologies offering essentially similar services or solutions, as the markets can – and will – choose the most successful options.In our context, this means the best options of the fuel and engine technologies available to deliver a reduction in greenhouse gas emissions will win out.Several explorations written by and for government, or by consultancies, suggest the automotive industry supports technology neutrality , e.g.:There is a strong sentiment in the vehicle manufacturing community that interventions by Government should be technology neutral.Some government written documents, in turn, support this call :We cannot say for sure which technologies will emerge as the most effective means of decarbonising car travel, so it is essential that the Government takes a technology neutral approach, allowing us to achieve emissions reductions in the most cost effective way.The conscious, active reinforcement and reassurance of this narrative are what distinguish it from a frame: frames do not require justification and explanation.Unpacking the narrative suggests it frames sustainable transport narrowly as reducing emissions, and relies on frames including: technology as progress, continued automobility, markets, and consumer choice.It also links to the hesitation of the state to pick winners in a free market 
paradigm, perhaps invoking bad experiences of the state picking early winners in the past.This can be seen, for example, in calls to gradually reduce tailpipe emission targets or regulations, allowing for ICEV improvements alongside uptake of EVs and other ULEVs, rather than show explicit support for EVs.Hesitation to back EVs has been observed in Sweden as well .While technology neutrality could be seen as simple prudence in the face of unknown development, it is important to recognise that it is a policy that favours incumbent firms and established technologies which do not require public support.It could be seen as a delaying tactic protecting ICEVs in the short term.In addition, the dialogue between policymakers and industry reinforces the current transport regime .Overall, there is much to suggest that this narrative is a political tool serving agendas of incumbent powerful actors.This paper reviews frames and narratives used in explorations of personal transport futures in the UK, considering perspectives from frame analysis, visions and expectations, and stories of the future.This section offers some summarising thoughts and conclusions.There are, not surprisingly, differences between the stories told by incumbents in what might be considered a ‘central narrative’ of the future, and alternative stories, such as car club narratives.The central narrative portrays the future as an unbroken continuation of the past and present, including continued car culture and high transport demand, relying on technological progress to meet a narrowly defined sustainability agenda.This story relies on several frames as building blocks – economic growth, technological progress, consumer choice, and automobility, and invokes political and technological narratives such as the ‘linear model’ and ‘technology neutrality’.It creates narratives of success which tend to reinforce incumbent power whilst promising sustainability in the form of lower emissions.By contrast, car club focused visions discuss integrated transport yielding local and national benefits, with greater change to how we travel, building on frames of economic growth and consumer choice, but questioning frames of automobility and narrow definitions of sustainability, and hardly relying on technological progress.Systems of frames underlie political ideologies, worldviews and moral codes .Lakoff’s work suggests that in politics, conservatives make better use than liberals of a unified, self-reinforcing worldview, through long-term framing of issues, connecting the rhetorical and the cognitive .This paper shows how different frames are used together in stories of the future, relying on and reinforcing each other.Most important is the belief of strong links between technology, mobility and the market economy, best expressed in the King Review:Technological progress has been fundamental to furthering the universal objectives of growth and mobility.It has also enabled a major, global industry to prosper in its own right.This, then, is the reinforcement of an entire worldview through the repeated framing of issues.This ‘dominant worldview’, in which mobility and technology are intimately linked to economy, can more easily accommodate privately owned electric vehicles than car clubs, which require institutional and behavioural shifts.Put in terms of the hero story, a clever technology can save the day without need for behaviour change .The frustration found in some explorations at low uptake of EVs, contrary to the ‘rational economic actor’ model, 
matches Lakoff’s description of framing limiting our understanding. Some documents go as far as trying to fit reality into their system of frames, by suggesting consumers need education about the whole-life cost of vehicles. However, the dominant worldview is not homogeneous: there are also explorations that acknowledge that a more nuanced understanding of behaviour is needed; it is worth considering how to strengthen the currency of those stories. Electric vehicles now appear to have a promising future in the UK: sales have risen sharply in recent years to over 28,000 in 2015, making up 1.1% of sales, and they feature prominently in the government’s recent Industrial Strategy Green Paper. However, this has not been a smooth road. This hero’s journey arguably began with the realisation of the threat of emissions reduction: policy makers have come to realise the difficulties in reducing greenhouse gas emissions from the transport sector, particularly given the strong linkages between transport consumption and economic growth. However, the transition to EVs is far from trivial, in terms of both infrastructure and consequential changes beyond the technological shift. EVs are a threat to some incumbent actors, through changes in supply chains, the repair industry and fuel supply, and more generally the risk of great change which not all incumbent manufacturers will survive. In order to become acceptable to the dominant worldview, EVs are framed as a straightforward substitution for internal combustion engine vehicles, leaving automobility virtually unchanged. This assumes behaviour to be fixed and independent from technological artefacts, whereas in reality, norms and practices co-evolve with technology. A German study suggests there are discrepancies between visions of the future based on sustainable electric mobility and strategies rooted in the current regime: portraying EVs as a ‘techno-fix’ could jeopardise ‘deeper’ sustainability transitions to lower car dependency, and moreover, this innovation cannot simply replace ICEVs without significant change, perhaps even redefining the role of the car in society. So, while building on accepted frames has gained EV narratives legitimacy by building on expectations of continued automobility and lowered emissions, these stories have played down both the potential for greater change from electrified transportation and the inevitably disruptive transformation process. This suggests that even if current efforts to electrify transport are successful, they might leave us unprepared for the future. This can be seen as a precautionary tale of not appreciating the inevitable disruption and unpredictable socio-cultural changes from technological innovations. It is worth considering what is missing from the explorations analysed. One omission from both the main narratives is the possibility of a post-car society. Indeed, even ‘peak car’ was not discussed in any of the documents. The car club narrative suggests severing the link between ownership and car use, retaining the functionality of car journeys where necessary. This goes against current trends, where the functionality of car use as a means of transport is arguably declining, especially in car-saturated urban environments in the developed world, although its cultural value is still great. However, others suggest the image of the car has been reduced from an ‘icon of modernity’ to a more utilitarian perspective as ‘an ordinary piece of household equipment’. The omission of this entire debate attests to the strength of the
continued automobility framing.While beyond the scope of this study, autonomous vehicles could affect automobility through shifts in car ownership and image.However, the technology in itself does not guarantee sustainability in terms of CO2 emissions or access to mobility.Examples of broader stories missing from most explorations can be found in the future scenarios of the Foresight work .The potential for technological failure, resulting in greatly reduced mobility and transport no longer seen as a right, matches the horror story, the ‘tale no one wants to tell’.It is worth considering whether avoiding stories of failure leaves us overoptimistic about the challenges ahead.There are limits to this type of study, such as the inherent subjectivity of frame analysis.Moreover, each document analysed was written with various audiences in mind, and it is possible to read too much into them.Nonetheless, these are influential stories which shape public discourse about the future.If we do not question these stories, and how they were made, they might leave us overoptimistic and underprepared for the future.In Lakoff’s terms, if we think in terms of the ‘wrong frames’ we cannot fully understand the environmental crisis, much less act on it.It is worth considering how to “ the frames that are needed in the long run”, or in other words, how we plan a ‘cognitive policy’ for the future.Some suggestions can be found in Janda and Topouzi’s caring story, which aims to create greater engagement with the system around us and establish new social norms.Here, this could imply actors supporting deeper change advocating to policymakers and the public why a more sustainable transport system is worth the effort.This engagement with more actors, especially non-incumbent actors, can broaden the ‘we’ who plan the future and include more stories in public discourse.Alternative stories can take the form of social innovation, creating new norms and institutions that help maximise the benefits of ‘green’ technological innovations through recognising their social aspects .Social innovations for sustainability often sit outside mainstream socio-economic narratives, and find it hard to gain support and legitimacy.Nonetheless, they could offer much needed alternative frames and narratives for planning for the future.This work was supported by the Research Councils United Kingdom Energy Programme .Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of RCUK.
This paper looks at personal transport futures in the context of greenhouse gas emissions reduction, as portrayed in documents from various stakeholders in the transport sector. It analyses the role of frames and narratives in constructing stories of the future, through the lens of two innovations: electric vehicles (EVs) and car clubs. Most of the documents draw on technological progress to tell stories of a future similar to the present but with EVs or other low-carbon vehicles, while car club focused documents stress broader notions of sustainability. A number of economic, technological and political-related frames are identified, which are used in constructing and justifying these stories. Some frames, such as ‘economic growth’, are nearly ubiquitous. Narratives support and are sometimes actively supported by the stories, such as ‘technology neutrality’. Frames and narratives play a key role in creating stories of the future, and help create and maintain expectations and legitimacy of innovations. Frame analysis helps unpick and challenge unrealistic expectations that might leave us unprepared for the future.
162
The effects of variable renewable electricity on energy efficiency and full load hours of fossil-fired power plants in the European Union
In 2014, as much as 27% of the European Union's greenhouse gas emissions were caused by the combustion of fossil fuels in public electricity and heat production .In order to reduce greenhouse gas emissions and the accompanying climate change effect, the European Union aims for increased electricity production from renewable energy.Renewable electricity has increased over the last decade from a share of 14.4% in total gross electricity generation in 2004 to 27.5% in 2014 , and the projection for 2020 is 35% .The current share of renewable electricity is not evenly spread over the EU member states.Based on data from Eurostat , in 2014 the share of renewable electricity was highest in Austria, Sweden and Portugal and the share was lowest in Malta, Luxembourg and Hungary.EU-wide, hydropower is responsible for the highest share of renewable electricity.However, the largest growth in the past ten years is visible in the expansion of wind power and photovoltaics.Similar to the overall percentage of renewable electricity, the share of wind and PV is not evenly spread over the EU member states.The highest shares of wind in 2014 are found in Denmark, Portugal and Ireland, while the highest shares of PV are found in Italy, Greece and Germany .The variability in the electricity output of wind and solar energy technologies, caused by weather characteristics, has implications for transmission and distribution systems .These characteristics can affect up to 70% of daytime solar capacity due to passing clouds, and 100% of wind capacity on calm days .These uncertainties are much greater than the traditional uncertainties of a few percent in demand forecasting.Intermittency of variable renewable electricity sources becomes increasingly difficult to manage as their penetration levels increase .Currently, renewable energy variability is generally compensated by fossil-fired power plants being started-up and-shut down, ramped up and down, and operated at part load levels more frequently .This impacts the year-round average energy efficiency of these plants, which achieve maximum efficiency when they operate at full load .Especially older coal-fired power plants tend to have limited operational flexibility and cycling to operating at part load levels and increased start-ups of these power plants results in a lower energy efficiency due to the increased fuel consumption .In this study we look at how increasing VRE penetration has affected the full load hours and energy efficiency of European fossil-fired power plants.This will provide insight into the effect of VRE on the performance of fossil-fired power plants.A number of studies are present that estimate the effect based on modelling but, to our knowledge, there is no study which looks at what the actual effects have been so far.We do this by first reviewing literature in order to determine what the effect of VRE on the performance of fossil-fired power plants is according to modelling studies.Then we analyse the development of full load hours and energy efficiency of fossil-fired power plants in the EU1 in the years 1990–2014, as this period shows the emergence of significant levels of wind and solar in many member states.The paper is structured as follows.Section 2 presents effects of VRE on full load hours and energy efficiency, as found in literature.Section 3 describes the method used and section 4 gives the results of this study.Section 5 contains the discussion of uncertainties and lastly, section 6 gives conclusions.In this section we first discuss 
the impact of VRE on full load hours of fossil-fired power plants, as found in literature sources, and second the possible impact of load hours on the energy efficiency of fossil-fired power plants.Energy efficiency of power generation refers here to the ratio of yearly electricity output of a power plant and primary energy input.This is a year-round average efficiency.The efficiency is significantly affected when plants operate under off-design conditions, such as part-load operation and shut downs .Both are reflected in the number of full load hours of a power plant.This is defined as the total electricity output in a year divided by the installed capacity.Table 1 shows the effect of increased VRE penetration on full load hours of fossil-fired power plants as found in available studies.Multiple scenarios are included with increasing levels of VRE penetration and a base case.These studies all show a decrease in full load hours with increasing VRE penetration level.In general, the decrease in full load hours of coal-fired power plants is lower compared to gas-fired power plants.The highest decrease for gas-fired power plants was 27% when VRE penetration increased from 4% to 20% .The highest decrease for coal-fired power plants was 12% with increasing VRE penetration from 15% to 43% .The lower full load hours due to increased VRE penetration can have an impact on the average annual energy efficiency by increased part load operation and more start-ups/shut downs.Direct effects of increased VRE on energy efficiency due to part load operation are not available in literature.However, part load efficiency curves can provide an indication of the range of the effect of part load operation.Figs. 1 and 2 show available part load curves for coal and gas-fired power plants, respectively.The figures show that part load operation could impact efficiencies by typically up to 3–8 percent point for most coal-fired power plants and 10-17 pp for most gas-fired power plants, depending on the power plant type and the degree of part load operation.The amount of fuel used for start-up of a power plant depends on the status of the power plant.If a power plant has been shut-down less than 8–12 h ago, it is referred to as a hot start, between 12 and 48 h as a warm start, and between 48 and 120 h or more as a cold start .The longer the downtime, the higher the fuel consumption will be during start-up.In Table 2, two studies are listed in which the effect of a higher penetration of wind power on the number of start-ups of fossil-fired power plants were simulated.These changes in start-ups were translated into an effect on the energy efficiency of the power plants by using start-up fuel consumption data from Kumar et al. 
In order to do so, the energy efficiencies of coal-fired and CCGT power plants are assumed to be 43% and 55%, respectively, reflecting the average of the energy efficiency curves. For each scenario, a range of energy efficiency effects is presented: the left limit within the range applies if all start-ups are hot starts, and the right limit applies if all start-ups are cold starts. The studies model that a significant increase in VRE affects the number of start-ups per year. However, the relative impact on overall energy efficiency is calculated to be low, with a maximum effect of 0.5 pp for gas-fired power plants and 0.3 pp for coal-fired power plants. Since only two studies were found, the results cannot be generalized and in reality the effect may be different, depending on the number of additional start-ups. This section provides an overview of the methods applied in this study: calculation of the VRE penetration for each EU member state per year over the timeframe 1990–2014; calculation of full load hours of coal-, gas- and oil-fired power plants and of total fossil-fired power plants; calculation of the energy efficiency of electricity generation; and correlation and regression analysis. For each EU member state, the VRE penetration for each year in the period 1990–2014 was calculated by the following formula: VRE penetration (year i … N) = ((EW + EPV + ESTh) / ETOT) (year i … N), where EW is the gross electricity generation from wind; EPV is the gross electricity generation from solar photovoltaics; ESTh is the gross electricity generation from solar thermal; ETOT is the total gross electricity generation from all sources, both renewable and non-renewable; year i = 1990, year N = 2014. Based on the VRE penetration in 2014, three groups were made in which countries were aggregated to form groups with high, medium and low VRE penetration, presented in Fig. 3.
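As a concrete illustration of this penetration metric, a minimal Python sketch is given below; the data frame layout and column names are hypothetical stand-ins for the Eurostat generation series, not the study's own code.

```python
import pandas as pd

def vre_penetration(gen: pd.DataFrame) -> pd.Series:
    """Share of wind, solar PV and solar thermal in total gross generation.

    `gen` is assumed to hold gross electricity generation (e.g. in GWh), indexed by
    (country, year), with columns 'wind', 'solar_pv', 'solar_thermal' and 'total'.
    """
    return (gen["wind"] + gen["solar_pv"] + gen["solar_thermal"]) / gen["total"]

# Grouping into VRE-high/-medium/-low would then be based on the 2014 values, e.g.
# vre_penetration(gen).xs(2014, level="year") ranked and split into three groups.
```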
The groups will be referred to from this point onwards as VRE-high, VRE-medium and VRE-low. At the start of the timeframe, in 1990, the VRE penetration was negligible in almost all countries, except Denmark, where it was 2%. The reason to form groups is to be able to compare developments in fossil full load hours and energy efficiency for only three groups instead of individual countries. Grouping also reduces the effect of coincidences in single countries, making the results more robust. The aggregation is weighted, meaning that countries with high electricity generation have a higher impact on the results of a VRE penetration group than countries with low electricity generation. In general, small countries may compensate VRE output more easily with imports and exports and thereby limit the effect of VRE intermittency on fossil-fired power plants. The full load hours per year and per VRE penetration group were calculated for each fuel type separately and for fossil fuels in total. Besides the aggregated full load hours of fossil-fired power generation, coal- and gas-fired power generation were also calculated separately, since the impact of VRE on FLHs is expected to differ per fuel. Oil-fired power generation was not included individually since it only accounted for 2% of electricity generation in the EU in 2014. FLHx (year i … N) = (Ex / Cx) (year i … N) and FLH (year i … N) = ((Ecoal + Eoil + Egas) / (Ccoal + Coil + Cgas)) (year i … N), where x is coal, oil or gas; FLH is full load hours; Ex is the gross electricity generation in GWh per fuel; Cx is the total installed capacity of the fossil fuel plants; year i = 1990, year N = 2014. The electricity generation per fossil fuel was obtained from Eurostat. The capacity input data was obtained from the UDI World Electric Power Plants (WEPP) database. This database contains all power plants in the EU. The individual power plants were divided into the three fuel types based on their listed primary fuel in the WEPP database. For each power plant the database indicates the commissioning and, if applicable, decommissioning date. Most power plants had a commissioning date of 1 January. For the others, the listed dates were rounded off to the nearest year, meaning a power plant decommissioned on June 30, 2010 was considered to be offline for the whole year 2010, while a power plant decommissioned on July 1, 2010 was considered to be online for the whole year 2010. The WEPP database provided for this research is updated until 2011. For Austria, Belgium, Bulgaria, Cyprus, Czech Republic, Denmark, Estonia, Finland, Greece, Hungary, Ireland, Latvia, Lithuania, Luxembourg, Malta, Poland, Portugal, Romania, Slovakia, Slovenia and Sweden, the decommissioned and newly constructed power plants were manually edited and added to the database based on press statements and news articles. For the larger countries, national statistics and other sources were consulted to determine the capacities between 2012 and 2014.
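The full load hour calculation is likewise simple arithmetic. The sketch below shows one possible implementation under the same caveat that the table layout is assumed; since generation is in GWh and capacity in GW, their ratio is directly in hours.

```python
import pandas as pd

def full_load_hours(gen_gwh: pd.DataFrame, cap_gw: pd.DataFrame) -> pd.DataFrame:
    """Full load hours per fuel and for all fossil fuels combined.

    Both frames are assumed to be indexed by (group, year) with columns
    'coal', 'gas' and 'oil'; generation is in GWh and capacity in GW,
    so the element-wise ratio is in hours.
    """
    fuels = ["coal", "gas", "oil"]
    flh = gen_gwh[fuels] / cap_gw[fuels]
    flh["fossil_total"] = gen_gwh[fuels].sum(axis=1) / cap_gw[fuels].sum(axis=1)
    return flh
```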
The method for calculating the energy efficiencies per fossil fuel and VRE group was based on Graus et al. Autoproducer plants were not included in the energy efficiency calculations, as it was assumed that these plants mostly do not adjust their power output depending on VRE generation. Energy efficiency (year i … N) = ((E + s × H) / I) (year i … N), where E is the gross electricity output from main activity power and CHP plants; H is the heat output from main activity CHP plants; s is the correction factor between useful heat and electricity; I is the fuel input in lower heating value; year i = 1990, year N = 2014. The correction factor “s” represents the typical electricity production lost per unit of useful heat extracted from CHP plants. This formula thereby gives the estimated energy efficiency without heat output. Main activity CHP plants are assumed to dominantly deliver space heating to cities. According to Phylipsen et al., for space heating based district heating schemes the correction factor varies between 0.15 and 0.2. Therefore, similar to the energy efficiency calculation in Graus and Worrell, in this research a value of 0.175 was used. The input data was obtained from Eurostat. In order to assess which factors played a role, and to what degree, in the development of FLH and energy efficiency, Pearson's correlation coefficients were calculated and linear regression analyses were performed. Fig. 4 shows a causal diagram of the main factors impacting full load hours of fossil-fired power plants and energy efficiency. For each relationship, it is indicated whether the relationship is positive or negative. A distinction is made between factors on country level and plant level; for example, electricity generation regards the generation of the whole country, while the short-run marginal costs (SRMC) concern the cost of single power plants. Electricity generation, installed non-VRE capacity and VRE penetration directly affect the average full load hours of fossil-fired power plants in a country. A decrease in electricity generation will decrease the full load hours of fossil-fired power plants, assuming the non-VRE capacity and VRE penetration remain constant. Electricity generation can have an impact on full load hours especially when it differs from expectations. The planning of new power plants starts many years before commissioning and is based on expectations for electricity demand. Related to the economic recession, electricity demand has been lower than expected, leading to overcapacity in the electricity market. In order to take this effect into account, we calculated the yearly growth rate of Gross Domestic Product (GDP) in the period 1990–2006 and used this to estimate the GDP development without recession. Data for GDP were taken from the European Commission and calculated per VRE penetration group. Recession indicator (year i … N) = (GDP extrapolated from the 1990–2006 trend / actual GDP) (year i … N), where GDP is the Gross Domestic Product in market exchange rates and 2010 prices; year i = 2007, year N = 2014. We also took into account the electricity demand development per VRE penetration group, based on Eurostat.
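A sketch of how these two indicators could be computed is shown below. The efficiency function follows the definition above; the recession indicator is reconstructed here, as an assumption consistent with the text, as the ratio of trend-extrapolated to actual GDP (equal to 1 before 2007 and rising as actual GDP falls behind the pre-crisis trend). All data objects and names are hypothetical.

```python
import pandas as pd

S = 0.175  # electricity lost per unit of useful heat from CHP (Phylipsen et al.: 0.15-0.2)

def generation_efficiency(elec_out, heat_out, fuel_in, s=S):
    """Year-round efficiency (E + s*H) / I; all inputs in the same energy unit, fuel on LHV basis."""
    return (elec_out + s * heat_out) / fuel_in

def recession_indicator(gdp: pd.Series, last_pre_crisis_year: int = 2006) -> pd.Series:
    """Assumed form: expected (trend-extrapolated) GDP divided by actual GDP, set to 1 up to 2006."""
    pre = gdp.loc[gdp.index <= last_pre_crisis_year]
    growth = (pre.iloc[-1] / pre.iloc[0]) ** (1.0 / (len(pre) - 1))  # average annual growth 1990-2006
    expected = pd.Series(
        [pre.iloc[-1] * growth ** max(0, year - last_pre_crisis_year) for year in gdp.index],
        index=gdp.index,
    )
    indicator = expected / gdp
    indicator[gdp.index <= last_pre_crisis_year] = 1.0  # no deviation before 2007 by construction
    return indicator
```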
The SRMC of individual power plants affect the degree to which these power plants are utilized. For fossil-fired power plants, these are mainly influenced by fuel prices; for example, low coal prices compared to natural gas prices can reduce the operation time of gas-fired power plants. We therefore included the natural gas price and the coal price, both in 2010 prices and taken from the World Bank. Increased imports and exports can limit the impact of VRE's variability on FLHs. For instance, neighbouring countries relieved the German grid by consuming excess electricity generated from variable renewable sources. Also pumped hydropower storage, widely available in Norway where the total storage capacity is equal to 70% of the annual electricity generation, can be used for reacting to sudden changes in variable renewable energy output within the region. Since both imports and exports can be used to compensate for variability, they were added up and divided by gross electricity generation as an indicator. This is not to be confused with the concept of net imports, where exports are subtracted from imports to calculate the net amount of electricity consumption originating from outside the country. Data for imports and exports per VRE penetration group were obtained from the European Commission. Import + Export (year i … N) = ((Imports + Exports) / E) (year i … N), where E is gross electricity generation; year i = 1990, year N = 2014. Year-round average energy efficiency is mainly impacted by power plant characteristics, such as capacity age, fuel type, part-load operation and biomass co-firing. The average age of power plants was taken into account via the average commissioning year of the power plants, based on Platts and weighted by capacity size. Retrofitting can increase the energy efficiency of older power plants, but since no information is available on the amount of capacity retrofitted, this factor is not taken into account. Biomass co-firing in coal-fired power plants, which may decrease the energy efficiency, is also not included due to lack of data. The impact is expected to be limited since biomass accounts for only 5% of total electricity generation, while coal accounts for 25%. In this section the results for the three VRE penetration groups are discussed. These are split into full load hours and energy efficiency, after which the results of this study are compared with the results found in literature. In Fig. 5 the full load hours and VRE penetration of VRE-high are presented, and Fig. 6 shows the trends in electricity generation, installed non-VRE capacity and installed VRE capacity. The fossil full load hours decreased by 53% from 4773 hours in 2005 to 2264 in 2014, while VRE penetration increased from 8% to 25%. The fossil full load hours slightly increased in the period 1990–2005, mainly caused by the increase in full load hours of gas-fired power plants. Around 1990 the capacity of gas-fired power plants was limited and most of the installed gas-fired power plants were peaking units; from 1994 onwards the gas-fired capacity increased rapidly and base load units were added to the landscape. From 2005 to 2010 most of the decrease in fossil full load hours was caused by coal-fired power plants, of which the average full load hours decreased by 54% from 6173 in 2005 to 2897 in 2010. After 2010, the load hours of coal-fired power plants increased while those of gas-fired power plants continued to decrease, by 56% from 3654 in 2010 to 1613 in 2014. Fig.
6 shows that electricity generation reached its maximum in 2008, after which it decreases from 440 TWh to 394 TWh in 2014, a decrease of 11%.The non-VRE capacity remained more or less constant in this period at 108 GW, while the VRE capacity increased rapidly from 27 GW to 43 GW, equal to an increase in VRE penetration from 12% to 25%.In Fig. 7 the development of full load hours in VRE-medium are presented and Fig. 8 shows the development of electricity generation, VRE and non-VRE capacity.From 1990-2007 the fossil full load hours remained more or less constant.However, within this timeframe the introduction of gas-fired power plants utilized as base load units is visible.This caused the full load hours of gas-fired power plants to increase and the full load hours of coal-fired power plants to decrease, of which the total installed capacity remained more or less constant between 1990 and 2007.From 2007 onwards the fossil full load hours decreased from 4921 hours to 3264 in 2014.This decrease in fossil full load hours was mainly caused by gas-fired power plants, of which the full load hours decreased from 4921 in 2007 to 2329 in 2014.The average full load hours of coal-fired power plants fluctuated between the whole period of 1990–2014 around 5000.Fig. 8 shows that electricity generation reached its maximum in 2007 at 1565 TWh.The electricity generation decreased by 8.3% to 1435 TWh in 2014.Similar to VRE-high, 2008 acts as a turning point, as electricity generation started decreasing from this year onwards, related to the financial crisis.Within this same period, the non-VRE capacity fluctuated but ended at the same level in 2014 as in 2007, around 340 GW.The VRE capacity however, increased significantly from 33 GW in 2007 to 137 GW in 2014, equal to a VRE penetration increase from 3% to 13%.In Fig. 9 the development of the full load hours of the last VRE penetration group, VRE-low, are presented and in Fig. 10 the electricity generation, non-VRE capacity and VRE capacity.The fossil full load hours decreased by 32% from 4379 in 2007 to 2961 in 2014, mainly allocated to a 51% decrease in gas-fired capacity use from 4881 full load hours in 2008 to 2433 in 2014.The average full load hours of coal-fired power plants decreased by 23%, from 4883 in 2007 to 3768 in 2014.Within the timeframe of 2007–2014, where fossil full load hours were found to decrease, the electricity generation remained roughly constant around 1335 TWh.In this VRE penetration group, the effect of the financial crisis is only clearly visible in the year 2008.After 2008 the electricity generation restored to pre-2008 levels.The non-VRE capacity increased slightly from 282 GW to 294 GW and the VRE capacity from 7 GW to 37 GW from 2007-2014, equivalent to a VRE penetration increase from 1% to 5%.In Fig. 
11 the average energy efficiency trends of coal-fired power plants of the three VRE penetration groups are presented.The average coal efficiency of VRE-high was highest at the start of the timeframe due to the modernity of the installed power plants.The average energy efficiency in this year was 38%.During the earlier period in VRE-medium and VRE-low the energy efficiency increased.In VRE-medium the energy efficiency levelled between 38% and 39% from 2003-2014.In VRE-low this levelling occurred at a lower efficiency between 36% and 37%, but within the same timeframe.The overall lower energy efficiency in VRE-low was mainly caused by low efficiency lignite fuelled power plants in countries like Poland, Slovakia and Czech Republic .This lack of energy efficiency improvement from 2000 onwards can be explained by the trend in average year of commission in Fig. 12.Even though there is an increasing trend in average year of commission, indicating decommissioning of old power plants and/or commissioning of new power plants, the increase was low.In VRE-medium the average year of commission increased only from 1976 in 2000 to 1979 in 2011 and in VRE-low the average year of commission increased from 1976 in 2000 to 1981 in 2011.Since 2010 there appears to be a decreasing trend in the energy efficiency in VRE-high, from 40% to 38%, which is higher and more consistent than previous singular energy efficiency decreases and may be linked to the high FLH decrease of 28%.In Fig. 13 the trends of gas-fired power plant efficiencies are presented for the three VRE penetration groups.In all three VRE penetration groups the gas-fired efficiency increased from 1990 onwards.The low starting energy efficiency in VRE-high was due to gas-fired capacity only consisting of peaking units.The energy efficiency of VRE-low levelled between 43% and 45%, while the energy efficiency of VRE-medium levelled around 50%.The energy efficiency in these two VRE penetration groups stopped increasing from around 2008 onwards.In Fig. 14 the trends in average year of commission are presented and from this figure it can be identified that the average year of commission in both VRE penetration groups continues to increase from 2008 to 2011.The increase in average year of commission in VRE-medium is 2 year and in VRE-low 5 years.The lack of energy efficiency increase from 2008 onwards may partly reflect increased VRE penetration where gas-fired plants compensate for the intermittency of VRE, but is also linked to the identified decrease in electricity generation from 2008 onwards in the previous section, caused by the financial crisis, causing power plants to shut down or force into part load operation.In VRE-high the energy efficiency reached a maximum of 53% in 2008.However, from 2011 onwards the energy efficiency experienced a decrease til 48–49%.This decrease is higher and more consistent compared to the decreases in 2006 and 2009–2010.It is therefore plausible that in VRE-high the decrease in energy efficiency from 51% in 2011 to 49% in 2014 was mainly caused by the decrease of FLH.The relative share of import + export compared to the total electricity generation is presented in Fig. 
15.The hypothesis that in countries with high VRE penetration the share of import + export is highest is rejected by the results.The share of import + export was found to be highest in VRE-low, indicating other factors are of influence, such as electricity prices.From the full load hours and energy efficiency developments some general trends are visible:The increase in natural gas FLH in the nineties and in energy efficiency in the whole period, both due to the installation of new capacity.The increase of VRE penetration in the 2000s.The decrease in FLH for all fossil fuels after 2005 and 2007.FLHs of natural gas decrease most in the 2010s.The impact of the economic recession on electricity demand after 2007.The trends for VRE, full load hours and energy efficiency are summarized in Table 3.In order to assess the effect of the recession we calculated a recession indicator in section 3.4) which reflects the deviation of the trend for GDP in the period 1990–2006 with the development after 2007.For the years before 2007 this factor therefore is equal to 1.The ratio increases most for VRE-high, reflecting the largest deviation of actual GDP development from the historical trend.This could mean that the impact of the recession on full load hours is biggest for VRE-high.Table 4 also shows the growth rates for electricity demand that show similar developments as the GDP growth rates.In order to assess the impact of VRE on full load hours and energy efficiency we focus the correlation and regression analysis on the period of 2000–2014, where VRE grows most strongly.Table 5 shows the correlation of included factors with FLH and energy efficiency for natural gas and coal per VRE penetration group.Both the share of VRE and the recession indicator show a strong correlation with full load hours of natural gas for all three penetration groups and for coal only for VRE-high and VRE-low.This is because for VRE-medium FLH for coal remain the same.For electricity demand itself there is little correlation with FLH.This is because electricity demand does increase but much less than before the recession.Therefore the recession indicator reflects better the impact of the recession on FLH.For energy efficiency, the commissioning year shows a strong correlation for natural gas, reflecting the higher efficiency level for newer capacity.There is limited direct correlation visible for FLH and energy efficiency, likely because of the disturbing impact of capacity age.In the regression analysis we therefore correct for capacity age.Since we aim to assess the influence of VRE on FLH and FLH on energy efficiency we make separate regression models for both, see Table 6.For VRE and FLH we make two models one with the recession indicator and VRE penetration level included and one with only the VRE penetration level.For FLH and energy efficiency we include average commissioning year and FLH.The significant models and variables indicate that the impact of VRE on FLH is −270 to −138 h per percent-point increase of VRE for natural gas.For coal this range is from −138 to 110 h.The upper value is for VRE-medium where coal-fired full load hours do not decrease in the analysed period.For VRE-high, model 1 gives the recession as significant variable for the decrease in gas-fired FLH and the share of VRE is not significant in this model.This means that the trend of gas-fired FLH for VRE-high could be explained with equal significance by the recession as by the share of VRE.This is not the case for VRE-medium where the impact of the 
recession is not a significant variable in model 1, but the share of VRE is significant.Two models give a significant impact of the recession indicator on full load hours; one for natural gas and one for coal.They predict for a 10% lower GDP than expected a decrease in FLH of about 1100 ± 200–300 h.For energy efficiency the impact of average commissioning year is 0.3–0.8 pp for natural gas per year and about 2.4 pp impact per 1000 FLH change.For coal the relationships for energy efficiency were not significant.Combining the regression analyses we find that a 10 pp increase in VRE could lead to a 1400-2700 decrease in FLH of natural gas.This would decrease the efficiency of natural gas-fired power generation by 3.4–6.5 pp.Note that if average capacity age decreases by 10 years the energy efficiency level could still improve since this would increase efficiency by 3-8 pp.There are a number of uncertainties present in the statistics used and in the assumptions taken to calculate and compare the full load hours and energy efficiency of fossil-fired power plants in the EU-27.This section provides an overview of the main uncertainties.The data from the WEPP Database which was used in this research was updated until 2011, while the calculations of full load hours were made until 2014.National statistics were used for the countries France, Germany, Italy, Netherlands, Spain and the UK to determine the installed coal, gas and oil capacity from 2012-2014.These values were found to be largely consistent with the WEPP database.For some countries deviations were found, mainly for France and Italy.For these countries the percentage change was derived from the national statistics for each year within 2012–2014 and applied to the latest value available in the WEPP database: 2011.If the national statistics of these two highest deviating countries would have been used for the period 2012–2014, this would results in a 3% lower fossil full load hours in VRE-medium and 5% lower fossil full load hours in VRE-low for the years 2012, 2013 and 2014.These small percentage decreases would not affect the main results of the study.A second uncertainty in capacity data arises from mothballing of power plants.This concerns the taking offline of capacity if the electricity demand in a country is significantly lower than the total installed electrical capacity.The least profitable power plants are usually taken offline first.Mothballing is aimed at temporarily shutting down the power plant until the demand for electricity increases again.Systems are put in hibernation and protective measures are taken to make sure equipment is preserved and to prevent damage."This way the power plant's expenditures are cut down and controlled.In the 2010s many fossil-fired power plants in the EU were mothballed.This is in principle reflected in the national statistics used and the WEPP database, but it cannot be determined if all included power plants were actually online.Therefore part of the low load hours may be explained by offline capacity.Offline capacity can have an impact on the energy efficiency.The degree of part-load operation and increased start-ups/shutdowns would be lower than expected from the full load hour results.On the other hand many NGCC power plants were mothballed with high energy-efficiencies which could have a downward effect on the energy efficiency.The power plants in Platts were categorised into coal-, gas- and oil-fired power plants based on their listed primary fuel.However, for some power plants, a secondary 
fuel was listed.For example, in some coal-fired power plants, biomass was listed as secondary fuel.When this power plant is fuelled by biomass for a large period within a year, the electricity generated by the biomass is not categorised under a type of coal but under biomass, while the capacity is categorized under coal.This will have a decreasing effect on the full load hours of the coal-fired power plant, even though the power plant is not losing operating time.To provide an indication of the maximum possible effect, it can be calculated what the effect would be if all electricity produced from biomass would have been produced in coal-fired power plants.If in 2014 all electricity produced from biomass was generated in coal-fired power plants, the average full load hours of coal-fired power plants in the EU-27 would need to be corrected 10% upwards.In contrast to the full load hour calculations, where autoproducers were included as the WEPP database did not make a distinction between main activity and autoproducer plants, in the energy efficiency calculations autoproducers were excluded.This decision was made based on the assumption that autoproducers typically do not adjust their production depending on VRE output like main activity power plants do.However, since no other option was available, in calculating full load hours autoproducers were included.In 2014, 4.8% of the electricity produced from coal, gas and oil was produced by autoproducers.It is difficult to determine whether the full load hours of the average autoproducer are higher or lower than the average public power plant.However, it is likely that the full load hours of autoproducers remained more constant in the most recent years compared to the decreasing trend found in total fossil full load hours in all three VRE penetration groups, since industrial processes require a more constant electricity flow.Therefore it may be that the decrease in fossil full load hours would be slightly higher if autoproducers were excluded from the calculations.Uncertainties in energy efficiency calculations arise from the input data from Eurostat statistics regarding electricity generation, heat output and fuel input.Especially for smaller countries or fuels the uncertainty is greater.The advantage of using Eurostat statistics is that they present country statistics in a harmonized way.No other data sources that provide information in this manner are available.The EU member states were divided into three VRE penetration groups based on the VRE penetration in 2014.However, the VRE penetration groups were analysed from 1990-2014.During the timeframe of 1990–2014 the development of VRE penetration differed for each member state.In Fig. 
16 the development in each EU member state is presented.As can be identified from the figure, Germany for example was the country with the third highest VRE penetration from 2001-2005.But due to the lower growth after this period compared to Portugal and Ireland, Germany is allocated to the medium VRE penetration group.The decision was made to aggregate static, based on one year, as switching countries between 1990 and 2014 based on VRE penetration would cause high deviations in full load hour and energy efficiency trends in years where countries were switched.These high deviations would be caused by, for example, the different types of coal-fired power plants in each country which have unequal energy-efficiencies.The method used for aggregating countries into VRE penetration groups was based on weighted averages instead of considering each country equal and taking the average full load hours/energy efficiency of all member countries within a VRE penetration group.This decision was made to maintain large countries having a higher impact on the results within a VRE penetration group, as smaller countries may compensate VRE output more easily with import and export.This study aimed at determining whether the implementation of VRE had an effect on full load hours and energy efficiency of fossil-fired power plants in the European Union from 1990-2014.For this purpose we analysed the VRE penetration of each EU member state and aggregated the member states into three groups with different VRE penetration levels in 2014.These VRE penetration groups were then analysed based on full load hours and energy efficiency and compared to each other.In all three groups the fossil full load hours were found to be decreasing in the most recent period from 2005/2007–2014.The largest decrease was found in the penetration group with the highest VRE penetration: VRE-high, followed by VRE-medium and lastly VRE-low.In absolute numbers the decrease in full load hours found in this study were up to 3000 h for natural gas and 2000 h for coal.This is higher than the values in literature, where the biggest decrease for similar increasing VRE penetration levels were 988 h, 483 h and 346 h fossil full load hours.Both the share of VRE and the recession indicator show a strong correlation with full load hours of natural gas and coal.A linear regression analysis gives indications for the impacts of the share of variable renewable electricity generation on the average full load hours of fossil-fired power plants, which are up to −270 to −125 h per pp increase of VRE.These values are uncertain though since overcapacity is a factor that is difficult to estimate.The regression analysis shows that for VRE-high this factor can be equally significant in predicting the developments for natural gas.A 10% lower GDP than expected could reduce average full load hours by about 1100 h. For VRE medium no significant relation with the recession was found but only with the share of VRE and for VRE-low both factors were not significant.For energy efficiency, the commissioning year shows a strong correlation for natural gas.A linear regression analysis gives an impact per average commissioning year of 0.3–0.8 pp energy efficiency improvement.For full load hours the impact is about 2.4 pp per 1000 h. 
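As a back-of-envelope check, the snippet below combines the natural-gas point estimates quoted in the regression analysis into the headline figures; the values are taken from the text and the combination is purely illustrative, with small differences from the rounded numbers reported above.

```python
# Reported point estimates for natural gas (as quoted in the regression analysis).
flh_change_per_pp_vre = (-270, -138)   # full load hours per percent-point increase in VRE
eff_pp_per_1000_flh = 2.4              # efficiency change (pp) per 1000 full load hours

vre_increase_pp = 10
flh_change = [c * vre_increase_pp for c in flh_change_per_pp_vre]       # -2700 to -1380 h
eff_change = [f / 1000 * eff_pp_per_1000_flh for f in flh_change]       # about -6.5 to -3.3 pp
```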
For coal the relationships for energy efficiency were not significant. The value for natural gas is within the range found in literature, where the effect of increased start-ups on coal- and gas-fired power plants amounted to up to 0.3 pp and 0.5 pp, respectively. For part-load operation, a decrease of up to roughly 10 pp for coal-fired power plants and 20 pp for gas-fired power plants was found.
This study focused on the effects of variable renewable electricity (VRE) on full load hours and energy efficiency of fossil-fired power generation in the European Union from 1990-2014. Member states were aggregated into three groups based on the level of VRE penetration. Average full load hours are found to be decreasing since 2006 for all groups. The decrease is most in the group with the highest VRE penetration level with a 53% decline from 2005 to 2014 (while VRE penetration increased from 8% to 25%). For VRE-medium the decrease was 34% from 2007 to 2014 (while VRE increased from 3% to 13%) and for VRE-low 32% (with 1% to 5% VRE penetration increase). Both the financial crisis and the share of VRE show strong correlations with full load hours. Both can explain the developments for VRE-high. For VRE-medium no significant relation with the recession was found and for VRE-low both factors were not significant. For energy efficiency, the commissioning year shows a strong correlation for natural gas and less for coal. Significant impacts are found for average commissioning year and full load hours on the energy efficiency of natural gas-fired power generation but not for coal.
163
Waning protection of influenza vaccination during four influenza seasons, 2011/2012 to 2014/2015
Influenza vaccine administration is universally recommended for certain groups at risk of severe outcomes .The effectiveness of influenza vaccines is moderate , and potential waning of this moderate effect has been reviewed .During the 2011/2012 European influenza season, patients belonging to target groups for vaccination experienced a reduction of vaccine protection against influenza A virus infection .A similar effect during the same season was reported by other authors .Similar results were reported in the United Kingdom of possible waning protection against infection with type A influenza during the 2012/2013 season .One study using pooled data collected during five consecutive influenza seasons in Europe reported waning of the protection conferred by the influenza vaccine to prevent medically attended illness with laboratory confirmed infection with A and B/Yamagata lineage influenza viruses in outpatients .Similar results have been reported in a study during 4 consecutive influenza seasons in the United States .In most published studies, the main exposure variable has been defined as the number of days elapsed between vaccination date and symptoms onset date .Since this definition is related with the definition of the outcome variable, the validity of a reported waning effect of influenza vaccination using this covariate should be interpreted with caution .Since 2010, the Valencia Hospital Network for the Study of Influenza has conducted a prospective, hospital-based, active surveillance study in Valencia, Spain.We ascertain hospital admissions with influenza infection by reverse transcription polymerase chain reaction using pharyngeal and nasopharyngeal swabs, and we obtain vaccination data from the Valencia Region Vaccine Information System .This ongoing study has provided the opportunity to investigate whether influenza vaccine effectiveness waned in accordance with the calendar period of vaccination during four influenza seasons.The VAHNSI study methods have been previously described .During four influenza seasons, we identified patients who were consecutively admitted to the hospital with complaints that could be related to recent influenza infection.The number of participating hospitals varied by season, according to the available funds.The Ethics Research Committee of the Centro Superior de Estudios en Salud Pública approved the study protocol.Using incidence-density sampling , we enrolled all consecutively admitted patients aged 18 years or older who were non-institutionalized, residents for at least 6 months in the recruiting hospitals catchment area, not discharged from a previous admission episode in the past 30 days, with length of hospital stay less than 48 h, who reported an influenza-like illness defined as one systemic symptom and one respiratory symptom within 7 days of admission.All enrolled participants provided written informed consent.We collected sociodemographic and clinical information by interviewing patients or by consulting clinical registries and we obtained nasopharyngeal and pharyngeal swabs from all patients.Influenza infection was ascertained by RT-PCR at the study reference laboratory following World Health Organization laboratory guidelines for the diagnosis and characterization of influenza viruses .The current analysis was restricted to individuals who belonged to target groups for vaccination and who had received the seasonal influenza vaccine more than 14 days before the onset of ILI.We decided to include only vaccinated patients, from which we 
could obtain their vaccination date from the population-based registry VRVIS, in order to avoid the possible heterogeneity introduced by the uncertain vaccination status of unvaccinated individuals. Each hospital provides care to a defined population who are entitled to free health care. Each individual has a unique identification number that is linked to the VRVIS, inpatient and outpatient clinical records, and sociodemographic information. Influenza vaccines were offered free of charge to participants older than 6 months with high-risk conditions and to those 60 years or older with or without high-risk conditions. Information on the vaccine administered to all patients included in the study, in addition to the date of vaccination, was obtained from the VRVIS. The outcome variable was hospital admission with RT-PCR-confirmed influenza. The main exposure variable was the date of vaccination. The calendar date of vaccination was grouped into tertiles to report results by comparison of late and early vaccination. Participants who were vaccinated during the first tertile were considered early vaccinees and those vaccinated in the third tertile, late vaccinees. We used multivariable logistic regression to explore the impact of factors possibly related to early or late vaccination, such as age, number of chronic medical conditions, smoking status, socioeconomic level, recruiting hospital, and previous season vaccination. Covariates with p < 0.05 were included in our final model to estimate the waning effect. We considered waning vaccine protection to be present when the adjusted odds ratio (aOR) of hospital admission with influenza for late versus early vaccination was less than 1 and its 95% confidence interval did not include unity. The aOR of being positive for influenza was estimated using multilevel models. The following variables were used as fixed effects: age, sex, smoking status, socioeconomic level, number of chronic medical conditions, days from onset of symptoms to swab collection, number of hospitalizations in the previous 12 months, number of outpatient visits in the previous 3 months, and epidemiological week of admission. Hospital was included in the model as a random effect. Age was divided into deciles and modeled using restricted cubic splines. The epidemiological week was also modeled with restricted cubic splines. In both cases, the number of knots was chosen using the Akaike information criterion. In each of the four influenza seasons, we estimated the presence of waning vaccine protection to prevent hospital admissions with influenza overall and with the dominant virus subtype. We repeated these analyses restricted to patients aged 65 years or older and excluding participants vaccinated after the beginning of the season, to account for immortal time bias, which can create a spurious appearance of treatment effectiveness. In other words, in this sensitivity analysis we only took into account patients vaccinated before the influenza season, because including subjects vaccinated during the season could yield no waning effect simply because not enough time had passed to observe the phenomenon. Sparse numbers prevented estimates by influenza virus subtype for the age group 18–64 years or accounting for vaccination in previous seasons. We explored the consistency of our results by repeating the analysis using three additional approaches: a Cox proportional-hazards model, conditional regression analysis after matching by week of admission, and mixed-effects logistic regression taking as exposure the number of days from vaccination to hospital admission in two categories: 90 days or less and more than 90 days.
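A minimal sketch of the core comparison is given below, assuming a hypothetical patient-level table for a single season. It uses a plain logistic regression with B-spline terms as a stand-in for the restricted cubic splines and omits the hospital random effect of the multilevel model described above, so it approximates the approach rather than reproducing the study's actual model.

```python
import pandas as pd
import statsmodels.formula.api as smf

def waning_model(df: pd.DataFrame):
    """Approximate, fixed-effects-only version of the season-specific model.

    `df` is a hypothetical table with columns: flu_positive (0/1), vaccination_date
    (datetime), age, sex, smoker, ses, n_conditions, days_to_swab, admissions_12m,
    visits_3m, epi_week and hospital.
    """
    df = df.copy()
    # Tertiles of the calendar date of vaccination: early / intermediate / late vaccinees.
    df["vacc_tertile"] = pd.qcut(df["vaccination_date"].rank(method="first"), 3,
                                 labels=["early", "intermediate", "late"])
    formula = (
        "flu_positive ~ C(vacc_tertile, Treatment('early')) + bs(age, df=4) + C(sex) "
        "+ C(smoker) + C(ses) + n_conditions + days_to_swab + admissions_12m "
        "+ visits_3m + bs(epi_week, df=4)"
    )
    fit = smf.logit(formula, data=df).fit()
    # Exponentiating the 'late' tertile coefficient approximates the aOR of admission
    # with influenza for late versus early vaccination (waning if aOR < 1 with CI below 1).
    return fit
```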
Homogeneity among estimates was ascertained by I2, although this index may not be as reliable with such a small number of studies. Sensitivity analyses were restricted to participants aged 65 years or older and admitted with the dominant influenza strain as the outcome. All statistical analyses were performed using Stata version 14. In 2011/2012, we ascertained 1077 hospital admissions among participants who received the seasonal influenza vaccine between September 26 and December 27. In 2012/2013, there were 519 admissions among participants vaccinated between October 8 and January 23; in 2013/2014, there were 604 admissions among participants vaccinated between October 14 and February 18; and in 2014/2015, there were 1415 admissions among participants vaccinated between October 13 and January 21. In these four influenza seasons, respectively 293, 68, 106, and 355 patients were positive for influenza. The rates of vaccination before the start of each season were 99.7% in 2011/2012, 99.6% in 2012/2013, 98.2% in 2013/2014, and 98.7% in 2014/2015. Fig. 1 shows the evolution of the vaccination campaign among patients admitted to the hospital who were positive and negative for influenza by RT-PCR, as well as the ensuing epidemic influenza waves by epidemiological week, and virus type and subtype. The types of vaccine used, vaccination periods, dominant strain in each influenza season, season duration and peak, and participating hospitals are provided in Table 1. Of the four influenza seasons studied, A strains were dominant in the first and last seasons, B/Yamagata lineage viruses were dominant in the second season, and A(H1N1)pdm09 strains predominated in the third season. We observed a non-significant increase in the risk of influenza with age during the 2011/2012 A-dominant season. The risk by age group was homogeneous during the 2012/2013 B/Yamagata lineage-dominant season and the 2014/2015 A-dominant season. We observed a higher risk of influenza in the age group 50–64 years during the 2013/2014 A(H1N1)pdm09-dominant season. Admissions with influenza were less common among those with underlying medical conditions or among patients hospitalized in the previous 12 months. This result was statistically significant in the 2011/2012, 2013/2014, and 2014/2015 seasons, with a similar non-significant pattern during the 2012/2013 season. Earlier vaccination was not significantly related to an increased risk of a positive result for influenza in any season. However, the number of days elapsed between vaccination and hospital admission was higher among patients admitted with influenza in both A-dominant seasons and in the B/Yamagata lineage-dominant season, whereas this number was lower during the A(H1N1)pdm09-dominant season. Individuals vaccinated against influenza in the previous season were more likely to be earlier vaccinees during three of the studied seasons. Participants aged 60 years or older were more likely to receive earlier vaccination in the 2012/2013 and 2014/2015 seasons, whereas females received later vaccination in the 2013/2014 and 2014/2015 seasons. Participants with more than one comorbidity were vaccinated earlier than those with no underlying conditions in the 2012/2013 season, and smokers were vaccinated later during the 2013/2014 season. In the 2011/2012 season, late vaccinees had an aOR of hospital admission with influenza of 0.68. We found a more pronounced waning effect when we
restricted our analysis to participants aged 65 years or older and when analysis was restricted to admissions with the dominant influenza virus among patients aged 65 years or older.In the 2012/2013 season, late vaccinees had an aOR of admission with influenza of 1.18.A similar value was found among those aged 65 years or older and when the analysis was restricted to admissions with the dominant virus among patients aged 65 years or older.In the 2013/2014 season, the aOR of admission with influenza for late vaccinees was 0.98.A similar result was obtained among participants aged 65 years or older and when analysis was restricted to admissions with the dominant virus among patients aged 65 years or older.In the 2014/2015 season, late vaccinees had an aOR of admission with influenza of 0.69.A similar result was obtained among patients aged 65 years or older and when the analysis was restricted to admissions with the dominant virus among patients aged 65 years or older.We observed a significant 39% and 31% waning of vaccine effectiveness among participants aged 65 years or older during the two A –dominant seasons, respectively.Estimates were similar when we excluded from analysis those participants vaccinated after the beginning of the influenza season.The adjusted hazard ratio of admission with influenza for late vaccinees in the Cox proportional hazards model was 0.87 for the 2011/2012 season, 2.90 for the 2012/2013 season, 1.11 for the 2013/2014 season, and 0.72 for the 2014/2015 season.For hospital admissions matched by week, we found the presence of a waning effect during the 2011/2012 and 2014/2015 seasons, which was statistically significant only in the first season:.When taking days between vaccination and admission as exposure, we found a non-significant waning effect in the 2011/2012, 2012/2013, and 2014/2015 seasons, with a lower risk of hospital admission with influenza among individuals with 90 days or less between vaccination and admission, as compared with those who had more than 90 days between vaccination and admission.Estimates obtained by these different methods were homogenous for the 2012/2013, 2013/2014, and 2014/2015 seasons, with I2 < 25%; there was moderate evidence of heterogeneity for the 2011/2012 season.In this prospective, systematic, hospital-based, surveillance study, we observed a lower risk of admission to the hospital with influenza among participants who were vaccinated later in the vaccination campaign compared with those vaccinated earlier, during the two A-dominant seasons.In both A-dominant seasons, the circulating A viruses were reported to have increasingly poor reactivity with serum against the vaccine strains .We did not observe the same phenomenon during the B/Yamagata lineage-dominant or Apdm09-dominant seasons.Similar results were obtained after applying different analytic approaches.Other authors have reported similar observations of waning influenza vaccine effectiveness for preventing influenza-related disease .Most of those studies estimated the risk of confirmed influenza using the number of days elapsed between vaccination and symptom onset or defining different periods during the season.Some authors have reported conflicting results using the early or late season approach, whereas others did not observe a waning effect when using days between vaccination and symptom onset .In a placebo-controlled trial of inactivated influenza vaccine and live attenuated influenza vaccine, Petrie et al. 
used a Cox regression approach to model the time variability of vaccine effectiveness, as proposed in previous studies of cholera vaccines. Those authors reported waning effectiveness of the inactivated influenza vaccine to prevent infection with A virus in the United States during the 2007/2008 influenza season. The inactivated vaccine remained efficacious, however, until the end of the season. We assert that results should be interpreted with caution when the explanatory variable is defined as the number of days between vaccination and the outcome, because such an explanatory variable is related to the dependent variable. Additionally, estimates can be biased by sparse numbers, division of the exposure time into arbitrary blocks, lack of control for the different likelihoods of vaccination and influenza disease throughout the season, adjustment using broad categories of calendar time such as months, or use of pooled data across multiple seasons. As in other studies, we opted to use the vaccination calendar period as a less biased definition of exposure. Thus, we defined three groups by vaccination date: early, intermediate, and late vaccinees. We investigated, by season, the patient characteristics that could explain earlier or later vaccination. We accounted for the time-varying influenza risk by adjusting for calendar time in weeks. Clustering by recruiting site was accounted for by random/mixed-effects modeling. We calculated estimates by age and dominant circulating virus, and we performed a methods sensitivity analysis to check the consistency of results when different approaches were used. Overall, by following this approach, evidence of waning was observed during two A-dominant seasons. Potential explanations underlying waning protection of influenza vaccination have been discussed at length by other authors. Intraseasonal evolving mismatch among the circulating viruses combined with early vaccination, previous immunity owing to former influenza infection, vaccination with strains dissimilar to those involved in the studied season, and immunosenescence have been proposed as factors underlying a waning effect. Presently, the possible waning of influenza vaccines owing to immunosenescence or changes in the circulating strains throughout the season is poorly understood. The available vaccines and policies regarding the timing of vaccination have also been considered. However, any explanation of these mechanisms is speculative with the currently available knowledge. Despite reported results that are consistent with waning benefits of vaccination, as has been described during mismatched A-dominant seasons, the true relevance of these results in terms of magnitude or impact has not been established. The validity of our estimates could be affected by selection bias and information bias. Selection bias was accounted for by the consecutive enrollment of eligible patients, without knowledge of laboratory results or vaccination status at the time of recruitment, following a hospital-based active surveillance approach with incidence-density sampling. We restricted our analysis to vaccinees with vaccination data recorded in a population-based registry to reduce the heterogeneity introduced by the uncertain vaccination status of unvaccinated participants. The VRVIS systematically records vaccine doses given at public and private vaccination points, with an estimated 90% sensitivity and 99% specificity. Accordingly, the effect of information bias on our estimates should be expected to be less pronounced. A test-negative design was used to estimate
the association between the probability of being positive for influenza by RT-PCR and the vaccine administration date.We only included patients with onset of ILI symptoms within 7 days of admission from whom swabs were collected within 48 h after entering the hospital, and we finely adjusted by the calendar time.Given the aforementioned conditions, the test-negative approach has been shown to yield consistent and reliable results .Our main limitation was the sample size; therefore, we could not provide estimates for participants aged 18 to 64 years and could not restrict the analysis to those with no previous influenza vaccination.We cannot disregard the fact that our estimates of no effect of vaccine waning on preventing hospital admissions for infections with Apdm09 or B/Yamagata lineage virus were owing to a lack of statistical power .Our results support the need for further research on the effect of waning protection conferred by influenza vaccination across populations and seasons.The methodological approach of studying waning using observational data should be further developed and evaluated, and discussion should be initiated regarding the opportunity to study waning vaccine protection using observational data obtained by density sampling, following the same approach used in clinical trials to control varying time exposure and effects .There is a clear need to elaborate the definition of the main exposure variable and the methods used to establish the presence of waning.Our results add to the existing evidence that supports an urgent need for dedicated efforts to develop better influenza vaccines.
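As a concrete illustration of the modelling approach summarised above (a test-negative outcome, vaccination-date tertiles as the exposure, restricted cubic splines for age and calendar week, and adjustment for the recruiting hospital), a minimal Python sketch is given below. The input file, column names and spline settings are hypothetical assumptions, and the hospital term is entered as a fixed effect here for simplicity, whereas the study included it as a random effect.

```python
# Illustrative sketch of the waning-protection analysis described above.
# The input file, column names and spline settings are hypothetical assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("admissions_one_season.csv", parse_dates=["vaccination_date"])

# Exposure: tertiles of the vaccination date (early / intermediate / late vaccinees).
data["vax_tertile"] = pd.qcut(data["vaccination_date"].rank(method="first"),
                              q=3, labels=["early", "intermediate", "late"])

# Test-negative logistic model: the outcome is RT-PCR-confirmed influenza among
# admitted vaccinees.  Age and epidemiological week are modelled with natural
# cubic splines (cr); hospital is entered as a fixed effect in this sketch.
model = smf.logit(
    "flu_positive ~ C(vax_tertile, Treatment('early')) + cr(age, df=4)"
    " + C(sex) + C(smoking) + n_chronic_conditions + days_onset_to_swab"
    " + cr(epi_week, df=4) + C(hospital)",
    data=data,
).fit()

# aOR of admission with influenza for late versus early vaccination; a value
# below 1 with a 95% CI excluding unity is read as evidence of waning protection.
term = "C(vax_tertile, Treatment('early'))[T.late]"
aor = np.exp(model.params[term])
lo, hi = np.exp(model.conf_int().loc[term])
print(f"aOR late vs early: {aor:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```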
Background Concerns have been raised about intraseasonal waning of the protection conferred by influenza vaccination. Methods During four influenza seasons, we consecutively recruited individuals aged 18 years or older who had received seasonal influenza vaccine and were subsequently admitted to the hospital for influenza infection, as assessed by reverse transcription polymerase chain reaction. We estimated the adjusted odds ratio (aOR) of influenza infection by date of vaccination, defined by tertiles, as early, intermediate or late vaccination. We used a test-negative approach with early vaccination as reference to estimate the aOR of hospital admission with influenza among late vaccinees. We conducted sensitivity analyses by means of conditional logistic regression, Cox proportional hazards regression, and using days between vaccination and hospital admission rather than vaccination date. Results Among 3615 admitted vaccinees, 822 (23%) were positive for influenza. We observed a lower risk of influenza among late vaccinees during the 2011/2012 and 2014/2015A(H3N2)-dominant seasons: aOR = 0.68 (95% CI: 0.47–1.00) and 0.69 (95% CI: 0.50–0.95). We found no differences in the risk of admission with influenza among late versus early vaccinees in the 2012/2013A(H1N1)pdm09-dominant or 2013/2014B/Yamagata lineage-dominant seasons: aOR = 1.18 (95% CI: 0.58–2.41) and 0.98 (95% CI: 0.56–1.72). When we restricted our analysis to individuals aged 65 years or older, we found a statistically significant lower risk of admission with influenza among late vaccinees during the 2011/2012 and 2014/2015 A(H3N2)-dominant seasons: aOR = 0.61 (95% CI: 0.41–0.91) and 0.69 (95% CI: 0.49–0.96). We observed 39% (95% CI: 9–59%) and 31% (95% CI: 5–50%) waning of vaccine effectiveness among participants aged 65 years or older during the two A(H3N2)-dominant seasons. Similar results were obtained in the sensitivity analyses. Conclusion Waning of vaccine protection was observed among individuals aged 65 years old or over in two A(H3N2)-dominant influenza seasons.
164
Isolation and identification of acetovanillone from an extract of Boophone disticha (L.f.) herb (Amaryllidaceae)
Extracts of Boophone disticha Herb. have been used in Southern Africa for a variety of medicinal and other purposes. Whereas a number of phytochemicals have been isolated and identified from B. disticha, most researchers have focused largely on the alkaloidal constituents of the plant. This is not surprising since the pharmacological effects of the plant have been attributed to these alkaloids. Over a century ago, Tutin reported on the isolation of acetovanillone from bulbs of Boophone disticha. However, there exist no other reports in the published literature describing the isolation of this compound from B. disticha. Acetovanillone has recently been reported to have a number of pharmacological properties, including anti-inflammatory activity. In this technical note, the only other published work on the isolation and identification of acetovanillone from an extract of B. disticha is reported. Details of collection and authentication of the plants used in this work have been reported elsewhere. Briefly, the dried and powdered sample of bulbs of B. disticha was extracted with 70% aqueous alcohol by soaking in the solvent overnight at room temperature for three consecutive nights. The extracts collected were concentrated under reduced pressure using a rotary evaporator at 40 °C. The resulting dark brown-black extract was further extracted sequentially with n-hexane, ethyl acetate and n-butanol. The ethyl acetate fraction was allowed to evaporate in a fume hood. This extract was adsorbed on silica gel and then subjected to gradient column chromatography of increasing polarity, initially with n-hexane/ethyl acetate and eventually ethyl acetate/methanol up to 100% methanol, to give 22 fractions. Upon spraying with vanillin, fraction BDE15 gave a single spot which was distinctly orange-red in colour. This fraction was allowed to evaporate and was dissolved in acetone for NMR analysis. NMR experiments were performed with a Bruker Avance DRX 300 MHz spectrometer using standard sequences and referenced to residual solvent signals. The ultraviolet and visible spectra were measured on a Spectro UV–Vis Double Beam PC8 scanning auto Cell UVD-3200, and IR spectra were recorded on a Perkin Elmer FTIR spectrometer (Spectrum Two). An Agilent 7890 gas chromatograph coupled to an Agilent 5975C quadrupole mass spectrometer (EI, 70 eV) was used to confirm the formula of the compound. The compound obtained was an orange to red coloured paste with a sweet vanilla-like smell. The compound had a molecular ion peak at m/z 166.1, consistent with the nominal mass of acetovanillone, with a base peak at 151.0 and other prominent peaks at 123 and 108. When searched against the National Institute of Standards and Technology database, the data matched that of acetovanillone. The IR data showed peaks at 3401, 3013.6, 2925.6, 2850.7 and 1707 cm−1, also comparable to published data on the compound. The UV/Vis λmax was 271.7 nm. The 1H NMR spectrum showed six clear signals, including two singlets at δ 3.91 and δ 2.52. In addition, a broad peak was observed at δ 8.6. The remaining protons were aromatic ring protons. The 13C NMR spectrum showed a nine-carbon molecule with signals at δ 196.5, 151.4, 147.4, 129.9, 123.4, 114.5, 110.7, 55.4 and 25.4. Both the proton and carbon-13 NMR spectra were comparable to the published literature. This technical note described the isolation and identification of acetovanillone from a hydroethanolic extract of Boophone disticha. The presence of this anti-inflammatory phytoconstituent in the extract of the plant could
explain why extracts of this plant are also used in the management of inflammatory conditions.Moreover, given that acetovanillone has to date not been identified in another species of the Amaryllidaceae, the chemotaxonomic significance of this confirmatory finding cannot be overemphasized.Thus, acetovanillone could be used as a potential marker substance for quality control in medicinal and other preparations using extracts of B. disticha as an ingredient.The identification of acetovanillone was confirmed through infrared, NMR and GC–MS analyses as presented above.Our results support the findings of Tutin who reported the presence of this compound in an aqueous extract of bulbs of Boophone disticha.
Extracts from the bulb of Boophone disticha (L.f.) Herb. (Amaryllidaceae) are used in the management of various medical conditions in Southern Africa ranging from neuropsychiatric illnesses to inflammatory conditions. Over a century ago in 1911, Tutin reported the presence of the anti-inflammatory compound acetovanillone in an extract of B. disticha. This brief communication reports on the only other published work on the isolation and identification of acetovanillone from an extract of B. disticha.
165
Benefits of using virtual energy storage system for power system frequency response
The power system is rapidly integrating smart grid technologies to move towards an energy-efficient future with lower carbon emissions. The increasing integration of Renewable Energy Sources (RES), such as photovoltaics and wind, introduces uncertainties in electricity supply that are largely uncontrollable. Hence, it is even more challenging to meet the power system demand. More reserve from partly-loaded fossil-fuel generators, which are costly and exacerbate carbon emissions, is consequently required in order to maintain the balance between supply and demand. The grid frequency indicates the real-time balance between generation and demand and is required to be maintained at around 50 Hz (in the GB power system). The integration of RES through power electronics reduces the system inertia. A low-inertia power system will encounter faster and more severe frequency deviations in cases of sudden changes in supply or demand. Therefore, it is imperative for the system operator to seek smart grid technologies that can provide faster response to frequency changes. The Energy Storage System (ESS) is one solution to facilitate the integration of RES by storing or releasing energy immediately in response to system needs. A large-scale ESS is able to replace the spinning reserve capacity of conventional generators and hence reduce carbon emissions. There are different types of ESS for different applications, as shown in Fig. 1. In terms of form, ESS is classified as electrochemical, mechanical, electrical and thermal energy storage. In terms of function, ESS is classified as high-power-rating ESS for power management applications and high-energy-rating ESS for energy management applications. The use of ESS for grid frequency regulation dates back to the 1980s; for example, the Beacon Power Corporation has already implemented flywheels to provide fast frequency regulation services. However, ESS remains an expensive technology, although costs have declined in recent years. For instance, the cost of installing a 20 MW/10 MW h Flywheel Energy Storage System (FESS) is approx. £25m–£28m. The large-scale deployment of ESS is still not feasible in the short term. Aggregated Demand Response (DR) can resemble a Virtual Energy Storage System (VESS) because DR can provide functions similar to charging/discharging an ESS by intelligently managing the power and energy consumption of loads. By well-utilizing the existing network assets, i.e.
the flexible demand such as domestic fridge-freezers, wet appliances and industrial heating loads, DR can be deployed at scale at a lower cost than installing ESS. The control of demand to provide frequency support to the power system has been studied, including both centralised and decentralised control. Centralised control of the flexible demand relies on Information and Communications Technology infrastructure to establish communications between the flexible demand and its centralised controller, such as an aggregator or Distribution Network Operator. To reduce communication costs and latency, decentralised demand control has also been investigated. One decentralised controller regulates the temperature set-points of refrigerators to vary in line with the frequency deviations and therefore controls the refrigerator's power consumption. A dynamic decentralised controller has also been developed that changes the aggregated power consumption of refrigerators linearly with the frequency changes. That controller aims not to undermine the primary cold storage function of refrigerators, and the impact of grid-scale DR on grid frequency control was investigated. Considering the reported availability of refrigerators to provide frequency response, it is estimated that 20 MW of response requires approx. 1.5 million refrigerators. The total cost is approx. £3m. This is far smaller than the cost of an FESS providing the same 20 MW of response. It has been estimated that DR has the potential to reduce the ESS market size by 50% in 2030. However, the challenges of DR include the uncertainty of the response and the consequent reduction in the diversity amongst loads. Simultaneous reconnection of loads may occur several minutes after the provision of response to a severe frequency drop, which causes another frequency drop and hence challenges system stability. A number of studies have been conducted to investigate the capability of ESS or DR to provide frequency response to the power system. However, the combination of both technologies for grid frequency response, while mitigating the impact of the uncertainties of DR and reducing the capacity of the costly ESS, has not yet been fully explored. Therefore, in this research, a VESS is first formulated by coordinating large numbers of distributed entities of ESS and DR.
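The decentralised refrigerator control summarised above can be illustrated with a short sketch in which each unit shifts its temperature set-points in proportion to the measured frequency deviation: a falling frequency raises the set-points so that running units switch off earlier (reducing demand), and a rising frequency lowers them so that idle units switch on earlier. The set-points, gain and example values below are illustrative assumptions rather than parameters of the cited controllers.

```python
# Minimal sketch of decentralised, frequency-responsive refrigerator control.
# Set-points, gain and example values are illustrative assumptions.

F_NOMINAL = 50.0            # Hz
T_LOW, T_HIGH = 2.0, 7.0    # degC, nominal thermostat set-points (assumed)
K_FREQ = 4.0                # degC shift per Hz of frequency deviation (assumed gain)

def compressor_state(temperature_c, grid_freq_hz, currently_on):
    """Return True if the compressor should be ON for this control step."""
    # Shift both temperature set-points in line with the frequency deviation:
    # low frequency -> set-points rise -> units tend to switch off (less demand);
    # high frequency -> set-points fall -> units tend to switch on (more demand).
    deviation = grid_freq_hz - F_NOMINAL
    t_low = T_LOW - K_FREQ * deviation
    t_high = T_HIGH - K_FREQ * deviation

    if temperature_c >= t_high:
        return True           # too warm: start cooling (equivalent to charging)
    if temperature_c <= t_low:
        return False          # cold enough: stop cooling (equivalent to discharging)
    return currently_on       # inside the deadband: keep the previous state

# Example: a running unit near the bottom of its band switches off early
# during a dip to 49.8 Hz, reducing its demand.
print(compressor_state(2.5, 49.8, currently_on=True))   # -> False
```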
The coordination of both technologies aims to provide fast and reliable firm amount of dynamic frequency response with a lower cost compared to conventional ESSs.Moreover, the idea of merging both technologies into a single operation profile is defined and the benefits of operating a VESS for the delivery of frequency response service is analysed.In this paper, a VESS is formed as a single entity to provide the function of ESS for the delivery of frequency response in the power system.In Section 2, the concept and potential application of VESS is discussed.A VESS consisting of DR from domestic refrigerators in a large city and the response from small-size FESSs is modelled and controlled.The proposed control of VESS maintains the load diversity and the primary functions of cold storage of refrigerators while reducing the number of charging and discharging of each FESS and prolonging the lifetime of the costly FESS.Case studies were carried out in Section 3 to quantify the capability of VESS for frequency response.The results of using the VESS and the conventional FESS for frequency response were compared in Section 4.Discussions and the potential economic benefits of using VESS to participate in the GB frequency response market were also discussed.A Virtual Energy Storage System aggregates various controllable components of energy systems, which include conventional energy storage systems, flexible loads, distributed generators, Microgrids, local DC networks and multi-vector energy systems.Through the coordination of each unit, a VESS is formed as a single high capacity ESS with reasonable capital costs.It is integrated with power network operation and is able to vary its energy exchange with the power grid in response to external signals.A VESS allows the flexible loads, small-capacity ESS, distributed RES, etc. 
to get access to the wholesale market and to provide both transmission- and distribution-level services to the power system. Different from the Virtual Power Plant, which aggregates distributed energy resources to act as a single power plant, a VESS aims to store surplus electricity or release electricity according to system needs. A VESS is able to form a synthetic ESS at both transmission and distribution levels, with different capacities as a result of the aggregation. In the project “hybrid urban energy storage”, different distributed energy systems (e.g. in buildings) and central and decentralised energy storage systems are coordinated to create a Virtual Energy Storage System. These resources utilise the existing potential of energy-balancing components in cities for grid ancillary services at reduced cost. A VESS therefore presents the characteristics of both high-power-rating ESS and high-energy-rating ESS, and hence covers a wide spectrum of applications. The potential capabilities of a VESS include the following. First, a VESS can facilitate the integration of RES in distribution networks: it can charge/discharge to smooth the power output variations of renewable generation, and it can increase the distribution network hosting capacity for RES where the integration of RES is limited by voltage and thermal constraints. Second, a VESS can defer transmission network reinforcements: it can increase the utilization of transmission networks by providing immediate actions following a system contingency, and it can effectively mitigate potential network congestion, thereby postponing transmission reinforcements. Third, a VESS can reduce the required spinning reserve capacity and increase the loading capacity of generators. With smart grid technologies, the available VESS capacity can be reported to the system operator in advance, even every second. Finally, during system contingencies and system emergencies, a VESS can provide voltage support and frequency support. In addition, primary frequency response requirements, which are at present mainly met by costly frequency-sensitive generation, are expected to increase by 30–40% in the next 5 years in the GB power system. A VESS is technically well placed to provide such services because it is able to provide faster response, higher ramp rates and higher flexibility than conventional generating units. In this study, a VESS is formed to firmly provide the required amount of frequency response to the power system in order to participate in the GB Firm Frequency Response (FFR) market as an aggregator. The FFR market is considered the most lucrative ancillary service available on a per-MW basis in the GB power system. DR from domestic refrigerators in a city is implemented to meet the required amount of frequency response, while a conventional FESS is used to compensate for the uncertainties caused by the DR. Other units with similar characteristics and capability of storing energy, such as EVs or other ESS types, can be added to further increase the total VESS capacity. A simplified model of FESS was developed as shown in Fig.
2.It has been validated with a detailed model which includes all the main components and control of converters .The simplified model provided accurate results with a significant reduction in the computational time.The simplified model facilitates the system level studies considering large numbers of small-size distributed flywheels.In order not to undermine the cold storage function of each refrigerator, a thermodynamic model of refrigerators was developed as illustrated in .Fig. 3 shows the temperature control of refrigerators.The variation of internal temperature of a refrigerator with time is modelled and dynamically compared with the temperature set-points Tlow and Thigh.If T rises to Thigh, a refrigerator is switched on.When a refrigerator is at ON-state, it is equivalent to the charging of an energy storage unit which consumes power and causes the decrease of T. Alternatively if T decreases to Tlow, a refrigerator is switched off.An OFF-state refrigerator is considered as a discharging process which causes the increase of T.In a refrigerator, temperature inherently controls the charging and discharging process.A control algorithm is developed for the units in a VESS to charge/discharge in response to regulation signals.In this paper, grid frequency is used as the regulation signal.A general local controller that can be applied to both refrigerators and FESS is developed as shown in Fig. 4.The control measures f constantly.For an FESS, the output is the change of power output.For a refrigerator, the output is the change of On/Off state and hence the power consumption.Each unit in the VESS is assigned a pair of frequency set-points, FON and FOFF.The range of FON is 50–50.5 Hz and the range of FOFF is 49.5–50 Hz which is consistent with the steady-state limits of grid frequency in the GB power system.The input f constantly compares with the set-points FON and FOFF.If f rises higher than FON of a unit, the unit will start charging/switch on as a result of the frequency rise.If f is higher than 50 Hz but lower than FON, the unit will standby.Alternatively, if f drops lower than FOFF, the unit will start discharging/switch off as a result of the frequency drop.If f is lower than 50 Hz but higher than FOFF, the unit will standby.FON and FOFF vary linearly with the State of Charge of each unit as shown in Fig. 
4.For FESS, ω indicates SoC.A low ω designates a low SoC and vice versa.For refrigerators, T indicates SoC.A high T indicates a low SoC and vice versa.When f drops, the units in the VESS will start discharging from the one with the highest SoC.The more f drops, the more number of units will be committed to start discharging.Therefore, the more power will be discharged from FESS and the more power consumption of refrigerators will be reduced.Alternatively, when f rises, the units will start charging from the one with the lowest SoC.The more f rises, the more number of units will be committed to start charging.Therefore, the more power will be consumed by the refrigerators and the more power will charge the FESS.A set of logic gates is used to determine the final state of each unit.The control considers a priority list based on SoC when committing the units.Compared with the conventional frequency control of FESS, in which all units will start charging/discharging simultaneously according to the frequency deviations using the droop control, the proposed control with a priority list of commitment will reduce the number of charging/discharging cycles and hence prolongs the lifetime of units.However, when the frequency deviation is small, the proposed control based on the priority list will commit fewer FESS units to start charging/discharging according to frequency deviations using the droop control.The total power output from FESS is therefore smaller than that without the proposed control which commits all FESS units.To mitigate this impact and increase the output from FESS units even when the frequency deviation is small, an adaptive droop control is applied to replace the conventional droop control when determining the amount of power output changes of each committed FESS unit as shown in Fig. 4.It is to be noted that, the inherent control of each unit takes the priority in determining the final charging/discharging state.For refrigerators, the inherent control refers to the temperature control as illustrated in Fig. 3.For FESS, the inherent control is the charging control which limits the minimum and maximum rotating speed of each FESS.In summary, the proposed control in Fig. 
4 is a distributed control on each unit in a VESS, but it is coordinated amongst all units by assigning frequency set-points based on SoC.The proposed control ensures that the aggregated response from a population of units is in linear with frequency deviations.This is similar to the droop control of frequency-sensitive generators.Each unit in the VESS has equal opportunity to charge/discharge.The lifetime of units is hence prolonged.Specifically for refrigerators, the control does not undermine the cold storage and the impact of the reduction in load diversity is mitigated.If a VESS tenders for the participation in the FFR market, the VESS is required to illustrate the firm capability of providing a constant amount of dynamic or non-dynamic frequency response during a specific time window .However, the high cost of ESS limits the grid-scale deployment.The uncertainty of DR makes it difficult to ensure the provision of a constant amount of response at all times.Therefore in a VESS, the coordination of FESS and DR aims to provide the capability of delivering a certain amount of frequency response at a lower cost.In this study, a two-way communication network is assumed to be available for the centralized VESS controller.The communication can be established through the Internet network protocols , smart meter infrastructure or other smart grid technologies in the near future.In the coordination, the VESS tenders for the provision of dynamic FFR.The maximum response is constantly fixed when the frequency changes outside the limits, i.e. ±0.5 Hz.Within the frequency limits, the response varies dynamically with frequency deviations.The performance of the model and control of VESS for the provision of frequency response service is evaluated by a series of simulations.The design of the case study is illustrated below.There are 3,220,300 households in London in 2014 .It is assumed that the refrigerator in each household is equipped with the frequency controller in Section 2.3.3.The amount of frequency response from refrigerators is estimated considering the time of day as shown in Fig. 6 .A maximum reduction in power consumption is 18.5% at 18:00 and a minimum reduction is 13.2% at 6:00.Considering the number of refrigerators in London, a maximum power reduction of 60 MW and a minimum power reduction of 40 MW is expected.Similarly, an availability of refrigerators to be switched on is approx. 50–56% which expects a minimum power increase of approx. 160 MW and a maximum power increase of approx. 
180 MW from all refrigerators.This reveals that refrigerators have more potential to provide response to the frequency rise than to the frequency drop.Therefore, the VESS is planned to provide a linear dynamic frequency response of a maximum of 60 MW to the power system when frequency drops outside the limit and of 180 MW when frequency rises outside the limit over a day.For periods of the day that refrigerators cannot provide the required response, FESS is used to compensate for the mismatch between Preq and ΔPr.Because the maximum mismatch is 20 MW, 400 FESS are used in the VESS.The VESS is connected to a simplified GB power system model to assess the VESS capability to provide low frequency response and high frequency response to the power system.In the GB power system model, the synchronous power plants characteristics are modelled as governor, actuator and turbine transfer functions .The system inertia is represented by Heq and the damping effect of frequency-sensitive loads was represented by a damping factor .The flow of active and reactive power in transmission networks are assumed independent and only the active power is considered.The time constants of the governor, re-heater, turbine, and the load damping constant were set to: Tgov = 0.2 s, T1 = 2 s, T2 = 12 s, Tturb = 0.3 s and D = 1.Based on , the system inertia was estimated to be 4.5 s and the equivalent system droop was 9%.The parameters of the model were calibrated with a real frequency record following a 1220 MW loss of generation on the GB power system .Three case studies were carried out: Case 1 – low frequency response, Case 2 – high frequency response and Case 3 – continuous frequency response.Case 1 and Case 2 were undertaken on the simplified GB power system model.The system demand was 20 GW representing a summer night and the following three scenarios were compared.Scenario 1: S1: assumes that there is no ESS or VESS connected to the power system.Scenario 2: S2: connects 1200 FESS units each of 50 kW/30 kW h to the GB power system and tenders for the provision of 60 MW of frequency response.This case uses only conventional ESS.Scenario 3: S3: connects the VESS model including all refrigerators in London and 400 FESS units to the GB power system and tenders for the provision of 60 MW of frequency response.The FESS provides a maximum of 20 MW of response to the mismatch between Preq and ΔPr.In Case 3, the behaviour of VESS in the provision of continuous frequency response is studied.The VESS is procured to provide a proportional low frequency response of a maximum of 60 MW.Simulations were carried out by applying a loss of generation of 1.8 GW to the GB power system.This case simulates the discharging phase of the VESS.Results are shown in Figs. 8–10.The frequency drop in Fig. 8 is reduced with 60 MW of response) from either ESS or VESS.Since 60 MW of response is small in a 20 GW system, the improvements of frequency is approx. 0.01 Hz and seems hardly noticeable.If the installed capacity of units in the VESS is higher, the frequency drop will be significantly reduced.The number of FESS in the VESS was only one third of that in S2, however, VESS provided similar amount of frequency response to that of ESS in S2.The reduced capacity of FESS in S3 will reduce the cost significantly compared to S2.Fig. 
10 shows the change of power output of generators in the three scenarios. It can be seen that, with ESS or VESS, the required capacity of the costly frequency-responsive generators is reduced. In Case 2, the VESS is procured to provide a maximum of 180 MW of high frequency response to the power system over a day when frequency rises outside the limit. A sudden loss of demand of 1 GW was applied to the GB power system. This case depicts the charging phase of the VESS. Results are shown in Figs. 11–13. In Figs. 11 and 12, the ESS in S2 provides approx. 45 MW of response from the 1200 FESS and the maximum frequency rise is slightly reduced, by 0.01 Hz, compared with S1. However, the VESS in S3 provides approx. 140 MW after the sudden loss of demand, which is much higher than the response of the ESS in S2, and the frequency rise is reduced by 0.05 Hz. In Fig. 12b, following the sudden loss of demand, the power consumption of refrigerators is increased by approx. 120 MW, from 66 MW to 186 MW, while the power output of the FESS in S3 is 20 MW. Owing to the response from both the VESS and the generators, the frequency is recovered to 50.2 Hz almost immediately. As a number of refrigerators were switched on following the frequency rise, their temperature started to drop. It took several minutes before the temperature reached the low set-point and refrigerators started to switch off. Therefore, the FESS start discharging following the frequency recovery. Because of the limited capacity of the FESS, the total response was not linear with the frequency recovery. However, the VESS in S3 provides much more response following the frequency rise than the ESS in S2. This is especially critical for a future power system with low inertia. Fig. 13 also depicts that both ESS and VESS are able to reduce the required spinning reserve capacity of frequency-responsive generators. The VESS in S3 shows a greater reduction compared with the ESS in S2. In Case 3, the VESS is applied to provide continuous frequency response in proportion to frequency changes. The VESS has a maximum discharging power of 60 MW when grid frequency drops to 49.5 Hz and a maximum charging power of 180 MW when grid frequency increases to 50.5 Hz. Simulations were implemented by injecting a profile recording the GB power system frequency into the VESS. The behaviour of the VESS in response to the continuous fluctuations of frequency is shown in Fig. 14. It can be seen that the power output of the VESS changes dynamically following the frequency deviations. Because refrigerators have a greater capability to be switched on, the VESS is able to provide greater high frequency response than low frequency response, as depicted by Fig. 14. The VESS coordinates different types of distributed energy resources, such as ESS, flexible loads and DGs, in order to facilitate the connection of intermittent generation and also to provide services to network operators, energy suppliers and service aggregators. Therefore, the benefits of using VESS for different services can be substantial. In this paper, the benefit of VESS for the provision of frequency response service is briefly estimated as an example. The investment costs of the FESS in S2 and of the VESS in S3 were roughly estimated first. It is assumed that the lifetime of FESS is approx.
20 years and the lifetime of refrigerators is 13 years .Considering a timescale of 20 years and using the investment costs shown in Table 1, the investment cost of FESS in S2 providing 60 MW of response is estimated to be approx.£75m–£84m .The investment cost of VESS in S3 providing 60 MW includes the cost of installing controllers on 3,220,300 refrigerators which is approx.£9.66 m for 13 years and therefore will be approx.£14.86 m for 20 years.In addition, the cost of FESS in the VESS providing 20 MW of response is approx.£25 m–£28 m. Therefore, the investment cost of VESS in S3 is approx.£39.8m–£42.8m which is less costly compared with the cost of ESS in S2.However, establishing the VESS communications was not considered in the VESS total investment costs.A VESS is formed by coordinating the DR of domestic refrigerators and the response of FESS in order to provide functions similar to conventional ESS with higher capacity and lower costs.The model and control algorithm of the VESS were developed.Amongst the population of distributed units in the VESS, the control is coordinated in order to provide an aggregated response which varies in linear with the regulation signals.The control minimizes the charging/discharging cycles of each unit and hence prolongs the lifetime of each unit.The control also maintains the primary function of loads and mitigates the impact of the reduction in load diversity amongst the population.Case studies were undertaken to evaluate the capability of VESS to provide the frequency response service by connecting the VESS model to a simplified GB power system model.Simulation results showed that VESS is able to provide low, high and continuous frequency response in a manner similar to the conventional ESS.The economic benefits of using ESS and VESS were compared considering the timescale of 20 years.Compared with the case that only uses ESS, VESS is estimated to obtain higher profits.
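The coordination logic described for the VESS, in which each unit is assigned frequency set-points FON and FOFF in linear relation to its state of charge so that units are committed in SoC order as the frequency deviation grows, can be sketched as follows. The set-point ranges (50–50.5 Hz for FON and 49.5–50 Hz for FOFF) follow the text; the linear mapping and example SoC values are illustrative assumptions.

```python
# Sketch of the SoC-ordered commitment used by the VESS frequency controller.
# Set-point ranges follow the text (FON in 50-50.5 Hz, FOFF in 49.5-50 Hz);
# the linear mapping and example SoC values are illustrative assumptions.

def set_points(soc):
    """Map a unit's state of charge (0..1) to its frequency set-points.

    A unit with a high SoC gets an FOFF close to 50 Hz, so it is the first to
    discharge when frequency falls; a unit with a low SoC gets an FON close to
    50 Hz, so it is the first to charge when frequency rises."""
    f_on = 50.0 + 0.5 * soc      # low SoC -> FON near 50.0 Hz
    f_off = 49.5 + 0.5 * soc     # high SoC -> FOFF near 50.0 Hz
    return f_on, f_off

def unit_command(freq, soc):
    """Return 'charge', 'discharge' or 'standby' for one VESS unit."""
    f_on, f_off = set_points(soc)
    if freq > 50.0:
        return "charge" if freq >= f_on else "standby"
    if freq < 50.0:
        return "discharge" if freq <= f_off else "standby"
    return "standby"

# Example: during a dip to 49.8 Hz the fullest units are committed first.
print({soc: unit_command(49.8, soc) for soc in [0.1, 0.3, 0.5, 0.7, 0.9]})
# -> {0.1: 'standby', 0.3: 'standby', 0.5: 'standby', 0.7: 'discharge', 0.9: 'discharge'}
```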
This paper forms a Virtual Energy Storage System (VESS) and validates that VESS is an innovative and cost-effective way to provide the function of conventional Energy Storage Systems (ESSs) through the utilization of the present network assets represented by the flexible demand. The VESS is a solution to convert to a low carbon power system and in this paper, is modelled to store and release energy in response to regulation signals by coordinating the Demand Response (DR) from domestic refrigerators in a city and the response from conventional Flywheel Energy Storage Systems (FESSs). The coordination aims to mitigate the impact of uncertainties of DR and to reduce the capacity of the costly FESS. The VESS is integrated with the power system to provide the frequency response service, which contributes to the reduction of carbon emissions through the replacement of spinning reserve capacity of fossil-fuel generators. Case studies were carried out to validate and quantify the capability of VESS to vary the stored energy in response to grid frequency. Economic benefits of using VESS for frequency response services were estimated.
166
Modeling maternal fetal RSV F vaccine induced antibody transfer in guinea pigs
Protection of newborns and young infants against infectious disease via maternal immunization is an increasingly utilized intervention with strikingly effective outcomes .Maternal immunization is an important strategy to protect infants against RSV where the peak hospitalization rates occur in the youngest infants .In mammals, the passage of maternal antibodies to naïve offspring is largely achieved by either in utero transplacental transfer of IgG or post-partum breast-feeding of colostral milk containing high levels of IgG and sIgA .In ungulates, the transfer of antibodies from mother to infant occurs exclusively through colostral milk given at the first feeding .In these animals the intestinal uptake of maternally derived colostral antibody via intestinal Fc receptors results in high serum levels of antibodies .Poor outcomes due to common pathogens in otherwise healthy newborn animals are frequently observed when colostral feeding does not occur.For human infants, maternal-derived IgG appears to be provided entirely in utero, via transplacental Fc receptor-mediated antibody transfer .Although humans seem to lack, or at least have inefficient colostral/intestinal IgG antibody uptake, there is important and well-described documentation of neonatal Fc receptor-mediated transport of IgG across the epithelial barrier; i.e., from the systemic circulation to the mucosal surface and sampling of antigen-antibody complexes in the intestine by local immune cells .While it is clear that human breast milk contains both polyclonal IgG and IgA and confers protection against respiratory and enteric illness in general .The precise mechanism underlying this protection is unclear, but may be due to direct contact between antibodies in milk and pathogens in the upper respiratory tract and gut.Conversely, the protective effects of human serum IgG, transferred from mothers to infants in utero , has been well-described against many infectious disease agents .As the acceptance of maternal immunization via transplacental transfer grows as a viable strategy for protection of newborns against disease early in life, so does the urgency to develop robust safety and immunogenicity datasets in a relevant preclinical model.The placental structure supports the physiologic transfer of nutrients, oxygen and antibodies to the fetus—a function achieved without admixing of maternal and fetal blood .The hemichorial placental architecture in guinea pigs reflects the anatomy found in humans and thus represents a potentially appropriate model to evaluate placental transfer of antibodies.An experimental RSV recombinant F nanoparticle vaccine has progressed to evaluation in third-trimester pregnant women, with earlier studies in women of childbearing age having shown the vaccine induces F protein specific antibodies as measured by several clinically relevant assays ELISA, and microneutralization ).In support of this work, the guinea pig model was employed to further assess the immunogenicity of this experimental vaccine in pregnant sows and to compare antibody levels placentally-transferred to their pups using the same immunological measures aforementioned.Although the guinea pig model is an immune naïve model and may differ in some respects compared to adult women who are primed with years of repeat RSV infections, the results of this study represent a key dataset toward building a risk-benefit profile for use of the RSV F nanoparticle vaccine in pregnant women.The RSV F vaccine was manufactured by infecting Spodoptera 
frugiperda cells in exponential growth with baculovirus containing the near full-length RSV F gene sequence, as previously described .The F protein, which forms nanoparticles of approximately 40 nm in diameter, was further formulated in buffer containing 25 mM sodium phosphate, pH 6.2, 1% histidine, 0.01% PS80 and then adsorbed to aluminum phosphate.The in-life portion of the study was performed under Good Laboratory Practice regulations at Smithers Avanza to generate controlled pregnancy and pup delivery data.Thirty presumed pregnant female Hartley guinea pigs were obtained from Charles River Laboratories, Canada, on the 20th day of gestation.To improve the rate of pregnancy, guinea pigs with prior history of more than one successful pregnancy were selected for the current study.In addition, females were not palpated or bled during the gestation period to avoid stress to the pregnant sows and fetuses.Guinea pigs were weighed upon arrival and randomized two days later based on similar body weight distribution into 1 of 3 treatment groups with 10 animals each.All serologies were performed via orbital sinus or cardiac puncture following isoflurane inhalation or intraperitoneal injection of Euthansol, an euthanasia solution.Presumed pregnant guinea pigs were immunized as described in Table 1.Pregnant females in Group 1 received an intramuscular administration of PBS in a volume of 250 μL per guinea pig on GD25 and 46.Animals in Group 2 and Group 3 received an IM administration of antigen alone or with adjuvant in a volume of 250 μL per guinea pig on GD25 and GD46 also.Guinea pigs were bled on Post Natal Day 0.Pups assigned to the PND0 phlebotomy cohort were terminally bled on delivery day.The remaining pups that were left with birth mothers until weaning, were bled on PND15 and PND30.Anti-RSV antibody levels based on anti-RSV F IgG and palivizumab-competitive binding with biotin-labeled palivizumab were assessed at Novavax as previously described on sera obtained from pregnant guinea pigs on PND0 and from their pups on PND0, PND15, and PND30.For anti-F IgG, data was analyzed using SoftMax pro software using a 4-parameter logistics curve fit analysis.Titers were reported as the reciprocal dilution that resulted in a reading of 50% the maximum OD.Titer values recorded as below the lower limit of detection were assigned a titer of <100 for the sample, and a titer of 50 for the geometric mean titer analysis.Titer values recorded below the LLOD were assigned a titer of <10 for the sample, and a titer of 5 for the GMT analysis.To obtain the equivalent result in concentration, the titer value was multiplied by 2 μg/mL, a coefficient derived from several reference curves from multiple experiments.To determine whether passively transferred anti-RSV F antibodies and palivizumab can elicit RSV F neutralizing antibodies, sera samples from sows and pups were assayed in a RSV/A neutralization assay as previously described .Two-fold serial dilutions of guinea pig sera, starting at 1:10, were prepared in 96-well plates.The neutralizing antibody titer was identified as the last dilution that resulted in 60% inhibition of cytopathic effect observed in virus-only infected wells.Sera that showed no inhibition in CPE at the 1:20 dilution were assigned a value of 10 for the GMT analysis.The GMT or GMC and associated 95% confidence interval were calculated for each treatment group.In this study, three treatment groups of presumed pregnant guinea pigs received immunization with active vaccine or placebo in their 
third trimester on GD25 and 46.Pregnant females carried to term and delivered pups between 61 and 71 days of gestation.The overall rate of pregnancy achieved in this study was 83.3% with litter size range 1 to 8.In Group 1, 8/10 females delivered 33 live and 9 stillborn pups, which resulted in an 80% delivery rate.Similarly, 8/10 guinea pigs in Group 2 delivered 29 live pups, resulting in an 80% delivery rate.In Group 3, 9/10 females delivered 32 live and 3 stillborn pups indicating a delivery rate of 90%.One female from Group 1 delivered 7 stillborn from a litter of 8 pups due to the large litter size compared to an average of 3 seen with most females for this species .The two remaining stillbirths in the placebo group and the 3 stillbirths in the adjuvanted RSV F group, involved only 1 pup per female; indicating the stillbirths were not related to vaccine administration as it occurred with similar frequency in the placebo and the adjuvanted RSV F vaccine groups.Anti-RSV F IgG responses, PCA, and RSV/A MN were measured in serum samples obtained from individual sows on the day of delivery, designated as Post Natal Day 0.The titers for maternal samples in the placebo group were below the detection or minimum dilution level for the anti-RSV F IgG, PCA, and RSV/A MN assays.Pregnant guinea pigs immunized with unadjuvanted and AlPO4-adjuvanted RSV F resulted in high anti-RSV F IgG titers, with 18.2-fold higher RSV F IgG titers in adjuvanted vaccine recipients.Similarly, PCA concentrations obtained with unadjuvanted and AlPO4-adjuvanted RSV F were also robust and levels were 11.2-fold higher with the adjuvanted vaccine.RSV/A MN levels in unadjuvanted RSV F vaccine recipients reached a GMT of 73, but were 9.8-fold higher in adjuvanted vaccine recipients.Pups assigned to PND0 were terminally bled to determine their anti-RSV F IgG titers, PCA concentrations, and RSV/A MN titers.The GMTs for pups born to sows that received placebo were very low or below the detection level for all three assays.All the pups born to RSV F vaccinated sows generated anti-RSV F specific immune responses indicating efficient maternal antibody transfer during the gestation period.Pups born to unadjuvanted RSV F vaccinated sows had anti-RSV F IgG GMT of 63,036; PCA concentration of 292 μg/mL; and RSV/A MN GMT of 160.Importantly, antibody levels were higher compared to those measured in matched maternal sera, indicating a relative concentration of antibody in pups by 306, 210, and 219%, respectively.Similarly, pups born to adjuvanted RSV F vaccinated sows had anti-RSV F IgG, PCA, and RSV/A neutralizing GMTs of 832,277; 2,427 μg/mL; and 1,194, respectively.As seen for the RSV F treatment group, antibody levels were higher in the pups compared to matched sows, indicating a relative concentration of antibody in pup sera by 222, 156, and 166%, respectively.All remaining pups not assigned to PND0 were housed with their mother until weaning on PND15.Relative to PND0 antibody levels, anti-RSV F IgG, PCA, and RSV/A MN GMTs/concentrations declined on PND15 in all pups of both RSV F vaccine recipients and adjuvanted RSV F vaccine recipients.Although further decreases in antibodies were noted at PND30; detectable levels remained, demonstrating the persistence of maternally-transferred RSV F-specific antibodies in pups.Given that pups continued to breastfeed through PND15 and intestinal Fc receptor-mediated antibody uptake is a significant mechanism to attain high-level serum titers post-partum, the half-life of RSV F antibodies was not 
calculated.Maternal immunization has been most successfully implemented by a WHO global immunization campaign to address neonatal tetanus."Since the program's inception, the incidence of neonatal tetanus has been significantly reduced .Similarly, influenza immunization of pregnant women has demonstrated efficacy against influenza disease in both the woman and her infant, in the absence of any adverse impacts on either the course of the pregnancy or the neonate .Finally, the implementation of third-trimester maternal immunization with pertussis has had dramatic effects on the incidence of pertussis where the uptake of the vaccine has been high .In humans, maternal IgG antibodies are transplacentally-transferred to the fetus by active transcytosis facilitated by IgG Fc receptors expressed on the placenta .The active transport of IgG antibodies generally results in enhanced antibody concentrations in cord blood, often in excess of maternal levels and favors IgG1 .Accordingly, it is a reasonable expectation that raising functional anti-RSV F antibody levels in pregnant woman will increase titers of functional anti-RSV F antibodies in infants ; thus we endeavored to model this in guinea pigs.In this GLP study, pregnant guinea pigs were immunized with the clinical stage RSV F nanoparticle vaccine, and vaccine effects on rates of pregnancy and rates of live births were observed in a controlled setting.When assessed in the context of the published literature , no untoward effects were observed as all pregnant guinea pigs vaccinated with unadjuvanted or adjuvanted RSV F on GD25 and 46 carried to full-term.The overall pregnancy rate was 83.3% for all three treatment groups, with a litter size range of 1 to 8 pups, for a total of 94 live births and 12 stillbirths.Among the stillborn pups, 9 were in the placebo group and 3 were in the adjuvanted RSV F group.Among the 9 stillbirths in the placebo group, 7 were from one guinea pig with a litter of 8 pups indicating the stillbirth was associated with the unusually large litter size compared to the average of 3 pups for this species .All of the other stillbirths that occurred in the placebo and the adjuvanted RSV F groups involved only one pup per female.Overall, the delivery rates and the number of pups born between the placebo group and the active vaccine groups were similar, indicating RSV F vaccination of pregnant guinea pigs did not result in adverse effects on gestation.In general, these outcomes are in line with historical rates of pregnancy and stillbirth in lab-raised guinea pigs .Robust anti-RSV F IgG, PCA concentrations, and RSV/A neutralizing titers were detected in sera from RSV F vaccinated sows, consistent with previous observations in cotton rats , mice , and humans .Similar to previous studies in all species, AlPO4-adjuvanted RSV F vaccine administered to pregnant guinea pigs elicited a higher RSV F-specific antibody response than the unadjuvanted RSV F vaccine.Pups born to RSV F vaccinated sows had high levels of anti-RSV F IgG, PCA, and RSV/A neutralizing antibodies at birth, and concentration effects, relative to levels in matched maternal sera, was observed with all serological measures.The observed transplacental concentration of antibodies is similar, or possibly more robust to that generally observed in humans , with the exception of the MN antibody response.Several studies in humans measuring MN antibodies derived from natural infection, have described a relative parity of infant and maternal MN levels .This may be explained by 
the high variability of neutralizing antibodies elicited by natural infection in humans, whereas the vaccine, given at a consistent dose in naïve guinea pigs, may induce a relatively constant MN titer that allows a more precise measure of placental transfer. Alternatively, the RSV F vaccine may elicit antibody responses of higher quality, characterized by increased binding affinity to the F protein, which may be more efficiently transferred. It is also possible the guinea pig model may not be a suitable paradigm to model human placental antibody transfer as measured by MN. The current trial of the RSV F nanoparticle vaccine in third-trimester pregnant women and their infants is designed to measure the placental transfer of these antibodies, and will either confirm or reject these hypotheses. As noted, pups born to RSV F-immunized sows had robust anti-RSV F IgG, PCA, and neutralizing antibody levels in their serum. Levels declined after birth, but persisted through 30 days post-partum, and were notably highest in pups born to sows vaccinated with AlPO4-adjuvanted RSV F. The relatively short half-life, consistent with the literature on guinea pigs, does not reflect the general experience with transplacentally-derived human IgG directed to RSV. There are several factors, including the comparatively more rapid increase in relative body weight and the subsequent increase in the volume of distribution for serum antibodies, that may explain this finding. Specific confirmation of the decay kinetics of these transplacentally-derived antibodies in humans may shed further light on whether the guinea pig is a suitable model to predict the half-life of RSV F-specific antibodies in humans. In summary, this study provides further evidence that the RSV F vaccine may be safely administered during pregnancy, a finding in line with the first-in-human study with a virus-derived purified F protein in pregnant women. It also demonstrates the placental transfer of anti-RSV F antibodies, as measured using different immunoassays, and the concentration of these antibodies in pups in comparison to their respective birth mothers. Although further evaluation of the safety of the vaccine in pregnant women and their infants should be approached with caution, the disease burden in humans and the preclinical safety, immunogenicity, efficacy, and transplacental transfer data collectively underscore that development of the RSV F vaccine to protect mothers and their infants against RSV disease should continue. The authors are employees of Novavax.
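For reference, the geometric mean titers and 95% confidence intervals reported for the serology data above are conventionally computed on log-transformed titers, with samples below the detection limit replaced by the assigned values described in the methods. A minimal sketch of such a calculation is shown below; the example titers are invented for illustration.

```python
# Illustrative GMT and 95% CI calculation on log-transformed titers.
# The example titers are invented; a below-LLOD sample is entered with the
# assigned value described in the methods (e.g. 5 for an MN titer of <10).
import numpy as np
from scipy import stats

def gmt_with_ci(titers, alpha=0.05):
    """Geometric mean titer and 95% CI from a t-interval on log10 titers."""
    logs = np.log10(np.asarray(titers, dtype=float))
    mean, sem = logs.mean(), stats.sem(logs)
    half = stats.t.ppf(1 - alpha / 2, df=len(logs) - 1) * sem
    return 10 ** mean, 10 ** (mean - half), 10 ** (mean + half)

mn_titers = [320, 160, 640, 80, 160, 5, 320]   # hypothetical RSV/A MN titers
gmt, lo, hi = gmt_with_ci(mn_titers)
print(f"GMT = {gmt:.0f} (95% CI {lo:.0f}-{hi:.0f})")
```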
Background: Protection of newborns and young infants against RSV disease via maternal immunization mediated by transplacental transfer of antibodies is under evaluation in third-trimester pregnant women with the RSV recombinant F nanoparticle vaccine (RSV F vaccine). Since the hemichorial placental architecture in guinea pigs and humans is similar, the guinea pig model was employed to assess RSV F vaccine immunogenicity in pregnant sows and to compare RSV-specific maternal antibody levels in their pups. Methods: Thirty (30) presumptive pregnant guinea pigs were immunized on gestational day 25 and 46 with placebo (PBS), 30. μg RSV F, or 30. μg RSV F. +. 400. μg aluminum phosphate. Sera at delivery/birth (sows/pups) and 15 and 30 days post-partum (pups) were analyzed for the presence of anti-F IgG, palivizumab-competitive antibody (PCA) and RSV/A microneutralization (MN). Results: The rates of pregnancy and stillbirth were similar between controls and vaccinees. The vaccine induced high levels of anti-F IgG, PCA and MN in sows, with the highest levels observed in adjuvanted vaccinees. Placental transfer to pups was proportional to the maternal antibody levels, with concentration effects observed for all immune measures. Conclusions: The RSV F vaccine was safe and immunogenic in pregnant guinea pigs and supported robust transplacental antibody transfer to their pups. Relative concentration of antibodies in the pups was observed even in the presence of high levels of maternal antibody. Guinea pigs may be an important safety and immunogenicity model for preclinical assessment of candidate vaccines for maternal immunization.
167
Comparison of gene expression responses of zebrafish larvae to Vibrio parahaemolyticus infection by static immersion and caudal vein microinjection
The zebrafish has recently emerged as a valuable model for studying infectious agents and host–pathogen interactions.The innate immune system in zebrafish can be detected and activated as early as 1 day post-fertilization, and only innate immunity is present before 3 weeks post-fertilization.Novoa and Figueras have reviewed the characteristics of zebrafish as a model for innate immunity and inflammation studies, and zebrafish larvae have been widely used to study the innate immune system following infection with bacteria, fungi, and viruses.However, different routes of infection may activate different response pathways, resulting in complex innate immune responses in zebrafish larvae.Immersion and microinjection have both been used extensively to infect zebrafish larvae and adults.Immersion is an easier method of infection for mutant or drug screening, and has also been shown to activate inflammatory marker genes in individual embryos.Although various anatomical structures have been used as target sites for microinjection in zebrafish embryos and larvae, including the caudal vein, duct of Cuvier, hindbrain ventricle, tail muscle, otic vesicle, notochord, and yolk, the main sites are the hindbrain ventricle and caudal vein.Hindbrain ventricle infection has been used to study the recruitment of macrophages and stimulate chemotaxis, owing to the absence of macrophages in this tissue, while caudal vein microinjection causes systemic infection and has been used to examine aspects such as bacteria–host cell interactions, bacterial virulence, and drug treatment effects.Vibrio parahaemolyticus is an important pathogen that has recently been shown to be responsible for serious illnesses, hospitalizations, and even deaths in China.Disease due to V. parahaemolyticus was first reported in Japan in the 1950s, with most cases found in coastal areas and related to infections from contaminated seafood.However, this bacterium has also been isolated from freshwater fish and thus poses a potential threat to wider populations, in both coastal and inland areas.Zebrafish have been successfully used as a model for understanding the interactions between the host and V. parahaemolyticus.Moreover, bacterial infection models in zebrafish differ in infection route and host response.Joost J van Soest et al. used microarray analysis to show that 25 hpf zebrafish larvae infected with Edwardsiella tarda by immersion and caudal vein microinjection may mount an epithelial or other tissue response to the cell membrane and activate inflammation-related genes.Additionally, Francisco Díaz-Pascual et al. used a global proteomic profiling approach to show that 3 dpf zebrafish larvae infected with Pseudomonas aeruginosa by immersion and caudal artery microinjection could activate the angiogenesis and integrin signaling pathways and inflammatory responses through chemokine and cytokine signaling pathways.From these previous studies of Edwardsiella tarda and Pseudomonas aeruginosa, we have obtained much useful information on the innate immune response in zebrafish.However, whether the innate immune response activated by V. parahaemolyticus differs from that activated by E. tarda or P. aeruginosa, and whether the response to V. parahaemolyticus differs between immersion and microinjection, were both unknown.In the present study, we infected 3 dpf zebrafish larvae with V. parahaemolyticus to improve our understanding of the innate immune response induced by V.
parahaemolyticus in zebrafish larvae by identifying differentially expressed genes, GO and KEGG analysis following immersion and microinjection infection.Comparing with previous studies, V. parahaemolyticus can cause specific DEGs, including il11a, ccl34a.4, ccl20a.3, cxcl18b, and ccl35.1.In addition, immersion infection may mainly affect initial dorsal determination, cytochromes, and fatty acid-binding proteins, as well as inflammation, while microinjection infection may mainly directly affect the immune response.The results of this study will further our understanding of the infection and host immune response and facilitate the development of efficient clinical interventions for V. parahaemolyticus infection.The pathogenic V. parahaemolyticus Vp13 strain, isolated from infected Litopenaeus vannamei, was a gift from Dr. Yong Zhao.Vp13 strain was cultivated with trypticase soy broth and trypticase soy agar containing 3% NaCl.Vp13 strain was incubated at 28 °C and 200 rpm, and then serially diluted with sterile phosphate-buffered saline to a concentration of 10−6.Each diluted solution was plated on thiosulfate citrate bile salts sucrose agar culture medium, and incubated for approximately 24 h at 28 °C before counting the number of colony-forming units.The Vp13 growth curve was measured as described previously with a few modifications.Briefly, 100 μL Vp13 suspension in logarithmic phase was inoculated into TSB liquid medium contained 3% NaCl and incubated at 28 °C and 200 rpm.The optical density at 600 nm was measured using a Nanodrop 2000C.After incubation for 1 h, the bacterial culture solution was continuously diluted, coated on TSA, and incubated for 24 h. CFU were then counted at 1 h and every other hour from 2 to 22 h, and again after 30 h.The growth curves for OD600 vs CFU/mL and OD600 vs time were calculated.Based on the relationship between OD600 and Vp13 growth, we selected Vp13 in the logarithmic phase with strong vitality and calculated the concentration of Vp13 according to the linear relationship.Wild-type adult zebrafish AB strain were obtained from Shanghai Institute of Biochemistry and Cell Biology.Zebrafish were handled according to the procedures of the Institutional Animal Care and Use Committee of Shanghai Ocean University, Shanghai, China, and maintained according to standard protocols.Adult zebrafish were maintained in an Aquaneering system, and normal water quality was maintained.The methodology was approved by the Shanghai Ocean University Experimentation Ethics Review Committee.Embryos were grown and maintained at 28.5 °C in egg water.The other detailed parameters were the same as for adult zebrafish maintenance.Zebrafish larvae at 3 dpf were challenged with Vp13 by immersion infection, as described previously with a few modifications, or by microinjection infection, as described previously.The LD50 for immersion infection was determined by exposing 10 larvae to 5 mL Vp13 suspension in egg water, and a control group to egg water alone, and the LD50 for microinjection infection was determined by injecting larvae with 1 nL Vp13 suspended in PBS, and a control group injected with 1 nL PBS alone.Three parallel samples were used in each group."The mortalities of immersion and microinjection infection zebrafish larvae were recorded for 4 days and the LD50 was calculated using Bliss's method.In previous studies, a dose of >107 CFU/mL was usually required to activate the innate immune response in zebrafish larvae infected by immersion, while doses ranging from 5 × 101 to 5 
× 103 CFU/nL were required to activate the innate immune response by caudal vein microinjection.Based on the LD50 dose, 3dpf zebrafish larvae underwent immersion and microinjection infection for 2 and 4 h, respectively.Samples at 2 h post-infection were used for transcriptome data analysis and subsequent reverse-transcription quantitative PCR validation, and 4 hpi samples were only used for subsequent RT-qPCR validation.Each experiment had three parallel samples, with 10 larvae per sample.All samples of whole larvae were flash-frozen in liquid nitrogen and stored at −80 °C until RNA extraction."Total RNA was extracted using TRIzol reagent according to the manufacturer's instructions.Random hexamers were used to synthesize first-strand cDNA and the complementary strand was then synthesized.Paired-end sequencing was carried out using an Illumina Hiseq 4000.Mapping and enrichment were performed as described previously.The accession number of the National Center for Biotechnology Information Sequence Read Archive Database were SRR9325685, SRR9325684 and SRR9325687 in immersion control groups, SRR9325686, SRR9325689 and SRR9325688 in immersion infection groups, SRR9333941, SRR9333942 and SRR9333943 in microinjection control groups, and SRR9333944, SRR9333945 and SRR9333946 in microinjection infection groups."cDNA synthesis reactions were performed in 20 μL reaction mixtures containing 1 μg RNA and HiScript Ⅲ SuperMix for qPCR according to the manufacturer's instructions. "RT-qPCR was performed using a Roche 480 real-time PCR detection system according to the manufacturer's instructions.Results were normalized to the zebrafish β-actin gene, which showed no changes over the time course of the infection.Results were analyzed using the 2−△△Ct method.Each sample contained three replicate pools.The primer sequences are listed in Table 1.Statistical analyses were performed using Excel and GraphPad Prism7.t-tests were performed and values were expressed as mean ± standard error.Vp13 growth was monitored by measuring changes in OD600 at different time points at 28 °C.Vp13 initially grew rapidly from OD600 0–0.9, followed by a stationary phase from OD600 0.9–1.8, with the end of the logarithmic phase after OD600 1.8.An OD600 of 0.9 was chosen to calculate the initial CFU based on the growth curve equation, followed by serial dilutions.The LD50 values were 3.63 × 107 CFU/mL in the immersion group and 5.76 × 102 CFU/nL in the microinjection group.We investigated and compared the effects of Vp13 infection by immersion and microinjection on the zebrafish larvae transcriptome after challenge for 2 h with 3.63 × 107 CFU/mL for immersion and 5.76 × 102 CFU/nL for microinjection.A total of 602 genes were differentially expressed in the immersion group, of which 427 were significantly down-regulated and 175 were significantly up-regulated.In contrast, only 359 genes were differentially expressed in the microinjection group, including 246 significantly down-regulated and 113 significantly up-regulated.The most significantly up-regulated and down-regulated genes in the immersion and microinjection groups are listed in Table 4.We then compared the transcriptome data between the immersion control groups and microinjection groups and found no significant difference between these two groups.We also compared the DEGs between the immersion and microinjection groups and found 21 genes that were differentially expressed in both immersion and microinjection groups.However, the expression patterns of nine genes significantly 
differed between the two groups: pla2g, ctslb, and haao were down-regulated in the immersion group but up-regulated in the microinjection group, while, guca1c, aqp9b, nqas4b, usp21, cxcl8a, and arr3b were significantly up-regulated in the immersion group but significantly down-regulated in the microinjection group.fosl1a, si:ch73, hamp, prr33, si:dkey, cfb, and synpo2b were down-regulated in both groups, and cyp1a, ccl20a.3, si:dkey-57, plekhf1, and tnfb were up-regulated in both groups.We compared the DEGs related to the innate immune response in zebrafish larvae following immersion and microinjection challenge with Vp13.Genes related to innate immunity that were up-regulated in both groups included tnfb and ccl20a.3, while fosl1a was down-regulated in both groups.The main DEGs related to innate immunity in the immersion group were il11a, ccl34a.4, fosl1a, atf3, cmya5, c3a.1, c8a, c8b, arg2, mmp13a, and ctslb, and tnfb, cxcl8a, ccl20a.3, and cxcr4a.The main DEGs related to innate immunity in the microinjection group were il1b, il6, il34, tnfa, ccl19b, cxcr3.3, cxcl8a, cxcl18b, fosab, rel, fosl1a, c5, irak3, nfkbiaa, ncf, and mpeg1.2, and tnfb, ccl35.1, ccl20a.3, irak3, and ctslb, respectively.To validate the DEGs identified by transcriptome analysis following immersion and microinjection infection, we analyzed the mRNA expression levels of seven DEGs in the immersion group and 10 in the microinjection group with RT-qPCR.The RT-qPCR results for all the examined genes matched the expression patterns shown by transcriptome sequencing.Sixty-three GO terms and four KEGG pathways were significantly enriched in the immersion infection group, compared with only three GO terms and no KEGG pathways in the microinjection group.The most significantly enriched GO terms in the immersion group were visual perception, sensory perception of light stimulus, and lipoprotein metabolic process, which were largely unrelated to the innate immune response, while the enriched KEGG pathways were complement and coagulation cascades, phototransduction, vitamin digestion and absorption, and fat digestion and absorption.In contrast, the three enriched GO terms in the microinjection group were closely related to innate immunity, including cytokine activity, cytokine receptor binding, and immune response.Immersion or microinjection of zebrafish larvae with the respective LD50 dose of Vp13 could activate the innate immune response.We investigated if challenge with a higher dose could cause more severe infection.Compared with infection with the lower dose, infection of zebrafish larvae with this higher dose for 2 h resulted in significant up-regulation of il11a, tnfa, tnfb, il1b, ccl34a.4, ccl20a.3, irak3, cxcr3.3, cxcl18b, ccl35.1, and il6, while cxcr4a, ctslb, and c3a.1 were not significantly up-regulated in either group.Although all these genes were up-regulated in both groups, the expression levels of all the tested genes were lower in the microinjection group compared with the immersion group.The most changed genes were il11a, tnfa, il1b, and ccl35.1 in the immersion group.The expression levels of il11a, tnfa, tnfb, il1b, ccl34a.4, irak3, cxcl18b, and ccl35.1 were further increased at 4 h post-infection compared with at 2 hpi in the immersion group.In contrast, the expression levels of most of these genes were lower, apart from ccl20a.3 which was higher, at 4 hpi compared with 2 hpi in the microinjection group.In this study, we aimed to determine if immersion and microinjection infection routes affected the innate 
immune response differently, and if they had other influences on zebrafish larvae infected by V. parahaemolyticus.We therefore analyzed and compared the DEGs following infection by the two different routes.Interestingly, tnfb and ccl20a.3 were up-regulated in both groups, and ccl20a.3 has been shown to attract immature dendritic cells and mediate epithelial migration.However, ctslb, which is a member of the lysosomal cathepsin family involved in yolk-processing mechanisms in embryos was down-regulated in the immersion group but up-regulated in the microinjection group.In contrast, cxcl8a, which belongs to the chemokine family and has been reported to direct the migration of gut epithelial cells, was up-regulated in the immersion group but down-regulated in the microinjection group.These results suggest that immersion infection may cause up-regulation of skin- and epithelial-related genes to protect zebrafish larvae, while microinjection may up-regulate genes that regulate innate immune related cell migration and intracellular processes, such as lysosomal cathepsin.To better understand the innate immune response, we analyzed genes reportedly associated with innate immunity.Most innate immune-related genes were down-regulated in both groups, and only tnfb, cxcl8a, ccl20a.3, and cxcr4a in the immersion group and tnfb, ccl35.1, irak3, and ctslb in the microinjection group were up-regulated.The RT-qPCR results were consistent with the transcriptome data, indicating that incubation with Vp13 for 2 h initiated an innate immune response in zebrafish larvae.We also used a higher dose of Vp13 to cause more severe infections in the immersion and microinjection groups and showed that classical cytokine genes, including il1b, tnfb, and il6, were significantly up-regulated after 2 and 4 hpi in both groups, and il11a, ccl34a.4, and cxcl18b, which were differentially expressed according to our transcriptome data, were also significantly up-regulated after 2 and 4 hpi in both groups.However, the specific roles of these genes in the innate immune response remains unclear.The il11 gene, a member of the il6 family, and il11a and il11b, have previously been identified in fish, while ccl34a and ccl35.1 may be inflammation-related genes conserved between zebrafish and mammals, and cxcl18b has demonstrated chemotactic activity towards neutrophils, similar to cxcl8a.These genes that were activated in both the immersion and microinjection groups may play major roles in the defense against bacterial infection, but further studies are needed to determine their specific roles in the innate immune response.Previous results showed that E. tarda-immersed and injected larvae demonstrated an epithelial or other tissue response to the cell membrane with activation of inflammation-related genes including mmp9, cyp1a, zgc:154020, irg1l and stc1, P. aeruginosa-immersed and injected larvae showed activation of the angiogenesis and integrin signaling pathway and inflammatory responses by mediating chemokine and cytokine signaling pathways, and differential expression of the innate immune related genes tnfa, il1b, mmp9, il8l1, tnfb, cxcl11.6, and cyp1a.Interestingly, cyp1a was differentially expressed by immersion infection with E. tarda, P. aeruginosa, and V. 
parahaemolyticus, indicating that cyp1a may play an important role in the response to immersion infection.Our results also suggested that, in addition to the classical innate immunity genes tnfa, tnfb, il1b, and il6, the genes il11a, ccl34a.4, ccl20a.3, cxcl18b, and ccl35.1 were also important for defending against Vp13 infection.The current study also found that GO terms significantly enriched in the immersion group were more related to early developmental processes, such as dorsal determination, cytochrome formation, and fatty acid metabolism, while the enriched KEGG pathways included the complement and coagulation cascades, phototransduction, vitamin digestion and absorption, and fat digestion and absorption.In contrast, the GO terms enriched in the microinjection group were cytokine activity, cytokine receptor binding, and immune response, which were all directly related to immunity.The most-changed gene in the immersion group was al929131.1, which has not previously been reported in the literature and thus warrants further exploration.Other significantly changed genes included wnt6a, which may cooperate with maternal wnt8a in initial dorsal determination, cyp1a, cyp1b, and cyp1c, which are part of a large superfamily of enzymes that primarily catalyze mixed-function oxidation reactions, and fabp10a, which encodes a fatty acid-binding protein.In contrast, mpeg1.2, which is expressed in macrophages and has shown an anti-bacterial function in zebrafish, and cd83, which is required for the induction of protective immunity, were significantly up-regulated in the microinjection group.These results suggest that immersion infection may affect biological processes other than the innate immune response, such as initial dorsal determination, cytochromes, and fatty acid-binding, besides inflammation, while microinjection infection mainly affects the innate immune response.We also considered that the innate immune response induced by the LD50 of Vp13 at 2 hpi represented the initial stage of infection.Only some important immune-related genes were up-regulated in the two infection groups, while a higher dose may significantly up-regulate some immune marker genes that were down-regulated in the LD50 infection groups.We also compared the gene expression profiles after different infection doses.Expression levels of il11a, tnfa, tnfb, il1b, ccl34a.4, ccl20a.3, irak3, cxcr3.3, cxcl18b, ccl35.1, and il6 were higher following infection with a >LD50 dose compared with an LD50 dose for both infection routes, though these genes were more significantly up-regulated in the immersion compared with the microinjection group.This may be because high dose of immersion infection might activate more immune systems, including the skin, mucosal, and respiratory systems, though further studies are needed to verify this hypothesis.This study also had some limitations.Induction of the innate immune response by either the immersion or injection route involved a complicated process.The innate immune response of long-term infection remains unknown.Further studies are also needed to determine if some of the genes differentially expressed in our transcriptome data responded specifically to Vp13 infection.More attention should also be paid to genes that are differentially expressed between the immersion and microinjection groups in relation to the possible cure of diseases resulting from these different infection routes.In this study, we compared innate immune related transcriptome changes in zebrafish larvae following 
infection with V. parahaemolyticus Vp13 by static immersion and caudal vein microinjection.Different infection methods could activate different innate response genes and induced different immune-defense pathways.These results provide an insight into the role of differentially expressed immune-related genes during the early stage of Vp13 infection and suggest that some genes related to Vp13 infection defense might be specific to different infection routes.Further studies should be conducted to clarify the roles of these genes in the mechanism of V. parahaemolyticus infection.
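The relative-expression calculation behind the RT-qPCR validation described above (the 2−ΔΔCt method, with β-actin as the reference gene) can be made explicit with a short sketch. The Ct values below are invented for illustration and are not data from this study.

```python
import numpy as np

# Invented triplicate Ct values for illustration only
ct_target_infected = np.array([24.1, 24.3, 24.0])  # e.g. a cytokine gene, infected larvae
ct_actin_infected  = np.array([18.2, 18.4, 18.1])  # beta-actin reference, infected larvae
ct_target_control  = np.array([27.6, 27.8, 27.5])  # same gene, control larvae
ct_actin_control   = np.array([18.3, 18.2, 18.4])  # beta-actin reference, control larvae

# Delta Ct: normalise the target gene to the reference gene within each group
d_ct_infected = ct_target_infected.mean() - ct_actin_infected.mean()
d_ct_control  = ct_target_control.mean()  - ct_actin_control.mean()

# Delta-delta Ct and relative expression (fold change of infected vs. control)
dd_ct = d_ct_infected - d_ct_control
fold_change = 2 ** (-dd_ct)

print(f"ddCt = {dd_ct:.2f}; relative expression = {fold_change:.1f}-fold vs. control")
```

With these made-up values the target gene appears roughly 11-fold up-regulated in infected larvae; in practice each biological replicate would be processed separately and group means compared with a t-test, as in the study.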
The innate immune response can be activated by infection via different routes. Zebrafish provide a useful infection model for studying inflammation and the innate immune response. We investigated the genes and signaling pathways activated by static immersion and caudal vein microinjection infection using transcriptome profiling and reverse-transcription quantitative PCR (RT-qPCR) to compare the innate immune response in 3 days post-fertilization (dpf) zebrafish larvae infected by Vibrio parahaemolyticus Vp13 strain. The median lethal dose (LD50) values at 96 h following immersion and microinjection were 3.63 × 107 CFU/mL and 5.76 × 102 CFU/nL, respectively. An innate immune response was initiated after 2 h of incubation with the respective LD50 for each infection method. Six hundred and two genes in the immersion group and 359 genes in the microinjection group were activated and differentially expressed post-infection. Sixty-three Gene Ontology (GO) terms and four Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways were significantly enriched in the immersion group, compared with only three GO terms and no KEGG pathways in the microinjection group. Two genes, tnfb and ccl20a.3, were significantly up-regulated in both groups. We speculated that immersion infection may affect initial dorsal determination, cytochromes, and fatty acid-binding proteins, as well as inflammation, while microinjection infection may mainly directly affect the immune response. Infection with doses > LD50 (1.09 × 109 CFU/mL and 1.09 × 103 CFU/nL by immersion and microinjection, respectively) caused more significant up-regulation of il11a, tnfa, tnfb, il1b, ccl34a.4, ccl20a.3, irak3, cxcl18b, and ccl35.1, suggesting that in addition to the classical innate immunity genes tnfa, tnfb, il1b, and il6, the genes il11a, ccl34a.4, ccl20a.3, cxcl18b, and ccl35.1 were also important for defending against Vp13 infection. These findings highlight the genes involved in the responses of zebrafish to Vp13 infection via different routes and doses, and thus provide the basis for further analyses of immune response signaling pathways.
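The LD50 values quoted above were calculated with Bliss's method, which is a probit-based dose–response analysis. The following is only a simplified, unweighted illustration of that idea (a full probit analysis uses iteratively weighted maximum likelihood); the immersion dose–mortality counts are invented and are not the study's raw data.

```python
import numpy as np
from scipy.stats import norm

# Invented immersion dose-mortality data for illustration (CFU/mL)
doses   = np.array([1e6, 1e7, 1e8, 1e9])
n_total = np.array([30, 30, 30, 30])
n_dead  = np.array([3, 12, 24, 29])
p_dead  = n_dead / n_total        # observed mortality fractions (must satisfy 0 < p < 1)

# Probit transform of mortality, modelled as linear in log10(dose)
x = np.log10(doses)
y = norm.ppf(p_dead)
slope, intercept = np.polyfit(x, y, 1)

# LD50 is the dose at which the fitted probit equals 0, i.e. predicted mortality = 50%
ld50 = 10 ** (-intercept / slope)
print(f"estimated LD50 = {ld50:.2e} CFU/mL")
```

The same kind of fit applied to microinjection data expressed in CFU/nL would yield the second LD50 reported above.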
168
Enriching Great Britain's National Landslide Database by searching newspaper archives
Risk management decisions can only ever be as good as the risk assessments upon which they rest.The United Nations Hyogo Framework for Action on Disaster Risk Reduction identifies the development and improvement of relevant databases as a key capacity-building priority.In the particular case of landslide risk, the limitations of existing landslide inventories have been repeatedly highlighted as the greatest source of error in the landslide susceptibility and risk maps used to inform land-use planning and other mitigation measures.Better data are also important for estimating landslide damage functions and thus for assessing risk in the classic sense of the combined probability and consequences of suffering landslide losses.In Great Britain, landslides commonly occur due to physical factors such as coastal erosion and maritime climate, particularly during very wet seasons.Coupled with vulnerability factors such as high population densities and high-value infrastructure, impacts from landslide events range from economic losses and infrastructure damage to disruption, injuries and fatalities.For example, in 2012 Great Britain experienced the highest monthly rainfalls of the last hundred years in many regions.This resulted in approximately five times as many landslides as usually recorded, with impacts such as major transport disruptions, evacuations and four fatalities.These losses have piqued policy interest in better understanding landslide impact and in developing a country-wide landslide hazard impact model to forecast and thereby help prevent them in future.The principal source of data regarding landslide occurrence in Great Britain, what causes landslides and the history of their impacts is the National Landslide Database (NLD) of Great Britain.The NLD is an archive of the location, date, characteristics and impact of landsliding in the past, with records dating from the last glaciation to the present.First created in the early 1980s by Geomorphological Services Ltd, the NLD is now maintained and constantly updated by the British Geological Survey.Since its creation, the strategies of data collection have been variable, due to shifts in the underlying resources available, changes in available technologies and variation in the intended applications of the database.The variation in the methods and intensity of past data collection makes it reasonable to assume that there are additional landslide events to be found, and more information to be added about existing landslides in the NLD.In this paper, we present a method to increase the richness of the NLD by searching a digital archive of 568 regional newspapers for articles referring to landslide events (a simplified sketch of this kind of keyword search is given below).Our aim is not to 'complete' the NLD, but rather to complement existing sources by providing more and richer information about landslide phenomena in Great Britain.In particular, we demonstrate the capacity of this method to enrich the NLD in two ways: adding records of additional landslide events not previously documented in the NLD and supplementing currently recorded NLD landslide event information, particularly about impacts.As this method draws consistently upon an independent dataset, comparing the results to the contents of the NLD can also provide a way to assess potential bias in the NLD and enhance overall confidence in its data.The method we present here could also be applied to enhance understanding of other natural hazards, such as surface water flooding, whose incidence and impacts are not systematically recorded in existing datasets, particularly
when examining records pre-remote sensing.This paper is organised as follows: In Section 2, we discuss the broader difficulties of producing landslide inventories and how these relate to the NLD.We then consider the potential of newspaper articles as a supplementary source of landslide inventory data and review existing studies using this approach before introducing the particular newspaper archive used in our research.In Section 3, we describe the methodology we developed for searching and filtering digital archives of regional newspapers to collect news stories about landslide events and extract factual information from them to enrich the NLD.Then in Section 4, we present results of our newspaper searches for two search periods.In Section 5 we discuss the implications and uncertainties of our methodology and how this methodology might be applied in other contexts.In Section 6 we summarise results and draw conclusions.Detailed information about the nature of past events is important for understanding, predicting and managing landslide risk.Van Westen et al. identify four basic types of information about past landsliding needed to support risk assessment and management:The environment surrounding the landslide,What triggered the landslide,What elements are/were at risk.Of the four categories given above, van Westen et al. and Van Den Eeckhaut and Hervás demonstrate that the first category, landslide inventories, is the most important when considering potential risk for the future.Compiling such inventories is complicated by a number of factors, including the following: There are first order conceptual questions about the definition of a landslide ‘event’ to be recorded as distinct from a landslide triggering event. Compared to other hazards, where we often have direct instrumental measurements of the phenomena over a wide region, landslide deposits observed on the ground are the outcome of a set of interacting processes that are rarely feasible to measure systematically instrumentally.Consequently, to produce a landslide inventory, one must actively search for them across a landscape, through methods such as remote sensing and photogrammetry, field investigations, public reporting/interviews and archival research or a combination thereof. It can also be difficult to identify and extract landslide events from public databases.For example, in the UK the Highways Agency Road Impact Database, landslides do not have a specific event code.Landslides and engineered slope failures are sometimes noted in a free text field but are more commonly recorded in their database of traffic disruption as "other".For the above three reasons, it is rare to have databases of all landslides that have occurred over a region within a given time period, and there may be biases towards locations where humans are affected or larger landslides that are more discernible in imagery/field studies."The 'completeness' of an inventory will also be affected by the time lag between the landslides occurring and when they are inventoried, as smaller landslides may be eroded/erased from the landscape within a few months of occurring. 
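As flagged in the introduction of this record, the core of the approach is a keyword search over free text, whether newspaper articles or free-text fields in databases such as the Highways Agency records mentioned above. The sketch below is purely illustrative: the inclusion terms, exclusion cues and example headlines are hypothetical and are not the study's actual Boolean query, which is developed systematically in Section 3 (Steps A1 to A5).

```python
# Illustrative terms only; the study's actual search terms are built iteratively (Steps A1-A5)
LANDSLIDE_TERMS = ["landslide", "landslip", "mudslide", "rockfall", "rock fall", "slope failure"]
FALSE_POSITIVE_CUES = ["landslide victory", "landslide win", "election"]

def looks_like_landslide_report(text: str) -> bool:
    """Return True if the free text mentions a landslide term and no obvious false-positive cue."""
    t = text.lower()
    has_term = any(term in t for term in LANDSLIDE_TERMS)
    has_false_positive = any(cue in t for cue in FALSE_POSITIVE_CUES)
    return has_term and not has_false_positive

headlines = [
    "Landslip closes coastal road after a week of heavy rain",
    "Party sweeps to landslide victory in local election",
    "Mudslide damages two houses on hillside estate",
]
candidates = [h for h in headlines if looks_like_landslide_report(h)]
print(candidates)  # keeps the first and third headlines for manual skim-reading
```

Articles passing such a filter would still need to be skim-read and, where possible, matched by date and approximate location against existing NLD records, as described in the methodology that follows.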
"In a survey of 22 European countries that have or are developing national landslide databases, Van Den Eeckhaut and Hervás found that 68% of respondents estimated the completeness of their country's database to be less than 50%.The above difficulties with the completeness of landslide inventories limit the quality and predictive power of landslide susceptibility assessment.Consequently, landslide risk may be under or overestimated depending on the completeness and homogeneity of coverage of the landslide inventory.The NLD is the most extensive source of information about British landslide occurrence.A metadata description with examples of its content can be found online at BGS.The NLD currently contains over 16,500 records of individual landslides occurring between the last glaciation and present day.For each landslide, more than 35 possible attributes can be recorded.These can broadly be categorised into:Landslide location,Landslide timing,Type of landslide,Cause of landslide,Size of landslide,Impact of landslide,Geological setting of landslide.Perhaps due to the somewhat episodic nature of landslide activity in Great Britain, policy concern for landsliding has waxed and waned, as have resources for NLD data collection and database maintenance, resulting in temporal and spatial variations in database richness.The first national landslide database was initially established in the early 1980s to raise awareness of the nature and distribution of landslides for planning purposes at a local authority level."As the method employed was a desk-based review of secondary sources such as technical reports, theses, maps and diaries, the spatial extent of records in the original NLD were biased towards locations of human interest, such as high impact landslides or 'classic' field study locations.During the 1990s, sources of revenue from the database were not large enough to fund the maintenance and regular updating of the database and the project was mothballed.In the early 2000s, the Department of the Environment made the database available to the BGS, who over the next few years devoted considerable effort to restructuring, quality controlling, and supplementing this database into a more user-friendly and commercially relevant resource."As of 2006, the NLD can be considered to be in its 'contemporary' phase, where information about new landslide events is systematically recorded and added in 'live'.In addition to landslides occurring under natural conditions, since 2012 the BGS also records information about failures in engineered slopes, as they often cause considerable human impact.Information about landslides is added to the NLD through a number of primary and secondary research channels, which are described in detail in Foster et al. 
and Pennington et al.These can broadly be separated into:BGS maps and archive documents,Academic literature,Searches of archive media documents,Online keyword searches of current media sources,Personal communication,Keyword searches of social media implemented since August 2012,Citizen science reporting via the BGS “report a landslide” web-portal since 2009 and BGS Twitter profile, implemented in 2012.From 2008 to 2013, the search of current media which helps inform the NLD, was performed by Meltwater.Meltwater is a subscription media monitoring service aimed primarily at assisting organisations to manage their PR by scanning online media.They provided the BGS with a daily report based on the results returned from an automated Boolean search of a database of 190,000 online sources, including news, social media and blogs.However, the actual sources searched and how they may have changed over time are commercially confidential.With the rise of social media, Twitter has become, along with traditional media reports, a primary channel by which the BGS is alerted of landslide events.Where possible, alerts are followed up via field investigation or contact with affected groups/land owners, prior to inclusion in the NLD.Pennington et al. estimate that the addition of social media and inclusion of engineered slope failures since 2012, and improved traditional media search strategies, have increased the number of NLD additions per year by a factor of 10 compared to the start of the contemporary phase.In the following sections, we describe the use of newspaper articles as a source of information about landslide events, introduce the Nexis UK archive of regional newspaper stories and discuss differences between the current media search strategy used by the BGS and that of Nexis UK.Mass media is generally the first and primary source of information about hazards for the public.Yet, mass media is also used by scientists and practitioners in the field of hazards in a number of ways, with varying levels of depth of engagement with the media:First alert.A news article may be the first way a practitioner hears that a hazard event has happened.From this first alert, s/he may decide whether any follow-up is required.Archives.Archives of news stories about various events can be searched to create or add to a database or inventory of hazard occurrence.Documenting impacts.Media can be used as a way of documenting impacts of events from desk based studies, both at the time of occurrence and through future updates/press releases and reports.Public perception of risk.Analysis of the interactions between mass media coverage and public understanding of hazards and risk can be performed.For example, media coverage of a particular hazard can be assessed over time to understand changes in how issues such as responsibility are framed or assessing variation in interest in a particular story over time.Public communication.Information can be disseminated through interviews and press statements.The use of newspaper articles as a proxy for records of various hazards is not a new technique.In a review of proxy records, Trimble lists examples of studies as early as 1932 using newspaper reports to construct a record of major landslides occurring in Switzerland from AD 1563 onwards and in 1946 using newspaper reports to reconstruct a record of flooding in Utah.The technique is also well established in historic climate reconstruction.Raška et al. 
provide an overview of natural hazard databases that use newspaper and other documentary evidence.For landslides, perhaps the most cited national database is the Italian AVI project, containing records of > 32,000 landside and > 29,000 flood events, going back 1000 years, but with most recorded between 1900 and early 2000, of which ~ 78% of the information comes from newspaper reports.More recently, the growing capacity to search freely available digital archives of global newspaper reports and online sources has prompted the construction of the Durham Fatal Landslide Database, which is a global record of landslides triggered by rainfall that have resulted in fatalities since 2004.For the seven year period, 2004–2010, the database includes 2620 landslides, which resulted in 32,322 fatalities.Other examples of landslide databases using newspaper articles as a source of information include Domı́nguez-Cuesta et al. in the North of Spain, Glade and Crozier in New Zealand, Devoli et al. in Nicaragua and Kirschbaum et al. at the global scale.There are clear biases in newspaper articles as a proxy for information about hazards, such as an overemphasis on events with human impact, increased media interest following a number of events, a focus on high magnitude events or underreporting of low magnitude events and scientific correctness of information.Nonetheless, the regular publishing intervals and relative ease and low associated costs of performing a desk-based study means that analysis of newspaper articles is widely seen as a useful complement to other methods for building hazard databases.For example, in a review by Tschoegl et al. of 31 major international, regional, national and sub-national hazards databases, newspaper reports are used as a regular and/or major source of records about hazard events in 10 of the databases.In the last decade, there have been considerable advances in the digitisation and indexing of archives of newspapers in the UK, for example, The British Newspaper Archive and The Nineteenth Century Serials Editions.Here we have explored the use of a digital subscription archive, Nexis UK, to add richness to the NLD.The archive was chosen due to its national scope, coverage up to present day and the relative ease of searching.The method described in the following sections could be applied to other archives and extended back in time, as we will discuss in Section 5.The Nexis UK archive of regional newspapers contains records of the print versions of 568 newspapers from across the United Kingdom.For our purposes, we focus on the information that can be extracted from them to enrich the NLD which covers just Great Britain.Whilst Nexis UK coverage is continuous from 1998 to present, some selected newspapers have records going back further, although Deacon cautions that there are some small inconsistencies in how data have been archived.For storage reasons, the Nexis UK archive does not include any original photographs from the news story, so some potentially useful information is lost.Although national newspapers are also archived within Nexis UK, we decided to focus efforts on UK regional newspapers rather than national ones.By their nature, most landslides are local events with local impacts that would be newsworthy at a local to regional level.Any landslides large enough to make the national news would most likely also be captured in the regional press.At the time of undertaking this research, the BGS had already used media sources to add information to the NLD.However, 
there are distinct differences between the media sources used by the BGS and the large archive of regional newspapers, Nexis UK, proposed here.Although both sources are digital online services, Meltwater is a record of online news, whereas Nexis UK is a record of printed news.Even if both Meltwater and Nexis UK return records from the same newspaper, the content and length of the stories may vary.In an example given by Greer and Mensing, a study comparing coverage of a news story about genetic cloning across three national broadcast news websites and three national newspaper websites, researchers found that online news stories were generally 20–70% shorter, around 50% of stories were written by newswire services and generally the websites contained fewer citations.It is not clear how many of the regional newspapers included in Nexis UK database also have an online outlet that is being searched by Meltwater, but it is clear that the content may well differ between the two, and as we will show, the Nexis UK database adds a large number of ‘new’ records of landslides to the NLD.In this section, we present our methodology for searching the Nexis UK archive of regional newspapers to enhance the NLD.This process involves five major steps:Construct a set of Boolean search terms to query the Nexis UK archive.Apply the search terms to obtain all articles from a given time period to return a corpus of potentially relevant articles.Skim-read each article from this corpus to identify those which are relevant.Identify whether relevant article refers to a landslide already recorded within the NLD.Extract and code relevant information from the relevant articles.Pass information on to BGS for quality assurance, cross-checking and NLD data upload.Nexis UK returns newspaper articles based on a Boolean keyword search.There were multiple criteria for the search:Maximise the number of articles about landslides in Great Britain, particularly those that are lesser-known or unlikely to be recorded in the NLD.Minimise the number of false positives.Ensure search terms capture any regional or temporal variation in landslide terminology.For instance the Oxford English Dictionary notes that landslip is used chiefly in British English and thus would be a less appropriate search term for other parts of the world.Words also change over time, for example the word “slough”.The OED notes one meaning for slough comes from old English and connotes soft, muddy ground or mires, and another comes from Middle English and meant outer skin or peel.It was extended metaphorically by 19th century geologists to describe the surficial material shed by engineered embankments and steep scree slopes.In verb form the two meanings come together, insofar as the sloughing of rock or soil is usually down into a hole or depression.Landslides are referenced using many different words by scientists, practitioners and the public, thus we use several Boolean search terms.To refine search terms satisfying the criteria listed above, five sub-steps were completed within Step A:Identify key landslide terminology from the sciences and the media.Apply search based on A1 for selected training periods.Read through all resulting articles from Step A2.Identify landslide events and compare these to those already existing within the NLD.Identify any additional terms used to refer to landslides as well as co-occurring words associated with false positives.Incrementally add the additional search terms found in Step A4 to the existing search terms in A2 and 
re-apply search.At each stage verify if any articles about landslide events are being filtered out and/or a large number of false positives are being added in.The search terms were applied to two time periods in the Nexis UK archive: all articles published between 1 January and 31 December for both 2006 and 2012.Once the search was applied, all newspaper articles were downloaded and input into a database to aid categorisation, creating a corpus of potentially relevant stories.The title of each article was skim-read to ascertain whether it was relevant.This is demonstrated in Table 2 where article 1 on Fleetwood Mac is clearly irrelevant from the title and is thus rejected and categorised as “I”.If the title suggests the article could be relevant, the full text was read to locate and date the possible landslide.In some cases, further desk-based research was required to ascertain whether the article truly referred to a landslide event or not.For example, one newspaper article referred to a landslide but then further described the event as “a building collapsed into a construction site”.In such examples, desk-based research was undertaken to identify the exact location of the event using tools including Google Earth time lapse imagery, Google Street View, property websites, social media and other online sources to identify whether this was a landslide, a sinkhole, an issue with slope excavation or another type of event.As detailed in Step A3, if a relevant article contained enough information to approximately locate and date, a search was performed upon the existing NLD to see whether a record of the landslide existed.If so, the article was linked by ID to that landslide event, creating additional confirmation of this event and a potential source of further information to be processed at a later date.Newspaper articles containing more precise information, were used to update the original landslide event.If the landslide did not exist in the NLD, as much information as possible was extracted from the article and categorised according to the BGS NLD pro-forma and a case-by-case judgement of the precision of that information made.An example article is shown in Fig. 4.In this section, we present the results of applying the Nexis UK search method to all regional newspaper articles contained in the database published during the calendar years of both 2006 and 2012.In Section 4.1, we present the overall results of the search before detailed analysis of individual landslide events is undertaken.We then describe how this method adds richness to the NLD through finding previously undetected events and the addition of information to existing events.Finally, in Section 4.4, we discuss the precision to which this information can be estimated from newspaper articles.In Section 5, we will discuss the reliability of this information and potential further applications of the method.The Nexis UK regional newspaper archive was searched using the terms listed in Step A5, Section 3.1.5 for all articles published between 1 January and 31 December during 2006 and 2012.The initial search resulted in 711 articles in 2006 and 1668 articles in 2012.All articles were then skim-read and categorised into broad types, which are listed in Fig. 
5.For both periods, around 20% of articles were categorised as completely irrelevant, and around 20% of articles were categorised as “general landslide discussion”, meaning they referred to landslide phenomena but were not specifically about any particular landslide event.Broadly, there was a decline in the number of articles discussing landslide events abroad and historical landslides between 2006 and 2012."This is countered by an increase in the proportion of 'relevant' articles referring to a landslide event occurring in Great Britain, which rose from 18% in 2006 to 42% in 2012.This is possibly due to the fact that 2012 was a record year for landslides in Great Britain, resulting in increasing public and media interest.There was also an increase in the number of articles discussing landslide related policy in 2012.This is largely attributable to relatively unusual high-impact events occurring in 2012, such as fatalities, region-wide railway delays and repeated closure of stretches of road such as the A83 road at Rest and Be Thankful, resulting in questioning from the press about what should be done to prevent landslides from a policy perspective.A similar effect has been noted in post-flood event coverage.Relevant articles referring to landslide events in Great Britain were then analysed more closely to associate them with particular landslides and extract information about those events with which to enrich the NLD in two ways:Adding landslide events not previously recorded in the NLD,Capturing more information about landslide events already in the NLD.In the following Section 4.2, we discuss these two ways of enriching the NLD, starting with additional events and their spatial patterning before turning to the additional information that our method of searching Nexis UK can generate about events already recorded in the NLD.Although using Nexis UK we found 268 news articles referring to landslides not previously recorded in the NLD, many of these articles were referring to the same, rather smaller, subset of events.Once this repetition in our corpus of articles was accounted for, the final number of additions to the NLD was 39 events for 2006 and 72 events for 2012.This represents a 122% and 40% increase in the number of landslide events in the NLD for 2006 and 2012, respectively.We attribute these NLD additions principally to more and different sources now being searched, along with the majority of new landslides being relatively small in size and thus only of interest to the community in the immediate vicinity.Fig. 
6 shows the number of additional landslide events per month for both years.In both years, the seasonal temporal trend in number of landslides per month is roughly the same: high landslide occurrence in the winter, and also a peak in the mid-summer.The pattern in number of additions from the Nexis UK method appears to vary between the years.In 2006, the percentage increase in number of landslides added to the NLD per month varies between 0% and 600% and there does not appear to be a strong relationship between the number of landslides already in the NLD and the number of additions.Whereas in 2012, the percentage increase in number of landslides per month varies less and appears to be weakly linked to the number of landslides already in the NLD for a given month."This suggests that the 'contemporary' phase NLD is a reasonably representative sample of the temporal patterns of landsliding in Great Britain and that the BGS's development of search methods has been effective.Moreover, these results suggest that there is no strong bias for the month landslides are reported in by the media, although testing of more years of data would be required to confirm this.Fig. 7 shows separately for 2006 and 2012, the spatial distribution of landslide events already recorded in the NLD at the time of this research, and additional landslides added based on Nexis UK news coverage.The pattern in both years is broadly similar, suggesting no shift over time in the detection biases of this method.The distribution of events previously recorded in the NLD roughly matches that of the additional events detected from the Nexis UK regional newspaper archive but not yet recorded in the NLD.In both 2006 and 2012, both NLD and Nexis UK archive landslides are clustered in the South West of England, with smaller clusters in the North West, North Wales and the Highlands of Scotland; these areas of significant activity can be directly related to rainfall patterns and topography.In Fig. 8 we show the spatial distribution of the combined landslides from 2006 and 2012, again for both landslides in the NLD at the time of this research, and additional landslides from Nexis UK, overlaid on a map of landslide susceptibility created from records within the NLD.Broadly, the spatial extent of additional landslides correlates with regions of medium to high susceptibility in the existing susceptibility map.As well as adding new landslide events to the NLD, the corpus of relevant stories generated by searching Nexis UK was also mined to enrich the NLD by capturing additional information about landslide events.As noted in Section 2.1, the existing BGS pro-forma records > 35 attributes.For ten landslide events, additions and amendments were made to the records already in the NLD based on information included in Nexis UK articles.This included more precise dates and locations and additional impact information.Moreover, there are now 55 and 500 additional newspaper articles from Nexis UK for 2006 and 2012 respectively that are linked to individual NLD landslide event entries by ID, acting as additional confirmation for that event and a potential source of further information to be mined at a later date.Fig. 
9 shows a breakdown of the type and/or availability of information available from newspaper articles for each additional landslide event identified from the Nexis UK search, compared to the types of information available from a subset of the NLD.Newspaper articles are a good source of information for landslide date, approximate location and description of impacts."However, newspaper articles rarely contain more 'geotechnical' information such as the type of landslide, trigger and size.Elliott and Kirschbaum highlight the difficulty in classifying the type of landslide.Generally, landslide type classification was only possible from the articles in the Nexis UK archive for rock falls, which can be attributed to the relative simplicity of descriptions of large boulders rolling/detaching versus the more visually subtle difference between a planar/rotational slide.Fig. 9B shows that a trigger for a given landslide event could be identified from newspapers in less than half of cases.Typically the only trigger that could be inferred from an article was heavy or prolonged antecedent rainfall, which articles often described.Our findings based on newspaper articles are broadly consistent with the NLD, which indicates that 63% of landslide events in the NLD in Great Britain were triggered by rainfall.It seems likely that many of the landslides from the Nexis search method missing this triggering information were quite possibly triggered by rainfall.Newspapers could also be mined for information about the impacts and size of landslides."As these are primarily 'free text' rather than categorical fields in the NLD, results are presented in binary terms of whether information was present or not.Fig. 9C highlights the relative success of extracting landslide impact information from newspaper articles.As mentioned previously, this is most likely due to preferential coverage of landslides that have caused human impact over those that have not.Fig. 9D illustrates that landslide records both from the NLD and newspapers rarely contain information about the size of landslides.Where this information was available, it was generally quoted as a weight in tonnes.Some articles would state the size of a landslide qualitatively, but we did not use these classifications on the grounds that landslide size varies by many orders of magnitude, and truly larger landslides are very rarely seen in Great Britain."Thus, a 'large' landslide to a British journalist may represent a relatively small landslide based on globally observed frequency–size statistics, and even in other British regions might be considered 'medium' or 'small'.The precision to which each landslide event can be dated and located from newspaper information was estimated for all additional landslides identified from the Nexis UK archive.Spatial precision is expressed in metres as a radius from the point location given in the database.Date precision is expressed as the amount of time either side of the date given in the database in which the landslide could have occurred.This is generally recorded in categories with increasing units of time.Fig. 
10 shows frequency–size plots for the spatial and temporal precision respectively.Approximately 30% of landslide events already existing in the NLD include an estimate of the spatial precision.Results are reasonably similar for the 2006 and 2012 periods.In both cases the spatial precision of landslide events from the Nexis UK archive is slightly poorer than those landslides already existing in the NLD; in the NLD, the spatial precision peaks at a 100 m radius from the point location of a landslide event, whereas for the Nexis UK, the spatial precision peaks at a 1000 m radius.The date precision of additional landslides identified from the Nexis UK archive is generally good, with 45% of landslides dated to within 1 day of occurrence and 65–75% of landslides dated to within 1 day to 1 week of occurrence.We hypothesise that this is attributable to a generally short lag between event occurrence and reporting.In Fig. 11, boxplots were used to show the time lag in weeks between a landslide event occurring and being reported in Nexis UK newspaper articles, classified by the dating precision of that landslide.For landslides where dating precision was within 1 day, the median time lag between the event and reporting is 2 days.For landslides dated within 1 week, month and quarter, the median lag is equal to 1 unit of that time period.For landslides identified in both newspapers and the NLD, an estimate of the date precision is not available, but the median time lag for all these events was 2 days.In this paper, we have demonstrated that searching digital newspaper archives is an effective and robust method for adding richness to the NLD.In particular, the search methods we developed were consistently successful in:Adding previously unrecorded landslide events to the NLD for all but 1 month of the 24 months analysed.Adding further confidence to many of the existing landslide entries in the NLD by adding additional sources of information.Augmenting the information recorded for landslides in the NLD, particularly about their impact.With this proof of concept test, it should now be possible to apply our method to enrich NLD records of historic landslides occurring throughout the period covered by Nexis UK.Moving forward, our search terms could also be applied to supplement the existing sources of information used to alert of BGS of landslide events."This would provide the BGS with a relatively rapid method of 'reconnaissance' to guide whether further investigation may be required.The most successful element of this work was the addition of landslide events to the NLD.This has resulted in a 122% increase and 40% increase in the total number of landslide events recorded in the NLD.The spatial and temporal distribution, types and triggers for these additional landslides recorded using this method are consistent with existing understandings of landslide susceptibility in Great Britain."These additional landslides also agree broadly with those already recorded in the NLD, which by definition is a ‘patchwork' of methods and efforts devoted to data collection strategies.This agreement provides a basis for added confidence in the NLD as a representative sample of contemporary landsliding in Great Britain, which looks to be growing more complete over time.No single resource will ever provide a complete record of recent landslide events, as events in rural or coastal areas with no impacts are likely to stay unreported, but this research reassures and enhances the current spatio-temporal record.The increasing 
proportion of events recorded in the NLD relative to those identified from the Nexis UK search highlights the influence of evolving data search-and-capture methodologies.Access to more social media resources, systematic processing and the adaptions of rules regarding the addition of smaller and engineered slope failures has greatly enhanced the ‘live’ recording of events.Beyond this immediate application to enriching the NLD, our paper has wider aims.By outlining in detail a clear methodology for developing and applying Boolean operators for searching digital archives of text data, we have provided earth scientists with a guide for exploiting the new sources of data about earth system processes opened up by the ‘digital humanities’ and projects like the British Newspaper Archive, which is scanning the vast holdings of historic newspapers held by the British Library to make them available for online searching.Following the systematic approach we have described in the paper, it should be possible to develop terms for searching these and other digital archives in order to enrich the records of historic landslides held in the NLD and other landslide inventories develop similar databases for other hazards.As with any method, there are uncertainties and biases involved in using such an approach, which we discuss in Section 5.1 along with ways of overcoming them.We then go on to discuss how the bias towards events impacting humans could actually be useful in providing a rich source of data for quantifying the costs and other societal impacts of landsliding.In Section 5.3 we go into more detail on how others might extend this research by applying to longer time periods and in its own rights adopting a more automated approach.Whilst searching newspaper archives offers an effective, relatively low cost method for gathering additional data about landslides and other natural hazard events, there are inevitably uncertainties and limitations to be considered.First, it requires subjective expert judgement to translate journalistic text into the data fields of the NLD.Sometimes relevant information is not explicitly within the news article, but can be inferred, and such inferences can vary between operators.In our case, we explicitly used two different people to search the Nexis UK regional newspapers and a one-day training period was performed to ensure consistent interpretation of results.Such ‘investigator triangulation’ is a well-established method for ensuring the robustness of qualitative research in social science.Second, there are also systematic biases in media coverage that affect its use as a source of landslide inventory data."Media coverage tends to focus attention on large or ‘novel' events and those with human interest such as an impact on society.Also, whilst landslide events are relatively unusual and therefore generally newsworthy, media attention depends on perceptions of salience and if a small landslide occurs on the same day as a large election, the landslide may go unreported, whereas in a period of major landslide impacts, landslides may rank high in public interest and receive proportionally more coverage due to an availability bias.Thus, although the search strategy used here is systematic, the database we are searching is not a spatially or temporally homogeneous record of events.By their very nature, newspaper articles primarily report on “landslides with consequences”.In a major review of news coverage of disaster events, Quarantelli found that individual newspapers tend to 
report on average 90 stories about a particular disaster event, and are most active in the post-event period, providing analytical coverage, resulting in a rich source of information about impacts.In Fig. 9C, we showed that just over 50% of landslide events in the NLD from 2006 onwards contain some information about impact, whereas 60–90% of landslide events identified from the Nexis UK archive contained impact information.Moreover, we found examples of longitudinal reporting of impacts, such as one newspaper article at the time of the event and another article a few months later reporting the remediation works undertaken.One challenge in compiling records of landslide impacts is defining categories by which impact can be measured.For example, Guzzetti uses a measure of the number of annual fatalities caused by landslides, Klose et al. put forward a methodology for measuring the impacts of landslides in economic terms, and Guzzetti et al. quantify the impact at a regional scale on population, transportation and properties.Schuster and Highland also note that very few studies consider the impact of landslides upon natural, non-human environments.Because of these difficulties and discrepancies in recording past events, there are few examples in the literature of robust, large-scale forecasting of the impacts of landslides.Due to the original design and intended research purposes of the NLD, the existing categories in the NLD for recording the impacts of landslides were found to be somewhat insufficient for capturing the rich variety of information available in newspaper articles.Whilst there are fields for number of fatalities, number of persons injured and cost, other impact information is largely recorded as free text.After analysis of Nexis UK articles from 2012 was complete and additional events and information added to the NLD, the list of impact information was organised into broad categories, which provide a first indication of the main types of impact observed in Great Britain in a particularly severe year.Fig. 
12 shows an infographic of the principal types of impact observed — although it has been noted that the majority of landslides that occurred in 2012 were small shallow failures, and in the coming years there may be different types of impact caused by larger, deep-seated landslides that have a longer lag time between rainfall and triggering. Nonetheless, this impact information from 2012 now provides a baseline for comparison with other hazard impact data recording structures. Although there is clearly potential to further mine newspaper articles for information about landslide impacts, there are biases such as overestimations, selective coverage and errors in interpretation of impact that must be taken into account. Typically, this would be countered by using the statements made in a range of articles. Such 'source triangulation' is well accepted in the social sciences for dealing with these problems. However, due to their local nature, we found that 65% of landslide events were reported in only one article, and where the event appeared in multiple articles, the information contained was often repeated verbatim. Nevertheless, newspaper reports can act as a near-real-time alert that an impact has occurred and may need to be further investigated. As already described above, we are not the first to use newspaper articles as a source of information about landslide events. Newspapers have also been successfully drawn on as a major source of information about historical events and to supplement other landslide inventories. Although these studies have undoubtedly been performed with attention to detail and in a systematic way, there is relatively little discussion within the literature of the detailed process of constructing a robust search strategy with the aim of capturing as many relevant articles as possible. It is hoped that by detailing the methodological steps involved and addressing related issues of uncertainty, this paper will make it easier for others to apply this method. We now discuss three potential extensions to the method we have explored in this paper: extending archival searching farther back in time, increasing the speed and automation of archival searching, and extending the archival searching method from landslides in Great Britain to other countries or other hazard databases. The method outlined in this paper has demonstrated a good ability to identify small landslides that might otherwise be missed by other methods of inventory production, historical landslides that may have been erased from the landscape and, more generally, detailed accounts of hazard impact. The search terms outlined in Step A5, Section 3 could be applied 'as is' to the remaining years of the Nexis UK archive, and perhaps, with some further verification of temporal variations in terminology, to the British Newspaper Archive, which dates back to the 1800s. The Nexis archive also contains material from many countries across the globe, and has a similar level of coverage for France, Germany and the Netherlands. By clearly outlining the steps involved in search terminology experimentation, this method can now be applied broadly to other countries or other hazards to create robust, systematic inventories of hazard information from newspaper articles. This paper has set out a method to construct a set of Boolean terms and systematically search the Nexis UK archive of 568 regional newspapers for information about landslide events in Great Britain. When applied to all newspaper articles published in 2006 and 2012, this method added richness 
to the existing National Landslide Database in three ways: Additional landslide events were added that had not previously been recorded in the NLD, resulting in a 120% and 40% increase in the number of documented landslides in Great Britain in 2006 and 2012 respectively; NLD records of landslide events were augmented, by populating more fields of information and also providing additional sources of confirmation for many events, thus increasing the robustness of the database; and Landslide impact information could be obtained from newspaper reports. There are some issues with uncertainty and inhomogeneities in media coverage of hazard events, which require caution. This method should be considered as supplementary to more robust methods of landslide database production. Nonetheless, it represents a relatively quick, low-cost way of identifying events that may require further investigation. In explicitly outlining the steps involved in creating a robust, systematic search, we hope this method can be applied to other landslide and other hazard databases to increase the richness of past records and thus improve the ability to forecast future events. Key landslide terminology outlined by Varnes and Cruden and Varnes was assessed, selecting the terms that are more commonly used in the English language or styles of landslide particularly prevalent in Great Britain. More commonly used terms known to be used in the British media were added, based on previous BGS experience of searching media, including "landslip", "slope failure" and "slope instability". To test the robustness of the combination of search terms from Stage A1, they were applied to the Nexis UK archive of newspaper articles over four sample training periods: 1–31 December for 2004, 2005, 2006 and 2012. Landslide events during December 2012 had a high media profile, with events routinely recorded from the national press and social media. During the years 2004 to 2006, 'live' data collection and recording of events were not so systematic, in addition to engineered slope failures and smaller events being rejected. These particular time periods were therefore chosen in order to test semantic variability over the range of the Nexis UK archive and to compare with and add richness to the NLD. Each newspaper article was skim-read to check whether it satisfied the following criteria: Is the article relevant? Is the article about a landslide 'event'? Is the article about a landslide event that occurred in Great Britain? Is it possible to roughly locate and date the landslide event? If any of the four criteria above were not satisfied, the article was rejected and basic information about the article systematically recorded. If all of the four criteria above were satisfied, a search of all landslides already existing in the NLD was performed to check whether the landslide was already recorded. If the landslide was already recorded, the newspaper article ID was linked to the NLD landslide ID as a potential source of more information and confirmation. If the landslide event was not in the NLD, as much information as possible about the landslide was extracted from the article and systematically recorded using the same structure as the existing NLD. All articles referring to landslides were read carefully to identify any additional terms for landslides used within the texts. This resulted in the additions "cliff collapse" and "land movement". Variations of "cliff collapse" were also added in. We also identified co-occurring words associated with false 
positives; all irrelevant articles were coded into themes, and key words were selected based on these themes to modify the Boolean filter to remove any articles containing the words "elect", "victory", "win", "won", "majority", "submarine" and "porn". At each stage, the search of Nexis UK for the training periods was re-applied, and the resulting articles checked to verify that no landslides previously identified were now being filtered out and that no large number of false positives was being added in. In this Step A5, constructing the final set of search terms used in the rest of our research, there were two cases where a large number of irrelevant articles were returned. The decision was made not to filter these results because this would inevitably filter out relevant articles. The first of these was "cliff falls", which captured reports about people falling from the top of cliffs as well as ones about coastal cliff instability. Given the semantic overlap between these two reporting themes, automated methods could not distinguish between them easily, so it was decided to use manual ones instead. The second case included articles about landslide events occurring abroad. Nexis UK offers some additional search filters, such as searching by geography and newspaper section. However, we chose not to use these filters as sample testing showed that regional newspaper articles are not consistently classified in Nexis UK, and therefore the results were too limiting. Manual filtering was used to deal with articles from regional newspapers in Northern Ireland, so as to only choose stories that referred to landslides in Great Britain. The final set of search terms used for all subsequent searches combined Boolean logic with wild cards to capture different derivatives of given terms. Inclusion terms indicate that if one instance of that term appears, then the article will be flagged as a potentially landslide-relevant article. Exclusion terms indicate that if an article contains any of the inclusion terms but also contains one of the exclusion terms, the article will be filtered out of the search results. Here, * is a wildcard of 1 character and ! is a wildcard of 1 or more characters.
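To make the inclusion/exclusion logic concrete, a minimal sketch of how it could be re-expressed in Python with regular expressions is given below. This is an illustration only: the study's query was executed in Nexis UK's own search syntax, and the term lists shown are limited to examples quoted above rather than the full published search string; the function and variable names are ours.

```python
import re

# Illustrative term lists drawn from the examples quoted in the text; they are
# not the full published Nexis UK search string.
INCLUSION_PATTERNS = [
    r"\blandsli(de|p)\w*",            # landslide(s), landslip(s)
    r"\bslope (failure|instability)",
    r"\bcliff collapse\w*",
    r"\bland movement\b",
]
EXCLUSION_PATTERNS = [
    r"\belect\w*",                    # "landslide victory" election coverage
    r"\bvictory\b",
    r"\bwin\b", r"\bwon\b",
    r"\bmajority\b",
    r"\bsubmarine\b",
    r"\bporn\w*",
]

def flag_article(text: str) -> bool:
    """Flag an article as potentially landslide-relevant: it must contain at
    least one inclusion term and none of the exclusion terms."""
    text = text.lower()
    if not any(re.search(p, text) for p in INCLUSION_PATTERNS):
        return False
    return not any(re.search(p, text) for p in EXCLUSION_PATTERNS)

print(flag_article("A landslip closed the coast road after heavy rain"))  # True
print(flag_article("The MP swept to a landslide election victory"))       # False
```

The two-stage structure mirrors the filter described above: an article is first flagged on any inclusion term, and then rejected if any exclusion term co-occurs.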
Fig. 2 shows results from applying the final search terms to the four training periods. For the December 2004 and 2005 test periods, the NLD did not have any records of landslide events, whereas 4 landslides were identified in each month using the Nexis UK archive. This demonstrates the potential value of applying the method outlined here to enrich the NLD for the period before 2006, when the BGS entered its 'contemporary' phase of data collection. For the December 2006 test period, the NLD contains records of 7 landslide events, 3 of which were also identified in Nexis UK articles. In that month, we also detected 4 additional landslide events not previously recorded in the NLD, representing a 57% increase in database entries for December 2006 by using the Nexis UK archive as an additional source of information. December 2012 was part of a particularly wet season, resulting in many more reported landslides than usual. At the time of performing this research, there were 75 landslides in the NLD for December 2012. Of these, 18 events were also identified in the Nexis UK archive. We also detected an additional 6 landslides not recorded in the NLD, increasing the total number of landslide events recorded for December 2012 in the NLD by 8%. The decline between 2006 and 2012 in the proportion of landslides detected using the Nexis method but not already existing in the NLD can be explained by the addition of social media as a source of information and the subsequent inclusion of engineered slope failures in the database. In December 2012, there appear to be proportionally more events in the NLD that were not found in Nexis UK than in December 2006. This contrast was investigated for the December 2012 test period by examining the source of information for each landslide event that was found in the NLD but not in the Nexis UK newspaper archive. Fig. 
3 shows a breakdown of these sources.The principal reason for these landslide events being in the NLD but not Nexis UK was that they were reported in the media after 31 December 2012.There is good reason to expect that many of these December 2012 events would have been detected using the Nexis UK archive, if instead of searching for a single test month, the time horizon for searching had been extended to overcome this lag time between an event occurring and a story being published about it.The second most frequent reason that we found for landslides not being identified in Nexis UK is the source being an online newspaper article from the Newsquest Media Group.This group publishes some 300 local/regional newspapers, but only the print version of many of these newspaper titles is available to search in the Nexis UK archive.From our experience, the content and frequency of publishing vary considerably between the online and print versions.For instance, online news articles may be uploaded daily, whereas the paper is printed once per week, and neither the online nor print version contain all stories of the other, leading to discrepancies in the search results we generated using the Nexis UK method and the media scans provided to the BGS for the NLD by the Meltwater method.There were a small number of cases where the source was available in the Nexis UK archive, but the specific article was not.This was confirmed by performing additional searches of Nexis using the title of the article and just searching a specific source.This can sometimes happen with freelance or newswire stories where the newspaper does not own copyright and cannot make it available for searching in Nexis UK.The majority of the remaining landslide events not identified in the Nexis UK archive search were from sources not available to search in the Nexis archive.None of the landslide events recorded in the NLD but not returned from the Nexis UK archive appeared to be caused by filtering/errors with the search terms."Although it is not possible to validate these results against the 'true' number of landslides that actually occurred in Great Britain in this period, it does appear that the search terms and method used here has relatively good agreement with existing records in the NLD and is also able to add richness by identifying additional landslide events.We did not identify any particular regional or temporal variations in landslide terminology.However, all test periods are relatively recent.It is possible that if the search was applied to more historical archives that spatial or temporal trends may appear in the landslide terminology used.To produce high quality landslide susceptibility maps and broadly have a good understanding of the landscape setting in which landslides occur across a region, we often require multi-temporal inventories of landslides, extending back over a number of decades."This is an issue for retrospective studies, as many landslides are 'erased' from the landscape via erosional processes within a few months to years.Thus, to produce historical inventories, we often rely on records of landslides from proxy sources.Indeed, in perhaps the best example of a long-term archive of landslide events, over 60% of records of landslide events come from newspapers, and the others from reports and interviews.Other examples include a database of historical landslides occurring in Utah from 1850 to 1978 and landslides occurring before 1990 in Nicaragua.Although the Nexis UK archive only extends back to 1998, there 
have been many advances in the digitisation, character recognition and compilation of historical UK newspaper sources going back considerably further, suggesting that this method could be applied to much longer time periods to gain a better long-term understanding of landslide phenomena. For example, the British Library has been undertaking a project to digitise its archive of newspapers extending back to 1800. It is likely that the search terms listed in Section 3 would need to be adjusted to take into account historical variations in terminology, but this presents an opportunity to gain further insight into landsliding in Great Britain over a relatively long timescale. There have been considerable developments in the field of automated newspaper content analysis, using computers to identify the meaning of sentences within a text and extract information into a database; this has been applied to fields such as political science, economics and the policy dimensions of environmental phenomena such as hurricanes and climate change. This could be of use to more rapidly process the large number of articles returned and to retrospectively populate the database over longer time periods, particularly in countries where a large number of landslides occur annually. There are questions, however, about how easily this automated approach could be adapted to the creation of landslide event databases, owing to the indirect descriptions of events and the need for additional research to extract information. There have been considerable advances in the ability to automate searches of large volumes of social media, so now that robust search terms have been developed, it may be possible to apply a more automated approach to the task. The issues of database completeness are not specific to the field of landslides in Great Britain. As mentioned in Section 2.1, Van Den Eeckhaut et al. found that the majority of European countries that maintain landslide databases estimate their completeness to be around 50%. At a global level, Guzzetti et al. estimated that only around 1% of slopes have associated landslide inventory maps. Yet detailed, systematic, well-produced landslide inventories are fundamental in both applied risk analysis and scientific research. Indeed, it is acknowledged across many hazard-related disciplines that database incompleteness is an issue, and various proxy records have the potential to fill some of the gaps in our knowledge. Examples include Stucchi et al. for seismology, Barredo for flooding and Blackford and Chambers with respect to climatology.
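As a hedged illustration of the kind of automated first pass discussed above, the sketch below shows how articles passing a Boolean filter might be turned into minimal candidate records for manual review. The record fields, keyword cues and example article are hypothetical simplifications introduced here; they do not reproduce the BGS pro-forma or any published pipeline.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical, simplified candidate record; the fields are illustrative and do
# not reproduce the >35 attributes of the BGS pro-forma.
@dataclass
class CandidateEvent:
    article_id: str
    published: date
    headline: str
    impact_sentences: list = field(default_factory=list)

# Crude keyword cues for impact statements, chosen purely for illustration.
IMPACT_CUES = ("closed", "blocked", "evacuat", "damag", "injur", "derail")

def screen_article(article_id, published, headline, body,
                   passes_filter=lambda text: "landsli" in text.lower()):
    """Turn a newspaper article into a candidate record for manual review.

    passes_filter is a stand-in predicate; in practice the Boolean filter
    sketched in the methods section above would be plugged in here."""
    if not passes_filter(headline + " " + body):
        return None
    record = CandidateEvent(article_id, published, headline)
    for sentence in body.split("."):
        if any(cue in sentence.lower() for cue in IMPACT_CUES):
            record.impact_sentences.append(sentence.strip())
    return record

# Example (invented article): the returned record's impact_sentences can then
# be reviewed and coded manually against the database structure.
event = screen_article(
    "nexis-0001", date(2012, 12, 22), "Landslip closes railway line",
    "A landslip blocked the line near Exeter. Services were cancelled.")
```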
Our understanding of where landslide hazard and impact will be greatest is largely based on our knowledge of past events. Here, we present a method to supplement existing records of landslides in Great Britain by searching an electronic archive of regional newspapers. In Great Britain, the British Geological Survey (BGS) is responsible for updating and maintaining records of landslide events and their impacts in the National Landslide Database (NLD). The NLD contains records of more than 16,500 landslide events in Great Britain. Data sources for the NLD include field surveys, academic articles, grey literature, news, public reports and, since 2012, social media. We aim to supplement the richness of the NLD by (i) identifying additional landslide events, (ii) acting as an additional source of confirmation of events existing in the NLD and (iii) adding more detail to existing database entries. This is done by systematically searching the Nexis UK digital archive of 568 regional newspapers published in the UK. In this paper, we construct a robust Boolean search criterion by experimenting with landslide terminology for four training periods. We then apply this search to all articles published in 2006 and 2012. This resulted in the addition of 111 records of landslide events to the NLD over the 2 years investigated (2006 and 2012). We also find that we were able to obtain information about landslide impact for 60-90% of landslide events identified from newspaper articles. Spatial and temporal patterns of additional landslides identified from newspaper articles are broadly in line with those existing in the NLD, confirming that the NLD is a representative sample of landsliding in Great Britain. This method could now be applied to more time periods and/or other hazards to add richness to databases and thus improve our ability to forecast future events based on records of past events.
169
How health professionals regulate their learning in massive open online courses
The invention of the Internet provided opportunity for radically new models of online learning.However, online learning provision has tended to mimic conventional teaching in an online setting and models of online learning have largely been adaptations of conventional approaches to teaching, rather than new innovations.For example in Higher Education, campus-based universities tend to use online learning as a complement to face to face instruction, while open universities have largely applied models of distance education that move from the delivery of paper-based materials to online distribution of digital content.Over the last few years, Massive Open Online Courses have emerged as a way for millions of learners worldwide to access learning opportunities more flexibly with the advent of thousands of courses, attracting millions of learners.While the original proponents of MOOCs envisaged them as a radical departure from conventional, online learning, the enormous growth of MOOC offerings has been through the emergence of courses that adopt more traditional pedagogical approaches, prioritising scale over pedagogical innovation.There are two distinctive features of MOOCs that differentiate them from other forms of online learning: that they offer open access to Higher Education for learners irrespective of their previous qualifications or experience; and that they facilitate learning on a massive scale with thousands, or even tens of thousands, of learners signing up for each course.To enable learning at such scale, and reduce the cost of learning support, MOOCs tend to be designed around a self-guided format that assumes learners are able to regulate their own learning, rather than relying on instructor guidance.However, MOOCs attract a diverse spectrum of learners, who vary in their ability to regulate their learning.The capacity to self-regulate learning is influenced by personal psychological and environmental factors.There is evidence that self-regulated learners adopt effective learning strategies in conventional, online contexts, planning, monitoring, and coordinating their sources of learning.MOOCs, however, are qualitatively different from conventional, online courses, particularly in terms of their scale and openness.Gaining insight into self-regulated learning of individual participants in MOOCs is critical in understanding whether and how open, online courses are effective in supporting learning.This qualitative study examines how learners regulate their learning in a MOOC.The context of study is the Fundamentals of Clinical Trials MOOC offered by edX, a leading provider of open, online courses based in the United States.The study explores the research question: How do professionals self-regulate their learning in a MOOC?,by collecting and analysing narrative accounts of learning provided by health professionals participating in the MOOC.The paper begins with a review of current research in MOOCs, focusing on studies that address aspects of SRL and further our understanding of MOOC learning.This review is followed by a description of the design and context of this study, and of the instrument used.The results are then presented and discussed.The paper concludes with a discussion of the main findings and their implications, alongside a reflection on the limitations of the study and prospects for future research.The past decades have been marked by changing societal expectations around access to Higher Education.The internet and digital technologies have been viewed as a potential 
means of opening access to Higher Education to people irrespective of their previous educational experience.However, there is a tension between cost and scale, and universities have sought ways to provide cost-effective access.MOOCs have been promoted as a potential solution to the cost-scale conundrum.MOOC providers, such as edX, Coursera, and FutureLearn, have worked in partnership with universities to provide scalable solutions by designing courses that foreground content presentation, typically lecture video and automated assessment, over opportunities for interaction.This design has led some authors to question the utility of MOOCs as an effective environment for online learning.Nevertheless, MOOCs have become a popular choice for individuals seeking learning opportunities, and this has stimulated research effort focused on understanding learning within MOOCs.While initial MOOC research was often qualitative, quantitative studies have become dominant with the emergence of large scale MOOC platforms that permit the generation and analysis of ‘clickstream’ data."Attempts to interpret clickstream data include mining the data tracking learners' access to MOOC resources and classifying learners according to their patterns of interaction with content or with other learners in online discussion forums. "Other studies have focused on MOOC participants' prior education, gender and geographic location to explore the factors underlying poor rates of completion that are typical of MOOCs.But while these quantitative studies of learner activity within MOOC platforms provide us with greater understanding of what populations of learners do within MOOCs, our understanding of why individual MOOC participants learn as they do, and how they actually learn is less developed.Unlike in traditional HE courses where learner expectations are largely standardised, the diversity of learners in a MOOC results in a range of motivations for participation and potentially leads to different levels of engagement which may not be focused on completion.In a MOOC, where certification may be absent, or of little value, learners are required to be more intrinsically motivated, recognising their own goals and indicators of success.Breslow et al. 
argue that it is important to understand the influence of learner motivation on learning in MOOCs."Similarly, Gašević, Kovanović, Joksimović and Siemens call for studies that improve our understanding of ‘motivation, metacognitive skills, learning strategies and attitudes’ in MOOCs arguing that because levels of tutor support are lower than in traditional online courses, there is a need for greater emphasis on the individual learner's capacity to self-regulate their learning.Self-regulation is the ‘self-generated thoughts, feelings and actions that are planned and cyclically adapted to the attainment of personal goals’.Zimmerman identified a number of components of self-regulation including goal-setting, self-efficacy, learning and task strategies, and help-seeking.Although originally conceptualised in formal settings, SRL and its sub-processes have subsequently been studied extensively in online contexts and SRL is increasingly being used to investigate learning in MOOCs.Research that explores these aspects of SRL in MOOCs is described below.Zimmerman highlights goal-setting as a central component of SRL.By setting goals, the learner is able to monitor progress towards those goals, adjusting their learning as necessary.Different types of goals are recognised, ranging from specific, learning focused goals driven by intrinsic motives to extrinsically motivated performance goals.Setting goals and monitoring them is motivational as it provides evidence of progress to the learner.Haug, Wodzicki, Cress, and Moskaliuk explored the utility of badges in a MOOC focused on emerging educational technologies.The authors used self-report questionnaires and log files to explore patterns of participation, and found that learners who had set a goal to complete the course were more likely to sustain their participation than those who did not set a goal.Completion of the course provided an extrinsic motivation for these learners.However, as highlighted above, MOOC learners may not be motivated by completion, so it is important to understand different types of motivation for MOOC study.Zheng, Rosson, Shih, and Carroll conducted interviews with learners who had undertaken a variety of MOOCs and identified four categories of MOOC learner motivation: fulfilling current needs, preparing for the future, satisfying curiosity, and connecting with people.Their findings suggest that completion is just one outcome of MOOC participation, with key motivations to study being intrinsic in nature, related primarily to personal improvement.In a larger, survey based study, exploring motivations of MOOC learners based in the United Kingdom, Spain and Syria, seven different types of motivation were identified, mirroring the categories identified by the Zheng et al. 
study, and in addition identifying categories of motivation reflecting other extrinsic factors: the free and open nature of MOOCs, their convenience, and the prestige of courses run by high quality institutions.These studies identify the types of goals learners may be setting, but do not tell us about how different types of goals influence learning in MOOCs.Self-efficacy, the personal belief about having the means to perform effectively in a given situation, represents another component of self-regulation."An individual's self-efficacy influences how they respond to setbacks in their learning, with highly self-efficacious individuals redoubling their efforts in an attempt to meet their goals when faced with a challenge, while those lacking self-efficacy may give up or become negative.In a study of learners registered for a MOOC on economics, Poellhuber, Roy, Bouchoucha, and Anderson explored the relation between self-efficacy and persistence using clickstream data and scales for self-efficacy and self-regulation.Their study found a positive link between self-efficacy and persistence, though the main predictor they identified was initial engagement.Wang and Baker studied participants on a Coursera MOOC on big data in education to explore the link between motivation, self-efficacy, and completion.The study found that participants who self-reported higher levels of self-efficacy at the outset of the course were more likely to persist to the end, echoing findings from online learning research.Our own parallel study of participants in a MOOC on Data Science linked a range of factors: previous experience of MOOC learning, familiarity with content, and current role to learner self-efficacy.Learners draw on a range of cognitive and metacognitive strategies to support their learning, including taking notes, revising, supplementing core learning materials, exercising time management and undertaking on-going planning and monitoring.Highly self-regulated learners draw on a wider range of strategies and recognise the applicability of different strategies to different situations.They are also able to effectively monitor their learning, changing strategies when they become ineffective.Veletsianos et al. 
explored the learning strategies of a small group of learners who had completed at least one MOOC, focusing on note-taking and content consumption."Their interviews uncovered a variety of note-taking strategies that facilitated these individuals' engagement with the course content.The range of note-taking strategies utilised illustrated how different approaches such as taking digital notes, using a dedicated notebook, or annotating printed slides, complemented different patterns of participation and engagement.Other learning and task strategies are also important.For example, in a survey-based study exploring the causes of high drop-out rates in MOOCs, Nawrot and Doucet, identified time management as the primary reason for MOOC drop-out, being cited by more than half of their survey respondents, though their study did not collect detailed descriptions of how time management skills contribute to effective learning in MOOCs."Help-seeking: recognising the limits of one's own knowledge and understanding the role that others can play in one's learning is another key attribute of self-regulated learners.Studies by Cho and others have demonstrated that learner interaction such as seeking help is important for high quality, online learning.The importance of learner interaction was also highlighted by Abrami, Bernard, Bures, Borokhovski, and Tamim in their meta-analysis of a range of studies of distance education and online learning.That study concluded that online learning designs should incorporate learner interaction, but that such an approach is dependent upon learners having the capacity to self-regulate their learning.MOOC researchers have explored the impact of social interaction on MOOC learning.In a small scale case study, Chen and Chen looked at how participation in a local face-to-face study group improved motivation, broadened perspectives, and led to shared learning strategies among MOOC learners.Interaction with peers can also be effective online.Gillani and Eynon established a link between forum participation and MOOC completion by analysing patterns of interaction of highly performing students in a MOOC focused on business strategy.Learning can occur with other students in the same cohort, or with others in existing networks.Veletsianos et al. 
describe how learners in their study who took digital notes shared them with their peers through social networks.The study, which focused on interactions which took place outside course platforms, found that learners consistently described these learning focused social interactions as meaningful, though the authors concede that their analysis was unable to provide an insight into how these interactions affect learning.In addition to these studies that use concepts of SRL to explore individual learning in MOOCs, a complementary strand of MOOC research has used SRL to critique and inform MOOC design.Bartolomé and Steffens used SRL as a lens to critically evaluate the utility of MOOCs as a learning environment.Their theoretical study applied criteria originally developed to evaluate online learning platforms and concluded that MOOC platforms such as those offered by Coursera and edX could be categorised as a ‘content system without tutor’ supporting cognitive and motivational components of SRL, but providing little support for emotional and social components of SRL.This analysis highlights the inherent shortcomings of MOOC platforms, and signals the type of skills that learners need to possess and use to learn effectively in these courses.Gutiérrez-Rojas, Alario-Hoyos, Sanagustín, Leony, and Kloos argue that the lack of interaction opportunities offered by these MOOC platforms disadvantage learners who have poor study skills and may contribute to early drop-out seen in MOOCs.To address this inherent shortcoming of MOOC platforms, Gutiérrez-Rojas et al. designed a mobile application that supported novice learners as they studied on a MOOC, scaffolding their interaction with content and replacing some of the functions of a tutor.The design of the application is mature, but its effectiveness has yet to be evaluated.The studies described above suggest that self-regulation is important for effective learning, and that learners differ in the extent to which they self-regulate their learning.However, these studies give little insight into the actions and behaviours learners adopt to learn in open, online, non-formal contexts.This study builds on earlier work examining the self-regulated learning undertaken by professionals in a variety of contexts to investigate how professionals self-regulate their learning in the context of a MOOC.The study design utilises a qualitative SRL instrument to reveal narrative accounts of learning from participants in the MOOC and through them to identify patterns of self-regulation.The Fundamentals of Clinical Trials MOOC, provided an introduction to the research designs, statistical approaches, and ethical considerations of clinical trials.The 12 week course was aimed at health professionals and those studying for a health professional role and attracted 22,000 registrants from 168 countries.Participants for the study were drawn from a larger cohort of learners who responded to a message posted to the course website in week four inviting them to complete a survey instrument designed to provide a measure of their self-regulation.The makeup of the study cohort was representative of the overall demographic profile of the course cohort in terms of gender, age, education background and geographical distribution.Participants who completed the survey instrument, and who identified as healthcare professionals, were invited to take part in a semi-structured interview designed to explore their self-regulated learning in the MOOC using a script developed iteratively over a 
number of studies.Relevant questions are included in the Section 3 below, with the full interview script available online .Thirty-five Skype interviews were conducted during November and December 2013."The interview transcripts were analysed to probe how participants' self-regulate their learning in relation to each of the sub-processes described by Zimmerman.Each of the 35 transcripts were coded independently by two researchers, and codes assigned corresponding to these SRL sub-processes as well as other coding structures reported separately.Discrepancies in the coding between the two researchers were minor and were resolved prior to the commencement of a second round of analysis.The transcripts were then re-analysed by two researchers to identify emergent patterns of self-regulated learning behaviour.This section describes the analysis of data from the interviews, arranged thematically by SRL sub-process."The transcript analysis uncovered detailed accounts describing participants' goal-setting, self-efficacy, learning and task strategies, and help-seeking.These accounts are presented in turn below with a summary and initial synthesis at the end of each sub-section.The interview questions did not elicit detailed descriptions of the other SRL sub-processes and these sub-processes are not discussed further.Table 1 lists the study participants, their gender, role, and geographic location.Descriptions of goal-setting and identification of diverse types of learning goals were elicited through questions including: Can you summarise your main aim in this MOOC?,and Did you set specific goals at the outset of this MOOC?,Most participants described setting goals.21 participants described goals focused on what they aimed to learn from the course, while 19 participants described performance goals focused on their completion of the course or attainment of the course certificate.There was some overlap between these two groups with 12 participants describing both learning and performance focused goals.Goals focused on learning primarily articulated how the course content related to, or enhanced career prospects or job requirements.A Nurse Teacher described how the course complemented his existing knowledge and skills in his current role:“I know the material, but I need and I am looking for different explanations of the syllabus.So these open courses are giving me helpful information of how to resolve and how to explain the same issues in another way.,Similarly, when asked about her goals, a Clinical Research Consultant clearly indicated how she expected the course to supplement her existing knowledge:"‘It's learning, getting to know more about some things that I already know, but I wanted to go into more depth, to get more information because there are some areas that I am not good at, like biostatistics, even study the design because I was not doing that a lot.’",Most goals were articulated at this broad level, with only a few participants reporting goals focused on specific aspects of the course.A Physician described a narrow learning-focused goal: ‘My goal was to be very confident of fundamentals of probability in clinical trials.’,This learner had already taken three statistics MOOCs and was focused on addressing a particular gap in her knowledge.Other learners described goals that focused solely on extrinsic criteria – completion of the course or attainment of the certificate – making no detailed reference to what they expected to learn.A Neurologist described how his goals were ‘… to watch the 
videos and access the learning material and then to gain the certificate.' These non-specific goals could apply to studying on most MOOCs and do not indicate the same level of engagement as focused learning goals. The Fundamentals of Clinical Trials course originated at Harvard Medical School, and several respondents, such as this R&D Innovation Projects Coordinator, articulated goals focused on certification of their learning: 'The goal is to have an in-depth knowledge of this area from a very prestigious university like Harvard and having it certified with a certificate, it will be great for me and my job afterwards.' This 'Harvard brand' was attractive to many. Twelve respondents articulated both learning-focused and performance-focused goals. A Clinical Trials Project Manager listed two goals, the second clearly articulating how it would benefit her future work: 'I was expecting to be able to complete the course first of all and second to have an overview from A–Z. I was not expecting to learn a lot of things in depth, which is normal,' adding 'But my two objectives were an A–Z learning/understanding and to understand better my day to day work, my day to day practice at the organisation of clinical trials where I work.' Similarly, participant 366 described her goals and her ambition to expand her learning: 'The first goal is to pass and get the certificate of achievement… second goal is to participate in the discussion forums as much as I can, try to benefit from the teaching assistants and professor and understanding new things that I didn't know before.' The personal nature of this goal was unusual, with few participants focusing explicitly on their intrinsic motivation. One other example is from a Paediatric Pharmacist, who, while acknowledging the attraction of certification, indicated that her primary motivation was simply to 'learn from the best': 'I would like to have finished the class, to get the certificate, but it wasn't really for that. I think it's more personal, like a personal goal, like I just wanted to learn from the best. 
"So it's great that you have a certificate, but I'm not about the piece of paper, I'm about the learning opportunity.’",Finally, a small number of participants appeared not to have set goals, though in response to the question, most of this group articulated goals based on completion or certification."For example, the Physician responded to the question of whether she had set any goals with ‘I didn't.I tried to get through the course.’,In summary, most participants articulated goals, but these goals varied greatly, with some respondents focused on extrinsic outcomes, such as course participation, completion and certification, while others articulated more specific goals related to course content, or the intrinsic benefits of their study to their career, current role or personal satisfaction.Goals of any type can be motivating, though intrinsic goal orientation is more strongly associated with academic achievement in online contexts.The range of goals identified in this study matches the motivation types identified by White et al.Only a very small number of participants set goals relating to mastery of specific concepts of expertise development.Instead the learning goals articulated more general descriptions making reference to the overall course topic.In this course, learning objectives provide a clear structure to the course for all participants, and across the groups there was a clear awareness of the course content."The objectives are intended to guide the learner's participation in the MOOC, by signalling what the learner should learn.However these objectives might also encourage learners to adopt a passive approach to their learning, viewing and reading the learning material without engaging in learning activities that fulfil their original learning needs.Interview transcripts were analysed for indicators of self-efficacy.Questions designed to probe self-efficacy included: Do you feel able to manage your learning in this MOOC?,and Do you feel able to integrate your learning on this course with your professional practice?,The majority of learners interviewed provided accounts that demonstrated good self-efficacy.These participants typically provided clear and detailed descriptions of their learning.For example, when asked about his experience of this course, a Medical Laboratory Scientist reported no problems with his learning: ‘Yes I think I have been quite comfortable doing it.I work full time, I study part time in my own time, but yeah I have really had no problems.’,He then went on to indicate recognition of his individual responsibility as a learner:‘I knew it was going to be a course which would be taking quite a bit of my time … there was a lot of material which I had to cover, so I knew I had to commit myself … and actually find time for the course.’, "This inherent understanding of how the MOOC fitted into their on-going learning was also evidenced during the interview of an R&D Innovation Projects Coordinator who described how the course helped her expand her existing knowledge: ‘I′m very familiar with the subject, I already have a good background, I have all the resources and knowledge about this issue … that's why it's not hard for me to grasp what they say in the lectures.’", "She then affirmed her confidence in her own ability and persistence: ‘Actually it's related to character and personality and commitment. 
"I'm this kind of person if I have a commitment to have a certificate then I will have a certificate.’",Not all participants were as assured of their ability to learn and succeed in the course.A minority of those interviewed provided descriptions of their learning that indicated lower levels of self-efficacy in this context.One Physician reported having difficulty with learning: ‘The course material is quite a lot.… I hoped I can get the certificate, but I found it quite difficult for me.’,Although a health professional, this participant was not currently involved in clinical research and would have been unfamiliar with much of the content of the course, unlike some participants already working in the field who may have a basic understanding and were seeking to formalise their learning rather than learn something entirely new.For other participants, lack of self-efficacy may have been due to a lack of familiarity with the MOOC format.A Clinical Pharmacy Lecturer, also appeared to doubt his ability to learn, but in this case, it was his lack of experience that was important: ‘sometimes during the course I found myself lost, this is due to the background may be that I was deficient.’,He indicated a need for assistance: ‘I always start searching on an internet engine, but it needs some sort of assistance ….’,suggesting that he would have preferred to have received more guidance in the course.Most of the accounts indicated high levels of self-efficacy as may be expected given the background of the participants in this study.However it is clear that some participants were not as confident of their ability to succeed in the MOOC due to their lack of prior experience of MOOC learning or lack of familiarity with the content.These findings reflect those of a companion study which also found that self-efficacy was impacted by previous familiarity with learning content or platform.Self-efficacy is highly context dependent and linked to task familiarity and experienced MOOC-takers often talked in their interviews about how they had settled on an approach to MOOC learning.Learners without prior MOOC experience would benefit from additional support in the form of tutor guidance, additional resources or orientation to the course environment.Interview transcripts were analysed to look for the range of learning and task strategies utilised by study participants.Two aspects were probed in particular: whether and how an individual had taken notes,and how active their approach to learning on the course had been.More than half of those interviewed took notes to support their learning in the MOOC, with accounts describing how they contributed to learn in different ways.For the most part, notes were taken as a means of summarising the video lectures, perhaps deploying strategies learned at University."As one Pharmacist reported: ‘I behave as if I am in a lecture theatre when I'm watching these videos.So I would take the sort of notes that I would have taken at university or any other lecture theatre.’,These lectures were delivered in English, and for non-native speakers, notes represented an effective way of reinforcing their understanding.A Psychiatrist described how: ‘I write notes because it is hard for me to understand the videos because they are in English …usually I write down in Spanish.’,While there were differences in how notes were taken, with some preferring paper and others preferring digital notes, the descriptions provided almost always related to text based notes, with only two reports of non-text 
based notes.For example, a Lecturer who was using the course to improve the support she could give her students described how she made tables and charts and remarked: ‘I try to transfer the information to easier forms.’,Taking notes may be part of a strategic approach to learning.For example, a Nurse Teacher described how he recognised signals in the learning materials, for instance when the lecturer presented highly structured information: ‘I′m taking notes because the exercise will ask me about these points.’,He then expressed the value he perceived in note-taking: ‘Learning is not just watching videos or attending classes, learning is better when the student is pushed to take notes to read and then answer some exercise which will involve the readings and the notes.’,A minority of those interviewed did not take notes.For some, such as this Physician, note-taking was not a learning and task strategy she would routinely use:‘I do download the study material which is provided, but while I watch the video I do not have a habit of making notes and I am a person who is organised in a mess."So even if I make a note I don't recollect and read those notes.’", "Similarly, a Clinical Pharmacy Lecturer, recognised the value of note-taking as an aid to learning, yet did not write notes: ‘Notes, no I didn't make notes … my professor always tells me that you have to take notes and reply and comment and I think this is one of my disadvantages regarding reading and I know it's a deficiency.’",Sometimes, not taking notes was an active decision.As the course was wholly online, some participants opted simply to collect all the resources in one place on their computer and save these for future reference.A teacher remarked: ‘I do it on the computer … but not taking notes of anything."Sometimes I go back to the video, sometimes I print the articles they recommended, so I read these, but I'm not taking notes at all.’", "To understand learning and task strategies in a broader context, learners' active engagement and how they managed their time in the course was also examined.Analysis focused on how learners matched their effort to the demands of the course and the extent to which they had adapted their learning during the course.The course followed a regular structure and learners typically set aside time to watch videos and explore the recommended texts, often fitting in their study around other professional and personal commitments.A neurologist described his study pattern:"‘I do the course at the end of my activities, between 9 and 10 o'clock at night.… I take an hour probably a day to go through the materials and when I have a doubt I read first and then go back to the material.’,Just over half of those interviewed appeared not to have changed their approach to the course.For some, this was because they knew what to expect from MOOC study or due to previous familiarity with course content.For example, a Clinical Research Associate described how she felt ‘the course is quite easy’, reporting that she was able to follow the course with a minimum of effort.A minority of this group provided descriptions of their learning that indicated that they had faced challenges, yet did not adapt their approach.For three learners, time had been a factor."For example a Consultant recognised the importance of extra reading, but had not earmarked time to read: ‘I have access to the text book, I should have tried to go through it and learn from it, but I'm not able to give time for that.’",A physiotherapist reported similar 
actions: ‘Yes they provided us with the name of a textbook … I downloaded this book, but I never have time to have a look at it.’ A Physician in this group repeatedly indicated she found the course challenging. When asked about her intention to complete the quizzes, she responded: ‘I tried to, but some questions I cannot get the correct answers’, or, when asked about the course reading: ‘In my previous learning I expected to think about the articles. In this course we have to think more after reading research articles.’ Despite recognising problems with their learning, these learners did not appear to have changed their approach. The remainder of those interviewed had changed their learning approach during the course. For some, it was clear that MOOC learning was a new experience for them, demanding greater effort than anticipated. A Clinical Trials Project Manager described this in detail: ‘For me it's new to learn something by watching a video. Sometimes I have to be more critical … it's not A + B = C, I have to think differently and even if I take good notes, I don't find the answer … and I have to go back to the transcript and to the chapter and read again what the professor said and then during your reflection find the solution, find the right answer.’ For others, it was not lack of familiarity with the format that had motivated their change, but instead a recognition of a limitation of their own inherent learning behaviour. For example, a Clinical Research Consultant described how: ‘I sometimes suffer from procrastination, so I have to make myself do it in a certain time … I made a plan, like an action plan, for each module’. Finally, in this group there were some learners who had adjusted their approach not to address a learning challenge, but rather to match the benefit they felt they were gaining from the course. Two learners described how they had found the forums helpful, and had increased the time they spent reading posts in response. A Nurse Teacher focused his effort on particular aspects of the course that were of interest. He described his strategic approach as follows: ‘Well every unit I review what's going on. So if I'm very interested in one unit I [spend] more time, so then I read more papers, I read more material, more references, websites and then even I can watch the videos more times.
"But if I'm not really interested, less motivated, I'm just watching the video, answer the exercise and then go on to the next.In summary, there was evidence of some participants taking control of their learning, actively modifying their approach and managing their time to match their effort to the benefit they perceived, and to increase the effectiveness of their learning.It also appears that some participants lacked the skills or motivation to monitor effort or manage their time effectively.Nawrot and Doucet argue that MOOC designs should support effective time management strategies including the provision of example study plans, assigning time estimates to all activities, and providing tools to support learners to schedule and plan their learning.The provision of a scheduling tool in particular could support learners who are less skilled in time management to develop these skills.The accounts of note-taking indicate that this learning strategy was primarily used as a means of summarising video lectures, applying learning skills developed in formal education.Summary notes were used particularly by learners whose first language was not English.Only a few examples of more sophisticated note-taking approaches were reported and although some participants recorded their notes digitally, this did not seem to facilitate sharing in this cohort, unlike the MOOC learners studied by Veletsianos et al.The edX MOOC platform incorporates a discussion forum which acted as a space for a case analysis exercise, as well as a locus for informal course related discussion.Transcripts were analysed to identify indicators of positive and negative attitudes to learning with others as well as accounts of help-seeking and engagement with others during the course.Participants were questioned about interactions both within and outside the course.However, there was little evidence of learners interacting with people within networks outside the MOOC.The analysis presented here is, therefore, focused on discussion forum activity.All participants had an on-going opportunity to interact through the course discussion forums, either through actively posting questions or providing answers, or by choosing instead to observe others.While all participants interviewed had looked at the forums, only around half had actively participated in the forum, with most of this group recognising the clear benefit it provided, as illustrated by this quote from a Physician, describing its overall value: ‘a lot of people from different backgrounds will be coming to the course, which is definitely an advantage over an offline course.’,For those with the skills to interact with others, the forum could be a valuable source of learning, as reported by another Physician.When asked whether she interacted with other course participants, she responded:‘I do it every day.My experience with the MOOC so far is equal learning, if not more, happens in the discussion forum.It is a great place and I make it a point that I visit the discussion board every single day, read through most of the posts and try and participate/share my views as well."It's an amazing place.’",This benefit of learning from others may be unanticipated: a Medical Epidemiologist described how she had found the forum more useful than she had expected: ‘at the beginning I was not planning to participate in the forum and as the course went on I am learning more.I mean I read more in the forum and I try to participate.’,Whatever their intentions, these experienced practitioners were 
drawn into discussions as they saw that their own experience would be of value to others.A Physician described how she was able to bring her own professional perspective into the discussion, illustrating the potential of this multidisciplinary course:"‘I found myself commenting on a couple just because I knew the answer to their question and a lot of them would talk about just the practice of medicine in general, you know they make comments about how this ethically related to the clinical studies and stuff like that and so I wanted to give it a perspective from my educational background which is being a doctor, I know what it's like.’", "For some, including this Lecturer, the forum was central to their study on the course: ‘Second goal is to participate in the discussion forums as much as I can, try to benefit from the teaching assistants and professor and understanding new things that I didn't know before.’",The same Lecturer described how she routinely interacted with peers: ‘actually today I was thinking to share some thoughts about or some conclusion and collecting some ideas … the first thought was to share it with the MOOC course.’,Similarly, a Pharmacist described how she enjoyed discussing ideas with other professionals in depth, using the forum space to:‘give arguments and discuss what you think, what are your experiences."That's a nice thing because this course is a little bit specific… you have who know what they are talking about, they are not…I don't know how to say it, civilians that don't understand professional words and that's what I like, something a little bit more serious.’",Interactions in the discussion forum can provide a relational dimension to learning.A Data Manager highlighted how sharing ideas on the discussion board had provided a mechanism for expanding his professional network:"‘ … because we are looking at, even after the end of the course, we'll still keep in touch and create a network for the health workers, those working in health because the course has so many professionals, we have surgeons, we have nurses, we have doctors, we have public health practitioners.So we are looking at creating a network at the end of the course’,A small number of learners who had actively participated in the forums were less positive about their experiences."A Clinical Pharmacy Lecturer appeared to lack self-efficacy, describing his experience negatively as follows: ‘… you reply to someone who exhibits her ideas regarding maybe a certain question or certain discussion, but no response … I don't know, maybe they are busy, your participation is maybe not convincing to them.’",A Surgeon, was even more negative, eventually giving up on the forum:‘No one was helpful."Most of them didn't even understand what I meant at all, … I have tried 2 or 3 times to try and explain my problem and they couldn't understand me at all, I gave up and I really honestly don't have the time to spend so much time on the discussion board.’", "For the third member of this group, a Clinical Trials Administrator engaging with the forums was not worth the effort: ‘it's really difficult to find anything specific in there, it seems a bit unorganised and lots of things are repeated.So sometimes you find a really good comment from someone, but it seems to be more a matter of luck.’,When asked whether she had anticipated using the forum more, she responded: ‘Yeah I think I did."Also to get the feeling that there were other students as well, but yeah I couldn't really find a useful way to dig into it.Slightly 
more than half of the group interviewed limited their activity to reading posts by others.In forums where there are large numbers of users, an individual may find the help they need by browsing existing content, rather than actively requesting assistance.The decision not to engage was an active one, with learners finding a level of engagement that suited them.Lack of time was a key factor in choosing how to engage with the forums, and for many, this optional activity was sacrificed in favour of core course activities."A Physician described how time had limited her activity: ‘If I had more time I would have interacted more with the forums, but time was a little problem, so that is why I couldn't interact much more.’",All of the participants interviewed were working as health professionals and few had been permitted to set aside time to study by their employers.For others, not posting reflected the preference of individual learners."For example, a Psychiatrist, appeared to have limited her interaction with others because of the platform design rather than any negative views of learning with others: ‘I haven't discussed with anybody because it isn't a good format to discuss.’",She went on to suggest that real time discussions, with teachers, would have been more useful for her learning, a type of interaction that is not possible in a MOOC of this scale."When asked whether she had interacted with others, a Clinical Research Consultant replied ‘I am weak in that area because I don't like chatting online and sending messages and participating in discussion boards.’",indicating that her problem was not caused by the edX MOOC platform.Here, the underlying reason could be cultural as this Serbian participant, who had previous experience of online study stated:‘where I live, … there is a totally different attitude to learning.When you go to school you are served a certain amount of information and you are supposed to memorise them.You are not supposed to learn to think and to participate in discussion.’,For some participants, there was a clear preference to learn alone.A Medical Laboratory Scientist described how: ‘I prefer to work alone … the only time I have visited the discussion boards would be when I have really run out of ideas and to get clues of what others may have.A Paediatric Pharmacist expressed a similar negative view:"‘you know if you learn from other people who don't know what they're talking about you could teach yourself the wrong thing. "So my focus is , I read them but I take them with a grain of salt, I'm like “I don't know if this person knows what they're talking about”. 
"So I just keep the information that the researchers are telling me and then I'll use that for my own knowledge.’",Elaborating on her attitude to help-seeking, she remarked: ‘I′ve never really been a study group person, I′ve always been a study group leader … I′ve always kind of worked with them to help them.’,In summary, around half of those interviewed participated actively in the forum, and for most this had been a positive experience."These learners saw the potential value of learning from one's peers, and how this could broaden their learning.Other participants recognised the role they could play in passing on their professional experience to others and the role of the discussion forum in growing their learning network.A small number of participants had negative experience of the forum, expressing frustration at the quality of discourse, or the utility of the platform.Half the participants did not actively engage in the forums but instead utilised the forum simply as a source of information, with a few participants harbouring explicit reservations about learning from their peers.Unlike Veletsianos et al. this study found little evidence of learners using social networks outside the course environment to support their learning.Indeed in this MOOC, there was little evidence of learners talking about their learning outside the course even within their face to face professional networks.There may be two explanations for this.First, Veletsianos et al. recruited their participants through social networks and may have self-selected a sample comprised of enthusiastic users of social networks.Second, the highly structured nature of the Fundamentals of Clinical Trials MOOC may have encouraged participants to perceive it as a self-contained course.This study examined narrative descriptions of learning from MOOC participants, illustrating the range of self-regulation of learning that occurs during MOOC study.There were clear differences in the types of goals set, and help seeking behaviour, with less distinct differences in the learning strategies adopted and levels of self-efficacy reported.The narrative descriptions collected provide an insight into how learning occurs in MOOCS and how learners who self-regulate their learning to different extents might be supported within MOOC platforms.The different sub-processes of self-regulated learning are of course highly inter-connected, with evidence from quantitative studies using self-report instruments to measure these sub-processes indicating that they are strongly correlated in different populations.Bringing together the four sub-process described here, some clear patterns of learning emerge that illustrate their inter-relationship.On the one hand, there are highly self-regulating learners who have a clear understanding of what they want to learn and how it will impact their career, job or personal development.These individuals assume control of their learning, monitoring their progress and adjusting their effort to maximise the benefit they gain from their studies.These learners go beyond the core tasks of the course, searching for additional resources and engaging with others in the forums to develop their ideas and grow their learning network.They are also strategic in their approach and may miss out parts of the course that are of less interest – finishing the course is not necessarily a measure of success for them.At the other end of the spectrum, there are learners in the MOOC that do not seem to be self-regulating their learning to any 
significant degree. These learners focus on completion and certification as their measure of success, and appear not to have considered the personal benefit that participation will bring them. They are content to follow the course structure of video lectures, readings and quizzes closely, devoting the same amount of time each week, but can become derailed if they begin to find the material more challenging, as they are unable or not prepared to change their approach. MOOCs are positioned as accessible to all. Support should be available for those who lack the confidence to interact or to articulate their own expectations of the course, or who do not have strategies to learn effectively in MOOC platforms. For those who are unable to, or choose not to, self-regulate their learning, MOOC platforms could incorporate tools to scaffold learning and encourage the development of skills such as time management, goal setting, reflection, and help-seeking, elevating these platforms beyond the ‘content system without tutor’ category described by Bartolomé and Steffens. Tools such as the MyLearningMentor application described by Gutiérrez-Rojas et al. could scaffold the learning of participants exhibiting low self-efficacy due to lack of familiarity with the content or environment. Extending the self-set badge system described by Haug et al. to encourage learners to articulate what they want from the course would increase engagement. Such a system could be of particular benefit to learners who would not otherwise have set goals. For those who are highly self-regulated, MOOC environments should seek to be flexible, allowing these learners to assume greater control of their learning experience: to choose alternate routes through content that suit their specific goals and motivations, to integrate learning content with their existing knowledge, and to share their learning with peers within and beyond the course boundaries. The eLDa platform, developed at the University of Warwick, incorporates some of these features, while also providing a more highly structured environment to suit learners who require more support. Improving our understanding of the range and underlying basis of learning in MOOCs will enable designers to create more supportive learning environments and effective learning tasks. Despite the variations in learning observed in this study, all learners were persisting with the course and almost all were confident of completing it. This may be due in part to the design of this MOOC, which focused on content delivery. The course objectives provided a clear set of goals to follow that would ensure completion; the course content provided all the information necessary to complete the course tasks; and, aside from two compulsory discussion forum tasks, there was no requirement for participants to interact with their peers. The platform and course design may be effective at content delivery, but a question remains over their utility as an environment for learning. There are some inherent weaknesses within the design of the study. First, only a single MOOC was studied. Without repeating the study in other MOOC contexts, there is no way of knowing if the range of learning patterns and strategies reported here would be observed in a different MOOC context, particularly one where the demands on learners to manage their own learning were greater. Second, the study recruitment method captures only those participants who are still active some weeks into the course. MOOCs suffer from significant attrition rates, particularly in the
first few weeks, and it is not known why these learners dropped out. Third, this study provides no direct insight into any link between the learning patterns and strategies reported and academic success. By working more closely with MOOC providers, it may be possible to gain access to participants at an earlier stage, and to gain the necessary ethical approval to link qualitative data to quantitative course data such as forum use, content access, and final mark. Combining self-report data with clickstream data can lead to more robust conclusions and has been used in online courses to study SRL. Working more closely with providers must be managed sensitively, however, as it is important that MOOC research is perceived as objective and independent of the MOOC provider. Fourth, this study utilised an interview script designed to elicit narrative descriptions of self-regulated learning that could be analysed with respect to the sub-processes of SRL described by Zimmerman, but for some sub-processes the data collected was insufficient to allow extensive analysis. The different sub-processes of SRL are heavily interconnected, and it can therefore be difficult to examine these sub-processes in isolation. Meanwhile, some sub-processes, such as those relating to the reflection phase of SRL, are inherently difficult to explore through interview without affecting the response of the participant. Interview questions could be further refined to make the instrument more effective. Alternatively, future studies could focus on individual sub-processes of SRL in detail. Further research could explore the efficacy of environments and tasks designed specifically to support the full range of learners who choose to study in MOOCs. By recognising and supporting the varied needs and skills of these learners, MOOCs can fulfil their potential to provide free, high-quality learning for all.
Massive Open Online Courses (MOOCs) are typically designed around a self-guided format that assumes learners can regulate their own learning, rather than relying on tutor guidance. However, MOOCs attract a diverse spectrum of learners, who differ in their ability and motivation to manage their own learning. This study addresses the research question ‘How do professionals self-regulate their learning in a MOOC?’ The study examined the ‘Fundamentals of Clinical Trials’ MOOC offered by edX, and presents narrative descriptions of learning drawn from interviews with 35 course participants. The descriptions provide an insight into the goal-setting, self-efficacy, learning and task strategies, and help-seeking of professionals choosing to study this MOOC. Gaining an insight into how these self-regulatory processes are or are not enacted highlights potential opportunities for pedagogic and technical design of MOOCs.
170
Seasonal change of leaf and woody area profiles in a midlatitude deciduous forest canopy from classified dual-wavelength terrestrial lidar point clouds
Forest canopy structure regulates radiation interception through the canopy, affects the canopy microclimate, and consequently influences the energy, water, and carbon fluxes between soil, vegetation and atmosphere through interactions with leaf photosynthesis.Leaf area index, defined as half of the total leaf surface area per unit ground area, governs the radiation interception through forest canopy and the capacity of canopy photosynthesis, and thus is one of the primary canopy structural measures used in both ecophysiological models and remote-sensing based estimation of net primary productivity.In addition to LAI, detailed ecophysiological modeling of NPP requires realistic representations of the 2-D and 3-D distribution of leaf areas, e.g. vertical foliage profile, especially for open canopies and multi-layered forest stands.Measurements of vertical foliage profile have been shown to be closely related to forest functioning measures, demonstrating the important role of accurate 3-D distributions of leaf area.LAI and vertical foliage profiles are typically measured across different spatial scales.Almost all the methods to estimate LAI and vertical foliage profiles over large areas require ground truth data to calibrate and validate the empirical and physical retrieval models.Thus, the quality and detail of the ground truth data are quite important.The major methodologies for ground-based LAI measurements generally fall into two categories: direct, which involves destructive sampling or litter-fall collection, and indirect, which involves tree allometry, or gap probability measurements.The indirect, noncontact optical methods based on gap probability theory have been evaluated and adopted extensively across numerous field campaigns and studies due to their low cost, consistency and practicality of data collection.Ground-based measurements of vertical foliage profiles date back to early work, including stratified clipping and inversion of leaf contact frequency measured by point quadrats or by a camera with telephoto lens.Vertical foliage profile has also been obtained by taking LAI measurements using hemispherical photography acquired from a crane over increasing canopy height.All these early methods are time-consuming, inconvenient, and often impractical.Recent ground-based active optical methods with terrestrial laser scanners have demonstrated great potential to expeditiously measure gap probabilities at different canopy heights from lidar returns associated with ranges in 3-D space, and thus the accurate retrieval of vertical foliage profiles.However, the gap probability measurements used by all of these optical methods to measure LAI or vertical foliage profile typically include both leaves and woody materials, and thus actually measure plant area index and its vertical profile.The contribution of woody material to LAI measurements is usually removed with an empirical estimate of the woody-to-total ratio.Kucharik et al. 
found the nonrandom positioning of branches/stems with regard to leaves can cause inaccurate LAI values with the use of this simple ratio correction, especially when branches/stems are not preferentially shaded by leaves.Therefore, they proceeded to remove the woody contribution in the PAI directly using a Multiband Vegetation Imager.But this approach cannot remove the woody contribution at different canopy heights to correct vertical foliage profiles.Thus, a measure of the separation of leaves from woody materials in 3-D space is needed to remove the woody contribution in vertical foliage profiles.Moreover, the separation between leaves and woody materials in 3-D can also improve the simulation and inversion of ecophysiological and 3-D radiative transfer models.Kobayashi et al. found the effect of woody elements on energy balance simulations in ecophysiological modeling is not negligible for heterogeneous landscapes due to the radiation absorption and heat storage by the woody elements.Some studies have also shown that the explicit inclusion of woody elements in 3-D radiative transfer models improves the canopy reflectance modeling and thus model inversions to estimate both biophysical and biochemical variables at high resolution.Furthermore, measures of the 3-D separation between leaf and woody materials is also required by recent advances in fine-scale architectural tree modeling and aboveground biomass estimation methodology.Recent studies reconstructed near-realistic architectural tree models of deciduous forest sites from TLS data but the approach used requires the skeletonization of leaf-off scans for branch and twig modelling.The separation of leaf and woody materials will help such architectural modeling of mixed or evergreen forests when leaf-off scans are not possible.Recent TLS-based nondestructive approaches to AGB estimation combine a priori wood density information with wood volumes that have been directly calculated from cylinder tree models built from TLS point clouds using Quantitative Structure Modeling.This nondestructive approach is independent of allometric equations, and has been validated against destructive sampling of a eucalyptus forest, showing overestimation errors of ∼10% by QSM as compared to an underestimation error of ∼30%–37% using allometric equations.Previous QSM trials on both simulated and real TLS point clouds have suggested that lidar returns from leaves are an important error source in modeling trunk and branch structures for wood volume estimates, leading to the conclusion that the removal of leaf points from 3-D lidar point clouds should improve the accuracy of woody structure modeling.Three-dimensional scans of forests by TLS have shown the potential of separating leaves from woody materials in 3-D space.However, currently only a few studies have been focused on the classification of leaves and woody materials in 3-D space.Some earlier studies explored coarse discrimination of leaves from trunks or from both trunks and big branches via the manual manipulation of lidar scans.Automatic 3-D classification of lidar point clouds has been attempted to separate leaves and woody materials by thresholding the lidar return intensities from TLS operating with shortwave infrared bands or with a green band, though the selection of intensity thresholds is rather subjective and needs adjustment from scan to scan.Yang et al. 
used lidar return pulse shapes from a full-waveform TLS for classification of leaf and branch lidar hits.However, the compound effects of reflectance, size, and orientation of targets may generate similar return intensities or return pulse shapes from different target classes.Ma et al. attempted to improve the point classification by developing a geometric-based automatic forest point classification algorithm using spatial distribution patterns of points for preliminary separation and a series of post-processing filters of various threshold parameterizations to achieve final classifications.Zhu et al. found that the 3-D point classification was improved by using both radiometric and geometric features of points at varying spatial scales.With a growing number of 3-D point classification approaches being developed, appropriate accuracy assessments of these classifications are needed to compare and assess the different methods and inform consequent impacts on the estimation of leaf and woody areas and on vertical profiles.Achieving these objectives needs more than just visual inspections of classified point clouds but instead requires classification accuracy estimates from rigorous quantitative assessments.While quantitative assessment of 2-D image classifications has been practiced by numerous studies and recently thoroughly summarized by Olofsson et al., few studies have addressed appropriate quantitative accuracy assessment for these newly emerged 3-D point classification methods.One of the most challenging problems for assessing 3-D point classifications is to find an independent reference classification of higher quality than the 3-D classified point cloud to be evaluated.Such 3-D reference classification datasets are extremely rare or lacking, particularly for forest scans.Therefore most current studies of 3-D point classifications to date have simply presented visual inspections as a demonstration of their classification quality.Some studies have further addressed the assessment of 3-D classifications via the destructive sampling of leaf and woody biomass, which is not only quite difficult and costly but also only provides a proxy of the overall accuracy.A recent study by Ma et al. reported quantitative accuracy assessment including the values of overall, user’s and producer’s accuracies that have been commonly used in 2-D image classification assessments.Zhu et al. 
also reported the overall accuracies of point classifications and carried out an indirect assessment by comparing the leaf-wood ratios of a whole canopy from 3-D point classifications and digital hemispherical photo classifications.The reference dataset for these accuracy assessment of point classifications came from the visual inspection of points in lidar scans aided by information from hemispherical photos.However, depending on laser beam divergence and target ranges, targets in forest canopies may only partially intercept laser beams and thus may not form the distinctive shapes that are usually needed for visual inspections.Moreover, an efficient, probability-based sampling design for selection of reference data points is essential for precise estimates of accuracies and their variances due to sampling variabilities.No studies have yet addressed these issues for 3-D point classification.Therefore, a primary objective of this study is to retrieve separate leaf and woody area profiles from classified 3-D lidar points and reduce the biases in the profiles due to the classification errors that are quantified with rigorous accuracy estimators.The first part of the study concerns the development of an indirect approach to estimating the accuracies and their standard errors for 3-D point classifications of leaves and woody materials from bispectral lidar point clouds of a largely deciduous midlatitude forest site.These bispectral lidar point clouds were generated from simultaneous dual-wavelength laser scans in both the leaf-off and leaf-on seasons by the Dual-Wavelength Echidna Lidar, a novel terrestrial lidar that uses two coaxial lasers at near-infrared and shortwave infrared wavelengths.The gap probabilities resulting from leaves and woody materials were separately estimated for both the leaf-off and leaf-on scans.The second part of the study utilizes the rigorous estimates of 3-D point classification accuracies to adjust the proportions of separate gap probabilities in order to reduce the biases in separate leaf and woody area profiles due to point classification errors.Lastly, we assessed the variances and changes of the estimated leaf and woody profiles from the leaf-off to leaf-on seasons.Fig. 
1 describes the overall procedures in the two parts of this study to estimate the separate leaf and woody area profiles from classified point clouds.The Dual-Wavelength Echidna Lidar, is based on the heritage design of the Echidna Validation Instrument built by Australia’s Commonwealth Scientific and Industrial Research Organization, and uses two pulsed lasers to acquire full-waveform scans at both near-infrared and shortwave infrared wavelengths with simultaneous laser pulses.DWEL uses a rapidly rotating zenithal scan mirror and a slowly rotating azimuth platform to provide full coverage of the angular scan space.Each rotation of the scan mirror directs the laser beam through 360°; returns from the environment are acquired at zenith angles from –117° to +117°, while returns from the instrument housing, used for calibration, are acquired as the laser beam passes through angles of +117° to –117°.The instrument platform rotates azimuthally through 180°, thus providing a complete spherical scan.The scanning resolution was set at 2 mrad with a slightly larger beam divergence of 2.5 mrad to ensure continuous coverage of the hemispheres for the scans used in this study.DWEL detects and digitizes the return signal at 2 GHz, and records the returns as full waveforms.It samples the return waveforms at the pulse repetition rate of 2 KHz.For this study, we established a 100 m by 100 m deciduous forest site at Harvard Forest in central Massachusetts, USA.This generally flat 1 ha site is dominated by red maple, red oak and white birch, with an understory of these species accompanied by American beech, American chestnut and others.Three large white pines and a few understory hemlocks are also present within the site.At five circular plots of 20 m radius, we collected biometric data including diameter at breast height, species, location, and crown position.For a systematic subsample of 10 trees of dominant and co-dominant canopy positions in each circular plot, we also acquired tree heights, crown diameters at two orthogonal dimensions, and crown heights.For the data acquired in September 2014, the average stem density at this site was 769 trees ha−1, and the basal area was around 38.5 m2 ha−1.The average tree height of the sampled trees at this site was 20.3 m and the average crown diameter was 8.7 m.On the same dates as the DWEL scanning, we also took digital hemispherical photos at the plot centers and scan locations, using a NIKON E995 camera with a fisheye lens.The f-stop was set to 5.3 and the ISO of the camera was set to 100 after several tests to get best possible color contrast between wood, leaves, and sky in the daylight.The exposure time was determined accordingly with a reading from a light meter under the canopy.When the sun was visible through the canopy gap in the camera’s view, an occlusion disk was used to block direct sunlight on the camera, which induced lens flare.The DHPs provided plant area index measurements of the whole forest stand and indirect comparison with DWEL scanning images for classification accuracy assessments.In June 2014, between leaf-off and leaf-on scanning, we collected spectral measurements of green-leaf and bark samples of dominant tree species at the site with an ASD FieldSpec spectrometer and plant probe.In this paper, we improved the classification assessment with an indirect approach to the quantification of classification accuracies and their variances.Quantitative classification accuracy assessment generally calls for a reference data source of higher 
quality than the data used to create the classification. However, as mentioned, there is currently no reference classification of points in 3-D of higher quality for our study site. Accordingly, the indirect approach we developed here uses 2-D pixel samples from stratified random sampling of true-color DHPs as the bridging reference data for the 3-D point classification assessments. We first projected the 3-D point clouds of dual-wavelength apparent reflectance and classification into a 2-D hemispherical projection at an angular resolution of 2 mrad, the same as the lidar scanning resolution. This hemispherical projection is the same as that used by the DHPs, an equidistant projection that keeps angular distance proportional to radial distance in image pixels. The class label of a projected pixel is assigned as the more frequently occurring class label of the points projected into the pixel, and the apparent reflectance value of a projected pixel is assigned as the average apparent reflectance of those points. We set the view point of each hemispherical projection at a standard height so that the projected images from different scans, collected at slightly different heights, are directly comparable. As a source of reference data, we interpreted the true-color hemispherical photos taken at the same scan locations and the false-color projected images from the DWEL scans. We registered the hemispherical photos to the hemispherical projected images of the points with the ENVI registration module, using identifiable branch forks and crossings in the images as tie points. However, such clear and sharp branch forks and crossings were limited, and their locations could not be pinpointed exactly. These location errors translated into registration errors, causing inexact alignment between the registered DHPs and the projected images of points. Thus, visual interpretation using image context was sometimes required to give a reference label to each selected reference pixel. To deal with possible bias in the labeling, we chose primary and secondary reference labels for each reference pixel. To be specific, in ambiguous cases without a clear and decisive visual interpretation, we let the mapped class stand as the primary label, while the alternate class was assigned the secondary label. In this way, the primary reference labels set an upper limit on the user's, producer's and overall accuracies in the assessment, while the secondary reference labels set a lower limit. To ensure sufficient statistical representation of Leaf and Wood in both leaf-on and leaf-off classifications, a sample of pixels was selected by stratified random sampling with the mapped pixel classes defined as strata. A sample of the same size was selected for each scan, and reference observations were collected at each sample location. The sample data from each of the five scans of the site were merged to generate the classification error matrix. The total sample size for the five scans was determined according to Olofsson et al. with conjectured user's accuracies for the two classes and a targeted standard error of the overall accuracy estimate. The allocation of sample pixels to each stratum followed the recommendation by Olofsson et al. to balance the standard errors of the user's accuracy estimates for rare classes and of the overall accuracy estimate.
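For readers who want to reproduce this kind of sampling design, the sketch below shows one way to compute a total sample size and a per-stratum allocation in the spirit of Olofsson et al.; the conjectured user's accuracies, the target standard error, and the fixed allocation of 75 pixels to the rare class are illustrative assumptions, not the exact numbers used in this study.

```python
import numpy as np

def total_sample_size(strata_weights, conjectured_users_acc, target_se_overall):
    """Approximate total sample size n for stratified random sampling,
    using the simplified form n ~ (sum_i W_i * S_i / SE(O))^2, where
    S_i = sqrt(U_i * (1 - U_i)) is the conjectured standard deviation of
    stratum i and SE(O) is the target standard error of overall accuracy."""
    W = np.asarray(strata_weights, dtype=float)
    U = np.asarray(conjectured_users_acc, dtype=float)
    S = np.sqrt(U * (1.0 - U))
    return int(np.ceil((np.sum(W * S) / target_se_overall) ** 2))

def allocate(n_total, strata_weights, rare_stratum, n_rare=75):
    """Give a fixed number of samples to the rare stratum and spread the
    remainder over the other strata in proportion to their mapped weights."""
    W = np.asarray(strata_weights, dtype=float)
    alloc = np.zeros(len(W), dtype=int)
    alloc[rare_stratum] = n_rare
    others = [i for i in range(len(W)) if i != rare_stratum]
    w_others = W[others] / W[others].sum()
    alloc[others] = np.round((n_total - n_rare) * w_others).astype(int)
    return alloc

# Hypothetical leaf-off example: Wood covers 80% of classified points, Leaf 20%.
n = total_sample_size([0.8, 0.2], [0.9, 0.6], target_se_overall=0.02)
print(n, allocate(n, [0.8, 0.2], rare_stratum=1))  # e.g. 286 and [211, 75]
```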
In this procedure, 75 sample pixels were allocated to the rare class in a two-class classification. Here, Leaf was considered the rare class in the leaf-off classification, while Wood was considered the rare class in the leaf-on classification. Our goal was to estimate the accuracy of the point classification. However, our reference label applies to each projected pixel, not to each point. To solve this mismatch, we first obtained an error matrix expressed in point proportions for each sampled pixel, using a simple enumeration method that considers all possible allocations of individual point labels that would produce a majority class matching the observed pixel label. The error matrices in point proportions from all the sampled pixels were then summed, weighted by the number of laser shots in each projected pixel. Projected pixels at smaller zenith angles in the hemispherical projection contain more laser shots per pixel and cover more of the angular space of the hemisphere scanned by the lasers, and thus are given larger weights. The weighted sum of these error matrices is an indirect error matrix for the point samples covered by the projected pixel samples. Following a procedure similar to the estimator given by Olofsson et al. for deriving a population error matrix in pixel proportions from a sample error matrix in 2-D image classification, we converted our sample error matrix in point proportions to an estimate of the population error matrix using the proportions of classified leaf and woody points in all the point clouds. From this estimate of the population error matrix in terms of point proportions, we obtained the overall, producer's and user's accuracies as well as their variances using the estimators described by Olofsson et al.; the accuracies were computed from the population matrix with the standard stratified estimators, and their variances with the corresponding variance estimators given in the same paper. Finally, to establish the range of accuracies, we compared assessment accuracies using the primary and secondary reference labels. Although the above equations are formulated in terms of LAI and LAVD, most optical methods measure the gap probability of all vegetative elements without differentiating leaves from woody materials. In other words, the actual measurement is plant area index (PAI), including both leaves and woody materials. LAI is then calculated from PAI using an empirical woody-to-total ratio, usually obtained by destructive sampling either for a given site or more generically for a particular vegetation type. Estimating LAIe and WAIe independently assumes no mutual occlusion between leaves and woody materials. However, a simple analysis can determine the possible extent of such occlusion, given leaf-off and leaf-on scans. For leaf-off scans, no mutual occlusion is a reasonable assumption; leaf hits are a small proportion of the total, and most are hits on evergreen white pine branchlets clustered at twig tips. For leaf-on scans, leaves will occlude many branches and stems, while few stems and branches will occlude leaves. A reasonable assumption is thus that the occlusion of leaves by woody materials is zero, but that the woody materials will be significantly occluded. If we assume that the true woody plant area remains the same in the leaf-on condition as in the leaf-off condition, the difference between the two observed woody areas estimates the proportion of woody materials occluded by leaves in the leaf-on scan.
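To make the gap-probability inversion mentioned above concrete, the sketch below inverts the familiar Beer-Lambert relation to obtain an effective plant area index from a single-angle gap measurement. The use of the 57.5° "hinge" angle with a projection coefficient of 0.5 is a common convention and an assumption here, not necessarily the exact formulation used for the DWEL retrievals; the gap fractions in the example are invented.

```python
import numpy as np

def effective_pai(gap_probability, zenith_deg=57.5, G=0.5):
    """Effective plant area index from an angular gap-probability measurement.

    Inverts P_gap(theta) = exp(-G(theta) * PAI / cos(theta)). Near the 57.5
    degree hinge angle the projection coefficient G is close to 0.5 for most
    leaf angle distributions, so the inversion is nearly independent of leaf
    orientation.
    """
    theta = np.radians(zenith_deg)
    return -np.cos(theta) * np.log(np.asarray(gap_probability)) / G

# Hypothetical example: gap fractions of 0.35 (leaf-off) and 0.04 (leaf-on)
print(effective_pai([0.35, 0.04]))  # ~ [1.13, 3.46]
```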
To capture the variance arising from the heterogeneity of canopy structure at the study site, we calculated the variance of the five sets of vegetation area profiles from the five scan locations for each season. The color-composited images of apparent reflectance and the classifications in hemispherical projections, along with the registered hemispherical photos for both the leaf-off and leaf-on seasons from the north plot, are displayed in Fig. 4. In the color-composited images, leaves display green colors, while trunks and big branches display a spectrum of greenish-yellow, yellow, and brown colors. In Fig. 4, the hemispherical photos are in true color. The classified point clouds and the projected images display woody points in blue, leaf points in green and ground points in red. Tables 3 and 4 display the error matrices and accuracies of the classifications of the five scans combined, for the leaf-off and leaf-on seasons respectively. The first two error matrices in each table are from the primary and secondary reference labels, giving the best and worst accuracy estimates respectively. The third error matrix in each table gives the average accuracies. The leaf-off overall classification accuracy ranges from 0.60 ± 0.01 to 0.77 ± 0.01 with an average of 0.68 ± 0.01, while the overall accuracy for the leaf-on classification ranges from 0.71 ± 0.02 to 0.78 ± 0.01 with an average of 0.74 ± 0.01. The leaf-off classification has a high user's accuracy for woody materials and a high producer's accuracy for leaves, but a slightly lower producer's accuracy for woody materials and a low user's accuracy for leaves. The low user's accuracy for leaves indicates a large commission error in the leaf class of the leaf-off scans, caused particularly by the misclassification of fine branches as leaves. Because the omission error for leaves is smaller than the commission error, the classification of leaf-off scans overestimates the number of leaf points, or conversely underestimates the number of woody points. In the leaf-on classification, woody materials have a much lower producer's accuracy than user's accuracy, i.e. a larger omission error than commission error, which implies that the classification of leaf-on scans underestimates the number of woody points, or conversely overestimates the number of leaf points, caused particularly by the misclassification of fine branches at far ranges such as the canopy top. The user's and producer's accuracies of leaves in the leaf-on classification are both reasonably high, and better than in the leaf-off classification.
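The accuracy figures quoted above follow directly from a population error matrix of estimated class proportions. As a reference, the sketch below implements the standard stratified estimators of overall, user's and producer's accuracies, plus the usual standard error of overall accuracy; the two-class matrix, weights and counts in the example are invented for illustration and are not the matrices of Tables 3 and 4.

```python
import numpy as np

def accuracies(p):
    """Overall, user's and producer's accuracies from a population error
    matrix p of estimated proportions (rows = mapped class, cols = reference)."""
    p = np.asarray(p, dtype=float)
    overall = np.trace(p)
    users = np.diag(p) / p.sum(axis=1)      # per mapped class
    producers = np.diag(p) / p.sum(axis=0)  # per reference class
    return overall, users, producers

def se_overall(weights, users, counts):
    """Standard error of overall accuracy for stratified random sampling,
    given stratum weights, estimated user's accuracies, and per-stratum
    sample counts."""
    W, U, n = map(np.asarray, (weights, users, counts))
    return np.sqrt(np.sum(W**2 * U * (1 - U) / (n - 1)))

# Invented two-class example (classes ordered Wood, Leaf)
p = [[0.62, 0.10],
     [0.08, 0.20]]
o, u, pr = accuracies(p)
print(o, u, pr)                              # 0.82, [0.86 0.71], [0.89 0.67]
print(se_overall([0.72, 0.28], u, [300, 75]))  # ~0.02
```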
Table 5 provides the values for LAIe, WAIe, and PAIe. The PAIe values are 1.11 for the leaf-off season and 3.42 for the leaf-on season. As noted previously, the PAIe values are estimated from hemispherical photos and then split according to the LAIe and WAIe proportions. The leaf-on PAIe is close to the estimates determined by a previous study at the same site using both hemispherical photos and a LI-COR LAI-2000. The table also shows that the band-averaged WAIe changed from 0.735 in the leaf-off season to 0.675 in the leaf-on season. As previously noted, the leaf-on WAIe is an apparent value because branches and stems are occluded by leaves, and our estimate does not account for mutual occlusion. The difference in WAIe from the leaf-off to the leaf-on season suggests that about 8 percent of the WAIe, i.e. (0.735 − 0.675)/0.735 ≈ 0.08, is occluded by leaves in the leaf-on season at our study site. The leaf, woody, and plant area profiles from the NIR and SWIR data for the leaf-off and leaf-on seasons are displayed in Fig. 5 for the retrievals with the adjustment by the point classification accuracies, and in Fig. 6 for the retrievals without the adjustment. The improvement in the estimates of vegetation area profiles gained by taking the point classification accuracies into account is illustrated by the unrealistically high LAIe relative to WAIe in Fig. 6, in comparison with the much more reasonable profiles in Fig. 5.
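One simple way to implement an adjustment of the leaf/wood split by the estimated error matrix is to replace the mapped (classified) class proportions with the reference-class proportions implied by the population error matrix, i.e. its column sums, before applying them to PAIe. This is shown below purely as an illustrative sketch of the idea, not as the exact adjustment procedure used for Fig. 5; the example matrix is invented.

```python
import numpy as np

def adjusted_split(pai_e, pop_error_matrix):
    """Split an effective plant area index into wood and leaf components
    using reference-class proportions from a population error matrix
    (rows = mapped class, cols = reference class; order Wood, Leaf).

    The column sums estimate the true proportion of each class, which
    corrects the bias caused by omission/commission errors in the mapped
    point proportions.
    """
    p = np.asarray(pop_error_matrix, dtype=float)
    true_props = p.sum(axis=0)          # estimated true Wood, Leaf proportions
    wai_e, lai_e = pai_e * true_props
    return wai_e, lai_e

# Invented leaf-on example: mapped proportions alone would give Wood 0.15 /
# Leaf 0.85, but the error matrix implies true proportions closer to 0.20 / 0.80.
p_leaf_on = [[0.13, 0.02],
             [0.07, 0.78]]
print(adjusted_split(3.42, p_leaf_on))  # ~ (0.68, 2.74)
```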
For the leaf-off season, the cumulative effective VAI curves increase smoothly to about 24 m, where the canopy begins to thin as the dominant trees emerge at the canopy top. In the leaf-on graphs, the WAVD curve is relatively constant with height, while the LAVD peaks at about 5–7 m due to the leafy clusters around this height at the north site. The LAVD in the leaf-on season has a secondary peak around 12 m and then gradually decreases towards the canopy top. The secondary peak around 12 m for the leaf-on canopies at this site appears somewhat lower than the actual height of the majority of the crowns. This may be due to weak or missing returns at these higher zenith angles because of insufficient laser power at this distance. With some variance, the cumulative PAI for the leaf-on results plateaus at around 23–24 m. Table 5 shows that the results estimated from the NIR and SWIR data are very close, demonstrating the efficacy of resolving the areal and reflective contributions of targets to the apparent reflectance in our estimation of leaf and woody areas. Accordingly, we will present and interpret only the NIR data from now on for simplicity; Appendix A.5 provides the corresponding SWIR graphs and tables. The woody-to-total ratios from the VAVD and VAI results vary with canopy height in both the leaf-off and leaf-on conditions. In the leaf-off condition, the woody-to-total ratios are stable at around 0.7 from 8 m to 18 m with some variation. Above 18 m, the woody-to-total ratio of WAVD to PAVD decreases with canopy height. This may be an artificial decrease resulting from the misclassification of fine branches at far ranges in the high canopy, due either to larger relative uncertainty in the spectral attributes or to spatial attributes similar to those of the leaf clusters. In the leaf-on condition, the woody-to-total ratio stays fairly stable at around 0.1 above 5 m, with some variation. Resampling of the classifications shows that the variance in the vegetation profiles at each scan location due to classification error is very small for both the leaf-off and leaf-on scans. Therefore, the variance in the vegetation profiles is largely dominated by the between-scan variances that represent the heterogeneity of canopy structure across this site. Because we used a single averaged PAIe that was split into LAIe and WAIe for the five scans, the variance in vegetation profiles does not include the heterogeneity in the total leaf, woody and plant area at different scan locations across the site. Instead, the variance describes the heterogeneity in the proportions of leaves and woody materials and in the distributions of leaf, woody, and plant area along canopy heights across the site. To further describe the variance in vegetation profiles, we normalized the between-scan variances in the VAVD profiles by the corresponding VAIe values along the canopy heights. In the leaf-off season, the relative between-scan variances in the VAVD profiles are larger for the leafy proportions than for both the woody proportions and the total plant areas. In contrast, in the leaf-on season, the relative variances in the leaf, woody and plant area volume density profiles are similar to each other. An accuracy assessment of a point classification in 3-D space is challenging because of the difficulty of obtaining independent reference data of higher accuracy. The manual interpretation of individual points was used as the training data in the classification and therefore is not independent or
of higher accuracy to be reference data for the accuracy assessment of 3-D point classifications.Furthermore, the manual interpretation of individual points is prone to subjectivity, particularly for those points from partial target hits, and may contain noisy labels.While such individual point labeling from manual interpretation can be taken to train the classifier, as the chosen random forest is known to be robust to noise in training data, these labels are not a good reference data source to quantify classification accuracy because of their subjectivity and lack of independence from point classifications.Our indirect accuracy assessment approach uses registered hemispherical photos and generates estimates of the population error matrixes expressed in terms of the point proportions with a rigorous sampling design and a quantification of the variance of the accuracy estimates.This approach avoids the subjective manual interpretations of individual points in 3-D space for use as reference data and therefore ensures that the accuracy assessment of 3-D point classifications is as objective as possible.In general, the user’s and producer’s accuracies for the leaf-off and leaf-on scans in this study concur with the findings about the uncertainty of this classification method in our previous study.For example, the low producer’s accuracy for woody materials in the leaf-on scans is a result of omissions of far branches in the tree canopies using our classification method.Targets at a farther range are more likely to be misclassified due to a low quality in the spectral and/or spatial attributes.The spectral attributes for far-range targets can have large errors due to low signal-to-noise ratios in apparent reflectance and/or to laser misalignment.Therefore, the spatial attributes for these branches can be unreliable due to low point densities and occlusion by leaves, which omit the sufficient hits needed to correctly describe the spatial arrangements.In the leaf-off scans, the producer’s accuracy of the woody materials is higher because fewer occlusions by leaves lead to more returns from the far branches, improving the spatial attributes available for the classification.Therefore, our estimates of classification accuracies appear reasonable and are in line with the previous examinations by visual inspection and by cross-validation assessments.The classification benefits from the synergistic use of both spatial and spectral attributes of individual points, but also suffers from the deterioration of the qualities of these attributes particularly at far ranges from scanning positions primarily due to the lidar data quality.The strengths and weaknesses of this classification with both the spectral and spatial attributes from the dual-wavelength lidar are discussed in more depth by Li et al.We do note that our indirect approach can be limited by the registration error between the hemispherical photos and the projected images of lidar point clouds.In future work, artificial targets on the trees, e.g. 
reflective crosses or bands, visible in both the hemispherical photos and the DWEL scans, may help the registration and thus reduce uncertainties in the classification accuracy assessment.On the other hand, many widely-used commercial lidar instruments, such as the Riegl VZ-400i, now provide the option to integrate visible-band camera photos with laser scanning data through the precise position and orientation of a camera with regard to the laser scanner.The indirect approach to accuracy assessment used here could easily be applied to such lidar point clouds with preregistered camera photos.Collecting true reference observations in 3-D space for individual points is difficult, if not impossible.An alternative is to simulate the lidar scanning data and point clouds from tree models for an assessment of classification algorithms, as the “truth” is known from the tree models.However, such simulations need to incorporate the complexity and the error sources of the actual laser scanning for a realistic assessment of the classification accuracy.For example, wet lichen spots could be placed on stems of tree models to change the woody surface reflectance.Some small stems could be assigned lower reflectance values at the SWIR wavelength.Leaf and woody reflectance could be varied randomly according to the variance specified from spectral measurements, and the effects of leaf angle distributions could also be assessed.Two-laser alignment error could also be included in the simulated lidar data, allowing the sensitivity of classification accuracy in response to laser alignment error to be estimated.A rigorous quantification of 3-D point classification accuracy, such as the indirect approach introduced here, not only supports benchmarking the growing collection of 3-D point classification algorithms but also helps trace uncertainty sources in the emerging 3-D mapping of biophysical and biochemical properties of forest canopies.This tracing of uncertainty sources ensures that new findings on forest structure and function will be robust and reliable from the new 3-D perspective on the properties of forest ecosystems.The vertical profiles of the PAI in both leaf-off and leaf-on seasons plateau between around 23 m and 24 m, which is close to the average tree height of 20.3 m from the field measurement samples.The slightly shorter height derived from field measurements is most likely an effect of tree selection, which used a systematic sample of trees of dominant and co-dominant canopy positions with equal probability.Since smaller and shorter co-dominant trees are more frequent and have lower, smaller crowns than dominant trees, the profiles, which are based on lidar data acquired from a zenith range of 10°–35° covering both dominant and co-dominant trees, will show a greater canopy height.The leaf-on PAVD profile measured here is different from that obtained by Zhao et al. using the Echidna Validation Instrument at the same site in 2007 (the EVI retrieved only total plant area, i.e.
the combination of leaf and woody areas together).They observed a smoother profile that peaked around 21 m and had a slightly but steadily increasing trend of plant area between 10 m and 20 m, rather than a steady decrease.This difference occurred even though our profile estimation method was similar to that used in the EVI scanning.The main reason for the different plant profiles from the current DWEL data in 2014 and the heritage EVI data in 2007 may be the limited measurement range of the DWEL.Only laser beams at zenith angles of 35° or less were able to pass all the way through a 25 m canopy to provide a proper estimate of gap probability up to the top of the canopy.As a result, we used only the zenith rings between 10° and 35°, a much smaller zenith angle range than that used by Zhao et al. for EVI data.Even at these smaller zenith angles, it appears that weak and partial hits are being lost with distance, producing the gradual reduction in both leaf and woody area volume density noted in Fig. 5.The smoother profile of Zhao et al. is probably due to scanning a larger volume of the canopy and thus averaging more points to reduce the variance.This comparison shows the importance of an adequate measurement range and signal-to-noise ratio to ensure the instrument's ability to measure gap probability through to the top of the canopy with zenith angles of up to 60° or greater.Considering the low reflectance of leaves at the SWIR wavelength, this requirement is significantly more challenging for the DWEL than for an NIR instrument like the EVI.The separate profiles of leaf and woody areas from the leaf-off and leaf-on scans demonstrate the seasonal change in the amount of leaves and woody materials that are visible from below the canopy.The separation of PAIe into LAIe and WAIe, as well as the separate vertical profiles, takes into account the classification accuracies and reduces the biases in these leaf and woody area profiles due to the omission and commission errors in the 3-D point classifications.Without such correction, some leaf-off scans yield unrealistically larger LAIe values than WAIe values due to the misclassification of woody branches as leaves.Our work thus demonstrates the importance of the quantification of the lidar point classification accuracy from the standpoint of both rigorous sampling designs and objective reference data of high quality.This study demonstrates that the variance in leaf and woody area profiles is dominated by the spatial heterogeneity of the sites, while classification error introduces only a small amount of variance.Therefore, the inclusion of additional scan locations can improve the separate estimates of leaf and woody area profiles and can better characterize the spatial heterogeneity of the leaf-versus-wood proportions along canopy heights across the site.The change in the spatial heterogeneity of LAVD and WAVD from the leaf-off season to the leaf-on season is captured by the different variances of the leaf and woody profiles.The higher variance in the LAVD profile than in the WAVD profile from the leaf-off scans occurs because of the spatial variation in evergreen canopies across this forest site.The relatively similar variance in the leaf and woody area volume density profiles of the leaf-on season suggests a similar distribution pattern of leaves and woody materials along the canopy heights across the site.It is also noted that the current retrieval of vegetation area profiles is limited to using only zenith rings between 10° and 35°, as
explained at the beginning of the Section 6.2.This narrow zenith range covers smaller canopy areas per scan.If a full zenith range were permitted by the instrument measurement range, the variance in canopy profiles between scans would be reduced, as the retrieval from each scan would cover larger canopy areas with overlaps between scans.The height profiles of the woody-to-total ratios suggest a generally stable value of 0.7–0.8 for the leaf-off season and about 0.2 for the leaf-on season above about 5–8 m in height.These results suggest that constant seasonal ratios could be used to remove the woody contribution to vegetation profiles above a certain canopy height.The woody-to-total ratio in leaf-off season is lower than 1, suggesting the presence of a few evergreen white pine trees at the site.At lower canopy heights, the woody-to-total ratio is more variable, and the removal of the woody contribution may depend on the scan location.For the profiles of understory leaf area at low canopy heights, it may be better to estimate the ratio from the direct discrimination of leaves and woody materials from the DWEL scanning data.The comparison of WAIe from the leaf-off and leaf-on scans found that only about 8 percent of the WAIe was occluded by leaves in the leaf-on season at our study site probably because most woody areas at this site are contributed by trunks and big branches in the lower canopy.This occlusion ratio may be used as an empirical value to correct the apparent WAIe value from leaf-on data for the occlusion by leaves to estimate an actual WAIe for similar forest stands.The explicit retrievals of separate leaf and woody area profiles along canopy heights from this TLS-based effort improve the details of forest structure measurements at local scales.Such retrievals from more sites and more diverse forest types will improve the parameterization of radiative transfer models and ecophysiological models in 3-D space.Leaves and woody materials exchange energy and water with the atmosphere at different rates, and store carbon differently.Therefore, the explicit separation of these two components in a 3-D space improves the simulation of the fluxes of heat, water and carbon in a forest ecosystem.Leaves and woody materials also reflects radiation differently.This separation also improves the simulation of remotely sensed radiometric signals at larger scales and as such improves the inversion and calibration of airborne and spaceborne remote sensing data to estimate biophysical properties of forest ecosystems.Explicit separation of leaves and woody materials in the 3-D space of a forest improves the retrieval of LAI and the vertical profiles of LAI from gap probability methods by directly removing the woody contribution explicitly with canopy height without the use of empirical woody-to-total ratios of a whole canopy.We obtained separate vertical profiles of leaf and woody areas at a midlatitude deciduous forest through the 3-D classification of leaves and woody materials in dual-wavelength point clouds from the Dual-Wavelength Echidna Lidar, a terrestrial laser scanner.The 3-D classification was created using both the spectral information from NIR and SWIR apparent reflectance as well as the spatial context information given by the 3-D spatial distribution pattern of each point and its near neighbors.The overall classification accuracy from such an indirect assessment approach, using registered hemispherical photos is 0.60 ± 0.01 – 0.77 ± 0.01 for leaf-off point clouds and 0.71 ± 0.02 – 
0.78 ± 0.01 for leaf-on point clouds.This indirect accuracy assessment approach uses an independent reference data source from the visual interpretation of 2-D DHPs rather than that of individual lidar points that usually lack independence from point classifications, and therefore improves the objectivity of the accuracy assessment of the 3-D point classifications.This indirect approach also allows for identification of the omission and commission errors of leaves and woody materials; by adjusting proportions of separate gap probabilities for such errors, more accurate estimates of separate area indexes and height profiles of leaves and woody materials are obtained.The uncertainty in leaf and woody area profiles from classification errors appears to be negligible based on a bootstrapping analysis.Instead, the variance in leaf and wood area profiles over this forest site appear to be dominated by the spatial heterogeneity of canopy vertical structures across the five scan locations.The contrast in the variance of the leaf and woody profiles captures the change in the spatial distributions of leaves and woody materials from the leaf-off season to the leaf-on.The variance in the LAVD profiles was relatively larger than that of the WAVD profile in the leaf-off season because of the presence of a few large evergreen crowns scattered throughout the site, while the two variances were relatively similar in the leaf-on season.The five-scan averaged woody-to-total ratios at different canopy heights are generally stable in the middle and upper canopy for the stand but vary in the lower canopy as sampled by the DWEL instrument.Although the estimates of total LAI and WAI were affected by DWEL’s measurement range limitations, the relative proportions of leaf and woody areas were successfully retrieved from the classified dual-wavelength point clouds.Such 3-D leaf-wood separation over more diverse forest types, such as mixed and evergreen, needs more TLS data collection and future investigation for thorough examination of its efficacy.The indirect approach to 3-D classification accuracy assessments developed here offers a benchmarking practice for future research efforts.Currently, rapid advances in TLS instrumentation and data processing for forest studies provide enhanced opportunities to study the biophysical and biochemical properties of forest ecosystems at finer spatial scales in 3-D space.These improvements of ground-based measurements at local scales from TLS will translate to advances in larger-scale understanding of forest structure and function through the calibration and validation efforts of airborne and spaceborne remote sensing.The interpretation and understanding of the new 3-D retrievals of vegetation properties from TLS are tied to a corresponding need for accurate vegetation element classifications in 3-D space.Therefore, it is important to be able to compare different 3-D classification methods, and to quantify the impact of classification errors on the downstream retrievals of vegetation properties.Our study helps point the way towards better TLS applications in forest ecosystems by providing an indirect accuracy assessment approach to 3-D point classifications, coupled with a means to correct leaf and woody area profiles for estimated classification errors.
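To make the error-matrix adjustment described above concrete, the following minimal Python sketch shows one common way to invert a 2 × 2 leaf/wood mixing model built from estimated producer's accuracies and to split an effective plant area value accordingly. The accuracy values, proportions and variable names are illustrative assumptions, not results or code from this study.

```python
import numpy as np

# Hypothetical producer's accuracies from the indirect assessment
# (illustrative values only, not results from this study).
pa_leaf = 0.78          # P(classified leaf | truly leaf)
pa_wood = 0.62          # P(classified wood | truly wood)

# Apparent (classified) leaf/wood proportions of hits in one height bin.
p_map = np.array([0.55, 0.45])          # [leaf, wood]

# Mixing model: classified proportions = M @ true proportions.
M = np.array([[pa_leaf, 1.0 - pa_wood],
              [1.0 - pa_leaf, pa_wood]])

# Invert the mixing to obtain error-adjusted proportions, then renormalise.
p_true = np.linalg.solve(M, p_map)
p_true = np.clip(p_true, 0.0, None)
p_true /= p_true.sum()

# Split the effective plant area of the bin into leaf and wood components.
pai_e_bin = 0.42                         # illustrative PAIe of the bin
lai_e_bin, wai_e_bin = p_true * pai_e_bin
print(lai_e_bin, wai_e_bin)
```

In practice such an adjustment would be applied bin by bin along the height profile before accumulating LAIe and WAIe.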
This study demonstrates the retrieval of separate vertical height profiles of leaf and woody areas in both leaf-off and leaf-on seasons at a largely broadleaf deciduous forest site in the Harvard Forest of central Massachusetts, USA, using point clouds acquired by a terrestrial laser scanner (TLS), the Dual-Wavelength Echidna® Lidar (DWEL). Drawing on dual-wavelength information from the DWEL, we classified points as leafy or woody hits using their near-infrared (1064 nm) and shortwave infrared (1548 nm) apparent reflectance coupled with the 3-D spatial distribution patterns of points. We developed a new indirect assessment approach that quantified the accuracies (user's, producer's and overall) and variance of accuracies of the 3-D point classifications. The overall classification accuracy estimated by this indirect approach was 0.60 ± 0.01 – 0.77 ± 0.01 for leaf-off points and 0.71 ± 0.02 – 0.78 ± 0.01 for leaf-on points. These estimated accuracies were then utilized to adjust the proportions of separate gap probabilities to reduce the biases in the separate leaf and woody area profiles due to classification errors. Separate retrievals of leaf and woody area profiles revealed the change in their spatial heterogeneity over the 1-ha plot with season. These retrievals also allowed height-explicit estimation of the woody-to-total ratio, which is an empirical parameter often used to remove woody contributions to leaf area index retrievals made by optical methods. The estimates suggested the woody-to-total ratios generally stayed stable along height in the middle and upper canopy for this site but varied in the lower canopy. More accurate estimates of leaf area and its vertical profile are important for better measurement and modeling of the radiation regime of forest canopies, and thus their photosynthetic capacity. By separating leafy and woody materials in three dimensions, dual-wavelength TLS offers the prospect of better understanding of forest cycling of matter and energy within local and global ecosystems.
171
Anion diffusion in clay-rich sedimentary rocks – A pore network modelling
Clays are considered as the ideal materials of disposing high level radioactive waste in various countries due to their low permeability, high structural charge, strong adsorption of radionuclides and surface area associated with the clay minerals.A comprehensive study of the mobility of species in clays is thus necessary to assess the safety of the geological disposal.A number of factors affect the mass transport through clays:The physical and chemical properties of the clay;,The environmental system, e.g. pH and temperature;,The nature of species, e.g. charges, sizes and their interactions.For instance, the pore size distribution has significant effect on the transport behaviour, because if the thickness of the electrical double layer is greater than one half of the pore width, the overlapping of EDL makes the pore not accessible by anions.Thus the smaller the pore sizes, the greater percentages of pores not accessible by the anions.In addition, some species adsorbed onto the pore walls block smaller pores.Further, the charge of the species may change due to the pH and temperature.A number of methods at various length-scales have been proposed to study the impacts of different factors on the transport of species in clay and the interaction between the clay surface and the solutes.A class of microscopic models operating at a pore scale, including molecular dynamics and Monte Carlo, have the advantage of giving accurate representations of the water, anion and cation diffusivities and concentration profiles.In contrast, macroscopic models are more computation efficient, enable analysis of larger systems and provide direct access to transport properties.Macroscopic models include the continuum hydro-geo-chemical transport model, PHREEQC, and the discrete pore network models.PHREEQC has been developed to simulate cations, anions and neutral species in clays.It is implemented by obtaining the accessible porosities of different species firstly, and based on these simulating the diffusion of other species.The disadvantage of this method is that it cannot analyse the impacts of pore size distribution on the transport properties of species.In contrast to PHREEQC, which is an indirect method that only considers one dimensional homogeneous porous media, the 3D pore network models can take into account the heterogeneity and anisotropy of the porous media.PNMs have been widely used to simulate the relative permeation, dissolution and precipitation, diffusion and adsorption and biomass growth in different porous materials.PNMs which have been applied to shale and gas extraction, waste disposal and microbial enhanced oil recovery etc.The model proposed here directly characterises the realistic pore size distribution of porous materials based on the measured data.When species travel through the pores in clays they interact with the pore walls.In general, the cations diffusivity is enhanced, and that of anions inhibited, compared to a neutral tracer such as tritiated water in charged pores.The models used to simulate this interaction can be divided into two categories: Traditional empirical models, e.g. 
Freundlich model, Langmuir model and Brunauer-Emmett-Teller model; Models based on thermodynamic mechanisms including ion exchange model and surface complexation models.The empirical models only consider the chemical interaction but cannot explain the mechanism of solid-solution interaction on the solid-water interface.While the models based on thermodynamics consider both the chemical and electrostatic effects.The surface complexation models include constant capacitance model, diffuse layer model, triple layer model and basic stern model.In the surface complexation models, the Gouy–Chapman model or modified Couy-Chapman is usually used to describe the electrostatic interactions between the species, e.g. cations and anions, and the excess of charges of the clay minerals.The Poisson–Boltzmann equation and Navier–Stokes equation are then used to characterize the effects of the electrical charges on ionic and water fluxes.Another method is to average the exponential distribution of the ions in the EDL over a Donnan volume.This is computationally faster and gives equivalent results compared with MGC by integrating the Boltzmann equation."While PNMs have been used to simulate transport of species with traditional empirical models, PNMs based on the thermodynamic models have not been explored yet to the authors' knowledge.This work is intended to fill the existing gap.The effects of different pore size distribution and ionic strength are analysed.The reason to study the effects of ionic strength is that there is a lack of information about diffusion in porous clays with different pore space properties at different ionic strengths.It is known that an electric double layer arises from the screening of the negative surface charge of clay minerals by accumulating cations, and that EDL is compressed at high ionic strength.At high ionic strength a lower volume of the diffuse layer is necessary to compensate the negative charge on the surface, see Fig. 1.Thus, the ionic strength of the pore water has significant effects on the mass transport in clays.The pore network proposed here is constructed based on experimental pore space information and mineral characterisation, i.e. 
pore size distribution, connectivity and surface area.The constructed pore network can represent the anisotropy and heterogeneity of the clays by using different length parameters and percentage of pores in different directions.Then in each single pore, the pore space is divided into free pore water space, stern layer and a diffuse double layer as the clay surface is negatively charged.The stern layer completely excludes the anions.In the free pore water space, the diffusion is similar to diffusion in pure water.In the diffuse double layer, there is excess of cations and deficit of anions relative to these species in the free pore water.This results in the enhanced diffusion for cations and reduced diffusion for anions.The thickness of the diffuse double layer and the ionic concentration in the diffuse double layer are determined by electrochemical mechanisms taking into account the cation exchange capacity, surface area and ionic strength.The geometric and physic-chemical information is embedded in a 3D network to solve the diffusion problems.This model is used to perform a systematic investigation of the anion transport behaviour in natural clay rocks.The anion diffusion behaviour and the anion exclusion effect in clay rocks with different porosity, pore size distribution, cation exchange capacity and specific surface area studied to understand better the relationship between the ionic strength of the pore water, the properties of the rock, and the anion diffusion in these clay rocks.This enables us to assess accurately the anion accessible porosity with respect to the estimation of the pore water composition.In addition, the developed PNM in this work can be easily coupled with lattice model for mechanical behaviour to investigate the effects of micro-crack generation, competition, and coalescence on transport, which is a difficult and attractive topic in assessing the performance of clays.In Opalinus Clay, diffusion of cations is enhanced, and that of anions supressed, relative to a neutral tracer such as tritiated water.This is suggested by results from many diffusion experiments with clayrock, bentonite or pure montmorillonite.The effect can be explained by the negatively charged clay surface.An excess of cations and a deficit of anions are required to neutralize the remaining charge at the outer surface of the clays."When all charges on the surface are neutralized, the transport of species is similar to transport in pure water, but the flux is less because the tortuous path in the pores is longer than the straight-line distance used for the concentration gradient in Fick's laws.The negative surface charge can be explained at the microscopic scale as caused by the isomorphic substitutions in the crystal lattices.For example, the isomorphic substitution of Si4+ by Al3+ in the silica tetrahedra and Al3+ by Fe2+ and/or Mg2+ in the octahedral layer of the crystal lattice in smectite particles.Meanwhile, different clay minerals have different microscopic structures, e.g. 
smectitie and illite are built of a succession of dioctahedal layers while kaolinite particles are built by a series of TO layers.This influences the cation exchange capacity of illite, kaolinite and smectite, which is around 0.22 equiv./kg, 0.03–0.04 equiv./kg and about 0.01 equiv./kg, respectively.There are some K cations in illite particles and the interlayer porosity is not accessible for diffusive transport in illite.Opalinus Clay displays anisotropic responses to deformation and transport due to preferred orientation of clay minerals attained during sedimentation and compaction.On a regional scale the lateral variability of facies and lithology is low, with three lithological sub-facies: Shaly, Sandy and Sandy carbonate-rich.This paper focuses on the Shaly facies of Opalinus clay, labelled BDR in, which typically contains 66% clay minerals, 13% calcite, 14% quartz, 2% feldspars, pyrite and organic carbon.The pore space of Opalinus clay samples is characterized in the same manner as in a previous work, where a large number of pores are located predominantly within the fine-grained clay mineral matrix.It should be noted that the pore sizes derived from ad- and desorption isotherms are assumed with cylindrical pore geometries.The results are thus expressed as equivalent pore radii.This is a conceptual simplification of the complex pore geometry in the rock, but at least allows a rough classification of the pore size distribution.A dimensionless shape factor from the TEM and FIB-nt experiments will be introduced in the diffusive transport simulation to represent the angularity of the pore cross sections.This would make the model more realistic.The larger pores with sizes > 10 nm, were elongated in the bedding plane, which was resolved by Focused Ion Beam nano-tomography.The porosity of larger pores was thus θmes = 0.018.The smaller pores with sizes < 10 nm obtained from N2 adsorption analysis occupied approximately 9.7 vol.%, thus the porosity of smaller pores is θmic = 0.097.Further, the larger pores were largely isolated and did not provide a percolating network through the sample.These definitions of smaller and larger pores are aligned with the commonly used in physical chemistry and may differ from other fields of study.The data from the two measurements is concatenated into a single ‘cumulative pore volume fraction — pore radius’ curve given in Fig. 2.For model construction the experimental distribution of Fig. 2 is re-evaluated as cumulative probability separately for larger and smaller pores.These are shown in Fig. 2 and, respectively.With regard to the solid phase, Keller et al. reported 18 vol.% non-porous carbonates with grain sizes ranging between 100 nm and 300 nm and 17 vol.% of non-porous quartz, the grain size distribution of which was undetermined.For constructing the model in this study, the reported data were converted into cumulative probability of carbonate grain sizes as shown in Fig. 
2.As both carbonates and quartz are non-porous, the quartz is assumed to follow the size distribution of the carbonate particles due to lack of quartz-specific experimental data.In this work, the measured pore space information is firstly used to qualitatively verify the model by comparing the predicted results with the experimental diffusivities.Then the sensitivity of different porosities, connectivity and pore size distribution on the diffusion behaviour is analysed.The construction of pore network models of Opalinus clay is based on previously proposed method: The truncated octahedron is selected as the cellular basis for a regular space tessellation.One truncated octahedron represents the neighbourhood of a solid particle or a pore, which has been used to analyse deformation and failure of quasi-brittle media and transport in porous media, respectively; The particles are assigned in the cell centre subject to experimentally measured grain volume fraction and size distribution.Based on this, the lattice length scale is calculated; Pores are assigned to the bonds which connect the neighbouring face centre within a cell.According to the meso-pores size distribution, porosity and preferred directions, the meso-pores are allocated along the cell boundaries; The micro-pores are assigned in the positions not occupied by meso-pores according to their size distribution and relative porosity.The anisotropy and heterogeneity of the Opalinus clay are thus achieved by choosing different length scales at different directions and allocation of meso-pores and micro-pores in different domains.Specific construction details can be referred to Xiong et al.The shape of the cellular basis and a pore network example are shown in Fig. 3.The swelling and hydraulic properties of the clay minerals depend on the mineral compositions as they have different exchangeable cations in the pore space.As the exchangeable dehydrated potassium cations occupy the interlayer porosity of illites, species are inaccessible to the interlayer porosity.Thus the interlayer porosity is not considered in this work.The negative charge of the clay mineral layers is responsible for the presence of a negative electrostatic potential field at the clay mineral basal surface–water interface.The concentrations of ions in the vicinity of basal planar surfaces of clay minerals depend on the distance from the surface considered.In a region known as the electrical double layer, concentrations of cations increase with proximity to the surface, while concentrations of anions decrease.At infinite distance from the surface, the solution is neutral and is commonly described as bulk or free solution.This spatial distribution of anions and cations gives rise to the anion exclusion process that is observed in diffusion experiments.In this work, each pore in the pore network is divided into three parts: stern layer, Donnan volume and free pore water volume.In stern layer, the anions are completely excluded.In Donnan volume, the Donnan layer is set to two Debye lengths and the concentration in the Donnan layer is calculated according to Eqs.–.In the free pore water volume, the anions transport is identical to transport in pure water.In compacted clay material, if the thickness of Donnan layer is larger than the pore size, the corresponding pore is not accessible by anions.This is due to the overlapping of the EDL.The pore network, illustrated in Section 2.2, is represented by a mathematical graph embedded in 3D, where bonds/throats are graph edges and 
sites/pores are graph nodes.An incidence matrix A of dimensions E × N is used to represent the graph topological structure, where E is the number of edges and N is the number of nodes.The element aen of this matrix is −1, +1 or 0 if node n is the first node of edge e, the second node of edge e, or not incident to edge e, respectively.The incidence matrix describes the derivative of a discrete function defined on the nodes and the topological structure of the system, i.e. connectivity.Specifically, a concentration field on the graph nodes, vector C of dimension N, has a gradient given by the matrix product AC, which is a discrete field on the edges, vector ∇C of dimension E.As importantly, the transpose of the incidence matrix describes the derivative of a discrete function defined on the edges.Specifically, a mass flow field on the graph edges, a vector J of dimension E, has a gradient given by ATJ, which is a discrete field on the nodes, a vector ∇J of dimension N.Eq. is used to calculate the components of the discrete field of edge flows, J.An edge weight, We = 4πGijAijDw, scales the edge gradient for pore-scale anion diffusion in each component.Clearly, the edge weight depends on the assigned pore radius, ionic strength, CEC, radius of diffusing species, and surface area.For convenience the edge weights are arranged in a diagonal matrix, W, of dimensions E × E, where the element in row e and column e is the weight of edge e. Thus, the discrete flow field is given by J = -WAC.Compared to traditional finite element or finite difference formulations, the discrete formulation on graphs is highly beneficial for incorporating system evolution.Conceptually, the incidence matrix describes the intrinsic relation between the pore space topological structure and transport.Practically, the incidence matrix A captures the effects of connectivity while the edge weight matrix W captures the geometry and physics.This facilitates the incorporation of potential pore space changing mechanisms in a computationally effective way.More specifically, corrosion, adsorption, bacterial film growth, deformation, etc., will affect not only W but also A.In these simulations, the change of local pore geometry can be achieved by modifying W and the change of connectivity can be achieved by altering A.The macroscopic diffusivity is calculated here for immutable pore systems.The realism of the model construction in terms of initial pore space geometry and topology will be verified by comparing the predicted diffusion coefficients of HTO with experimental ones reported in the literature.After this, the effects of ionic strength and mineral compositions are analysed based on previous work.A pore network skeleton within the boxed region was used here with respect to a coordinate system normal to the square faces of the unit cell.Specifically, N is here the number of cells in each coordinate direction and S1, S2, S3 are the cell sizes in the three coordinate directions.Pore networks were built on skeletons with increasing N using the process from sub-section 2.2.It was found that for N = 20, the variation of results from the average based on the 10 cases of random spatial allocation of pores was reduced to under 10%.This is considered to be an acceptable accuracy.Diffusion is hereafter simulated on a skeleton of N = 20 and the results are obtained as the average values based on 10 realisations.The boundary conditions are prescribed concentrations C0 and C1 on two opposite boundaries, and zero flux on the remaining four boundaries.This reflects a particular experimental setup.
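As an illustration of this discrete formulation, the sketch below assembles the incidence matrix A and the diagonal weight matrix W for a toy four-pore network and solves the steady-state mass balance (zero net flux at interior nodes) with prescribed concentrations on two boundary nodes. The chain layout, the numerical weight values and the variable names are illustrative assumptions only; they stand in for the truncated-octahedron lattice and the geometry/EDL-based edge weights described above.

```python
import numpy as np

# Toy network: 4 pores (nodes) connected in a chain by 3 throats (edges).
# The real model uses the truncated-octahedron lattice described above;
# this layout and the weight values are illustrative only.
edges = [(0, 1), (1, 2), (2, 3)]
n_nodes, n_edges = 4, len(edges)

# Incidence matrix A (E x N): aen = -1 for the first node of edge e,
# +1 for the second node, and 0 otherwise.
A = np.zeros((n_edges, n_nodes))
for e, (i, j) in enumerate(edges):
    A[e, i], A[e, j] = -1.0, 1.0

# Diagonal edge-weight matrix W; each weight stands in for 4*pi*G*A*Dw
# evaluated from the pore geometry and EDL partition (made-up numbers).
W = np.diag([2.0e-12, 1.5e-12, 2.5e-12])

# Steady state: mass balance at every interior node gives (A^T W A) C = 0,
# with prescribed concentrations C0 and C1 on the two boundary nodes.
L = A.T @ W @ A
C = np.zeros(n_nodes)
boundary = [0, n_nodes - 1]
C[boundary] = [1.0, 0.0]                 # C0 and C1
interior = [1, 2]
C[interior] = np.linalg.solve(L[np.ix_(interior, interior)],
                              -L[np.ix_(interior, boundary)] @ C[boundary])

# Edge flows J = -W A C; the flow through the throat attached to the C0
# boundary gives the total flux from which an effective diffusivity can
# be backed out for a given concentration gradient and cross-section.
J = -W @ A @ C
print(C, J[0])
```

For the full network the same assembly would apply, with the boundary/interior split taken from the node positions on the planes listed below and the effective diffusivity obtained from the summed boundary flux.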
The selection of the two boundaries depends on the macroscopic transport properties being analysed.For example, the boundary conditions used to calculate the macroscopic transport properties perpendicular to the bedding direction are: prescribed concentration C0 in all nodes on plane X2 = 0; prescribed concentration C1 in all nodes on plane X2 = 20S1; zero flux through all nodes on planes X1 = 0, X1 = 20S2, X3 = 0, X3 = 20S3.For calculating the macroscopic transport properties parallel to the bedding direction, the boundary conditions are: prescribed concentration C0 in all nodes on plane X1 = 0; prescribed concentration C1 in all nodes on plane X1 = 20S2; zero flux through all nodes on planes X2 = 0, X2 = 20S1, X3 = 0, X3 = 20S3.Zero concentration of the diffusing species is taken as the initial condition in all nodes.The constructed pore networks exhibit macroscopic tortuosity, introduced by the selection of transport pathways along the interfaces between solid phase regions.The ratios of the cell length parameters in the three perpendicular directions dictate the different tortuosity in different directions.In Opalinus clay, the tortuosity is observed in experiments to be smaller in the bedding direction.Therefore, a larger cell length in the bedding direction and smaller cell lengths in the directions perpendicular to bedding are applied.When the ratio of the cell length parameters is S1/S2 = 2 and the out-of-bedding directions are not differentiated, i.e. S2 = S3, the calculated effective diffusivity considering both carbonates and quartz as particles is closest to the experimental values.In this work, the same ratio of the cell length parameters and volume percentage of particles are adopted.For the cellular assembly of this given ratio, 10 realisations of pore spatial distributions were analysed to obtain the transport in the bedding, S1, and out-of-bedding, S2, directions.The results reported are the averaged values of these analyses.The N2 BET surface of OPA is 20–21 m2/g and the rock density is assumed to be 2.28 g/cm3.This corresponds to a pore surface density of around 45–48 μm−1.The average calculated pore surface density of the constructed pore network is 50 μm−1.The value calculated after network construction is very close to the measured data, which demonstrates the topological and geometric realism of the network used.Opalinus Clay contains about 10% illite-smectite mixed layers, dominated by illite.It can be calculated that the volume of smectite interlayer water is 2.8 mL/kg clayrock if the smectite forms 3% of the rock and is present as a 1-layer hydrate.The interlayer porosity in Opalinus clay is not accessible for the tracers tested due to the presence of non-exchangeable K+ ions.The interlayer water is thus probably not measured in diffusion experiments with HTO; it can be neglected except for diffusion of very strongly adsorbing cations.As the effective diffusivity of HTO is almost independent of the external salt concentration for all the tested clays, the effect of external salt concentration on the effective diffusivity of HTO is neglected in this work.In Opalinus clay, experimentally obtained values for HTO diffusion in OPA are in the range of D1 = × 10−11 m2/s and D2 = × 10−11 m2/s, × 10−11 m2/s, × 10−11 m2/s, × 10−11 m2/s, × 10−11 m2/s, determined for different bore core samples.The calculated average effective diffusivity of HTO by the model is in the following ranges: D1 = 5.75 × 10−11 m2/s; D2 = 1.83 × 10−11 m2/s.The simulated effective
diffusivities of HTO parallel and perpendicular to the bedding are in good agreement with the experimentally measured values.The measured effective diffusivities of Cl− parallel to the bedding direction are × 10−11 m2/s when the ionic strength is 0.3.In addition, the measured effective diffusivities of Cl− perpendicular to the bedding direction are × 10−12 m2/s, × 10−12 m2/s when the ionic strength is 0.39 and × 10−12 m2/s when the ionic strength is 0.42.For modelling the diffusion behaviour of Cl−, the measured specific surface area of OPA, 20 m2/g, is used in this work first to validate the model and the effects of specific surface area on the anionic transport behaviour is discussed in the Section 4.7.The cation exchange capacity of OPA is 0.22 equiv./kg is used first and then its effects are analysed as well in Section 4.6.As there is a minimum distance of Cl− approach to the surface, i.e. the stern layer, the anions are completely excluded in this area.In this work, the thickness of stern layer is considered to be 0.184 nm.The predicted dependence of effective diffusivities of Cl− on ionic strength obtained with and without considering stern layer are shown in Fig. 4, together with the available experimentally measured values.The model shows that the effective diffusivities without considering stern layer are larger than the values considering the stern layer both in the direction parallel and perpendicular to the bedding plane for all ionic strengths.The predicted results considering stern layer agree better with the experimental measured effective diffusivities in the direction parallel to the bedding plane.While in the direction perpendicular to the bedding plane, the experimental data are in good agreement with the predicted results either considering or not considering stern layer.In all, the model with stern layer provides better results.Hereafter, the effects of porosity, pore size distribution, cation exchange capacity and specific surface area is analysed based on the model with stern layer.Notably, high ionic strengths have insignificant influence on the values of the effective diffusion coefficient.The effective diffusivities of Cl− increase as the external salt concentration increase, which is in line with the observed experimental phenomena.The effective diffusion coefficients in all directions increase fast when the ionic strength is smaller than 0.1.After the ionic strength reached 0.1, the effective diffusivities increase very slowly and approach stable values.The effects of porosity on the transport of HTO and Cl− are analysed by varying porosity but using one and the same pore size distribution obtained from experimental data.As the porosity usually can be calculated by the ratio of bulk dry density of clays and the grain density, i.e. θ = ρbulk/ρgrain, the effects of porosity can also reflect the effects of different bulk dry densities.For the investigated porosities, the increase of porosity results in increase of effective diffusion coefficients of HTO and Cl−, which is illustrated in Figs. 
5 and 6.Furthermore, the variation of the effective diffusion coefficients when the porosity increases from 0.23 to 0.345 is larger than the variation when the porosity increases from 0.115 to 0.23.This is because the number of pores in the pore network increases as the porosity increases, which increases the probability of pores being connected to each other.Thus the connectivity of the pore network increases as well.Physically, as the porosity increases, not only does the number of transport paths increase but the tortuosity of the transport paths also reduces.In addition, the effective diffusion coefficients do not differ significantly for ionic strengths above 0.1 for the investigated clays with different porosities.The effective diffusion coefficients of HTO and Cl− parallel and perpendicular to the bedding are analysed with different pore size distributions without changing the porosity.In this section, the porosity of 0.115 is used and four kinds of pore size distribution are simulated.Specifically, the cases are: the pore size distribution obtained from experimental measurements, a pore size distribution with all pore radii being 6.7 nm, a pore size distribution with all pore radii being 10 nm, and one with the radii of all pores being 20 nm in the pore network.The effective diffusion coefficients of HTO decrease with increasing pore radii both in the direction perpendicular and parallel to the bedding.This can be explained by the fact that the number of pores required to achieve the porosity decreases as the pore radii increase from 6.7 nm to 20 nm.The reduction in the number of pores in the pore network leads to a reduction of the connectivity of the pore network.As fewer pores are connected to each other, it becomes more difficult for the species to be transported through the model.The tortuosity of transport paths in the pore network increases as well.Hence, the effective diffusivities of HTO decrease.The relationship between the effective diffusion coefficients of Cl− and the different pore size distributions under different ionic strengths is shown in Fig.
7.The change of the effective diffusion coefficients of Cl− differs from that of the HTO effective diffusion coefficients under different pore size distributions at various ionic strengths: the effective diffusion coefficient of HTO decreases as pore radii increase from 6.7 nm to 20 nm, while the effective diffusion coefficient of Cl− for radii of 10 nm or 20 nm is larger than the value for radii of 6.7 nm at most ionic strengths.The change of the effective diffusivities of Cl− becomes smaller as the pore radii increase from 6.7 nm to 20 nm under various ionic strengths.This can be explained as follows.For the same ionic strength, the thickness of the Donnan pore-space is constant.For pore radii smaller than the thickness of the Donnan pore-space, the overlapping of the EDL occurs, which completely excludes the anions.When all pores in the network are 6.7 nm, pores are inaccessible for anions at low ionic strengths due to the overlapping of the EDL in the pores.When all pores are 10 nm or 20 nm, the pore sizes are larger than 6.7 nm and the EDL overlap that occurs in the 6.7 nm pore network at low ionic strength does not take place.That is why the effective diffusivities of Cl− in the pore network with 6.7 nm pores are smaller.Since the EDL does not overlap in the networks with 10 nm and 20 nm pores, the change of the effective diffusivities of Cl− under different ionic strengths follows the change of the values for HTO.In this case, the pore geometry and topology play a more important role than the thermodynamics in the pore.The inaccessibility of these pores reduces the total transport through the pore network and increases the tortuosity of the transport path.The predicted evolution of the effective diffusion coefficients with different cation exchange capacities is shown in Fig. 8.The cation exchange capacity has little effect on the effective diffusivities as the ionic strength increases.In the directions parallel and perpendicular to the bedding plane, the effective diffusion coefficients increase sharply when the ionic strength is smaller than 0.1 and, after this value, the effective diffusion coefficients increase very slowly with the variation of ionic strength.This is because when the cation exchange capacity is equal to 0.22 molc/kg, the concentration in the Donnan pore-space is already very small.In addition, the concentration in the Donnan pore-space decreases with the increase of cation exchange capacity.Therefore, the diffusion in the Donnan pore-space does not make much contribution to the total flux in the pore network.The specific surface area influences the diffusion in OPA in two ways.Firstly, the pore volume in a given pore network is fixed for a given porosity.Thus the increase of the surface area reduces the shape factor in Eqs. and, which results in the reduction of the diffusion flux through the pores.Secondly, the surface area influences the concentration of anions in the Donnan pore-space.The larger the specific surface area, the larger the concentration of anions in the Donnan pore-space for the various species.Therefore, the evolution of the effective diffusion coefficients with specific surface area depends on which factor plays the more important role.It can be observed from Fig.
9 that the diffusion flux through the pores increases as the specific surface area decreases.This means that the shape factor makes the bigger contribution to determining the mass transport.The decrease of the effective diffusion coefficients of Cl− is larger when the specific surface area increases from 60 m2/g to 100 m2/g than when it increases from 20 m2/g to 60 m2/g, in both directions.The presented work allows for the following conclusions: (1) Pore network modelling can be an effective and efficient way for calculating diffusivities in complex systems, provided sufficient information about the microstructure and mechanisms is incorporated; (2) Applied to OPA with measured microstructure properties and a thermodynamic framework for solid-solute interactions, the proposed modelling approach delivers estimates for diffusivity in very close agreement with experimental data, simultaneously for neutral and anionic species.This lends strong support to the physical realism of the model, particularly with the stern layer included; (3) Solute ionic strength has a pronounced effect on diffusivity up to 0.1 mol/L, namely diffusivity increases sharply with ionic strength, after which the effect diminishes; (4) Porosity changes, which can be viewed as changes in dry bulk density without affecting the pore size distribution, lead to proportional changes in diffusivity, namely an increase of porosity results in increased diffusivity; (5) Pore size distribution variations have a significant impact on the effective diffusivity in combination with charge.While the effective diffusivities of neutral species decrease with increasing pore sizes for a given porosity, the effective diffusivities of anions increase as a result of the reduced exclusion effect; (6) There are no obvious effects of cation exchange capacity on the effective diffusion coefficients of Cl−; (7) Specific surface area has a pronounced influence on the effective diffusion of Cl−, which results in proportional changes in the diffusivity.The effects of ionic strength on the diffusion properties in clays have been studied by Wigger and Van Loon, who have shown that the ionic strength does have significant impacts on the diffusion behaviour in clays.This work would have been very useful for validating our model beyond the effect of ionic strength.However, in Wigger and Van Loon's paper, the Opalinus clay from the Benken deep borehole and Helvetic Marl from Wellenberg were chosen for study.The experimental information for pore network construction of BE and HM is not readily available and these clays have a different structure than the one used for our network construction.So at this point we cannot provide such a comparison.Hence our work focused on the ionic strength effects and studied the sensitivity of the physical parameters of the network.Collecting data over a wider range of ionic strengths, especially at lower ionic strengths, is a priority in the future developments of our work.In addition, in our predicted results, the predicted diffusivities vary more at lower ionic strength no matter which physical parameter changes.This fits well with the evolution of effective diffusivities with ionic strength in the published paper.We remain open to demonstrate the model capabilities once both microstructure data and macroscopic measurements are available for a given material.The proposed modelling framework is extendable to include mechanical effects on transport behaviour, which is a subject of on-going work.
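To illustrate the ionic-strength effect summarised in the conclusions, the short Python sketch below evaluates the Debye length for a 1:1 electrolyte in water at 25 °C (the standard approximation of about 0.304 nm divided by the square root of the ionic strength in mol/L) and the fraction of a cylindrical pore cross-section left for free anion diffusion once the stern layer and a Donnan layer of two Debye lengths are excluded. The pore radii and the helper function are illustrative assumptions; the sketch also ignores the residual anion concentration inside the Donnan layer, which the full model accounts for.

```python
import numpy as np

def debye_length_nm(ionic_strength_molar):
    # Standard approximation for a 1:1 electrolyte in water at 25 degrees C.
    return 0.304 / np.sqrt(ionic_strength_molar)

stern_nm = 0.184                                  # stern-layer thickness used above
pore_radii_nm = np.array([3.0, 6.7, 10.0, 20.0])  # illustrative pore radii

for I in [0.01, 0.1, 0.3, 1.0]:
    donnan_nm = 2.0 * debye_length_nm(I)          # Donnan layer of two Debye lengths
    excluded_nm = stern_nm + donnan_nm            # anion-depleted annulus at the wall
    free_radius_nm = np.clip(pore_radii_nm - excluded_nm, 0.0, None)
    # Fraction of the cylindrical cross-section open to free anion diffusion.
    free_fraction = (free_radius_nm / pore_radii_nm) ** 2
    print(f"I = {I:5.2f} M, Debye length = {debye_length_nm(I):.2f} nm, "
          f"free-area fractions = {np.round(free_fraction, 2)}")
```

Under these assumptions, 6.7 nm pores are almost fully closed to anions at 0.01 M, while above roughly 0.1 M the excluded annulus is thin and the accessible fraction changes much more slowly, consistent with the sharp rise and subsequent plateau in the predicted anion diffusivities.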
Mass transport in compacted clays is predominantly by diffusion, making them preferred candidates for barriers to radionuclide transport in future repositories for nuclear waste. Diffusivity of species depends on a number of factors: (1) the physical and chemical properties of the clay, such as the geometry and topology of the pore space and the functional groups on the surface; (2) the environmental system, e.g. pH, ionic strength, temperature and organic matter; (3) the nature of species, e.g. charges, sizes and their interactions. Existing models do not consider these characteristics integrally in diffusion simulations and analysis. In this work, a developed pore network model is used to study the diffusion behaviour of Cl− with different ionic strengths. The network is constructed based on experimental pore space information and mineral compositions, and captures the anisotropy and heterogeneity of clays. Opalinus clay is selected for testing the model. The effects of pore size distribution, porosity, cation exchange capacity and surface area on the transport behaviour of anions with different ionic strength are analysed. This improves the understanding of the effects of different factors on the species transport in Opalinus clay. The ionic strength of the pore water is varied between 0.01 and 5 M to evaluate its effect on the diffusion of Cl− in clays. It is shown that the model predictions are in good agreement with measured experimental data of Opalinus clay. The agreement demonstrated, both qualitatively and quantitatively, suggests that the proposed approach for integrating known effects on diffusivity is reliable and efficient.
172
Environmental stress effect on the phytochemistry and antioxidant activity of a South African bulbous geophyte, Gethyllis multifolia L. Bolus
The genus Gethyllis belongs to the plant family Amaryllidaceae and is better known as “Kukumakranka” by the Khoi-San people.The genus comprises 37 currently accepted species in Southern Africa, among which many are considered to be endangered.Presently, very little is known about the chemical composition and bioactivities of this genus."The word “Kukumakranka” is described by farmers as meaning “goed vir my krank maag” in Afrikaans, one of South Africa's eleven languages, which translates to “cure for my upset stomach” in English.Watt and Breyer-Brandwijk reported that “Kukumakranka brandy”, which is made from the fruit of Gethyllis afra and Gethyllis ciliaris, is believed to contain oils and esters of low molecular weight, and is an old Cape remedy that was used for colic and indigestion.According to Rood the early Cape colonialists used an alcoholic infusion of the fruit of Gethyllis linearis and Gethyllis spiralis as a remedy for digestive disturbances.In more recent times, a diluted infusion of the flower has been used for teething problems, and the skin of the fruit as a local application on boils, bruises and insect bites.Further reports by Rood indicated that the fruit was boiled by the Khoi-San and used as an aphrodisiac, while Van der Walt mentioned that G. ciliaris was used as a tonic for fatigue.Further pharmaceutical studies by Elgorashi and Van Staden revealed some anti-inflammatory and antibacterial activities in certain Gethyllis species and reported that the findings were in agreement with their uses as a traditional medicine.Previously, the following compounds: dihydroxydimethylbenzopyran-4-one, isoeugenitol, its 5-O-glycoside and 9Z-octadec-9-enamide, had been isolated from the roots and bulbs of G. ciliaris.During head-space analysis of the volatiles from the fruits of G. afra and G. ciliaris, the following major compounds were characterized for G. afra: α-pinene, n-butyl n-butyrate, isoamyl acetate, β-pinene and 2-methylbutyl butyrate and for G. ciliaris: pentacosane; ethyl octanoate; ethyl isovalerate; ethyl hexanoate and ethyl benzoate.It was further reported that these compounds are responsible for the sweet/banana/piney odors of the fruit of these two species.The interpretation of the species Gethyllis multifolia L. Bolus was last used in its ‘Red Data Assessment’ of 1996 and was classified as ‘Vulnerable’.In the latest ‘Red List of Southern African Plants’ of 2009, G. multifolia has provisionally been subsumed under Gethyllis campanulata, but G. multifolia has not formerly been placed into synonomy with G. campanulata.According to Du Plessis and Delpierre G. multifolia is a deciduous, winter-growing, summer-blooming and bulbous geophyte, 120 mm in height and indigenous to South Africa.The flowers measure 60–80 mm in diameter, colored white to cream with 12 anthers and the flowering period is from November to January.The highly fragrant, tasty and edible fruit berries are produced from mid-March to mid-April at the onset of the new growing season.An antioxidant capacity-and -content study of plant parts of G. multifolia revealed higher polyphenol content and antioxidant activity in its root system when compared to the leaves and bulbs.This study further revealed the highest total polyphenols and antioxidant activity in the fruits and flowers, which is comparable to blueberries, strawberries and raisins.According to Babajide et al., the brine shrimp lethality assay, which indicates toxicology levels of bioactive compounds, revealed that methanolic extracts of G. 
multifolia whole plants indicated a high potential for antimicrobial and antiviral activities."Plants possess different antioxidant properties, depending on their antioxidant molecule content, which is strongly affected by the plant's growing conditions.Environmental stress factors such as shade, abnormal salt levels, high temperature and drought, may result in the generation of reactive oxygen species in plants which in turn may cause oxidative stress when in excess.In plant cells, oxidative stress reactions are associated with the production of toxic free radicals.Plants have evolved a wide range of enzymatic and non-enzymatic mechanisms to scavenge ROS and protect their cells against oxygen toxicity.According to Di Carlo et al. the relationship between plant stress acclimation and human health comprises a broad array of metabolites some of which possess “desirable” pharmacological properties."Many examples can be found in nature as in the case of hyperforin, which is the active ingredient in St. John's wort and is known for alleviating mild depression. "When St. John's wort plants are subjected to heat stress it substantially increases hyperforin concentration in the shoots.According to Hilton-Taylor G. multifolia is threatened in its natural habitat, which stresses the need for future cultivation of this species by pharmaceutical companies, traditional healers and farmers.Should certain environmental stresses increase the antioxidant content or activity of this species, it can be incorporated in future cultivation practices to induce increased antioxidant levels in essential plant parts during production.To date no published data are available on how important biological properties, such as antioxidant activity of this Gethyllis species, are affected by environmental stress factors.Thus, the aim of this study was to investigate the changes in the antioxidative capacity and levels in the leaves, bulbs and roots of G. multifolia during controlled photo- and drought environmental stresses over one growth season.Furthermore, phytochemical screening was undertaken to isolate and characterize some natural compounds from the dried leaves, bulbs and roots of G. multifolia in an attempt to understand the chemistry behind the claimed medicinal values of this “Kukumakranka”.G. multifolia plants were authenticated and obtained with permission from the Karoo National Biodiversity Garden towards the end of their winter growth phase.G. 
multifolia is threatened in its natural habitat, and for conservation purposes the exact location of this species is omitted.Mature dormant bulbs were selected for this investigation.The dormant bulbs were transferred into 15 cm nursery pots in sandy soil obtained from the natural habitat.The plant samples were grown for 12 months, which included one dormant phase and one growth phase at the nursery of the Department of Horticultural Sciences, Cape Peninsula University of Technology, Cape Town.Plant samples which represented the control were grown under full sunlight and irrigated by the ambient rainfall of the Western Cape.The mean photosynthetic photon flux density on cloudless days at 1200 h was 4450 ± 155 μM m−2 s −1.Temperatures around the plant samples varied from 8–24 °C and the relative humidity from 36–100%.Plants which represented the drought stressed samples were grown under full sunlight and covered with a 6 mm clear glass sheet, placed 300 mm above the plants.The PPFD, temperature and relative humidity environmental conditions were similar to those of the control.The drought stressed plants were irrigated at a rate of 40 mm/plant once a month with de-ionized water.Plants representing the photo-stressed samples were grown under a shade structure covered with 80% neutral black shade cloth, which has a neutral effect on light quality.During the experimental period, the mean PPFD on cloudless days at 1200 h was 570 ± 40 μM m− 2 s− 1 which was approximately 20% of full sunlight.The temperature around the photo-stressed plant samples was ~ 1–2 °C lower than that of the control, and the relative humidity 2–4% higher than that of the control.The readings of all the environmental conditions under all treatments were taken daily at the following time intervals: 0900 h, 1200 h and 1500 h.All plants were separated into leaves, bulbs and roots and dried in a fan-drying laboratory oven at 50 °C for 48 h.The bulbs had an extended drying period of five days.Individual plant parts were ground to a powder in a portable spice grinder using a 0.5 mm mesh and stored in air-tight stoppered glassware prior to analysis.Crude extracts of the leaves, bulbs and roots were prepared by stirring the various dried, powdered plant materials in 80% ethanol and thereafter it was centrifuged at 4000 rpm for 5 min.The supernatants were used for all analyses.The same sample preparation technique was followed for all assays and all analyses were performed in triplicate.The FRAP assay was performed using the method of Benzie and Strain.In a 96-well clear microplate, 10 μL of the crude sample extract was mixed with 300 μL FRAP reagent and incubated for 30 min at 37 °C in the plate reader.Absorbance was measured at 593 nm.l-Ascorbic acid was used as a standard with concentrations varying between 0 and 1 000 μM.The results were expressed as μM ascorbic acid equivalents per g dry weight.The ABTS assay was performed following the method of Re et al.The stock solutions included 7 mM ABTS and 140 mM potassium-peroxodisulfate solutions.The working solution was then prepared by adding 88 μL K2S2O8 solution to 5 mL ABTS solution.The two solutions were mixed well and allowed to react for 24 h at room temperature in the dark.Trolox was used as the standard with concentrations ranging between 0 and 500 μM.The ABTS mix solution was diluted with ethanol to read the start-up absorbance of approximately 2.0.Crude sample extracts were allowed to react with 300 μL ABTS in the dark at room temperature for 30 min.before the absorbance 
was read at 734 nm at 25 °C in a plate reader.The results were expressed as μM Trolox equivalents per g dry weight.The H-ORACFL values were determined according to the methods of Prior et al. and Wu et al.A stock standard solution of Trolox was diluted in phosphate buffer to provide calibration standards ranging from 5 to 25 μM.The Fluoroskan ascent plate reader equipped with an incubator was set at 37 °C.Fluorescence filters with an excitation wavelength of 485 nm and emission wavelength of 538 nm were used.A fluorescein stock solution was prepared in a phosphate buffer and further diluted to provide a final concentration of 14 μM per well.The peroxyl generator, 2,2′-azobis dihydrochloride, was added with a multichannel pipette to give a final AAPH concentration of 4.8 mM in each well.The fluorescence from each well, containing 12 μL diluted hydrophilic extract, was read every 5 min.for 2 h.The final ORACFL values were calculated using the regression equation y = ax2 + bx + c between the Trolox concentration and the area under the curve.The results were expressed as μM Trolox equivalents per g dry weight.The statistical significance between antioxidant content and activity values of the various crude plant extracts were determined by an analysis of variance where P < 0.05 was considered to be statistically significant.The computer software employed for the statistical analysis was Medcalc version 9.4.2.0.The computer program, Microsoft Office Excel 2006, version 12 was employed to determine the correlation between antioxidant contents and activity.All laboratory grade solvents were distilled prior to use and all spectroscopic grade solvents were used as such.Cleaning up of crude isolates was performed using Sephadex LH-20.Preparative thin layer chromatography was performed using Merck Silica gel 60 PF254 on glass plates with a thickness of 0.5 mm.Analytical TLC was conducted on normal-phase Merck Silica gel 60 PF254 pre-coated aluminium plates.Separated compounds on TLC were visualized under ultra-violet light at and spraying of the plates where required, was carried out using 2% vanillin in H2SO4, followed by heating at 120 °C for 3–4 min.All extracts were concentrated on a rotary evaporator at 45 °C.Column chromatography was performed using Merck Silica gel 60 H."Melting points were determined on a Fisher-John's melting point apparatus.Ultra-violet spectra of some of the isolated compounds were obtained with a Unicam UV4-100 UV/Vis Recording Spectrophotometer.Infra-red spectra were recorded on a Perkin Elmer Universal ATR Spectrum 100 series FT-IR spectrometer.Mass spectrometry was performed on a Waters Synapt G2API Q-TOF Ultima LC–MS-ESI instrument in the positive mode, while nuclear magnetic resonance spectra were recorded on a Varian Inova 600 MHz NMR spectrometer in MeOH-d4, using the solvent signals as internal reference.The dried, powdered parts of the plants, namely the leaves, bulbs and roots were separately extracted and the dry weights for excised plant parts were as follows: leaves, bulbs and roots.Extraction was carried out sequentially using hexane, dichloromethane, ethyl acetate and methanol.Extraction was done under the ambient light conditions of the laboratory facility.By means of occasional stirring using a mechanical stirrer, each portion of plant material was macerated twice in 250 ml of each solvent at room temperature for 24 h, and the extracts were evaporated on a rotary evaporator at 45 °C.Each extract was screened for the presence of tannins, flavonoids, 
phenolics, saponins, glycosides, alkaloids, steroids, essential oils and terpenes according to the method of Wagner and Bladt.Since the chromatographic profiles of the leaves and bulbs were similar, while higher recoveries were indicated in the roots, a more in-depth investigation of the natural product content was thus conducted on the roots of G. multifolia.Furthermore, the chromatographic profiles for the abovementioned plant parts were also found to be similar for the control and photo- and drought stress treatments.The height of the column was 750 mm with an internal column diameter of 25 mm.The flow rate of the eluent was measured at 2 ml/min and the volumes collected were 10 ml in Pyrex test tubes.The ethyl acetate extract of the root of G. multifolia was adsorbed on silica gel 60 and chromatographed using the solvent mixtures: 100 ml of toluene and then 50 ml each of the following mixtures, toluene–EtOAc,, and.This was followed by EtOAc and 50 ml each of the following mixtures, EtOAc–MeOH,,.Finally the column was washed with 70 ml of MeOH.Fractions collected were analyzed by TLC using toluene: EtOAc:MeOH.Fractions showing the same TLC profile were pooled and concentrated in vacuo.Three distinct major fractions were selected and coded A–C. Fraction B was rechromatographed using the same solvent mixtures, column diameter and flow rate as described above.Out of the four fractions collected, fraction B was chromatographed on Sephadex LH 20 using toluene–MeOH, to yield a brownish-yellow powder, which upon further purification by preparative TLC using toluene:EtOAc:MeOH, afforded compound 1.Fraction C was rechromatographed using the same solvent mixtures as shown above.Successive chromatography, followed by preparative TLC of the fraction obtained using toluene–MeOH, afforded compound 2 as an off-white powder.Although this isolate appeared as a single spot on TLC, MS and NMR, it was subsequently revealed to be a mixture of two compounds.The methanol extract of the root of G. multifolia was chromatographed using the same solvent mixtures as for the EtOAc extract, except that the final volume of MeOH was 80 ml.Sixty-five fractions were analyzed by TLC and those showing the same profile were pooled.Five fractions coded D − H, were obtained.One of the fractions F was rechromatographed using the same solvent mixtures.Repeated preparative TLC afforded compound 3 as a dull, yellow powder.Compound 3 becomes compound 4 in the ‘Results and discussion’ section because the isolated compound 2 is a mixture of two different compounds, and was discussed as such.In general, the antioxidant-capacity and -content fluctuated in the leaves and roots of G. multifolia compared to low and stable activity in the bulb, when plant parts were subjected to both environmental stresses.When compared to the control, it was evident in this investigation that the total polyphenol and flavonol/flavone content increased in the roots of G. 
multifolia when the plants were subjected to the drought stress.In comparison to a previous investigation of natural populations of the same species which were not subjected to any form of environmental stress factors, the root system in this investigation under drought stress indicated the highest polyphenol and flavonol content for both studies.Though not significant, higher flavonol/flavone levels were also recorded in the leaves when subjected to the drought stress treatment.Similarly, it was reported that in Arbutus unedo plants, severe drought stress resulted in significantly higher ascorbate levels in plant parts.Furthermore, a study by Herbinger et al. reported that under drought stress, α-tocopherol and glutathione concentrations increased in certain wheat cultivars.In contrast to the above reports, a study by Kirakosyan et al. reported that drought stress effected a reduction in the flavonoid levels of Crataegus laevigata and Crataegus monogyna plants.The increase in the total polyphenols and flavonol/ flavone content of G. multifolia when subjected to drought stress, could suggest that this species produces higher levels of total polyphenol antioxidants to possibly serve as a protective measure or adaptive strategy to cope with this specific environmental stress.Furthermore, G. multifolia significantly decreased its flavanone content in all plant parts when the plants were subjected to the photo-stress treatment.Evidence of similar plant responses to light was mentioned in a study by Tattini et al. where high light intensities effected an increase in the flavonoid concentrations in the leaves of Ligustrum vulgare.Furthermore, Heuberger et al. reported that under chronic UV-exposure conditions the ascorbate levels in certain plant species increased, which reflected the stress acclimation process.In support of the above observations, Dixon and Paiva reported that plants which are subjected to full-sun conditions have been shown to contain higher levels of polyphenolic compounds than shade plants.These findings are line with a previous comparative investigation of the natural habitat of two species of Gethyllis, which suggested that G. villosa adapts better than G. multifolia when exposed to seasonal drought periods.The FRAP was found to be significantly higher in the roots under the photo-stress and drought stress treatments for G. multifolia.This response was also evident when this investigation was compared to a previous study on natural populations of Gethyllis multifolia and Gethyllis villosa which were not subjected to environmental stress factors.Similar higher reduction ability responses were reported when sweet potato leaves were subjected to drought stress.The ORAC values increased significantly in the underground organs of G. multifolia after exposure to the shade- and drought stress treatments, but effected a significant decrease in the aerial parts.Conversely, increased ORAC values were recorded in the leaves of Sushu 18 and Simon 1 when these two sweet potato cultivars were subjected to the drought stress treatment.This investigation, however, revealed that under the drought stress treatment the FRAP, ORAC and ABTS radical cation scavenging ability is significantly increased in the root system of G. 
multifolia, which could form part of the acclimation process or adaptive strategy to this environmental stress factor These findings are further supported by a previous comparative study on natural populations where no increases in the ABTS and ORAC levels in the root system were evident.According to Fennell and Van Staden, the majority of compounds found in the Amaryllidaceae family are usually alkaloids."Further studies reported that alkaloids were not detected in either the dichloromethane or 90% methanolic extracts of G. ciliaris, using Dragendorff's reagent; only tannins, flavonoids, phenolics, saponins, anthraquinones, glycosides and essential oils tested positively in all the extracts.A study conducted by Babajide et al. also revealed the presence of the same phytochemical compounds, and the absence of alkaloids from the methanol and water extracts of G. multifolia and G. villosa whole plants.The latter study also revealed the absence of saponins in the water extracts for both species, an observation that was further confirmed in all the plant parts in this study.Earlier reports by Viladomat et al. had revealed that the bulbs of Gethyllis species also contained flavonols, organic acids, carbohydrates and soluble nitrogen compounds.The preliminary phytochemical screening results in this investigation indicated the presence of tannins, flavonoids, phenolics, saponins, glycosides as well as essential oils, while the tests were negative for alkaloids in all the plant parts tested.The following compounds were isolated from the roots by means of column chromatography:A brownish-yellow powder with m.p. 203–205 °C.UV λmax nm: 261, 365; 262, 366; 280, 443; 268, 412.IR pronounced peaks: 3400, 2931, 1656, 1615 and 1556.ESI-MS: m/z 241.08; 1H NMR δ 7.73, 7.49, 7.40, 7.35, 6.51, 6.39, 5.48, 3.02, 2.75; 13C NMR δ 193.0, 166.8, 165.4, 140.7, 129,9, 129.7, 129.5, 127.3, 115.1, 111.9, 103.9, 81.0, 49.4.The molecular formula of compound 1 was determined as C15H12O3 by ESI-MS, on the basis of the pseudomolecular ion peak at m/z 241.08 +.The flavanone characteristics were evident from the presence of an ABX spin system due to the protons H-3e, H-3a and H-2, along with the typical coupling constants.Further evidence for the structural assignment came from both 1-D and 2-D NMR measurements.H-5 appeared as the most deshielded proton at δ 7.73 due to its occupancy of a β position in an α-β unsaturated carbonyl system.This observation, along with the complete absence of the familiar chelated OH-5, characteristic of most naturally occurring flavonoids, was indicative of the fact that this was one of those unusual flavonoids.This flavonoid has recently been reported as isolated from Zuccagnia punctata, Spatholobus suberectus and Dalbergia cochinchinensis, among many other sources.Its synthesis has also been reported for the purpose of crystallographic and conformational studies.Compound 2 and 3,An off-white powder with m.p. 225–227 °C.IR pronounced peaks: 3385, 2828, and 1690.2; ESI-MS:m/z 243.10; 1H NMR δ 12.70, 7.67, 7.23, 7.14, 6.32, 3.20, 2.98; 13C NMR δ 205.2, 133.7, 129.4, 127.1, 109.1, 40.5, 31.6.3; ESI-MS:m/z 257.08; 1H NMR δ 12.05, 7.47, 7.39, 7.34, 6.24, 5.91, 5.41, 3.05, 2.74; 13C NMR δ 197.3, 129.7, 129.6, 127.3, 103.7, 97.2, 80.4, 44.2.A dull, yellow powder with m.p. 
215–217 °C.IR pronounced peaks: 3500–3400, 1652, 1613, 1568, 1556.ESI-MS:m/z 341.10 and m/z 363.08; 1H NMR δ 7.45, 7.41, 7.36, 6.81, 6.60, 5.56, 3.10, 2.77, 2.31, 2.27, 13C NMR δ 191.1, 170.9 and 169.7, 129.8, 127.4, 111.7, 110.3, 81.0, 45.9.The molecular formula of compound 3 was determined as C15H12O4 by ESI-MS, on the basis of the pseudomolecular ion peak at m/z 257.08.NMR spectroscopy revealed features of a flavanone structure, as well as a chelated OH.Comparison with literature data, suggested the structure to be that of a well-known flavanone, pinocembrin.Pinocembrin has been shown to be a flavonoid constituent in several plants, such as litchi and has reportedly displayed antibacterial and antifungal activity in vitro.Furthermore, Ahn et al. reported that pinocembrin exhibited very low antioxidant activity but possessed a considerable degree of antiangiogenic activities.Although the phenolic group is considered to be the most fundamental structural feature essential for antioxidant activity, the multiple presence of this functionality, as well as their specific locations on the flavonoid skeleton relative to other functional groups, has been shown to be critical for the enhancement of antioxidant activity in selected flavonoids.Thus, the presence of one or more of the following features in a flavonoid is known to contribute to enhanced antioxidant activity: the pyrogallol group; the catechol group; the 2,3-double bond in conjugation with a 4-oxo group and a 3-hydroxyl group; and additional resonance-effective substituents.Compound 4 had a molecular formula of C19H16O6 as determined by ESI-MS, .The NMR spectroscopic features of 4 were very similar to those of 3, except that 4 carried two acetyl groups at O positions 5 and 7, and of course showed no evidence for a chelated OH.The presence of the acetyl groups was also evident from the significant downfield shift observed for H-6 and H-8, as well as the appearance of two carbonyl 13C signals at δ 170.9 and 169.7, in addition to δ 191.1 for C-4.Deacetylation of 4 with sodium methoxide in methanol yielded a mixture of 3 and the partially deacetylated product.The occurrence of the latter compound as a by-product may be attributed to some degree of hindrance from the C-4 carbonyl group to the approach of the methoxide anion.Furthermore, acetylation of pinocembrin gave a product whose spectral data was identical to that of 4.The occurrence of pinocembrin as its diacetylated form in nature has, to our knowledge, not been reported previously.However, based on the existence of a broad spectrum of acetyl transferases in plants, it is possible that similar enzymes may be responsible for the production of this diacetate.On the other hand, that this compound may be an artifact cannot be ruled out, especially given that ethyl acetate was a prominent solvent constitutent of the eluent during column chromatography.The acetyl group has been shown to migrate between hydroxyl groups in the presence of a suitable catalyst, such as certain brands of silica gel.Compound 2 displayed a molecular formula of C15H14O3 as determined by ESI-MS, pseudomolecular ion peak at m/z 243.10.It also displayed the characteristic AB spin system usually observed for dihydrochalcones.This dihydrochalcone, which was also shown to possess a chelated OH group was assigned the proposed structure on the basis of 1-D and 2-D NMR spectral analysis.This compound has previously been reported both as a synthetic as well as a natural product.It can be concluded that the antioxidant 
activities in the mentioned plant parts under drought stress may be a protective and acclimation mechanism against drought stress, which is found to be very significant in the root system of this species of Gethyllis.The responses of plants in this investigation have also given a good indication as to how the different plant organs respond to different environmental stresses by increasing and decreasing their secondary metabolites or changing their organ morphology and physiological processes as possible protective mechanisms.Since the flavonoids reported in this paper lack these key features, while they will contribute to the total polyphenolic content, they may not be considered to be the key contributing compounds towards the antioxidant activities which have been reported for the roots in this investigation.Results from this study could have a significant impact on how traditional healers, pharmaceutical companies and farmers choose conducive environmental conditions for the cultivation of this Gethyllis species in order to ensure enhanced polyphenolic content and antioxidant activities in the relevant plant “parts” that are traditionally used in medicinal practices.Future research is needed on the effect of drought on the antioxidant activities of the flowers and fruit, and how irrigation strategies can be effectively manipulated to increase biomass yield of this species but still simulate the effect of drought.Further research is also warranted on the flowers and fruit of G. multifolia to isolate more natural compounds and to further elucidate other biological properties of this endemic plant species, but also to confirm the antioxidant activity and other medicinal benefits in vivo.The total polyphenol content of the various crude extracts was determined by the Folin Ciocalteu method.Using a 96-well clear microplate, 25 μL of sample was mixed with 125 μL Folin Ciocalteu reagent, diluted 1:10 with distilled water.After 5 min., 100 μL aqueous sodium carbonate was added to each well.The plates were incubated for 2 h at room temperature before the absorbance was read at 765 nm using a Multiskan plate reader.The standard curve was prepared using 0, 20, 50, 100, 250 and 500 mg/L gallic acid in 10% EtOH and the results were expressed as mg gallic acid equivalents per g dry weight.The flavonol content was determined using quercetin in 95% ethanol as standard.This assay measures both flavonols and flavones since both groups absorb ultra-violet light maximally around 360 nm.In the sample wells, 12.5 μL of the crude sample extracts was mixed with 12.5 μL 0.1% HCl in 95% ethanol, 225 μL 2% HCl and incubated for 30 min.at room temperature.The absorbance was read at 360 nm, at a temperature of 25 °C.The results were expressed as mg quercetin equivalents per g dry weight.The flavanone content was determined using an adapted version of the method as described by Kosalek et al.This method was adapted with minor modifications such as reducing assay volumes for the 96-well plates.Briefly, 100 μL of sample was mixed with 200 μL .After incubation at 50 °C for 50 min., 700 μL of 10% potassium hydroxide in 70% MeOH was added.The samples were centrifuged at 4000 rpm for 5 min.and 30 μL of the resulting supernatant mixed with 270 μL MeOH in a 96-well clear plate and the absorbance read at 495 nm.A linear standard curve using 0, 0.2, 0.5, 1.0, 1.5, 2.0 mg/mL naringenin in methanol was included.The results were expressed as mg naringenin equivalents per g dry weight.
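The assays described above all report results as equivalents of a reference standard (gallic acid, quercetin, naringenin, Trolox or ascorbic acid) obtained from a standard curve. The conversion is the same linear back-calculation in each case. The following minimal sketch, in Python, illustrates it for the Folin-Ciocalteu data; the absorbance values, extract volume and dry mass are hypothetical placeholders and not values from this study.

```python
# Minimal sketch: converting blank-corrected microplate absorbances at 765 nm
# to mg gallic acid equivalents (GAE) per g dry weight via a linear standard
# curve, as in the Folin-Ciocalteu assay described above.
import numpy as np

# Gallic acid standards (mg/L) and their absorbances (hypothetical readings)
std_conc = np.array([0, 20, 50, 100, 250, 500], dtype=float)
std_abs = np.array([0.00, 0.04, 0.10, 0.20, 0.50, 1.00])

# Least-squares linear fit: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def gae_per_g_dw(sample_abs, extract_volume_ml, dry_mass_g, dilution=1.0):
    """Convert a sample absorbance to mg gallic acid equivalents per g dry weight."""
    conc_mg_per_l = (sample_abs - intercept) / slope * dilution  # mg GAE per L of extract
    total_mg = conc_mg_per_l * extract_volume_ml / 1000.0        # mg GAE in the whole extract
    return total_mg / dry_mass_g

# Example with hypothetical numbers: a root extract of 0.5 g dry material in 10 mL solvent
print(round(gae_per_g_dw(sample_abs=0.35, extract_volume_ml=10.0, dry_mass_g=0.5), 2))
```

The flavonol (quercetin), flavanone (naringenin), FRAP (ascorbic acid) and ABTS/ORAC (Trolox) results follow the same pattern with their respective standards; the ORAC calculation additionally uses the area under the fluorescence decay curve, fitted against the Trolox standards, as described earlier.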
Gethyllis multifolia is a South African bulbous geophyte with medicinal properties, on which very limited research has been conducted. This research investigated the effect of drought and shade, which are experienced in the natural habitat, on the antioxidant properties, as well as the isolation of natural compounds from certain plant parts. The total polyphenol, flavonol/flavone and flavanone contents, oxygen radical absorbance capacity (ORAC), ferric reducing antioxidant power (FRAP) and radical cation scavenging ability (ABTS) were measured in the leaves, bulbs and roots (dry weight) of G. multifolia under photo- and drought stress. A significantly higher total polyphenol content was observed in the roots under the photo- and drought stresses when compared to the control. When all the plant parts were compared, the highest total polyphenol content was observed in the drought-stressed roots of G. multifolia. An increased antioxidant capacity was observed in the root system of G. multifolia, where the FRAP, ORAC and ABTS were found to be significantly higher during drought stress when compared to the control. Phytochemical investigation of the leaves, bulbs and roots of G. multifolia revealed the presence of tannins, flavonoids, phenolics, saponins, glycosides (phenolic and terpenoid) as well as essential oils, while the test for alkaloids was negative. Further in-depth studies on the roots of G. multifolia led to the isolation of three known flavonoids, of which one was also isolated as its endogenously acetylated derivative. Their structures were elucidated by chemical and spectroscopic methods as 2,3-dihydro-7-hydroxy-2-phenyl-4H-1-benzopyran-4-one (1), 1-[2,4-dihydroxyphenyl]-3-phenylpropan-1-one (2), 2,3-dihydro-5,7-dihydroxy-2-phenyl-4H-1-benzopyran-4-one or pinocembrin (3) and 5,7-diacetoxy-2,3-dihydro-2-phenyl-4H-1-benzopyran-4-one (4). This investigation indicated how environmental conditions can be manipulated to enhance the antioxidant properties of certain plant parts for future cultivation of this species, and the isolation of the four natural compounds elucidated its medicinal potential and created a platform for future in vivo research.
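As a quick arithmetic cross-check, not part of the original work, the molecular formulae assigned to compounds 1–4 from their pseudomolecular ion peaks can be verified: the monoisotopic mass of each formula plus a proton should reproduce the observed [M+H]+ values reported above, and plus sodium the second ion reported for compound 4 if it is read as an [M+Na]+ adduct (an interpretation, not a statement from the source).

```python
# Cross-check of the reported pseudomolecular ions from the assigned formulae.
# Monoisotopic masses (u) of C, H, O and the proton/sodium adduct masses are standard values.
MONO = {"C": 12.0, "H": 1.007825, "O": 15.994915}
PROTON, SODIUM = 1.007276, 22.989221

def mz(counts, adduct=PROTON):
    """Monoisotopic m/z for a neutral formula given as element counts, plus an adduct mass."""
    return sum(MONO[el] * n for el, n in counts.items()) + adduct

compounds = {
    "1  C15H12O3": {"C": 15, "H": 12, "O": 3},  # observed m/z 241.08
    "2  C15H14O3": {"C": 15, "H": 14, "O": 3},  # observed m/z 243.10
    "3  C15H12O4": {"C": 15, "H": 12, "O": 4},  # observed m/z 257.08
    "4  C19H16O6": {"C": 19, "H": 16, "O": 6},  # observed m/z 341.10
}
for name, counts in compounds.items():
    print(name, "[M+H]+ =", round(mz(counts), 3))
print("4  C19H16O6 [M+Na]+ =", round(mz(compounds["4  C19H16O6"], SODIUM), 3))  # cf. m/z 363.08
```

The computed values (241.086, 243.102, 257.081, 341.102 and 363.084) agree with the reported peaks to the two decimal places given.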
173
Hemorrhoid is associated with increased risk of peripheral artery occlusive disease: A nationwide cohort study
Hemorrhoid, which varies clinically from asymptomatic to manifestations of bleeding, prolapse, and thrombosis, is becoming a huge medical burden worldwide.1–4,Several theories have been proposed for the development of the hemorrhoid; among them, inflammation is one of the pathogenic processes that has gained attention recently.1–6,Peripheral artery occlusive disease is one of the leading causes of mortality worldwide.7–10,Patients with PAOD are usually asymptomatic and are easily overlooked.7–12,The risk factors of developing PAOD have been well established in previous investigations.13–15,Matrix metalloproteinases, key players in the pathogenesis of PAOD, have recently been reported to be associated with hemorrhoid development.5,16–18,However, no study has addressed the relationship between hemorrhoid and the risk of incident PAOD.Therefore, this study was designed to evaluate the association between hemorrhoid and the subsequent PAOD risk using a nationwide population-based database.A nationwide population-based retrospective cohort study was performed using the Taiwanese Longitudinal Health Insurance Database 2000.The LHID2000 comprises one million randomly sampled beneficiaries enrolled in the National Health Insurance program, which collected all records on these individuals from 1996 to 2011.The NHI program includes the complete medical information of more than 23.74 million Taiwanese residents, with a coverage rate of over than 99%.19,The NHI program and LHID2000 have been described in detail previously.20,21,The identification numbers of patients have been scrambled to protect the privacy of insured residents before releasing the LHID2000.Diseases diagnoses were identified and coded using the International Classification of Diseases, 9th Revision, Clinical Modification.The Ethics Review Board of China Medical University and Hospital in Taiwan approved this study.Subjects with hemorrhoids newly diagnosed from January 2000 through December 2010 were included in the hemorrhoid cohort.The first date of hemorrhoid diagnosed was defined as the entry date.We excluded patients with a history of PAOD before the entry date or those aged <20 years.The non-hemorrhoid cohort was identified from the LHID2000 during the same period of 2000–2010, with exclusion criteria similar to the hemorrhoid cohort.Patients in the hemorrhoid and non-hemorrhoid cohorts were selected by 1:1 frequency matching by sex, age, index year, and comorbidities of diabetes, hypertension, hyperlipidemia, chronic obstructive pulmonary disease, heart failure, coronary artery disease, stroke, and asthma.The comorbidities diagnosed before the end of the study were included for adjustment.Both these cohorts were followed up from the index date until the date of PAOD diagnosis, withdrawal from the NHI program, or the database ended, whichever came first.Distributions of demographic variables, including sex, age, and comorbidities were compared between the hemorrhoid and the non-hemorrhoid cohorts.The categorical variables were analyzed using the chi-square test, and the continuous variables of the baseline characteristics of these cohorts were analyzed using the Student t-test.To assess the difference of the cumulative incidence of PAOD between the hemorrhoid and non-hemorrhoid cohorts, we applied the Kaplan–Meier analysis and the log-rank test.We computed the incidence density rate of PAOD for each cohort.Cox proportional hazard model was used to assess the risk of PAOD between the hemorrhoid and the non-hemorrhoid cohorts.Sex, 
age, and comorbidities of diabetes, hypertension, hyperlipidemia, COPD, heart failure, CAD, stroke, and asthma were included in the multivariable model for adjustment.We estimated the hazard ratios and 95% confidence intervals using the Cox model.We performed all statistical analyses using SAS 9.4, with P < 0.05 in two-tailed tests considered significant.Eligible study patients included 37,992 patients with hemorrhoids and 37,992 patients without hemorrhoids.No significant differences regarding the distributions of sex, age, and comorbidities between the hemorrhoid and non-hemorrhoid cohorts were found.Males represented the majority of the study cohorts; most people were less than 50-years-old.The mean age of the patients in the hemorrhoid and the non-hemorrhoid cohorts was 47.2 and 47.0 years, respectively.The mean follow-up period was 6.82 and 6.70 years in the hemorrhoid and non-hemorrhoid cohorts, respectively.The plot of the Kaplan–Meier analysis showed that, by the end of the 12-year follow-up period, the cumulative incidence of PAOD was significantly higher for the hemorrhoid cohort than for the non-hemorrhoid cohort.The overall, sex-, age-, and comorbidity-specific incidence density rates and HR of these two cohorts are shown in Table 2.The overall incidence rate of PAOD was significantly higher in the hemorrhoid cohort than in the non-hemorrhoid cohort with an adjusted hazard ratio of 1.25.The risk of PAOD for the hemorrhoid relative to the non-hemorrhoid cohort was significantly higher in both women and men.The incidence of PAOD increased with age in both cohorts, and the age-specific aHR of PAOD for the hemorrhoid relative to the non-hemorrhoid cohort was significantly higher for those aged 50–64 years and ≧65 years.The risk of PAOD for the hemorrhoid relative to non-hemorrhoid cohort was significantly higher for those without comorbidity and with comorbidity.The results of the univariable and multivariable Cox proportional hazards regression models for analyzing the risk of variables contributing to PAOD are shown in Table 3.The aHR of PAOD increased 1.04-fold with age.The risk of PAOD was greater in patients with comorbidities, namely diabetes, hypertension, hyperlipidemia, and CAD.To the best of our knowledge, this study is the first to identify the association between hemorrhoid and risk of incident PAOD.After adjustment for age, gender, and comorbidities, patients with hemorrhoids had a significantly increased risk of developing subsequent PAOD.The strength of our investigation is that it is based upon a nationwide population dataset with an adequate number of participants who were followed-up for a very long time to enable significant analysis.19,Therefore, the association between hemorrhoid and the subsequent PAOD risk was highly convincing.In this study, men represented the majority of the hemorrhoid patients, and the mean age of the patients with hemorrhoids was 47.2 years.Our findings are comparable with those of previous investigations,1–4 further verifying the reliability of National Health Insurance Research Database hemorrhoid cohort data.In this study, we found that hemorrhoid patients had a 25% increased risk of subsequent PAOD development after adjustment for age, gender, and other medical comorbidities.Additionally, the risk of developing PAOD for the hemorrhoid cohort relative to the non-hemorrhoid cohort was significantly higher in the subgroups of older patients and those with no comorbidity, further implying that the association between hemorrhoid and risk 
of incident PAOD might be unrelated to underlying comorbidities. More studies are mandatory to substantiate our findings. A subgroup analysis was conducted to evaluate the impact of hemorrhoid and each respective medical comorbidity on the development of PAOD. Our finding is compatible with the current knowledge that the risk of developing PAOD is higher among patients with diabetes, hypertension, and CAD.13–15 Though the impact of hemorrhoid on the development of PAOD was not as high as that of conventional PAOD-associated risk factors,13–15 hemorrhoid still conferred a significantly increased risk of PAOD, a risk that increased steadily during the 12-year follow-up period, after minimizing confounding factors. Further large-scale studies to explore the association between hemorrhoid and subsequent PAOD risk are worthwhile. Several possible factors may explain the higher PAOD risk among patients with hemorrhoids.1–4,22–24 First, the role of inflammation, which has been well established as a major trigger of acute atherosclerotic events,25–27 in the development of hemorrhoid has recently gained increasing attention.5,6 Second, patients with hemorrhoids tend to lead a sedentary lifestyle and be obese, factors which are strongly associated with the development of PAOD.1–4,22–24 Further investigations are warranted to verify the role of hemorrhoid in PAOD and to explore the underlying mechanism. This study has several limitations. First, the diagnoses of diseases were identified and coded using the ICD-9-CM, and the severity and classification of hemorrhoid, PAOD, and other medical comorbidities could not be obtained via the LHID2000. Second, we could not retrieve detailed information regarding family history of PAOD, smoking, obesity, and physical activity from the LHID2000. Finally, this study is a retrospective cohort study, with certain inherent methodological limitations. In conclusion, a significantly increased PAOD risk in patients with hemorrhoids was found in this nationwide cohort study. Further studies are required to confirm the clinical significance of our findings and to explore the underlying mechanism.
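For readers who wish to reproduce the general analytical workflow, the sketch below approximates the Kaplan–Meier comparison, log-rank test and multivariable Cox proportional hazards model described in the Methods using the Python lifelines package; the original analysis was performed in SAS 9.4, and the dataframe layout and column names here are hypothetical.

```python
# Illustrative re-implementation (not the authors' SAS code) of the survival analysis
# described above. Assumes a patient-level dataframe with hypothetical numeric columns:
# 'time' (follow-up in years), 'paod' (1 = incident PAOD), 'hemorrhoid' (cohort indicator),
# and the matched covariates coded as 0/1 or numeric values.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

def analyse(df: pd.DataFrame) -> CoxPHFitter:
    exposed, unexposed = df[df.hemorrhoid == 1], df[df.hemorrhoid == 0]

    # Kaplan-Meier estimates of cumulative incidence and the log-rank test
    kmf_h, kmf_n = KaplanMeierFitter(), KaplanMeierFitter()
    kmf_h.fit(exposed.time, exposed.paod, label="hemorrhoid cohort")
    kmf_n.fit(unexposed.time, unexposed.paod, label="non-hemorrhoid cohort")
    lr = logrank_test(exposed.time, unexposed.time,
                      event_observed_A=exposed.paod, event_observed_B=unexposed.paod)
    print("log-rank p =", lr.p_value)

    # Multivariable Cox model: adjusted HR for hemorrhoid given the listed covariates
    covariates = ["hemorrhoid", "age", "sex", "diabetes", "hypertension", "hyperlipidemia",
                  "copd", "heart_failure", "cad", "stroke", "asthma"]
    cph = CoxPHFitter()
    cph.fit(df[covariates + ["time", "paod"]], duration_col="time", event_col="paod")
    cph.print_summary()  # hazard ratios with 95% confidence intervals
    return cph
```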
Background: This study was conducted to evaluate the association between hemorrhoid and risk of incident peripheral artery occlusive disease (PAOD). Methods: Using the Taiwanese Longitudinal Health Insurance Database 2000, we compared the incident PAOD risk between the hemorrhoid and the non-hemorrhoid cohorts. Both of these cohorts were followed up from the index date until the date of PAOD diagnosis, withdrawal from the National Health Insurance program, or the end of 2011. Results: The mean follow-up period was 6.82 (standard deviation [SD], 3.22) and 6.70 (SD, 3.23) years in the hemorrhoid and non-hemorrhoid cohorts, respectively. The plot of the Kaplan–Meier analysis showed that, by the end of the 12-year follow-up period, the cumulative incidence of PAOD was significantly higher for the hemorrhoid cohort than for the non-hemorrhoid cohort (log-rank test: P < 0.001). Conclusions: A significantly increased PAOD risk in patients with hemorrhoids was found in this nationwide cohort study.
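The incidence density rates referred to in the Methods and Results above reduce to a simple person-time calculation. A minimal sketch follows; the numbers and the per-1,000-person-years scaling are illustrative conventions, not figures taken from the study.

```python
# Incidence density rate: new PAOD events divided by accumulated person-years of follow-up,
# conventionally scaled to a convenient denominator such as 1,000 person-years.
def incidence_density(events: int, person_years: float, per: float = 1000.0) -> float:
    return events / person_years * per

# Hypothetical example: 500 incident cases over 250,000 person-years
print(incidence_density(500, 250_000))  # 2.0 events per 1,000 person-years
```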
174
Long-term effect of different management regimes on the survival and population structure of Gladiolus imbricatus in Estonian coastal meadows
Most vegetation in Europe evolved under the constant influence of man.Species-rich grasslands are semi-natural heritage communities that developed under long-term traditional mowing and grazing However, cessation of traditional land use measures, intensified grazing and fertilising have reduced the species richness of plant communities and diminished the provision of grassland-specific ecosystem services.This dramatic decline in semi-natural species-rich grasslands in Europe and the loss of habitat connectivity for the species that rely on these habitats have resulted in the extinction of local species, thereby causing a decline in biodiversity at different trophic levels of semi-natural communities.Grassland abandonment, as a conservational problem, can be addressed as an opportunity for restoration, but this leads to numerous challenges.Numerous authors have outlined that species diversity in heritage communities significantly depends on the management history—that is, the historical context—of the site.This is also referred to as traditional ecological knowledge.Those aiming to conserve and perform restoration management for semi-natural communities must identify methods and economically viable practices that are appropriate for those activities.Although the reintroduction of traditional management regimes is most appropriate for grassland restoration from an ecological perspective, it is not feasible in most cases.Numerous authors have experimented to find contemporary replacements for traditional land use, evaluating their ecological and economical trade-offs.A meta-analysis of various experiments on benefits of grassland management by either grazing or mowing for biodiversity revealed that grazing has a more positive effect than mowing.However, another meta-analysis of meadow mowing regimes indicated that the most effective mowing frequency depends on the productivity of the given site, but in general, less frequent mowing regimes yield better results for biodiversity.The effects of different herbivore species and breed-grazing strategies on grassland biodiversity were thoroughly analysed by Metera et al.The authors conclude that grazing species have different food preferences and suggest that mixed grazing systems may be a way to guarantee diversity and that local conditions should be considered instead of using blanket stocking rates, as suggested by agri-environment schemes.Different restoration experiments compared the re-introduction and replacement of old breeds by allowing sheep, goats, cattle and horses to graze.These efforts yielded the expected results in terms of restoration.In Europe, it is a common practice to replace milk cattle with beef cattle, both equally contributed to semi-natural grassland management and restoration activities."Numerous works on bird and arthropod species have examined different management and restoration activities in wet and coastal meadows.Compared to studies of animals, however, long-term and large-scale demographic studies of plants are scarce.A 32-year study of Ophrys sphegoides indicated that sheep grazing is more favourable for the species than cattle grazing, however, the study indicated that over half of the plants were browsed by livestock.Schrautzer et al., 2011Schrautzer et al., reported that mowing had a positive effect on Dactylorhiza incarnata populations, as there was an exponential increase in the number of flowering plants during the first 10 years of the experiment.Further, Lundberg et al. 
reported that several protected species in Norwegian dry coastal dunes had a positive reaction to mowing only after 10 years of annual efforts.The coastal meadow restoration efforts in the Luitemaa Nature Reserve are not focused on maintaining the G. imbricatus population in particular, but on creating a habitat for rare shorebirds and natterjack toads.Our grassland management experiment studied a very important side effect of the grassland restoration process: the response of a rare grassland species to various types of maintenance.Since we have already observed the positive reaction of G. imbricatus to restoration activities during the first three years of management and its uniform reaction to all management types, here we examine whether the trend continues in the long term.We hypothesise that different management regimes have different effects on the structure of the G. imbricatus population and its survival during long-term restoration efforts.The sword lily Gladiolus imbricatus is a decorative tuberous clonal plant that is native to Central and Eastern Europe, the Mediterranean, Caucasia and West Siberia.G. imbricatus grows up to 30–80 cm tall, and it forms bulb-like tubers that are 1–2 cm in length and tubercules for vegetative reproduction.Vegetative plants start as a single-leaf juveniles and then grow to become two-leaved premature plants.Generative plants have single slender stalks with 2 rosette leaves and 1–3 leaves on the flower stalk and 3–10 purple flowers within a one-sided inflorescence.In Estonia, flowering occurs in July, and relatively large seeds ripen during the first half of August.One plant can produce 200–400 seeds, and a chilling period of several months is needed for the seeds to germinate when temperatures increase in late spring.Prior studies reported that the success of establishment in reintroduction field experiments can range from 60% to 20%.Reaching the generative stage is rare and may be time-consuming.Seeds can survive in a seed bank, but the success of establishment after storage in a seed bank depends on the height of vegetation, availability of light and level of nearby disturbance.Significantly higher seed germination can be achieved by removing litter, bryophytes and the above-ground parts of plants; ensuring the availability of larger gaps in vegetation; and planting in open meadows.No specific literature is available on G. imbricatus dormancy patterns, but in their review of dormancy among perennial herbaceous plants Shefferson et al., 2018 Shefferson et al., found that rhizomatous species have the longest maximum dormancy values, while those with corms or bulbs have the shortest.In our special study, we observed only some hypogeal germination and no dormant bulbs.G. imbricatus is plastic in its responses to the environment, as its productivity and traits depend on the light conditions and vegetation density.The G. imbricatus species is categorised as threatened, red-listed or under protection across Europe and has become locally extinct in numerous regions.In Estonia, G. imbricatus is under legal protection and is considered to be vulnerable, as its population is in decline.G. 
imbricatus occurs in various habitats across Europe, from termophilous oak forests to wet meadows, including floodplains, coastal grasslands and marshes Kostrakiewicz-Gieralt, 2014b.In Estonia, the distribution of the species is restricted to a sub-region of Livland, forming a west–east belt from coastal meadows in the west to flooded meadows near the River Emajõgi in the east.The species is threatened by the picking of flowering plants and changes in land use.During the previous century, the abandonment of seashore and floodplain grasslands resulted in the encroachment of reeds and bushes.Grazing, which is the most traditional measure of grassland restoration, is unadvisable for the species.Thus, reintroduction has been recommended.In the restoration management planning and EU agri-environmental schemes, coastal grasslands are intended to be maintained as low-sward homogeneous permanent grasslands.Many grasslands managers use the opportunity for a short term contract for habitat restoration to begin with, but are then required to switch to the contract system of agri-environmental schemes.Grazing is the prescribed as the main management method by the agri-environmental support scheme, while restoration support schemes additionally allow the use of mowing, mulching and other methods as well.Grazing with different beef cattle is the main method of conservation management in coastal grasslands in Estonia.Sheep do not frequently graze in coastal grasslands due to wet conditions and numerous specific diseases.Moreover, large carnivores have become a threat to sheep as their populations have increased.The research area is located in the Luitemaa Nature Reserve on the southwestern coast of Estonia.Luitemaa hosts approximately 800 ha of an EU priority habitat called the Boreal Baltic coastal meadow, which represents approximately 10% of the current area of this type of habitat in the country.The area is edaphically homogeneous and was established on an area that used to be at the bottom of the sea due to the post-glacial land uplift.The sandy loam is covered by a humus layer 10–20 cm deep.The plant community in the experimental sites belongs to the association of the Deschampsio-caricetum nigrae type, which is typical of coastal areas in Estonia.The prevailing species in the community are Molinia caerulea and Sesleria caerulea, with Festuca rubra occasionally co-dominating in more grazed areas.Historically, the lower parts of the meadow have been used for grazing by a variety of domestic animals, including dairy cows, heifers, sheep and horses.Historically, the lower part was separated from the higher parts of the meadow by different types of fences or was guarded by shepherds.The higher parts of the meadow were used for haymaking and late-summer grazing.This management regime declined in the 1950s, and the entire area was abandoned from the 1970s to the 1990s.All the experimental plots are situated in the upper zone of the meadow, which was mown and grazed in history.However, these upper areas continue to be flooded by brackish seawater, with floods ranging in frequency from once to several times a year.In 2001, an intensive habitat restoration project focusing on rare birds and natterjack toads was commenced in the Boreal Baltic coastal meadow with the support of the EU LIFE Nature programme.The restoration and management activities have continued and been extended by EUagri-environmental support and various other projects until the present.Farmers in this area chose different types of 
livestock for the restoration activities and introduced new adaptive management patterns in different parts of Luitemaa.In 2002, within abandoned grasslands of the upper part of the coastal meadow, we identified distinct areas in which G. imbricatus populations had survived and specimens were abundant enough for analytical experiments.These locations were very scarce and scattered along 7 km of the coastline.We selected four different management regimes: 1) grazing by beef cattle, 2) grazing by sheep, 3) late July mowing and 4) the continuation of abandonment.Each treatment was repeated at two subsites within a larger site given the same treatment.The treatments were partly spatially clustered because of the scarcity of G. imbricatus populations and the low management stability of the land owners at that time.The clustering, however, probably has only some negative effects on the representativeness of the study, as the base environmental conditions are similar throughout the coastline examined in the experiment and, after some years, management intensity became different between subsites given the same treatment because of heterogeneous behaviour of grazing animals.The grass is mown after the 15th of July each year, dried and then collected.Thereafter, the areas are exposed to occasional grazing by beef cattle as part of a larger paddock.Grazing in both treatments was not intensive as legislation has set the limit of average grazing pressure up to 0.8–1.2 livestock units per hectare throughout the vegetation period from early May until late September.Further, a paddock system was utilised to regulate grazing intensity and guarantee food availability for livestock.All grazed and mown areas are fenced permanently year-round except the shoreline, where fences are removed in the winter for safety reasons.The abandoned portion of the meadow, which has remained unused since the 1980s, was used as the control for the current study.The abandoned areas are slowly becoming overgrown with Alnus glutinosa; Salix ssp.; and tall herbs such as Filipendula ulmaria, Molinia caerulea, Carex disticha, Selinum carvifolia, Angelica palustris and Angelica archangelica.During the research period, the control areas and nearby sites remained open.In 2002, two 20 × 20 m subsites were randomly located within each site.Ten 1-square metre plots were randomly placed in these subsites each year.Within these plots, G. 
imbricatus specimens were counted at three ontogenetic stages: 1) juveniles, 2) premature plants and 3) generative plants.The plant coverage, species composition, maximum height and upper height limit of leaves were reported for each plot.Measurement was done during the second half of July, when the plants were fully flowering and mowing had not yet begun.Sampling was performed annually from 2002 to 2004 and then the sample was re-surveyed annually from 2014 to 2016.From 2005 to 2013, no plants were measured, but coastal meadow management was performed in the same manner.In the last years of the experiment, vegetation in the managed plots was lower than in the abandoned plots.However, there are significant variations in treatments between years.The maximum height of vegetation reflects the higher peaks of flowering shoots in the plots.It varies significantly over time, although it is significantly higher in abandoned plots.The average height of vegetation and the upper height of leaves in grassland plants follow the same pattern.Additionally, the plots grazed by sheep and cattle have significantly lower average vegetation than abandoned areas.The species richness of plots also contrasted between treatments.Specifically, the abandoned subsites had the lowest species richness and the mown subsites had the highest species richness.In 2016, an additional study was carried out.Some parameters of G. imbricatus specimens were measured for comparison with the vegetation parameters.Up to 20 specimens were collected per treatment when the number of specimens within the given age group was available.In 2019, juvenile and premature plants were excavated from three 20 × 20 cm plots for each treatment to estimate the potential age of plants according to the morphology of bulbs/tubers.One-leaved specimens were distributed quite evenly in terms of the three developmental stages of tubers: seedlings, second-year plants and older plants.One-leaved G. imbricatus specimens were all regarded as juveniles, even though they were different ages.The proportion of juveniles of each bulb stage was similar for all treatments.Plot-level data were pooled at the subsite level, as sampling plots were located randomly within the subsite each year.The effect of treatments, successive years, ontogenetic stages and their interactions were evaluated based on the log-transformed count of individuals and a general linear mixed model.In the model, subplots were defined as random factors.The post-hoc pair-wise differences among specific management regimes were estimated using the Tukey HSD multiple comparison test.Another analogously structured model was run using logit-transformed frequency data regarding the ontogenetic stages of specimens in various plots within a subsite.The SAS 9.3 MIXED procedure was used for both analyses.The model-based least-square means were back-transformed to real-life estimates, with 95% confidence interval ranges.The mixed model results show very complex dynamics in terms of population size.G. 
imbricatus juveniles increased in number during the starting phase of the restoration project for all treatments, particularly in mown plots.The abundance of juveniles in mown areas remained relatively high in the long term, even though the numbers reported from 2014 to 2016 were slightly lower than the peak observed in the third year of the experiment.For the other treatments, however, after 10 years, the number of juveniles declined to the starting level or below.This was the case for the unmanaged areas in 2015 and the sheep management plots in 2016.Further, the abundance of vegetative and generative shoots did not vary significantly between the treatments during the first two years of restoration.However, in 2004, the number of premature shoots had declined in grazed plots and differed significantly from the estimates in the mown areas.Moreover, the numbers of premature and generative specimens were not statistically different from the numbers in the starting year across treatments, even though they did decrease under both grazing treatments.The unmanaged plots showed the most stable populations of premature and generative specimens.The generative reproduction in the sheep-managed pasture was very poor, as all the shoots were bitten and none had flowers or fruits.In 2016, the height of G. imbricatus vegetative leaves was comparatively measured and found to be significantly higher during all ontogenetic stages in abandoned plots than in plots given other treatments.The leaf height of juveniles corresponds to the upper height of leaves of grasses in the plots.The proportion of browsed G. imbricatus shoots of different ontogenetic stages in grazed plots differed significantly across years.In 2014, the proportion of browsed shoots in all plots grazed by cattle was higher than that in sheep pastures, while in 2015 and 2016, the opposite was true.The average browsing rate of juveniles was 45–50% for cattle and 15–40% for sheep.The average browsing rate for generative shoots was 70–100% in both treatments.The most significant difference was observed in 2016 for browsing of premature shoots, with an average of 5% for sheep and almost 100% for cattle.There was a gradual decline in population frequency within the subsites under all grazing treatments as well as in abandoned plots in certain years.A particular difference in the occurrence frequency dynamics of premature and generative plants was observed between the mowing and sheep grazing treatments in the long term, although analogous trends were observed for juvenile plants.Less evident but similar trends were also observed for cattle grazing plots.The same trend was reported for premature plants, but the differences are not statistically significant.In 2002, when restoration activities began, all areas chosen for the experimental management regimes had a similar number and frequency of G. 
imbricatus specimens at each development stage.The mowing treatment resulted in a tenfold increase in the number of juveniles between 2002 and 2004, which was much more than the number reported in other years.The significant increase in the number of juveniles in mown plots in 2003 and 2004 may indicate that management disturbances had a positive effect on seed recruitment from the seed bank or new seeds from more abundant generative specimens.The increase was probably induced by the increased availability of establishment microsites and improved light conditions for germination, as reported by Kostrakiewicz-Gierałt and Kostrakiewicz-Gieralt.The regeneration intensity in mown plots declined but stabilised after 10 years and was still at a higher level than before the restoration began.This short-term positive reaction was confirmed in an additional observation from nearby site in 2019, where long-term management of combined grazing and mowing led to the formation of tall-sedge areas with only a few G. imbricatus specimens, but the change in management to end-milling cutting in autumn 2018 led to a boost in juveniles and flowering shoots in 2019.An analogous short-term reaction of G. imbricatus to mowing was observed by in the Na Bystrem meadow in Moravia.The dynamics of premature and flowering shoots were different from those of juveniles.In abandoned areas and both types of grazed areas, the number of specimens of both stages began to decline after the second year of the experiment.By 2004, the frequency of flowering shoots decreased from almost 100% to 20% in the plots grazed by sheep.Further, in abandoned areas, the number of flowering individuals declined in a similar way to the grazing treatments, but the plants were more evenly distributed in the abandoned areas than in grazed areas.G. imbricatus is a phenotypically plastic plant, as it can adjust leaf length to rising competition with taller herb-layer vegetation during abandonment and in the early stage of encroachment of its habitats.Indeed, in the last year of the survey, the rosette leaves of generative G. imbricatus specimens were much taller in the long-term abandoned sites than in other treatments, indicating the plants’ phenotypical plasticity to long-term encroachment.The average height of vegetation or the upper limit of leaves of vegetation was significantly lower in grazed plots.However, this was the case in all areas, indicating that the areas reflected annual environmental conditions in similar ways.The average height of the rosette leaves of G. imbricatus corresponds to the pattern of average grass leaf level across management regimes.The results confirm the conclusions of earlier studies regarding the abandonment effect on G. imbricatus: that plants become less abundant but flowering shoots elongate in response to competition for light and pollinators and, consequently, G. imbricatus populations survive meadow abandonment and overgrowth for a rather long time.In contrast to positive trends in short-term counts, the re-survey of sites from 2012 to 2016 revealed that the population of G. imbricatus declined in grazed areas and continued to flourish only in mown plots.The contrast between the long-term and short-term observations supports the objective assessment, which suggested that the goals of ecological restoration can be achieved only after 10 years of treatment.Lundberg et al. 
observed that, over 16 years of mowing treatment, the increase in the target species became significant only after year 10. These results warn against drawing conclusions prematurely regarding the degree of success in the early stages of restoration. Different restoration measures applied to G. imbricatus led to different population performances after 15 years of management. Mowing is the most, and the only truly, favourable management regime for G. imbricatus, as suggested by several other recent studies. However, mowing should be moved to later in the season, just after the ripening of seeds. Neither grazing regime is favourable, as both showed a decline in population, particularly in the premature and flowering stages. Previous research indicates that sheep browse Gladiolus more selectively than cows. From 2014 to 2016, we observed that browsing habits differ annually: while sheep browse almost half of the juveniles from the grass, the cows' browsing can vary yearly from 20 to 40%, although the availability of plants is similar. Two-leaved plants' leaves are more visible in the grass and are browsed significantly more by sheep. Additionally, the flowering shoots are highly distinguishable from the rest of the grass and are eaten selectively by sheep. Yearly differences may indicate the heterogeneity of grazing patterns in different subsites, i.e. the patterns of animal behaviour may also affect the population and its structure. For example, one of the cattle treatment sites, which was selected in 2002 and features the only population in a large area, has become a favoured resting place for the animals. Plants have almost disappeared from there, but a large number have spread to the surrounding 50 ha. Field observations indicated that after late grazing with sheep in 2003 and 2004, a large number of G. imbricatus seedlings appeared in the following years near the paths of the sheep and in their resting places. Similar zoochory was reported by land managers throughout the restoration period. The prescribed grazing pressure was probably too high; Lyons et al. reported a long-term positive response to a grazing pressure of 0.2 LU/ha in upland calcareous grasslands, although this is a habitat with much lower productivity. Low year-round horse grazing pressure was found to be favourable for rare species and communities in dry calcareous grasslands and is recommended for dry sandy grasslands. On the other hand, Tóth et al. suggest that livestock type is more crucial than grazing intensity in short-grass steppes and that sheep may be more selective grazers in cases of low grazing pressure. This could be the case for G. imbricatus. We suggest that management schemes that favour grassland biodiversity and rare plant species must consider the grazing habits of the available grazers, the grazing pressure and the timing. Diverse management patterns of grasslands have been suggested to be more effective for preserving arthropod diversity, pollinators, amphibians, breeding birds and feeding migratory waders. The same is probably true for plants. Small- and large-scale heterogeneity is characteristic of natural ecological conditions, which must be considered when planning optimal and effective restoration treatments. We showed that the short-term part of our experiment leaves an overly positive impression of the effectiveness of the restoration management support scheme, while the long-term continuation of the same management types shows their negative effect on the population restoration of G.
imbricatus. These negative long-term results, however, are attributable to the maintenance support provided by agri-environmental schemes, which takes over after the maximum three-year support for habitat restoration ends. Additionally, restoration contracts are more flexible than agri-environmental schemes when it comes to the choice of management type. Specifically, mowing is not a conventional measure for coastal meadow maintenance under the agri-environmental support schemes, and its application needs special permits. Late-summer mowing, however, is probably more cost-effective on the upper parts of coastal meadows and supports certain rare plant species more efficiently than prescribed grazing. There have been doubts about the ability of the EU Common Agricultural Policy to support achieving the biodiversity targets. We suggest that the problem might start from inadequate restoration and management methods, but the short-term monitoring prescribed for restoration schemes is not able to detect these problems. We suggest that the most favourable management type for the upper parts of coastal meadows is a rotational treatment in which mowing, grazing and no management are applied in different years, which promotes seed ripening and distribution and creates various microsites and disturbances. Finally, as we observed a boosting reaction of G. imbricatus after the grazing-mowing management was changed to late-summer end-milling cutting at the meadow neighbouring the experiment, litter-free ground in spring may be an additional critical factor for G. imbricatus; these late-summer removal treatments should therefore be tested and promoted in the future. Our study reveals that coastal meadow restoration and maintenance management target the general aims of agri-environmental schemes, such as the promotion of low-sward grassland and habitats for shoreline-breeding waders and migratory birds, while other, more specific conservation aims may be neglected. Therefore, restoration and agri-environmental management schemes need more precise multi-target planning, i.e. they must consider all conservation values of the ecosystem. While grazing is the most common restoration and maintenance measure for coastal meadows, we recommend diversification of management types by promoting late-season mowing and reducing grazing intensity. Sheep grazing must be avoided or regulated to low intensity levels. The short-term evaluation results of restoration and management methods can be misleading, and long-term multi-indicator monitoring of management contracts must be implemented. The study was supported by institutional research funding from the Estonian Research Council, the EU Horizon 2020 project EFFECT, the TAA Herbarium and the European Union through the European Regional Development Fund.
Questions: How does the population structure of the threatened plant species Gladiolus imbricatus differ in the early and late stages of habitat restoration under different management regimes? What is the best management regime for the species? Location: Luitemaa Nature Reserve in Southwest Estonia. Methods: A long-term field experiment (2002–2004 and 2014–2016) studied the effect of four management regimes: (1) mowing in late July, (2) grazing by cattle, (3) grazing by sheep and (4) continuous lack of management (i.e. the control). Results: In contrast to the highly positive short-term response to habitat restoration, in the long term, late-season mowing was the most favourable management type for G. imbricatus. The universal increase in juveniles across treatments during the early phase of the restoration project remained high only in mown plots. For the other treatments, after 10 years, the number of juveniles declined to the starting level or lower. Additionally, in contrast to the uniformly high number of premature and generative plants across treatments during the first two years of restoration, the number of premature plants in grazed sites declined. In particular, the frequency of premature and generative plants differed between the mowing and sheep grazing treatments in the long term. The success of generative reproduction was poor in the sheep-managed pasture, as all the shoots were grazed and none had any fruits or flowers. Conclusions: While grazing is the most commonly subsidised restoration measure applied to coastal meadows, we recommend diversification of management types by promoting late-season mowing and reducing grazing intensity. In particular, sheep grazing must be avoided. The results of short-term evaluation of restoration methods can be misleading, and long-term monitoring must be a default evaluation task in biodiversity management support schemes.
175
Cross-cutting challenges to innovation in land tenure documentation
Developing countries experience multiple challenges in securing rights to land through the processes and techniques of formal land administration. Problems of tenure insecurity, limited availability of tenure information, and the recognition of the high costs of implementing comprehensive, large-scale land information systems (LIS) through public agencies or international bodies have triggered dialogues that promote alternative approaches to generating and managing tenure rights on land. To what extent formal registration of land rights is necessary, and for whose benefit, remains a point of continued debate. Fourie, for instance, highlights the need for formal registration systems specifically to provide the poor with both tenure security and access to spatial information through appropriate spatial data infrastructure, provided the latter explicitly accommodates multiple forms of tenure. In particular, the urgency for governments to improve infrastructure and services in regions with high rates of land conversion and urbanization, in institutional settings characterized by complex dynamics between "informal" and "formal," "customary" and "modern," "incremental" and "master-planned" practices of urban land use and change, makes it difficult to establish a large-scale LIS or spatial data infrastructure. In light of these difficulties, Deininger and van der Molen and Lemmen suggest that increasing security of tenure does not require issuing formal individual titles, because simpler and less costly measures, inspired also by the concept of the continuum of land rights, could be better alternatives. Successive and extensive dialogues led to the development of the Land Administration Domain Model (LADM) in 2010, later adopted as an International Organization for Standardization (ISO) standard in 2012, and of its special version, the "Social Tenure Domain Model" (STDM), in 2010. The LADM is a conceptual model that provides an overview of requirements and standard packages for organizing land administration information, including information about people and organizations, tenure rights, spatial units and the documents that support diverse tenure rights. The period between 2008 and 2014 also witnessed the emergence of the concept of pro-poor land recordation, which has presently culminated in the idea of fit-for-purpose (FFP) approaches to land administration. Fit-for-purpose promotes designing land administration systems with the explicit vision of prioritizing the needs of the people and their relationships to land at a given point in time. In an FFP land administration (LA), the underlying spatial framework for large-scale mapping is designed to manage land issues in the local or in-country context, rather than strictly following the bureaucratic and technical standards of conventional registration systems. FFP LA, and specifically LADM- and STDM-based approaches, provide a philosophy and a model for capturing tenure rights, including social tenures, at the local or community level, while using participatory approaches in the tenure recordation process.
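To make the linkage that LADM and STDM describe more tangible, the following is a minimal, hypothetical Python sketch of the core idea of relating parties, spatial units and social tenure relationships. The class and attribute names are simplified illustrations for this discussion only and do not reproduce the normative ISO 19152 class model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Simplified, illustrative sketch only: the actual LADM/STDM (ISO 19152) defines
# richer packages (parties, rights/restrictions/responsibilities, spatial units,
# surveying and representation); names here are assumptions for explanation.

@dataclass
class Party:
    """A person, group or organization holding an interest in land."""
    party_id: str
    name: str
    party_type: str  # e.g. "natural_person", "group", "organization"

@dataclass
class SpatialUnit:
    """A piece of land; its geometry may be a point, a line or a rough polygon."""
    unit_id: str
    geometry_wkt: Optional[str] = None  # accuracy can sit anywhere on a continuum
    accuracy_note: str = "general boundary, handheld GNSS"

@dataclass
class TenureRelationship:
    """Links a party to a spatial unit through a (possibly non-statutory) tenure type."""
    party: Party
    spatial_unit: SpatialUnit
    tenure_type: str  # e.g. "customary", "occupancy", "informal lease"
    evidence: List[str] = field(default_factory=list)  # photos, witness statements

# Example record: a customary interest captured with a rough boundary and a photo
holder = Party("P-001", "A. Example", "natural_person")
plot = SpatialUnit("SU-001", geometry_wkt="POLYGON((0 0, 0 1, 1 1, 0 0))")
record = TenureRelationship(holder, plot, "customary", ["evidence_photo_001.jpg"])
```

The point of the sketch is that the tenure relationship, not the title, is the central object, which is what allows non-statutory forms of tenure to be recorded alongside statutory ones.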
A number of initiatives specifically leveraging mobile technologies for data collection have emerged to record, manage and store tenure information at the local community level. Scaling up such local-level tenure documentation activities could, in the longer run, support FFP frameworks in different places and together contribute towards realizing countrywide availability of tenure information. These initiatives aim for higher speed and lower cost in tenure documentation, and they address the concerns of weak administrative and legal statutory environments by advocating for openness of land tenure information to support informed decision-making by third parties, and by emphasizing the importance of documenting the existing diversity of land tenure systems and rights. They expressly aim to include the rights of women and other vulnerable groups. The label "pro-poor land tools," which is sometimes used as an umbrella term for the initiatives we describe here, reflects the aim of addressing especially the needs of the poor, because it has been recognized that poor and marginalized groups have been neglected or negatively impacted by land rights documentation efforts in the past. Therefore, these new initiatives aspire to work with community-driven and/or community-generated digital data, with the eventual aim of strengthening tenure security for as many people as possible. In the emergence of these initiatives around 2011 and the associated discourses, we see several policy and technological developments from the past 20–30 years converging. On the policy discourse side, these include the aims of improved efficiency, open and transparent government, and the ideal of widespread participation of land stakeholders, including citizens, politicians and actors from professional bodies. The aim of efficiency has driven e-government and LIS development since at least the 1980s and 1990s, stemming from the era of new public management in public administration. Especially since the 2000s, the visions of open and transparent government have been promoted through worldwide freedom-of-information legislation and open government data initiatives inspired by the U.S. Obama administration. These global policy discourses and long-term aims stand in dialogue with parallel developments of the internet, which evolved over approximately the same period from a mostly read-only Web 1.0 to the interactive and semantic Web 2.0 and 3.0. Accompanied by the global spread of mobile internet and phone devices as well as urban sensor networks, the technological environment now provides a wide spectrum of possibilities for the public to provide data to government, to interact with authorities via the internet, and to publish data via internet-based services and portals. Further catalysts for the emergence of innovative approaches to tenure documentation are the growth of the open source software community, improved access to and accuracy of geospatial data, and increased computing power and data storage space. In sum, at least three trends gave rise to the development of innovative approaches to land tenure documentation. First, the difficulties in and the high costs of implementing comprehensive, large-scale land information systems through public agencies were recognized and led to continued debate on the pros and cons, but also the "how," of recording land rights and tenure regimes in developing countries, introducing the notion of fit-for-purpose land administration. Second, the initiatives incorporate visions from policy discourse at the international scale, including the aims of improved efficiency, openness and transparency, as well as citizen participation. Third, the approaches leverage new mobile and Web 2.0 technologies that have emerged in the past 20 years for data collection, storage, and exchange, including mobile apps, online platforms, and cloud services. In recent years, innovative approaches have gained visibility through a variety of platforms, including traditional media, websites and social media, and through various events, such as professional meetings, conferences, workshops, seminars, and
publications. Comparative overviews of initiatives similar to those discussed in this paper have been made for individual countries, for instance by Somerville et al. for Zambia. McLaren et al. have developed a practical guide for implementing new technologies for land administration, including crowdsourced data and drone technology for data capture. This guide also offers a comprehensive overview of emerging trends in land administration through various case studies. Our study seeks to contribute to the growing body of knowledge about the general nature of new technology trends in land administration, specifically by outlining a set of cross-cutting challenges encountered by initiatives seeking to innovate land tenure documentation. Finally, and based on the identified challenges, we propose a series of future considerations for the implementation and evaluation of innovative approaches, going beyond technocratic elements by bringing into focus questions related to the broader context of land and data governance. Our paper is structured according to these aims. After describing our methodology in terms of conceptual approach, data sources and analysis, we describe general characteristics of six initiatives in Section 3 of the paper. In Section 4 we identify, at a more abstract level, four main challenges that span the initiatives. These descriptions and challenges then provide the basis for Section 5, where we distill a set of directions and questions for future research and for the longer-term evaluation of the initiatives. We begin by describing the initiatives in terms of their organizational characteristics, financial and technical aspects, as well as their scope and the application contexts in which they are implemented. As such, our study does not seek to make a theoretical contribution in the first instance. One reason for taking this approach at the present moment is the relative newness of the initiatives and a sort of mushrooming of like-minded projects under changing labels and definitions. The empirical scene therefore warrants an approach that does not settle on a theoretical frame too early but allows for some experimentation with different conceptualizations to identify patterns in the scene that unfolds before us. This allows for a more inductive analysis at this stage and for choosing appropriate theoretical and explanatory frameworks later on. For example, each of the challenges we identify in Section 4 may call for different theoretical explanations in future research, but at this stage the aim was first to identify the types of challenges based on the material at hand. Nevertheless, a longer-term question that informs our research is also theoretically relevant, namely: how do these data-technology-driven approaches, often initiated by foreign and/or non-governmental actors, come to terms with existing institutional and regulatory frameworks around land governance? Driven by this underlying question, our research approach is summarized in the preliminary conceptual scheme shown in Fig. 1.
The conceptual scheme emphasizes the initiatives' characteristics and their implementation process. We focus our description on organizational set-up and financial mechanisms as the main distinguishing characteristics between the initiatives. These in turn influence implementation processes and the different initiatives' application contexts. We also describe differences between initiatives in terms of tenure data capture. An initiative may either implement tenure documentation in liaison with the formal land administration system, i.e. following path A in Fig. 1, or work in parallel to the formal land administration domain, i.e. following path B in the diagram. The latter approach would lead to the issuance of documents as evidence of land and/or property holding, especially of de facto tenure rights, and would require additional processes in order to upgrade to formal tenures. In this paper we use the term "de facto tenure" as shorthand for legitimate tenures that are not currently protected by statutory law and which draw their legitimacy from social and non-statutory institutions, but we also use the term for forms of tenure that may be captured in statutory law but for which recognition through official documentation and/or registration is lacking. What emerged from this descriptive, initiative-by-initiative analysis was a number of challenges that are encountered during implementation. These challenges cut across the initiatives regardless of their differences in characteristics and implementation processes. As such, these challenges provide a preliminary set of findings at an abstracted, that is, more theoretical level, in response to our longer-term, underlying question formulated above. Our familiarity with the innovative approaches discussed here stems in large part from our work in developing and running an MSc program in Geoinformation Science for Land Administration at the ITC faculty of the University of Twente. This program advocates for responsible land administration and offers a course on "Innovative Approaches for Land Administration" in its curriculum. Since 2007, developers of innovative tools have been invited to lecture and demonstrate their tools to Land Administration graduate students and staff at the faculty. The developers are i) the Global Land Tool Network (GLTN) of UN-Habitat on the Social Tenure Domain Model (STDM); ii) the Food and Agriculture Organization of the United Nations (FAO) on SOLA; iii) Landmapp on Landmapp; iv) Cadasta on Cadasta; and v) Thomson Reuters on Aumentum OpenTitle. It should be noted here that Landmapp has been renamed Meridia, but throughout this paper we refer to it by its previous name, because it was under this name that we conducted the interviews and prepared the first drafts of the analysis. In the course, MSc students also explored through assignments the functions and applicability of innovative tools to document tenure rights at the local level, and explored online those innovative tools which had not been discussed by developers in class. Our interpretations here are further informed by our involvement in the supervision of MSc thesis research about the implementation of innovative tools in land documentation, as well as by our involvement in related workshops, conferences and seminars and regular communication with actors working in the field of innovative approaches for tenure documentation. In addition, board members from the Cadasta Foundation extended the list of initiatives compiled by students, and we reviewed the initiatives' websites, reports and
documentation. This led to the identification of four initiatives, in addition to the ones mentioned above, that are similar in purpose and nature. The initiatives identified so far are listed in Table A1 in Appendix A. Based on the information sources and our involvements described above, we developed questions for a series of one- to two-hour semi-structured interviews, held between February and May 2017, with representatives from organizations involved in the development of six approaches: GLTN on STDM; FAO on SOLA OpenTenure (other tools in the SOLA family are designed to support formal land administration systems and are therefore seldom discussed in this study); Landmapp on Landmapp; Cadasta on Cadasta; and Thomson Reuters on Aumentum OpenTitle as well as CaVaTeCo. The interview questions related to: i) the history of the initiative and its overall current organization; ii) the surveying and tenure documentation supported by their tools; iii) the application context; and iv) the key challenges they encounter before, during and after the tenure documentation process. The interviews were recorded and subsequently transcribed. This paper is the outcome of a first qualitative content analysis, presenting differences between the initiatives in terms of financing mechanisms and organizational characteristics, as well as process design and application contexts, following the conceptual scheme outlined above. After this first sorting of the material according to characteristics, the four cross-cutting challenges were identified during a second reading across all transcripts. The first results were presented in a conference panel in 2017 and shared with interview respondents, both to check factual accuracy in the more descriptive elements of the results and to receive input on our interpretation of the main themes related to challenges in implementation. Later, in 2018, the first draft of this paper was sent to interviewees, who revised it by correcting or adding information so that it best represented their contribution to this article. This section presents results from the analysis of interviews with representatives of six of the initiatives. In Sections 3.1 and 3.2 we describe the main differences between initiatives that emerge from our interview data in terms of financing mechanisms and organizational characteristics, as well as process design and application contexts, following the conceptual scheme in Fig. 1. Four of the six initiatives in our study are small organizations or start-up companies, while two are under the UN and thus global actors, i.e.
STDM and the SOLA family. The STDM initiative is championed by GLTN partners at the country level, with co-funding by GLTN and the respective partners on the ground. Initiatives based on the SOLA family of tools for land tenure documentation are guided by and based in the Food and Agriculture Organization, and are thus part of a large organization with a long history as a global actor in the land governance domain. Both are not-for-profit initiatives. Landmapp, Cadasta, Aumentum OpenTitle and CaVaTeCo, on the other hand, are developed by relatively small organizations and have different financing mechanisms. Landmapp, a for-profit company, was kick-started by two engineers, has since grown to ten employees, and is regarded as a sort of follow-up to Thomson Reuters' Aumentum OpenTitle. Landmapp is a relatively small organization that is solely dedicated to the development of the tools described here and was founded as a social entrepreneurial company in the market of land tenure documentation. Cadasta is also a relatively small, not-for-profit and donor-funded enterprise. CaVaTeCo is developed by a private company, Terra Firma in Mozambique, and employs a value-chain approach to tenure documentation. It is financed by the Department for International Development, under the LEGEND fund. Regardless of organizational character and financing mechanisms, all organizations act globally, not only in terms of the places in which tools are being piloted and implemented, but also intra-organizationally. Cadasta's staff, for example, were located in different countries across the world and held meetings mostly through digital communication until 2018, but have started consolidating around Washington DC. The main distinction in financial terms is between for-profit and not-for-profit initiatives, where the former need to finance their own efforts through the paid provision of land tenure documentation and data services. Of the six initiatives, the not-for-profit initiatives rely more on non-proprietary technologies, while the for-profit initiatives deploy proprietary software and services as part of their product suite. In the case of not-for-profit initiatives, financial sustainability depends on external funding and/or internal funding from within the larger organization, as in the case of the FAO, for example. A basic difference between initiatives lies in the size of the organization, which also influences their financial means. The initiatives have similar points of departure with regard to surveying and tenure documentation techniques in that all promote and advocate systematic, participatory methods and local community-based approaches, with special emphasis on land rights for women and other vulnerable groups in urban, rural, post-disaster and post-conflict contexts. Promoters and developers of innovative tools also advocate for openness of land tenure information for various reasons, for instance to improve data sharing for different uses in development and planning and to support improved decision-making by third parties, including large-scale investors, but at the same time also to increase the transparency of land sector activities for vulnerable and poor groups, who have the greatest difficulties in accessing information administered by government and third parties. There are two important justifications driving the initiatives. One is the need to support land tenure documentation where the work of formal land administration does not suffice or has failed. The second is to acknowledge, in policy discourse as well as in technology design,
the diversity of land tenure regimes and the legal and normative plurality in land governance. Therefore, we categorize process design in terms of how the processes align with established legal and administrative workflows of i) land surveying and ii) tenure documentation. For land surveying, all initiatives except Landmapp in Indonesia use the general boundary approach through community-based digital data capture, in many cases via mobile applications, and advocate a continuum of accuracy as described in the spatial framework principle of FFP LA. For tenure documentation, initiatives may follow established legal frameworks and administrative workflows regarding the types of land rights being recorded. Those documenting statutory legal tenures closely follow government requirements in data collection and database design. As such, formal tenure rights as defined in the land laws or the administrative procedures of the government are documented on the parcels. The initiatives capturing de facto tenure rights seek to introduce the resulting tenure documents into the formal registration system for recognition by government. The aim here is to provide the tenure data to the government for the ultimate issuance of official documents, as information for land use planning and the identification of community development priorities, and/or for customary and statutory actors to sign documents at various stages in the documentation process. The de facto tenure rights are delineated on small lots through piecemeal parcelization for individuals, or on community land. In so doing, implementers refer to the Voluntary Guidelines on the Responsible Governance of Tenure (VGGTs), for example as follows: "VGGTs support the recognition of legitimate land rights. But saying legitimate does not mean that the rights are already recognized in the law. They are legitimate as it is for the customary rights – but not yet recognized in the formal law. So it is important to have a tool in this case that can map and capture information, which are not yet in the law". The adjustment to diverse land rights, which may not be captured in statutory laws or administrative procedures, is also reflected in the technological design, for instance of database structures, as the following interview excerpt illustrates: "We had some feedback from the users that the software needed flexibility. I mean, like easily adding database fields, easily adding dictionaries, or values. I mean … you know you may find that there [is] some legal language that requires you fill in a field in a certain way and there must be some triggers related to that field. Then also to be filled in is, you know, like current ownership of land, that the woman is recognized, or beneficial interests or that percentages are recognized properly. So I think it's like any technology implementation – does the way the software has been configured or is being configured in the field meet the needs of the project really". The database referred to in the quote above is configured to capture the different types of land rights via a drop-down menu that allows the user to choose one type of right, which is then linked to a specific land parcel.
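The kind of flexibility described in the quote can be pictured as a configuration-driven capture form. The sketch below is a hypothetical illustration only: the field names, the tenure-type vocabulary and the validation logic are our own assumptions and do not reproduce the actual schema of any of the initiatives' software.

```python
# Hypothetical sketch: field definitions and controlled vocabularies live in
# configuration, so a project can add fields or tenure types without changing code.

FORM_CONFIG = {
    "tenure_types": ["ownership", "customary", "occupancy", "tenancy", "informal"],
    "fields": [
        {"name": "parcel_id",   "type": "text",   "required": True},
        {"name": "tenure_type", "type": "choice", "required": True,
         "choices_key": "tenure_types"},
        {"name": "holder_name", "type": "text",   "required": True},
        # an optional field that can make, e.g., spousal interests visible
        {"name": "spouse_name", "type": "text",   "required": False},
    ],
}

def validate_record(record: dict, config: dict = FORM_CONFIG) -> list:
    """Return a list of problems found in a captured record (empty list = valid)."""
    problems = []
    for f in config["fields"]:
        value = record.get(f["name"])
        if f["required"] and not value:
            problems.append("missing required field: " + f["name"])
        if f["type"] == "choice" and value is not None:
            if value not in config[f["choices_key"]]:
                problems.append("unknown value for " + f["name"] + ": " + str(value))
    return problems

# A project needing an extra local tenure category simply extends the vocabulary:
FORM_CONFIG["tenure_types"].append("seasonal_grazing_right")
print(validate_record({"parcel_id": "PL-0001",
                       "tenure_type": "seasonal_grazing_right",
                       "holder_name": "A. Example"}))  # -> []
```

The design choice illustrated here, keeping the tenure-type vocabulary in configuration rather than hard-coding it, is one way in which software can remain open to locally legitimate rights that statutory law does not yet name.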
According to the implementers of the tools that focus on documenting de facto tenures, this approach supports GLTN's idea of the incremental improvement of tenure rights from de facto to statutory along the continuum of land rights, as well as of spatial accuracy. The latter in turn responds to the demands for high-accuracy surveys in the formal registration systems. The eventual aim of recording de facto tenures is therefore to transfer land documents and data into government holdings to allow for official recognition of the tenure rights and for the issuance of official documents later on. Finally, it should be noted that the process elements of surveying and tenure documentation, which we have described here separately, are closely related in at least two ways. First, the nature of some land rights does not lend itself to a fixed boundary approach, and hence the aim of documenting the full diversity of rights introduces a tendency towards a general boundary approach. Second, whether general or fixed boundary approaches are pursued also influences official recognition. For example, certificates of customary ownership in Uganda and occupancy licenses in Zambia have been issued following the general boundary approach by STDM, in collaboration with local land authorities. For CaVaTeCo, the only hindrance to the issuance of formal land certificates lies in its use of the general boundary approach, because meeting the accuracy standards for surveying as defined by law is required before formal land certificates can be issued based on CaVaTeCo's documentation. Landmapp's use of the fixed boundary approach and its cooperation with the formal land authorities during the data capture process result in the issuance of formal land certificates in Indonesia. In Ghana, Landmapp collaborates with statutory as well as customary land authorities, as both authoritative entities play a role in authorizing the survey process and the documents at different stages. Application contexts also vary between initiatives, not only in terms of geographic or national regions. A basic differentiation is between urban and rural contexts. In urban and peri-urban areas, initiatives focus especially on land tenure documentation for poor and socially disadvantaged groups, and on contexts where land tenure documentation coincides with data collection efforts in the course of urban housing and infrastructure development projects. In rural areas, initiatives contribute to land tenure documentation in association with cash crop production or to help communities access financial loans for a variety of community-level improvements, e.g.
in building construction and maintenance, as well as in the management of land in irrigation schemes and in monitoring the impact of agricultural development programs for improving farmer productivity. The financial and organizational characteristics of each initiative also influence the application contexts. For example, for-profit initiatives need to find a balance between the aims of land tenure documentation, on one hand, and developing a feasible business strategy, which requires evidence of a market for the offered solutions and/or adjustment of the latter to the market of customers, on the other. Not-for-profit initiatives do not have to prove financial viability, although they are dependent on and accountable to donors. The latter may come to influence the application context in the longer run through their expectations. Furthermore, the application focus also depends on the local partners' financial situation, which is in turn dependent on larger donor agencies, as expressed by this interviewee: "So, we have more or less the security to be stable for a while, but for the local NGOs – as it's often these small NGOs that are reliant on piecemeal funding – it's terrible, because they don't know if they can, are able to start a project - they depend [on whether a donor] is going to spend money or not". Initiatives embedded in organizations of large reach and relative financial stability also show more diverse application contexts, ranging from rural to urban and supporting land rights documentation on the part of communities as well as government, for example STDM and FAO. In sum, the specific application context is not only guided by an initiative's longer-term vision but has to be flexible and adjusted in response to a variety of factors, which are local as well as global and dynamic in nature. We elaborate more on the challenges that this poses in the next sections. In this section we discuss four challenges that characterize the implementation process across the various initiatives. These are important in that they offer entry points for future implementation and evaluation, as well as for understanding the nexus between innovation, on one hand, and the existing institutional frameworks of land governance in which a given initiative works, on the other. A common trend across the initiatives, relatively independent of their respective process design, business strategy or organizational history and financing mechanisms, is that they explicitly promote the digital documentation of land tenure. This is important to note, because the digitalization of land documentation adds further complexity to the question of recording land rights in terms of data access, protection, and the need to provide both paper-based documents and digital databases to potentially different actors. The reasons for digitalization of land records are more or less explicitly those cited elsewhere in the literature on e-government, later open data government, and ICT for development, namely speed and ease of data collection, more efficient data management, inclusionary potential for poor and vulnerable groups, and greater transparency. In the case of initiatives to innovate land documentation, the aim of leveraging the promises of digital data technology is combined with another aim, namely, to be as inclusive as possible of the plurality of land tenures that exist in many contexts of implementation. During community-based discussions, in liaison with government officials and customary authorities, the promoters of innovative approaches emphasize the need
to record a plurality of land rights, including informal and customary land rights, temporary land uses and negotiated access to land, as well as rights as per statutory law. In so doing, especially women's rights and the rights of groups that have in the past been marginalized from land tenure formalization efforts are moved to the foreground of discussions and of subsequent data collection and documentation efforts. The initiatives all take a community-based approach, work with local civil society organizations and NGOs, and advocate participatory approaches to land rights documentation: "It is important to emphasize the importance of the 'people' component i.e. getting the buy-in from the stakeholders including the intended beneficiaries, building capacity and bringing out the fact that high-end technological tools and techniques do not always offer the required solution - pragmatic and 'unconventional' approaches are key". The adjustments to various application contexts also play a role in the goal of seeking optimal inclusion of diverse types of land tenures. Here, the initiatives adjust to the legal pluralist environments they encounter by aligning the needs and objectives of the communities in negotiation with other actors, especially government and customary authorities, who are in charge of legitimizing land tenure records. This approach also requires advocacy for surveying techniques that are less precision-oriented than those embedded in administrative procedures. But this is also where a tension emerges, especially where mapping and documentation of land rights are implemented on a parcel-by-parcel basis. Documentation of de facto tenures through piecemeal parcellation is biased towards individualization of tenure rights rather than capturing the full spectrum of legitimate overlapping arrangements, as has been observed by authors who emphasize that a true reflection of tenure arrangements could provide living laboratories for future legal-administrative innovations. Also, as shown in the process description in Section 3.2 above, producing documents that are officially recognized and endorsed by government requires adjustment of the documentation process to government requirements. In this process, higher accuracy and a more reduced scope of tenure types can become re-introduced. However, due to the general vision of community-based rights delineation and data capture, the initiatives act as catalysts in a discursive sense by discussing the role of multiple and sometimes overlapping land rights and the protection of vulnerable groups' rights and land uses. The following quote from a not-for-profit initiative representative expresses both the general vision of documenting diverse land rights and, at the same time, the struggle with the challenges of gaining official recognition of the initiative's tenure documentation: "[W]e need particularly the surveyor general and those people to understand to move … towards a continuum of land rights,… to say, 'look, other than being stiff when we see we cannot solve this, we need softer ways of solving the problem". However, the challenge is not limited to the realm of relations with governmental institutions with respect to land rights recognition. The process of implementation is also influenced by the communities and non-governmental liaisons, whose interests and work context require flexibility. Because the emphasis in the initiatives rests on working with local communities and various governmental stakeholders across local to national scales, the original visions
of the initiatives become adjusted and diversified in the process of implementation. It is not only the original legal pluralist environment which requires database design and data collection to be flexible, but also the nature of the initiatives themselves, the societal visions of involved stakeholders and their short-term interests, as well as the changes in objectives arising from engagement with a variety of local and global actors, which require a data technology design "for flexibility to document evidence as defined by users – legal, customary, other". Therefore, a second common trend across the initiatives is the association between flexibility in process design and data collection, on one hand, and flexibility in terms of an initiative's original visions and aims, on the other. This pertains to longer-term societal visions, but also in some cases to an initiative's internal visions and philosophy. For example, the common interest in protecting women's land rights and access to land for vulnerable groups hints at a social justice vision for societal development across the initiatives. At the same time, however, another explicitly stated goal of innovating land tenure documentation is economic market growth. Both of these are large-scale, longer-term and normative visions for societal development. In practice, however, the two may not be complementary. Transparency and openness of governance processes are also large-scale, longer-term development goals driving the initiatives, especially in association with the promotion of digital technologies. Here too, a contradiction can present itself in practice between the protection of vulnerable groups and their data, on one hand, and the vision of publishing the data to third parties, including large-scale investors, on the other. Thus, the broader and longer-term visions for societal development are interpreted and translated in practice in different ways depending on a variety of factors, including the application context and an initiative's financial and organizational characteristics. For instance, whether or not an initiative has to make its own profit bears an influence. Obviously, for-profit initiatives need to prove viable business strategies and a market of customers for their product. This introduces a de facto differentiation of land rights holders into customers of the certificate and/or data services being offered and those land holders who do not wish to buy the product or cannot afford the services. In some cases, the change in original aims is quite explicit and fast. For instance, in Landmapp's case the original idea was to support local farmers in recording their land rights in order for the farmers to act as environmental stewards. Here the original aim, combined with the start-up's spirit of entrepreneurship, was motivated also by environmental protection concerns and nature conservation. Over the course of time, however, and with the need to develop a customer base and business strategy, the vision changed into objectives driven by local community needs as well as market potential and feasibility. In Ghana, Landmapp now focuses on supporting cocoa farmers in obtaining land documentation, which in turn may be used for accessing loans or other services. In addition, the extent to which objectives for land tenure documentation become implemented depends largely on the funding situation and financial stability not only of the initiatives themselves, but of their local partners as well. Especially when data collection and database set-up are driven by the data
needs of a temporary local project, e.g. to gain access to a government-provided service, the effort of land tenure documentation becomes limited to this context as well. The following quote illustrates the adjustment process, as it highlights several influential factors, including questions of technical feasibility and in-practice learning on the part of the implementers: "[They] had been basically working on an idea to crowdsource land right claims, for indigenous communities, rural communities, by empowering them to, basically do that on their own. So, it was kind of build an open toolset, that was the idea at the beginning. And we subsequently learned … land tenure could be the key in stopping deforestation, and the communities they would protect their land. When I joined it was more from an access to finance perspective for smallholders; and I was trying to find out how you could unlock finance and finally, you know, land documentation is what does that best. And so, yeah, we decided to sort of develop this together. But what we quickly realized within the first few months, that it wouldn't work to just do sort of crowd submitted claims that we could gradually verify over time. … we found this out by testing". Across initiatives, the aims behind land rights documentation become adjusted depending on the implementation context, local and global actor constellations, and associated interests. This is also reflected in the types of data being collected. For example, in many cases, data collection is not limited to land tenure data but includes various socio-economic data, depending on the needs of NGOs and the requirements of government-induced community development projects. In these cases, land rights related data are collected alongside other information, as explained by the representative of one of the for-profit initiatives: "I think it's really about how to be intelligent about how you collect the data because you know how many survey teams are going into those communities… And what you want to be doing is a bit more strategic, right? If we are doing health, or if we are doing land – we might as well collect health and education at the same time". The challenges discussed so far, pertaining to the digitalization of diverse land tenure types and flexible adjustments of the documentation process, combine into a longer-term challenge related to the influence of innovation on the broader land governance scene in contexts of implementation. It is the question of how to legitimize innovation. Upon documentation of tenure rights, Hendriks et al.
raise concern over the resulting documents, which they term 'halfway documents', and call for detailing and linking these documents to the 'continuum of land rights' concept. Land tenure documentation interventions create social change, which subsequently could have different impacts on local politics and social norms, and yet managing the cultural and political shifts in communities is an often neglected component of tenure formalization. Even in the context of FFP approaches for LA, Barry calls for attention to the development of strategies on how "socio-cultural norms and power relations need to be changed for the new land certificates to deliver their intended benefits to community members, as well as contribute to the expected development outcomes". In line with this, we observed a general challenge encountered by the initiatives. It is the question of how, when and by whom both the analogue documents and the digital data are considered legitimate, and for what purposes they can be legitimately used. Initiatives approach this issue in different ways, for instance by adjusting to existing formal administrative requirements for data collection and the required content of documents, or by enrolling both community-level and other land sector authorities early on in the process. One reaction by the implementers to the political relevance of, and potential contestations in, land documentation is what we might call "avoidance of the legitimacy question". By this we mean that an initiative may position itself explicitly as external to land governance processes and policy making and emphasize its focus on specific, temporally bounded project needs. For example, the interviewees from Thomson Reuters' Aumentum OpenTitle explained that their initiative has moved out of the land governance and policy-making domain and positions itself strictly as an IT solution provider. Cadasta and Landmapp representatives emphasized being cautious not to engage in land litigation and situations of contestation or conflict. GLTN emphasized adherence to its own – and, more broadly, the UN's – values and principles. This is not to say, however, that implementers are unaware of the political nature of their projects, as expressed in this reflection on a documentation project in West Africa: "I mean we have to be aware of that and I think you try and work your way through the most appropriate approach based on all these different competing interests". What is then important for future research and implementation is to document and understand which documents gain legitimacy and how, and on the basis of whose arguments and procedures, even if the documents do not become officially endorsed by the government. Similar questions can be asked about the digital data produced during tenure documentation. And here a fourth challenge exists, namely, how to strike a good balance between transparency of land tenure and transactions and the protection of people, places and land. In other words, the question is how to be responsibly open. Promoters and developers of innovative tools advocate strongly for openness of land tenure information for various reasons, for instance to support decision-making by third parties, but also to increase the transparency of the land sector for the benefit of vulnerable groups, who have difficulties in accessing government information and are hit hardest by opaque land deals. Openness has different meanings to the people we interviewed, but also to the different communities with whom implementers work. Many of these meanings
have a positive connotation, referring to efficiency due to the sharing of data for different purposes; cost savings because of the use of open source technology and free licensing and pricing mechanisms; the ability to include local knowledge in governance processes by opening up the "mental maps" of local community members; openness in terms of updating data regularly, if not continuously; and, importantly, the aim of creating a transparent land governance regime, where openness means improved access to information, especially for less powerful and vulnerable groups of society. However, the ideas of "openness" and "open data" are met with many challenges. What matters here is who gains access and for what uses of the data, considering local sensitivities and needs regarding the types of data being collected. The FAO representative in particular emphasized differing local sensitivities towards the idea of "openness," for instance among indigenous communities who do not wish to see the location of sacred places exposed. In these cases, secrecy and place knowledge held by only special members of the community constitute the very essence of sacredness and as such stand in fundamental contradiction to the paradigm of openness and transparency in information. In FAO's initiatives, the term "OpenTenure" created a lot of discussion and concern among local communities, and the organization has considered changing the name of the approach: "So when you go in the field you have also to agree and to inform about the terminology e.g. when we say OpenTenure … we had a workshop in Guatemala and they asked 'what's open? what's tenure?' So you also have to tailor your language, terminology so to agree on the meaning of that single word. This is important. In fact we are working on this name actually, because it is confusing. The open was intended, because it is an open source system. And it is also open, because it is open to the use by communities who are not yet empowered. So in that sense it is open… But still it can cause ambiguity and confusion. So we were thinking to change that name". Similarly challenging are discussions with local stakeholders regarding the nature of the technologies in relation to data storage, sharing, and publication. For example, cloud storage, used by many of the initiatives, is problematic to explain to land holders. Landmapp, for instance, has prepared charts and sketches of how "the cloud" works for the purpose of explaining the technology to farmers. At the same time, it is important to remember that understanding how the so-called cloud functions is by no means a challenge only for farmers, but a formidable challenge even for network engineers. The issue of privacy in the context of open data also gains a more complex meaning beyond individual rights, as it relates to the socio-economic networks of people, as expressed in the following note by the representative of an initiative that leverages German data privacy laws in choosing server locations: "Land documentation, I think, generally per definition, is, uhm, public domain data… So, the data that we consider private is actually much more: it's socio-economic data – how many children do you have, are you married, what are your income streams, how old are you – all this kind of stuff, cause that is much … more risky, to put out there; and then their production data, which is pretty much 90% of their income … and there is huge social risk in sharing how much income you have. So, this is very private". In this respect, changes in the types of data collected and flexibility in database design and collection, as described above, merge with concerns about which of the various types of data to share, and not only with whom.
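One way such a distinction between shareable and sensitive information could be operationalized is sketched below; this is purely a hypothetical illustration, and the field names and the particular split are our own assumptions rather than any initiative's actual sharing policy.

```python
# Hypothetical sketch of "responsible openness": before publication, a captured
# record is split into an open view (parcel-level documentation) and a restricted
# view (socio-economic details), following locally agreed sharing rules.

OPEN_FIELDS = {"parcel_id", "tenure_type", "boundary_geometry", "document_id"}

def split_for_sharing(record: dict) -> tuple:
    """Return (open_view, restricted_view); unknown fields default to restricted."""
    open_view, restricted_view = {}, {}
    for key, value in record.items():
        if key in OPEN_FIELDS:
            open_view[key] = value
        else:  # err on the side of protection
            restricted_view[key] = value
    return open_view, restricted_view

captured = {
    "parcel_id": "MZ-0042",
    "tenure_type": "customary",
    "household_size": 6,
    "income_sources": ["cocoa", "casual labour"],
}
public_view, private_view = split_for_sharing(captured)
# public_view could go to an open portal; private_view stays under stricter access
# rules, e.g. community-based storage or servers in a strong data-protection regime.
```

The defensive default in the sketch, treating any field not explicitly marked as open as restricted, mirrors the concern expressed in the quote above that socio-economic and production data carry the greatest social risk when exposed.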
Because of these questions, STDM opted for community-based database storage, which, however, poses its own challenges in that it requires additional capacities at the community level. The initiatives we reviewed in this paper are innovative in land tenure documentation not only in the sense that they use new technologies, e.g. mobile apps for surveying, but also in that they seek to transform land documentation from a government-driven domain into a more community-driven endeavor deriving from the specific needs and purposes of a locality. To what extent these seeds of innovation will scale up to transform land documentation processes and land governance institutions at a larger scale remains to be seen. In addition, these initiatives introduce new actors and data links, which stretch, at least partially, far beyond a "locally circumscribed context". A few points for implementation and evaluation efforts in the future can be made based on the preceding review of characteristics and related challenges. Based on the review in Section 3 of this paper, we would expect financial, organizational and application contexts to influence differently the dynamics between land holders and land governance institutions in various contexts. For example, both the financing mechanism and the application context influence who will participate in mapping and who receives different kinds of tenure documents. The participants may be viewed and treated as customers, as beneficiaries of a government project, as members of a community with similar rights to a portion of land, or as clients and end users of a data service. These perceptions are important to note and to differentiate in future analysis, evaluation and discussion, as they allow for a better understanding of the role of innovative approaches within the broader institutional landscape in different contexts and of how the latter changes, or not, in response. Section 4 of our paper provides a basis for future considerations. Each challenge identified here points to specific questions for implementation and evaluation in the future. First, we identified the challenge of digitalizing the plurality of land rights. While none of the initiatives actually records all overlapping land tenure rights in every possible situation, and in many cases they need to adjust to existing administrative workflows and procedural survey requirements in order to produce officially legitimate documents, they do act as catalysts in discussing the role of multiple and sometimes overlapping land rights and the use of faster and easier technologies for land tenure documentation. One rather empirical question here is what types of tenure and rights are being written into the database and onto documents by the various initiatives, and to what extent such inscription feeds back onto the normative geography of land use and access rights in the longer run. Another, more theoretical question, but one relevant for implementation and evaluation, is how to strike a good balance between the need to adjust to existing institutional requirements, on one hand, and the need to develop innovative, but also financially feasible and socially responsible, processes to record land tenure rights at scale, on the other. Second, in Section 4.2 we discuss, under the label "Flexibility in process design/flexibility in vision", how the initial aims of the initiatives become adjusted and diversified in the process of working
with various stakeholders and their respective aims and interests. This is important to take into consideration in the evaluation of the initiatives' outcomes in both the shorter and the longer term. The stated aims at the beginning of implementation may not suffice for an assessment at a later point in time. A practical recommendation for implementers here is to document the process of implementation across different contexts to provide a basis for sustained analysis across time, rather than relying only on a before-and-after quantitative assessment of output. This would provide opportunities for the initiatives to develop context-based process designs for tenure documentation based on their experiences, in order to inform and support future work. The flexibility challenge we describe in Section 4.2 raises, at a more conceptual level, the question of purpose. The flexibility to adjust aims to the local context is important, as it could provide the opportunity to develop theories, design elements and implementation strategies for varied local situations. Such theories, design elements and contextualization may help predict what Barry terms "Critical Success Factors" for tenure documentation initiatives to work where future tenure documentation interventions are contemplated. Contextualization of the approaches and the respective factors, with consideration of the preconditions and practices applicable before, during and after implementation, could act as a guiding framework for these tenure documentation approaches. In short, nuanced research and evaluation would ask whether there are certain types of purposes and circumstances for which a given approach is fit in the future. The third challenge, discussed in Section 4.3, relates to the legitimacy of documents and, by extension, to the legitimacy of the process through which the documents are created. Tackling this issue in practice also requires longitudinal engagement by both implementers and researchers to explore the various purposes that both documents and digital data are deployed for. In other words, we need to ask not only fit-for-what-purpose, but for whose purposes and at what point in time? Here, the process of upgrading the resulting documents to officially recognized tenure certificates is a gap that needs to be addressed. Whose responsibility should that be? What are the procedures to follow, and at whose cost? At the same time, we also need to learn what happens with the documents that are being issued, not only on the side of the government but also in terms of the different uses made by the holders of the documents. For example, as one anecdote from an interviewee illustrates, a document may not be considered legitimate proof of a person's or group's tenure right by government or large international banks, but it may well be accepted as proof of identity and assets by local loan agencies, who then provide financing on the basis of these documents. If this money is used to finance construction, de facto tenure rights may be gained indirectly via the documents in contexts where statutory laws acknowledge construction as a means to claim land rights. In this case the use of the document at the local scale, via loan and construction, would strengthen official legal tenure recognition despite the lack of official endorsement of the document or formal registration. The same questions posed with respect to the paper documents being issued apply to the uses of the digital data, which may or may not be produced in the process of tenure documentation. And this leads to the final point for
consideration in the future implementation and evaluation of the initiatives. The final points here are based on the discussion in Section 4.4, "How to be responsibly open?" Land tenure documentation, whether through digital or analogue technologies, and whether carried out by government or on a community basis, always entails the drawing of boundaries. This process is not only a technical question, but one that is closely linked to the governance of society and nature-society relations; and the uses of land and related resources are tightly knit into the associations between governmental and non-governmental actors. With the use and promotion of digital data technologies, matters become arguably more complex, as land tenure data can now be shared much faster and at greater distances at a global scale. Concerns regarding data and privacy protection, potential misuses and unanticipated uses of data, and the risks of visibility and commercialization of people's data in the context of development have recently found resonance among the land surveying community. Land tenure-related data are highly sensitive. And yet, the arguments for transparency and openness of land data cannot be discarded. The initiatives we have described here have begun to discuss and tackle these concerns in different ways, ranging from communication with local communities about data storage to internal organizational discussions about the choice of server locations to host data and services. Finding a just balance between openness and protection – of people, land and related data – will continue to be a significant concern in endeavors to innovate land tenure documentation by use of digital technologies. A scaling up of initiatives in terms of services, areas, and actors would coincide with an increase in data quantities and types, for which organizations are responsible if they become positioned as nodal points in new digital data flows and networks related to land governance. If, on the other hand, initiatives become abandoned, merge, or otherwise transform in organizational and financial network terms, the question is what happens to the digital data that has been collected. This is an important question, especially for those initiatives that position themselves as "IT solution provider," "consumer data company" or "data service provider" while at the same time, in practice, taking on tasks that are conventionally those of statutory and customary governance institutions. In short, the sustainability of organizations and their respective responsibilities in data publication, uses, and protection are important future considerations. Taken together, these considerations are important to address in future implementation and evaluation, as they will influence the degrees and types of land tenure security that can be achieved, as well as whose tenure security and rights, at different scales, localities and points in time. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
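To make the preceding discussion of data sharing and protection more concrete, the sketch below illustrates one possible way of separating broadly shareable land documentation fields from the socio-economic fields that interviewees described as private. It is a minimal, hypothetical example: the class names, fields, and the public/private split are illustrative assumptions, not the schema of STDM, Landmapp, or any other initiative discussed here.

```python
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class ParcelRecord:
    """Fields of the kind often treated as shareable land documentation."""
    parcel_id: str
    boundary_wkt: str            # parcel geometry, e.g. a WKT polygon
    tenure_type: str             # e.g. "customary", "leasehold", "family"
    right_holders: List[str]     # persons or groups named on the document

@dataclass
class HouseholdProfile:
    """Socio-economic fields of the kind interviewees described as private."""
    parcel_id: str
    household_size: int
    marital_status: str
    income_streams: List[str]
    estimated_annual_income: float

def publishable(record: ParcelRecord) -> dict:
    """Return only the fields intended for open publication."""
    return asdict(record)

parcel = ParcelRecord("P-001", "POLYGON((0 0,40 0,40 25,0 0))", "customary", ["Family group A"])
profile = HouseholdProfile("P-001", 5, "married", ["cocoa", "maize"], 1200.0)

print(publishable(parcel))   # candidate for an open dataset
# `profile` would remain in community-based or otherwise restricted storage.
```

Such a split is only one design option; where the boundary between open and restricted fields lies would itself need to be negotiated with the communities concerned.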
Since around 2011, pilot projects to innovate land tenure documentation have been implemented in various countries in the global south in order to address the shortcomings of formal land registration. A longer-term question, underlying the present study, is how these innovations relate to existing institutional arrangements of land governance in the respective context of implementation. Guided by this more general question, we first discuss in this paper the characteristics of six of these approaches. Second, we discuss four closely related challenges identified through a thematic analysis of interview transcripts with representatives of the initiatives. Regarding characteristics, we find that a basic commonality of the initiatives is the general approach to tenure documentation through community-based digital data capture, in many cases via mobile applications, and the acknowledgment of the plurality of land tenure regimes, which are often not accounted for in statutory land tenure registration laws and/or administrative procedures and practices. Looking at the initiatives in more detail, a number of differences become apparent in terms of financing mechanisms and organizational characteristics, as well as application domains. We identify four challenges in the process of implementation. One is the need to strike a balance between the inclusion of diverse land tenures, on one hand, and necessary adjustments to existing institutional norms and regulations in land governance, on the other. A second pertains to questions of purpose and longer-term goals, as implementation requires a fair amount of flexibility in practice. The third challenge relates to the question of the legitimacy of both the collected digital data and the paper documents that are being issued. And finally, on the side of digital data production, a longer-term challenge pertains to finding a good balance between transparency and openness, on one hand, and protection of people's land data, on the other. Based on these challenges, we discuss directions for future implementation, evaluation and research.
176
Metabolic expenditure and feeding performance in hatchling yellow pond turtles (Mauremys mutica) from different incubation temperatures
Growth rate is a key life history trait, and much research effort has been devoted to explore potential factors influencing animal growth rate.In oviparous species, the temperature experienced by post-oviposition embryos can have a marked impact on many aspects of hatchling phenotypes, such as body size, locomotor performance, growth and survival.However, incubation temperature can exhibit inconsistent effects on post-hatching growth in different turtle species.For example, relatively lower incubation temperatures produce faster-growing hatchlings in Pelodiscus sinensis and Mauremys mutica, whereas higher or intermediate temperatures produce faster-growing hatchlings in others.The between-sex differences in physiology have frequently been used to explain such variation in growth rate of turtles from different incubation temperatures.Especially in species with temperature-dependent sex determination, more males produced from low temperatures may show a faster growth rate than those of an opposite sex from high temperatures.In fact, inter-individual or between-treatment differences in growth rate might ultimately stem from some physiological changes induced by different environmental conditions.However, the physiological mechanism underlying growth rate variation in turtles has rarely been explored.Energy acquisition and expenditure should be the determinants of somatic growth rate of organisms.Differences in related physiological processes potentially contribute to growth rate variation.For example, higher food intake, or greater food conversion and assimilation efficiency have been associated with higher growth rate in some species of fish.Additionally, reduced metabolic expenditure of some physiological processes also potentially enhances the growth rates of fish.In commercial aquaculture, to enhance the growth rate of farm-raised animals is an essential issue for improving the culture efficiency.Accordingly, the knowledge of physiological changes influencing growth rate of farm-raised animals is highly needed, which can provide valuable guidelines in aquaculture practice.The Asian yellow pond turtle, M. mutica, is a freshwater species that is widely distributed in East Asia, including east part of China, Japan and Vietnam.Due to human over-exploitation for food, pets and traditional medicine, and habitat loss, wild populations of this species have declined dramatically in past decades.Artificial culture has currently become an important measure to meet commercial demand and to strengthen the conservation of turtle species.Previous studies have indicated that hatchlings from different incubation temperatures have different growth rates in M. mutica.However, whether incubation-related growth rate variation partially results from the differences in metabolic and digestive physiology is unclear.In this study, we compared the metabolic rate, food intake and digestive efficiency of energy of hatchling M. 
mutica from eggs incubated at two different temperatures to investigate the physiological mechanisms underlying growth rate variation. Specifically, we aimed to test whether individuals from different incubation treatments differed in metabolic rate, food intake, and DEE and, if so, whether these observed variations could explain the difference in growth rate. Based on previous results in other species, we predicted that hatchlings from the cooler incubation temperature would have a greater growth rate, higher food intake and DEE, but a lower metabolic rate than those from the warmer incubation temperature. In mid-June 2014, a total of 32 fertilized eggs were collected from more than 18 clutches at a private hatchery in Haining. Eggs were transferred to our laboratory at Hangzhou Normal University, where they were numbered individually and incubated in 4 covered plastic containers filled with moist vermiculite. Containers were placed inside two artificial climate incubators set at 26 and 30 °C, respectively. The two temperatures fall within the range of incubation temperatures suitable for embryonic development in this species, and they produce hatchlings with different growth rates. Normally, turtle eggs are incubated under thermally fluctuating conditions in farms; however, the effect of temperature fluctuation on post-hatching growth rate of turtles remains unclear. Therefore, the current experiment involved only constant temperatures. Containers were weighed every five days and, if necessary, the vermiculite was rehydrated with distilled water to maintain a constant water potential of the substrate. Eggs from a single clutch were evenly split between the two temperature treatments to minimize clutch effects. Twenty-nine eggs hatched. Some newly hatched turtles still carried an external yolk sac, and they were individually housed in 33 × 23 × 20 cm containers with a layer of moist vermiculite until the yolk sac was absorbed within 1–2 days. After yolk sac absorption, each hatchling was weighed and then housed in a container with 3 cm of water, placed in one of two temperature-controlled rooms at 26 or 30 °C. Turtles from the same incubation temperature were randomly assigned to the two rearing temperatures. Pieces of tile were placed in the containers to provide shelter for the turtles, and fluorescent lights fixed at the top of the rooms provided a 12 h:12 h light/dark cycle. Hatchling turtles typically do not feed in the first few days after hatching, so turtles were fed an excess amount of fish meat daily from the 4th day. All turtles were re-weighed on the 30th and 60th days after hatching. The specific growth rate for each individual was calculated as SGR = (ln Wt − ln W0)/T × 100%, where W0 = initial wet body mass, Wt = final wet body mass, and T = duration of the growth interval in days. The metabolic rate of each hatchling was assessed at the two test temperatures on the 4th, 30th and 60th days after hatching. All trials were conducted in the aforementioned temperature-controlled rooms. The carbon dioxide production of each turtle was determined using an open-flow respirometry system between 14:00 and 18:00. Turtles were housed individually in a 220 mL acrylic metabolic chamber. Air flowed at a rate of 200 mL/min sequentially through a Decarbite filter, the metabolic chamber and a desiccant, and then entered the CO2 analyzer. CO2 production was recorded continuously for at least 30 min using Logger Pro 3.7 software while turtles were resting. The metabolic measures performed here did not exclude the effect of postprandial state
on CO2 production. Therefore, the metabolic rate measured in this study actually reflected metabolic expenditure for maintenance, food digestion and some other processes, and is therefore comparable to routine metabolic rate. From the 10th day, turtle feeding performance was also assessed in the aforementioned temperature-controlled rooms. During the measurement of feeding performance, pre-weighed fish meat was provided to each turtle daily between 08:00 and 09:00, and the residual food was collected 2 h later. Turtle faeces were collected with a spoon every 3 h from 09:00 to 21:00, and the water was filtered every morning to collect the faeces produced during the night. Trials lasted for 30 days to allow the accumulation of sufficient faeces for accurate calorimetry. Faeces and residual fish meat were dried to constant mass at 65 °C and weighed. The energy densities of these samples were determined in a Parr 6300 automatic adiabatic calorimeter. The digestive efficiency of energy was calculated as DEE = (I − F)/I × 100%, where I = total energy consumed and F = energy in faeces. All turtles appeared to be healthy, and no deaths occurred during the experiment or in the month after our measurements. Prior to statistical analyses, all data were tested for normality using Kolmogorov-Smirnov tests and for homogeneity of variances using Bartlett's test. Preliminary analyses showed no significant container effects on any hatchling traits examined in this study, so this factor was excluded from subsequent analyses. Linear regression, Student's t-test, one-way ANOVA, repeated-measures ANOVA and one-way analysis of covariance were used to analyze the corresponding data that met the assumptions for parametric analyses. There was no difference in egg size between incubation treatments. Incubation temperature did not affect the body size of hatchlings. Repeated-measures ANOVA revealed that the body mass of turtles under the different treatments increased significantly in the first two months after hatching, and both incubation and rearing temperature had significant impacts on the increase in body mass. Overall, turtles reared in the warmer environment grew faster than those in the cooler environment, and turtles from eggs incubated at 26 °C gained mass more quickly than those at 30 °C in the first month, although this effect vanished in the second month. Turtles from eggs incubated at 26 °C appeared to have a higher metabolic rate than those at 30 °C, and turtles tested at 30 °C had a higher metabolic rate than those tested at 26 °C. However, no significant effects of rearing temperature on turtle metabolic rate were observed in this study. Incubation temperature had a significant effect on turtle daily food intake and DEE. Overall, turtles from eggs incubated at 26 °C had a greater daily food intake but lower DEE than those from 30 °C. Rearing temperature and its interaction with incubation temperature had no significant effects on daily food intake and DEE. As reported for numerous other turtle species, incubation temperature had a significant effect on post-hatching growth in M.
mutica.Eggs incubated at a cooler temperature produced faster-growing hatchling turtles than those from a warmer temperature, which was consistent with a previous study on this species.In other species, faster-growing individuals can be produced from eggs incubated at high or intermediate temperatures.Between-treatment differences in turtle growth rate can be attributable to various physiological modifications.Here, we would focus on the contributions of differences in metabolic expenditure, feeding and digestive performance to growth rate variation of hatchling M. mutica in later discussion.Compared with turtles from eggs incubated at warmer temperatures, turtles from eggs incubated at cooler temperatures showed a higher metabolic rate, probably reflecting a greater amount of energy that was used for routine physiological processes.Metabolic expenditure of animals can potentially influence the rate of growth.Here, our study appeared to show a positive relationship between growth rate and metabolic rate, which was inconsistent with the prediction that higher growth rate could be a result of reduced maintenance metabolism.A negative relationship between growth rate and metabolic rate normally occurs under the scenario when the amount of allocatable resources toward growth, maintenance and other processes are similar between individuals.However, if inter-individual variation in food acquisition is large, that trend could be reversed.Under identical laboratory conditions, turtles from eggs incubated at 26 °C tended to acquire more food than those from 30 °C.A greater food intake might elevate metabolic costs due to requiring relatively more energy for food digestion.Meanwhile, it meant more energy acquisition, which allowed turtles to allocate more energy towards growth.Based on the above assumptions, it might be plausible to produce a positive relationship between growth rate and metabolic rate in this study.In fact, increased growth rate would elevate energy expenditure due to tissue production.A positive relationship between growth rate and metabolic rate was also found in other species.For example, high-latitude Atlantic silversides grow faster and have higher routine metabolic rates than low-latitude individuals.Our results showed that turtles from a cooler incubation environment ate and consumed more food than those from a warmer incubation environment overall.Actually, there was a difference under different rearing temperatures.In this study, a significantly greater food intake and higher growth rate for cooler-incubated turtles was found under the rearing temperature of 30 °C, but not under 26 °C.These findings were not contradictory, but were consistent with our prediction, implying that growth rate variation for turtles from different incubation treatments might result from the difference in food and energy intake.Interestingly, the digestive efficiency of turtles from warmer incubation environment was higher than that of turtles from cooler incubation environment overall and under the rearing temperature of 30 °C.It might be a compensatory response for slow-growing turtles from warmer incubation environment.After deducting faecal energy, the amount of energy acquired from food for turtles from cooler incubation environment was still greater than that for turtles from warmer incubation environment.Accordingly, greater growth rates for turtles from cooler incubation environment might be largely due to more food and energy intake, rather than lower metabolic expenditure for other 
physiological processes or higher food digestive efficiency.Such a situation also occurs in some species of fish and amphibians.Physiological mechanisms underlying growth rate variation may vary in different species.Greater growth rates associated with increased food digestive efficiency have been documented in other fish and amphibians.In summary, significant effects of incubation temperature on growth rate, metabolic expenditure, feeding and digestive performance were exhibited in hatchling M. mutica.Relatively more food intake was thought to be the major contributor to faster early-stage growth for turtles from cooler incubation environment than those from warmer incubation environment, although sometimes the positive correlation between growth rate and food intake could be modified.Lower metabolic expenditure and higher digestive efficiency are the contributing factors for higher growth rates in some species of fish and amphibians.Contrary to those findings, the between-treatment differences in metabolic expenditure and digestive efficiency did not appear to be correlated with growth rate variation in M. mutica.
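As a worked illustration of the two ratios defined in the Methods, the snippet below computes SGR and DEE for a single hypothetical hatchling. The logarithmic form of SGR is an assumption based on the standard definition of specific growth rate, and the input values are invented for illustration rather than taken from the measured data.

```python
import math

def specific_growth_rate(w0_g: float, wt_g: float, days: float) -> float:
    """Specific growth rate (% per day): SGR = (ln Wt - ln W0) / T x 100."""
    return (math.log(wt_g) - math.log(w0_g)) / days * 100.0

def digestive_efficiency(intake_kj: float, faecal_kj: float) -> float:
    """Digestive efficiency of energy (%): DEE = (I - F) / I x 100."""
    return (intake_kj - faecal_kj) / intake_kj * 100.0

# Illustrative (not measured) values for one hatchling over a 30-day trial:
print(round(specific_growth_rate(w0_g=6.0, wt_g=9.0, days=30), 2))      # ~1.35 %/day
print(round(digestive_efficiency(intake_kj=250.0, faecal_kj=30.0), 1))  # 88.0 %
```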
Growth rate variation of organisms might stem from a series of physiological changes induced by different environmental conditions. Hatchling turtles from different incubation temperatures frequently exhibited diverse growth rates. However, the physiological mechanisms underlying this variation remain largely unclear. Here, we investigated the metabolic rate and feeding performance of hatchling Asian yellow pond turtles (Mauremys mutica) from eggs incubated at two different temperatures (26 and 30 °C) to evaluate the role of differences in metabolic expenditure, food intake and digestive efficiency on growth rate variation. Hatchlings from the cooler temperature had a greater growth rate and metabolic rate, tended to eat more food but showed a lower digestive efficiency of energy (DEE) than those from the warmer temperature. Accordingly, the difference in energy acquisition acts as a potential source for growth rate variation of hatchling turtles from different incubation temperatures, and increased energy intake (rather than enhanced digestive efficiency and reduced metabolic expenditure of other physiological processes) might be associated with the higher growth rate of cooler-incubated hatchling M. mutica.
177
Urban transport and community severance: Linking research and policy to link people and places
The concept of community severance is used when transport infrastructure or motorised traffic acts as a physical or psychological barrier to the movement of pedestrians.The most extreme cases of community severance are caused by multi-lane roads with physical barriers preventing pedestrians from crossing.Even in the absence of these barriers, crossing may be difficult due to features of the road design such as median strips or to high motorised traffic volumes or speeds.Severance may also occur in narrow roads with low traffic volumes if there is a lack of basic pedestrian infrastructure such as pedestrian pavements.Despite the growing evidence of the potential impacts of this phenomenon on public health, there is a scarcity of tools to identify and measure the problem, limiting the scope of policy interventions.This may be because community severance has been approached by researchers in different disciplines, including public health, economics, geography, and urban studies.These researchers have used different concepts and methods to define and analyse the problem.The issue is also relevant to a range of stakeholders, including local communities, road users, and practitioners in different fields, including not only transport and health, but also urban planning and local economic and social policy.These stakeholders have different understandings of the problem and its solutions, and possibly even different opinions about whether this is a genuine problem that should be given priority.The objective of this paper is to establish an approach for cross-disciplinary research on community severance.It is hoped that this approach will facilitate the dialogue between the different disciplines with an interest in the problem and promote the exchange of data and results among researchers and practitioners currently working separately in the development of policy solutions.The paper is the outcome of three workshops organised by the Street Mobility and Network Accessibility research project, which is developing tools for identifying and measuring community severance.The objective of the first two workshops was to compare the multiple understandings of severance used in the different academic disciplines and establish a common language among the members of the project.The third workshop included external advisors and representatives from partner organisations and aimed at discussing opinions and experiences related to actual policies dealing with severance as well as to identify common ground and key points of distinction between the approaches of researchers and practitioners.The paper is structured as follows: Section 2 identifies community severance as a problem in the interface between transport, health, and other fields, and as an object of cross-disciplinary research.Section 3 reviews the issues commonly found in the production of cross-disciplinary research and in its integration into public policy.Section 4 develops a framework for cross-disciplinary research on community severance.Section 5 discusses the compatibility of this framework with public policy, taking into account issues raised by stakeholders on the problem.A final section summarizes the findings.The development of public policy to address community severance has been hampered by the fact that research on this topic has been produced by different disciplines working separately.The differences start with the terms used to define the issue, including variants such as community severance, barrier effect, social severance, and 
community effects.There is little consensus on the meaning of these terms, as evident in the review of Anciaes, who collected sixty different definitions of community severance and related concepts used in the literature since 1963.Handy has also noted that severance can be understood either as the converse of connectivity or community cohesion.Community severance has been regarded as an issue of transport policy because the transport system accounts for the majority of the barriers that separate urban neighbourhoods, including linear infrastructure such as roads and railways, and other large infrastructure such as airports, railway stations, and car parking areas.Severance is also a transport issue because it limits the mobility of users of non-motorised means of transport such as walking and cycling.In some countries, the appraisal of major transport projects considers severance impacts, although these impacts are not usually quantified or valued in monetary terms.A few studies have also called for greater awareness of severance at the level of transport network planning and road design.There is growing acknowledgement that community severance is also a public health issue.Studies in several countries have shown that high levels of motorised traffic and high traffic speeds discourage walking and limit social contacts between residents on opposite sides of the road.There is also growing evidence on the health impacts of insufficient physical activity, and poor accessibility to goods and services.However, few researchers have been able to disentangle the multiple cause-effect relationships between all these variables in order to understand the mechanisms through which busy roads ultimately lead to poor health outcomes.According to Mindell and Karlsen, community severance is "a plausible but unproven cause of poor health".While the studies above suggest that severance is in the interface between transport and health, the issue is also relevant to researchers and practitioners in other fields.The relationships between traffic barriers and social cohesion and spatial segregation are of interest to social scientists such as sociologists and anthropologists.Researchers in the fields of architecture, urban planning, built environment, and space syntax are also interested in community severance, given the evidence that the structure of the street network is linked with health and with social cohesion.The need to place severance on an equal footing with other impacts in the appraisal of large projects has also motivated economists to develop methods for estimating its costs in monetary units.Issues of local mobility have also been entering the agenda of grassroots movements such as neighbourhood and community improvement associations, which have become more empowered by developments on access to open data and mapping technology and online platforms for dissemination of their views.These developments have been facilitated by the growing interest in participatory action research, and frameworks for policy appraisal such as community impact assessment.Despite the potential for cross-disciplinary research, the only instances where community severance has been analysed from multiple perspectives were large-scale studies such as PhD theses.Of particular note is the work of Hine who studied traffic barriers using pedestrian video surveys, questionnaires, and in-depth interviews, in order to assess the impacts of motorised traffic on pedestrian attitudes and behaviour.The fact that this work remains the 
most comprehensive study of the topic two decades after it was conducted suggests that researchers have still not developed a cross-disciplinary approach to community severance.This may be explained in part by problems inherent to the production of cross-disciplinary research and to the application of this type of research in public policy.These aspects are explored in the following section.Cross-disciplinary research has the potential for analysing complex issues beyond the restrictions of individual disciplines, which are often affected by "generalising, decontextualising and reductionist tendencies".This type of research is particularly promising for the study of relationships between transport and health.In particular, the assessment of the links between mobility and well-being benefits from cross-disciplinary research because mobility depends not only on the availability of transport but also on environmental, social, and psychological aspects.Sallis et al. also suggested that promoting active travel requires input such as physical activity measurements and behaviour change models, methods of conceptualising environmental factors and strategies to advocate the adoption of policies.However, disciplines have contrasting, and sometimes conflicting, ways of producing knowledge, relying on a specific set of assumptions.They are also separated by the different concepts used to describe the same phenomena and by the different meanings assigned to the same word.The issue of unequal power among disciplines has also been documented.It is often the case that some pairs of disciplines are accustomed to working alongside one another, while other pairs overlap less.Some disciplines like geography are more inclined to cross-disciplinary work, as they themselves contain a diversity of distinct branches.Projects in broad fields such as urban research are especially vulnerable to these issues.Methods common in some disciplines, such as economic valuation and cost-benefit analysis, are viewed with suspicion in other disciplines.For example, the use of concepts such as the value of a statistical life and the value of essential environmental goods tend to elicit strong reactions.The study of urban mobility is a good example of a field where the inconsistency among different approaches has become a pressing issue.Coogan and Coogan noted that transport planners tend to study walking using measures such as the number or share of walking trips, in contrast to the physical activity measures used by epidemiologists.Cavoli et al. also reported that practitioners feel there is a lack of data sets linking transport and health.The literature on the links between research and practice also suggests that there is an entrenched view that policy-makers use research only for general orientation rather than for solving specific problems.This may be due to the ambiguity of the empirical results produced or to the lack of applicability of the solutions developed by researchers.In the case of transport, there is also a lack of belief in the usefulness of research for decision-making in areas such as the development of sophisticated transport models and the study of social aspects of transport.Bertolini et al. 
argue that successful and innovative concepts in transport planning can emerge only in the interaction between researchers, practitioners, and other stakeholders.Cross-disciplinary research may have an advantage in this respect, given its focus on specific “real world” problems and the involvement of different actors.This may facilitate network building with stakeholders and the generation of knowledge relevant for action.Researchers have tackled these issues by creating opportunities for debating how each field approaches specific problems and for initiating dialogue with stakeholders.For example, Straatemeier and Bertolini reported the result of workshops attended by researchers and practitioners to develop a framework to integrate land use and transport planning in the Netherlands, highlighting the need to adapt transport policies in order to contribute to broad economic, social, and environmental goals.James and McDonald and James et al. engaged with local practitioners in different fields and analysed their perspectives regarding the development of solutions to community severance, identifying common issues faced by those practitioners in their communities, such as the specific mobility needs of vulnerable groups and the limitations of solutions like footbridges and underpasses.The present paper builds on these efforts, by bringing researchers together to develop a common understanding of community severance and by assessing the compatibility of this understanding with the views of practitioners.The first two workshops were set up to identify the common ground that exists across the disciplines when approaching community severance, and to develop a framework for the research.The workshops were attended by the ten members of the Street Mobility and Network Accessibility project.Each participant presented three key issues from their discipline that related to severance.The presentations were followed by discussions.Table 1 list the three topics presented by each of the ten researchers, grouped by discipline.The table reveals some differences in the key issues presented by researchers from different disciplines.Some disciplines focused on spatial and physical aspects, others focused on social and psychological aspects, and others on the relationships between spatial and social aspects.However, there were also some important differences on the issues presented by researchers from the same discipline.For example, one of the participants from the field of economics focused exclusively on stated preference surveys as a way to derive people׳s willingness to pay for improvements in severance, while the other participant reflected on the broad objectives of economic science and policy, mentioning normative issues, such as equity, which are not captured by stated preference methods.It was agreed that the majority of the topics presented answered two broad questions: what is affected by community severance and what are the possible methods to identify and solve the problems presented by severance.In addition, these answers can be further integrated by considering that severance is a chain of effects and that the methods to analyse the issue have different degrees of complexity.Community severance can be defined as a continuum stemming from the presence of transport infrastructure or motorised traffic and including a chain of effects at the individual or community level.The challenge for any research project on severance is to track this chain of effects.Participants agreed that cross-disciplinary 
research brings added value because, as was obvious after the discussion in the workshops, different disciplines tend to focus on different parts of the chain and so the results of the analysis of one discipline can inform the definition of the research problem of the discipline focusing on the next effect.The participants agreed that the transmission of knowledge would be facilitated by distinguishing direct effects from indirect effects.However, it was also noted that the identification of causality may create problems in the dialogue between disciplines.For example, a fundamental aspect of space syntax research is not to attribute any causality between spatial and social changes, and epidemiology also emphasises the differences between association and causation.The direction of some of the effects presented can also be questioned.For example, severance impairs mobility but impaired mobility also means that the effects of severance are realised.Severance means fewer social contacts but social isolation also means that a road is not used for socialising.The types of relationships between variables are also important, as we cannot assume linearity in the impacts of severance across multiple aspects of individual and collective well-being.Participants noted that disciplines relying on quantitative methods may be more alert to this aspect, but there is a risk that these disciplines will try to model relationships that are difficult to interpret or translate into actual recommendations for public policy, as mentioned in Section 3.An important discussion point was the origin of this chain of effects.The term “busy roads” was mentioned in several presentations, but neither the words “road” nor “busy” were mutually agreed upon.Railways also create a barrier to mobility, which may be assessed using different methods from those used to study roads.Broader terms such as “areas where access is difficult” were proposed as alternatives.Just how busy the road must be in order to start the chain of effects, and how to define “busy” were also discussed.This is important because severance is related not only to the physical characteristics of the road but also to the perceptions of people living near the road.In addition, severance may impact pedestrians not only when they cross roads but also when they walk along them.This is relevant for public health and built environment research.A final challenge is determining the language that should be used to describe the issue.By using a common language, researchers can avoid the problem that arises when the use of specialised terms diverts attention from the more important issues.For example, the concept of "social determinants of health" is common in public health research, but participants from other disciplines questioned how "determinant" are the factors studied and whether they are really "social".A consensus was found by clarifying that the expression refers to a broad concept of the environment surrounding the individual, encompassing geographic, economic and social factors.More importantly, the purpose of the research is to investigate whether severance does play a role in people׳s health, regardless of the concepts used to describe that role.This example shows the importance in distinguishing between relevant conceptual mismatches between disciplines and “red herring” issues which are of little consequence for applied research projects.The discussion also produced a framework to identify the relationships between the methods used in different 
disciplines to analyse and solve community severance.Methods can be classified along six axes, with each axis representing the degree of complexity of that aspect.The first axis represents the complexity of the spatial scales used to analyse the issue.Transport planners supported the idea that it is necessary to study roads with high traffic levels to guarantee the applicability of the developed tools in new contexts.However, workshop participants from health and sociological disciplines noted that the effects of severance can be felt at a greater distance from the main road.The perspective from the experts on the built environment was that severance can similarly encompass wider areas, because severance causes breaks in the connectivity of the street network, thus can reduce accessibility across a wide area, as previous research has shown that the configuration of the street network shapes patterns of pedestrian movement as well as encounters across social or economic groups.The differences between approaches to the study of mobility and accessibility are clear when we quantify their spatial scales.For example, space syntax research usually analyses walking patterns at a range of scales from 400 m up to 1200 m in order to capture the potential for pedestrian traffic in an area, while also taking into account of a contextual area of up to 2 km radius from the studied site to the surrounding areas.In contrast, participants who had expertise in participatory methods reported that they tend to focus on areas of 1–4 km2 which define a neighbourhood or area that people walk around within 20–30 minutes.The need for the different disciplines to be aware of the implications of looking at issues related to urban land use using different scales has also been noted in previous literature.The second axis represents complexity in the time scale of the problem.Economics has sought to integrate community severance at the level of planning of individual projects, while geographic research has focused on spatial differences in mobility and accessibility across a region at one point in time.The space syntax perspective is broader, considering severance as a gradual process, emerging over time, and influenced both by changes in the street network structure and in the interactions between the layout of different neighbourhoods, resulting in changes in the mobility of people living in different parts of the city.The time scale in this case is measured in years.As anthropologists on the research team pointed out, aspects such as the length of residency of individuals are particularly relevant as such issues affect the perceptions of both the problem and the neighbourhood.In addition, a road may separate two communities that already define themselves separately for other reasons, such as differences in the characteristics of the population or housing.Interest in the time dimension of environmental inequalities has increased in the past decade, with several studies investigating whether hazardous facilities were constructed before or after the location of low-income populations or racial minorities in the affected areas.Similar approaches could be used to study the dynamic aspects of community severance over time.The third axis represents the complexity in the object of analysis, that is, who is affected by the problem.The existence of particular groups of concern came up as a common theme in several presentations.Those groups were based on age, gender, socio-economic position, occupation, car ownership, ethnic 
group, and disability.However, it was pointed out that the effects on one specific group can be assessed only if compared with the effects on other groups.Groups are also not homogeneous.For example, there are distinctions among people classified as "elderly".From an anthropological perspective, mobility needs are diverse and experiences are unique, and so the distinction between groups and individuals must also be considered.Within the same community, views differ regarding what is a barrier, and what is its relevance as a problem.The need to consider individual views is supported by previous studies such as those of Hine and Grieco, who studied the differences between studying "clusters" and “scatters” of social exclusion, and Davis and Jones, who noted that children are major users of their local environments but tend to be excluded from research projects and public consultations because surveys are usually aimed at adults.Participants in the workshops noted that complexity increases when the object of analysis is the individual, rather than the whole community.The in-depth analysis of problems affecting individuals may collide with the approach of disciplines whose methods rely on the aggregation of preferences over groups.For example the stated preference methods used by economists to find the monetary value of severance yield an average “willingness to pay” to solve the problem.Outliers tend to be removed from the analysis.However, the focus of other disciplines may be precisely the analysis of these outliers.The fourth axis measures complexity in the type of information that is collected and analysed.Traffic levels and speeds on major roads are available from routine data sources.The measurement of the value that the affected individuals attach to the problem is more complex, because this value depends not only on traffic levels and speeds, but also on a range of demographic and socio-economic characteristics.The design of tools to estimate this value also needs to be careful in order to choose the most suitable way to present the various attributes of the problem to individuals.It was also agreed by workshop participants that the absence of data is as relevant as its existence.It is as important to consider trips not made and destinations not visited as it is to collect data on existing travel patterns.To capture these factors, it is necessary to learn about people׳s perceptions about mobility within and beyond their community and how they delineate the borders of their community.Qualitative data provide in-depth information useful to understand the complex set of factors shaping those perceptions.Some workshop participants raised doubts regarding the degree to which this type of information may be compatible with quantitative data; however, it was generally agreed that qualitative data can help define the research questions and can be used to validate and explain the conclusions obtained from quantitative analyses.The fifth axis represents the complexity of the policy solutions for the problem.Policy-makers can use a range of traffic control measures such as speed limits or traffic restrictions.Solutions such as the redesign of the existing infrastructure are potentially more complex as they may have a wide impact on all modes of transport using that infrastructure and on functions of the infrastructure other than as links for movement for example, as spaces for social interaction.The planning of new infrastructure may be even more complex because it has effects on other domains, 
such as land use.Policy-makers can also design interventions in these other domains to achieve goals at the city level.Social policies can also be used as a method to mitigate the impact of severance on local communities.The sixth axis represents the level of community participation in the development of solutions to the problem.Academic research has traditionally used a ‘top down’ approach, involving the identification of communities living in the case study areas and the collection of data from a sample of individuals.This data is then analysed and disseminated without the involvement of feedback from the community.However, the use of alternative approaches is growing.Participatory action research holds particular promise for providing additional insight as it adopts a bottom-up approach to researching community issues, learning from the participants in the case study areas and allowing emergent ideas and data to inform subsequent research.However, there are different degrees to which communities can actively participate in the analysis of the problem and formulation of solutions, due to cultural and conceptual challenges and issues regarding access and production of data.The third and final workshop in the series brought together the researchers in the Street Mobility project and 20 other participants from local authorities, consultancy companies, national professional bodies, and non-governmental organisations.These participants represented a cross section of the skills and expertise related to community severance at the policy and practitioner level and enabled discussions on the relationships between transport and health based on experience gained in a variety of urban areas in the United Kingdom.The objective was to assess the main differences between the views of practitioners from different fields, such as transport planning and provision, urban planning and design, public health, and economic and social policy.The members of the project first presented the objectives and methods of the research.The participants were then split into groups to discuss the two broad issues arising from the first two workshops: the causes and effects of severance and the methods to analyse and solve the problem.The following sections identify some common topics raised in those groups, signalling some differences between the approaches of researchers and practitioners.The main points to retain from the discussion are the need to anticipate the multiplicity of users and applications for the tools produced by the research and the administrative, political, and financial problems that limit the application of the solutions proposed.The participants in the workshop highlighted the influence of the local context in the development and application of tools to address severance.For example, tools developed using case studies in urban and suburban locations do not address issues that are unique to rural areas, where the main accessibility issue is the lack of public transport.Ideally, areas where there is severance should also be compared with areas where severance is non-existent or has been overcome.In other words, the development of tools that can be routinely applied in different contexts requires testing in multiple circumstances.The discussion groups also noted the diversity of practitioners with a possible interest in the tools developed by researchers.Practitioners in different fields tend to be interested in different aspects of a research project and in different types of output.These outputs must 
then be tailored to different user groups’ specific requirements, for example, for transport appraisal procedures, health impact assessments, and street design plans.As such, the participants in the workshop discussions emphasised that it is important that the common dialogue established by researchers also makes sense to and satisfies the stakeholders on the problem.In addition, researchers should also facilitate dialogue between different stakeholders.However, the fact that research outputs need to cater for a diversity of practitioners does not imply the creation of multiple tools.Participants in the workshop mentioned the existence of tools already in use to identify the problems that road traffic can pose to pedestrians, including, for example, pedestrian counts, street audits, route assessments, walking accessibility mapping, and frameworks for community-led initiatives.Many of these tools have been developed with specific settings in mind, and have neither been widely disseminated nor developed in order to be generalisable to other settings.It was found during the discussion that tools used in some departments of local authorities are unknown even to staff in other departments working on the same issues in the same communities.It was agreed that any research project on severance should start by reviewing the existing tools and evidence and consider the possibility for the integration of different types of intervention currently made using different tools in different places and sectors, as well as the integration of methods proposed in previous research but never implemented.This approach could be an alternative to the development of a whole set of new tools.However, such tools must be reviewed and assessed from a cross-disciplinary perspective to determine the broader applicability of their findings to the issues raised in the preceding discussion.For a research project on severance, the considerations above mean that the selection of the relevant cause-effect relationships depicted in Fig. 1 and of particular methods from those represented in the different axes of Fig. 
2 must be specific to the context being analysed and to the purpose of the tools produced.There is a risk that the level of detail required to cover all the relevant contexts and to design a coherent suite of tools may lead to loss of focus for the project.The comments of several participants also suggest that the complexity in the methods used in cross-disciplinary research on severance may in some cases limit the applicability of the outputs produced.For example, the scale of the problem may be incompatible with the scale of policy interventions.Practitioners have only a limited set of instruments available at each level.Professionals working at the level of town planning can decide on the location of roads and railways.Those working at the level of the neighbourhood or street can solve problems of particular areas, for example by street design solutions such as the removal of subways and guard railings.One of the conclusions from the workshop was that research projects should look into the compatibility between the policy solutions proposed and the instruments available to the institution implementing those solutions.It was further noted, however, that even when this compatibility is assured, the implementation may still clash with the policy priorities and agendas of governments at a higher administrative level.The success of policy interventions also depends on administrative boundaries, because these boundaries determine where people access services such as health care.The collection of data is complicated when the data is held by institutions in another administrative area.The need to fit the content of instruments such as surveys to administrative boundaries can also pose problems.It was noted by some participants that in public consultations, some respondents gave negative feedback to proposals where administrative boundaries were used to delimit communities.There was concern that the use of a cross-disciplinary approach may produce tools that are too expensive or require expertise that is not available for local authorities.For example, video surveys and stated preference surveys tend to be expensive, while space syntax methods require the use of special skills and software.The collection of information at different levels of detail and the consideration of different spatial and time scales may be unwieldy.The conclusion the workshop participants reached in light of these expressed concerns was that a cross-disciplinary research project on severance should have as one of its key goals the translation of a broad range of concerns into outputs that are manageable to end users.Research projects on community severance can also develop methods for simplifying the tools in the cases where practitioners do not have access to the technical or financial means to apply them.For example, the time and cost required to quantify motorised traffic levels and pedestrian flows or to conduct street audits can be reduced if the research produces a typology of the different links of the road and pedestrian networks and advises practitioners on methods for collecting data in only a small sample of links of each type.Stated preference surveys could also be used to derive monetary values for severance in different contexts, which can be used by local authorities that do not have the means to implement their own surveys.This paper set out to understand community severance as an issue that is relevant to transport and health and as an object of cross-disciplinary research.A framework for applied research 
on community severance was proposed, built on the reflections obtained in workshops attended by a cross-disciplinary team of researchers and a group of stakeholders that included local authorities, non-governmental organisations, and consultants.The input from different disciplines was used to break down community severance into a continuum of effects, which can be analysed by methods with different levels of complexity.We believe this framework can help researchers to locate their own perspective in relation to that of other disciplines, and that it thus helps clarify relevant issues when applying the research to actual problems affecting different stakeholders.Importantly, the discussion with the project stakeholders demonstrated the need to question the breadth of hypotheses, outputs, and audiences for the research, and administrative, political, and financial problems in the application of the solutions proposed.It is hoped that the reflections produced in this paper can inform research on other urban issues where transport and health are component parts of a broader, multifaceted phenomenon that is of interest to a range of domains of research and practice.The paper shows that the efforts for the coordination of different disciplines involved in a common project must begin with the mapping of objects and methods of analysis of each discipline.The advantages of cross-disciplinary research will only be achieved if the participants understand and communicate around issues related to their specific approaches.The research also needs to be framed in such a way that they can be understood and used within policy contexts.
Urban transport infrastructure and motorised road traffic contribute to the physical or psychological separation of neighbourhoods, with possible effects on the health and wellbeing of local residents. This issue, known as "community severance", has been approached by researchers from a range of disciplines, which have different ways of constructing scientific knowledge. The objective of this paper is to build bridges between these different approaches and provide a basis for the integration of the issue into public policy. A framework for cross-disciplinary research on community severance is developed, built on the results of two workshops attended by researchers from different disciplines. This framework takes into consideration the chain of direct and indirect effects of transport infrastructure and motorised traffic on local communities and the complexity in the methods used for analysing and formulating solutions to the problem. The framework is then compared with the views of practitioners, based on discussions held in a third and final workshop. It was concluded that to better understand community severance, researchers should frame their work in relation to that of other disciplines and develop tools that reflect the diversity of local contexts and stakeholders, balancing complexity with applicability.
178
Data on dielectric strength heterogeneity associated with printing orientation in additively manufactured polymer materials
Tabular data previously summarized in Ref. are presented for dielectric strength testing of 3D printed polymers.Dielectric strength testing was performed according to the ASTM D149 standard.Dielectric strength test samples were fabricated using four common 3D printing techniques: Stereolithography, Fused Deposition Modeling, Selective Laser Sintering, and Polymer Jetting.Data for SLA samples printed using the Watershed 11122 and ProtoGen 18420 polymers, with a layer resolution of 0.051 mm, are provided in Tables 1 and 2, respectively.Data for FDM samples printed using the ABS-M30 and ABS-M30i polymers, with a layer resolution of 0.127 mm, are presented in Tables 3 and 4, respectively.Data for SLS samples printed using the DuraForm HST and Nylon EX polymers are presented in Tables 5 and 6, respectively.Data for PolyJet samples printed using the VeroBlue and VeroAmber polymers, with a layer resolution of 0.030 mm, are presented in Tables 7–9 and Tables 10–12.Test sample coupons, fabricated as assemblies in a disposable shell, were printed in either two or three different orientations, as depicted in Fig. 1.For vertically-aligned samples, the surface of each sample face was aligned perpendicularly to the build platform and either perpendicular or parallel to the sweep direction of the print head, as shown in Fig. 1; these two configurations received distinct orientation designations.In cases where vertically-aligned samples are fabricated using printing methods in which layer deposition is performed without a print head or nozzle, such as SLS and SLA, it was not expected that there would be a difference in sample properties between the two vertical configurations; as such, these cases received a single vertical designation.For horizontally-aligned samples, the surface of each sample face was oriented parallel to the build volume, as depicted in Fig. 1.Cases involving horizontally-aligned samples received a separate horizontal designation.Upon completion of the printing process, any support materials associated with the printing process that were in the regions between test coupons or otherwise attached to the assembly were removed.As part of the standard manufacturer printing protocol, all SLA-printed parts were UV post-cured for one hour.Post-cure procedures are potentially available for other printing methods; however, as they are not standard protocol, they were not performed for this study.In preparation for high voltage testing, each of the sample assemblies was separated into five sample coupons and a disposable protective shell, as shown in Fig. 2.After separation, each sample coupon was cleaned via gentle abrasion while immersed in Liquinox.Following six rinses with deionized water, sample coupons were placed between sheets of lint-free tissue and allowed to air dry at ambient temperature.Dry coupons were stored between clean sheets of lint-free tissue in a desiccated environment for transportation to the dielectric strength testing laboratory.Just prior to testing, the sample coupons were pre-conditioned for 40 hours at 23 °C and 50% relative humidity.All coupons were tested per ASTM D149-09, Paragraph 12.2.1, Method A, using 2.54 cm diameter stainless steel electrodes in a transformer oil bath.A voltage ramp rate of 500 VAC, RMS/second was used.Ambient room conditions during testing were approximately 23 °C and 50% relative humidity.
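As a point of reference for readers working with the tables, dielectric strength under ASTM D149 is the measured breakdown voltage divided by the specimen thickness. The short Python sketch below illustrates that calculation and a per-orientation summary of the kind reported in the tables; the breakdown voltages, thicknesses and function names are hypothetical placeholders rather than values taken from this data article.

```python
# Illustrative only: dielectric strength per coupon, plus mean/stdev for one
# print orientation. The numeric values below are hypothetical, not measured data.
import statistics

def dielectric_strength_kv_per_mm(breakdown_voltage_kv, thickness_mm):
    """Dielectric strength of a single coupon in kV/mm."""
    return breakdown_voltage_kv / thickness_mm

# Hypothetical breakdown voltages (kV) and thicknesses (mm) for the five coupons
# of one sample assembly printed in a single orientation.
breakdown_kv = [41.2, 39.8, 43.5, 40.1, 42.0]
thickness_mm = [3.18, 3.20, 3.17, 3.19, 3.21]

strengths = [dielectric_strength_kv_per_mm(v, t)
             for v, t in zip(breakdown_kv, thickness_mm)]
print(f"mean = {statistics.mean(strengths):.2f} kV/mm, "
      f"stdev = {statistics.stdev(strengths):.2f} kV/mm")
```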
The following data describe the dielectric performance of additively manufactured polymer materials printed in various orientations for four common additive manufacturing techniques. Data are presented for selected commercial 3D printing materials fabricated using four common 3D printing techniques: Stereolithography (SLA), Fused Deposition Modeling (FDM), Selective Laser Sintering (SLS), and Polymer Jetting (PolyJet). Dielectric strengths are compiled for the listed materials, based on the ASTM D149 standard. This article provides data related to “Dielectric Strength Heterogeneity Associated with Printing Orientation in Additively Manufactured Polymer Materials” [1].
179
Goal representation in the infant brain
From an early age, human infants interpret others' movements in terms of the goal towards which the movement is directed.Understanding the mechanisms that support action interpretation, and the development of the underlying brain systems, is important in the study of basic mechanisms of social interaction."Previous studies of goal understanding in infants commonly measure the infant's looking responses.In one such paradigm, infants are repeatedly shown an agent acting upon one of two objects.After infants have seen this repeated action, the objects switch location, and the infant is presented with the agent acting again on the previously chosen object or acting on the previously un-chosen object.Infants from as early as three months of age respond with longer looking towards the event in which the agent acts on the previously un-chosen object, suggesting that they had encoded the prior events as movements directed towards a specific object.In a different paradigm, infants repeatedly observe an agent acting towards an object in an efficient manner as dictated by the environment.In subsequent events, the obstacle is removed and a direct reach becomes the most efficient means to achieving the same goal.In accord with this expectation, infants from at least six months of age respond with increased looking when the agent continues to perform a detour action when it is no longer necessary.This suggests that infants interpreted the previous action as directed towards the goal object and expected the agent to continue to pursue the same goal by the most efficient means.Recently, there has been much debate over what cues and mechanisms support early goal representation."Some studies have suggested that it is the infants' own experience with an action that provides them with a concept of an action as directed towards a goal.Support for this position comes from studies showing that infants more readily attribute goals to actions that are part of their own motor repertoire than actions which are novel.However, there is also substantial evidence that young infants can represent the goals of actions that are beyond their own motor experience.For example, infants represent the goals of actions performed by animated shapes, mechanical claws or rods and hands performing actions in unusual ways, none of which they could have first person experience on which to draw.These studies suggest that early goal representation may be more dependent on the availability of certain cues than prior experience with that action.Which cues might be important for representing an action as goal-directed, and whether some cues have supremacy over others, is unclear.For example, it is often assumed that repetition of action on the same object is required for goal attribution but other studies have demonstrated goal attribution in the absence of repeated action and repeated action on a solitary object does not appear to result in goal attribution.An additional or alternative basis for goal attribution may be the presence of an action that is selective; an action that is directed towards one object in the presence of another object seems to generate an interpretation that the action is goal-directed.As mentioned earlier, numerous studies have confirmed that infants appear to exploit cues to action efficiency for goal representation, and some have proposed that efficiency may take precedence over cues to selectivity because infants apparently fail to represent an inefficient action directed towards one of two objects as a 
goal-directed action.However, it is nevertheless proposed that use of these different cues results in a unitary concept of goal, even in infancy."Finally, in the absence of alternative measures of goal representation, infants' failure to demonstrate the typical pattern of looking has become the litmus test for goal attribution, and such a reliance on one measure may be failing to provide an accurate picture of the underlying mechanisms.One way to elucidate these issues is to ask whether the same brain regions are recruited during the processing of events containing different cues, that ostensibly lead to the representation of a goal.Research in adults using fMRI has highlighted the inferior frontoparietal cortex as being involved in goal representation.Functional near infrared spectroscopy can record activity of the equivalent brain regions in typically-developing infants whilst they observe goal-directed actions, providing the opportunity to interrogate the mechanisms underlying early goal attribution without requiring overt responses from the infant.The current study is a first step towards this aim.Here, we investigate which cortical regions of the infant brain are involved in the processing of a simple goal-directed event.To this end, we used a repetition suppression design, similar to that used with adults, and which has previously identified regions of the cortex involved in goal representation.RS in response to the repeated presentation of a particular aspect of a stimulus, and a release from suppression when that aspect of the stimulus is changed, indicates that a particular brain region is sensitive to that property of the stimulus.Thus, in adults, the anterior intraparietal sulcus exhibits RS when the immediate goal of an action is repeated, but a release from suppression when the goal changes, strongly suggesting that the aIPS is involved in representing the goal of an action.Whilst a traditional blocked RS design has previously been employed in infants using fNIRS, in the current study we used a paired RS design in which activation in response to individual test events is measured following a directly preceding establishing event.Based on the fact that neural suppression in adults is clearly seen on a single repeated trial, and the need to obtain sufficient data from two conditions containing a lengthy dynamic event, a paradigm which measures activation on single test events that directly follow an establishing event provided the best design to localise goal representation in the infant brain.Infants were presented with animations in which a red triangle detours around a barrier to collect one of two shapes.In this way, the event contained several cues that are thought to enable infants to interpret an event as goal-directed.Similar animations have previously been shown to be interpreted by 9-month-olds as goal-directed events, and to elicit activation in the anterior parietal cortex in adults.Based on the existing studies with adults, we hypothesized that infants would show greater activation in the left parietal cortex when viewing actions directed towards novel goals compared to actions directed towards repeated goals.This result would establish the validity and feasibility of FNIRS for exploring the mechanisms underlying the development of goal understanding in infants.The final sample consisted of 18 9-month-old infants.An additional 22 infants were excluded due to fussiness), positioning of the fNIRS headgear), or due to excessive movement artefacts and/or inattention, 
which resulted in more than 30% of the contributed data being excluded.Animations were created with Maxon Cinema 4D and presented on a 102 by 58 cm plasma screen with MATLAB.Each animation showed a red cone detouring around a barrier towards either a blue cube or a green cylinder.The red cone then ‘collected’ its target and returned to its starting position.Each animation lasted 7.5 s and animations were separated by a 0.5 second gap, giving a total trial duration of 24 s. Each trial was interleaved with an 8 second baseline in which infants saw changing images of houses, outdoor scenes, animals and faces.The animations were presented to infants in a modified paired repetition suppression design in which each trial was composed of a set of three animations.The first two animations showed the red cone moving towards one target object.The third animation showed either the red cone moving towards the same target or the red cone moving towards the other target.For example, if the red cone approached the green cylinder in the first two events of the triplet, it would either continue to approach the green cylinder in the third event or would approach the blue cube in the third event.We included two repetitions of the goal-establishing event to maximize the chance that infants identified the goal of the red cone by the time they were presented with the third event of the triplet.This design also meant that if infants did not attend during one of these goal-establishing events, but viewed the other one and the test trial, the data from the test trial could still be used.To isolate activation that was the result of a goal change rather than a path change, we counterbalanced the path that the red cone took towards its target such that on some trials the path to the new goal would remain the same as that previously taken, or it would change.Thus, we had 16 different trials which were categorized in to 4 types: New Goal – New Trajectory, Repeated Goal – New Trajectory, New Goal-Repeated Trajectory and Repeated Goal-Repeated Trajectory.Trials were presented in a pseudo-randomized order with a stipulation that, within every four trials, each trial type would be presented.As previous fNIRS studies have excluded infants with less than 3 trials per condition, our pseudo-randomization additionally stipulated that, within the first 6 trials, infants would be presented with equal numbers of repeated goal and new goal trials.This maximized our chances of obtaining sufficient data for analysis given the length of our trials.To measure Hb concentration changes in the infant brain, we employed FNIRS), using two continuous wavelengths of source light at 770 and 850 nm.Infants wore a custom-built headgear, consisting of two source-detector arrays, containing a total of 38 channels, with source-detector separations at 2.5 cm.On the basis of an understanding of light transport and given that the cortex is approximately 0.75 cm from the skin surface in this age group) the 2.5 cm channel separations used in the current study were predicted to penetrate up to a depth of approximately 1.25 cm from the skin surface, potentially allowing measurement of both the gyri and parts of the sulci near the surface of the cortex.Before the infants began the study, head measurements were taken to align the headgear with the 10–20 coordinates.With the use of age-appropriate infant structural MRIs, anatomical scalp landmarks, and the 10–20 system, we can approximate the location of underlying cortical regions for the infants, and draw 
comparisons of general regional activation with findings in adults.Measurements from the final sample of infants showed that the distance from the glabella to the ear ranged from 11 to 12.5 cm, and the distance between ears as measured over the top of the head ranged from 11.5 to 13.5 cm.The distance from the midpoint of the headband over the forehead to the channels above the ears is fixed and aligned approximately with T3 and T4 of the 10–20 system on an average 9-month-old infant head.This allowed the more dorsal channels and 26, 27, 29, 30) to be positioned primarily over the supramarginal gyrus, the angular gyrus and the inferior parietal sulcus of the parietal lobe.Once the fNIRS headgear was fitted, infants were seated on their parents lap, approximately 140 cm from the screen.Infants watched the trial sequence whilst fNIRS data and video footage of the infant was recorded.Trials continued until the infant became inattentive or fussy.To maintain interest, 8 different sounds were played during the presentation of stimuli, in random order.During baseline, a sound was played at the beginning of each new image displayed and during trials a sound was played at the beginning of each video and at the point where the animated shape made contact with its target.Infants received between 4 and 8 Repeated Goal trials and between 4 and 8 New Goal trials.Infant looking towards the screen was analyzed off-line for attentiveness.Time points where the infant looked away from the screen were entered in to the analysis.Entire trials were excluded if the infant did not attend for 50% of at least one of the goal-establishing events and/or 50% of the test event.Trial exclusion resulted in included infants contributing between 3 and 8 Old Goal trials and between 4 and 7 New Goal trials.As with previous studies, a minimum of 3 valid trials in each of the two conditions was required to include an infant in the final sample.Data analysis was conducted using a combination of custom Matlab scripts and the SPM-NIRS toolbox.We took several steps to remove artefacts from the data.First, channels were excluded from the data if the coefficient of variation for all the data collected on that channel was over 10%.Any remaining channels that were continuously noisy were excluded based on visual inspection.Then, time periods affected by movement artefact were identified by subtracting the mean signal from each channel taking the absolute value of the signal in each channel averaging the signal across time points.This gave a measure of the global signal strength over all channels, allowing us to identify movement artefact as spikes in the global signal.An artefact threshold was set for each infant by visual inspection of all the good data channels and the global signal; the threshold was set to exclude time points contaminated with clear movement artefact.Movement-induced artefact removal based on visual inspection has also been used in several other infant fNIRS studies.The threshold for each infant was constant over the whole time course of the study and was set blind to the experimental condition.Data points in the time periods marked as ‘over threshold’ were then set to zero, effectively removing them from the analysis."In addition, the videos of the infants' behaviour during data recording were blindly coded for looking-time, to ensure infants were equally attending to all types of presented trials.There were no differences in the time infants spent looking at the stimuli between New Goal and Repeated Goal trials, 
nor between New Path and Repeated Path trials.We calculated the proportion of data per infant that was removed on the basis of inattention and/or movement artefact and excluded any infants for whom more than 30% of their data was excluded.We also excluded from analysis any channels that did not yield clean data in at least 70% of infants.This resulted in the exclusion of 8 channels.The preprocessed data was then converted from raw signals to oxygenated-Hb, deoxygenated-Hb, and total-Hb concentrations using the modified Beer–Lamberts law as implemented in the SPM-NIRS toolbox.For each infant, a design matrix was built which modelled six cognitive conditions.First, we created six regressors, each with the same length as the recorded data and a value of 0 at every timepoint.The first regressor modelled the two goal-establishing events, and values in this vector were set to 1 at each timepoint when a goal-establish event was on the screen.This gives a series of ‘boxcars’ each with 15 second duration.The second regressor modelled the New Goal events, with values of 1 whenever a New Goal video was on the screen, giving boxcars with 7.5 s duration.In the same way, the third regressor modelled the Repeated Goal events giving boxcars with 7.5 s duration.The fourth regressor modelled baseline periods between trials, giving boxcars with an 8 second duration.The fifth regressor modelled ‘invalid trials’, which were defined as trials where the baby did not attend for 50% of at least one of the goal-establishing events and/or 50% of the test event.In such trials, the relevant boxcar was removed from the ‘New Goal’ or ‘Repeated Goal’ regressor and placed instead in the ‘invalid trial’ regressor.The sixth regressor marked any time when the infant was not attending to the video with a value of 1, giving a series of boxcars of variable length.These six regressors were then convolved with the standard haemodynamic response function and its temporal and spatial derivatives to make the design matrix.This is a standard procedure which turns the model of what events were presented to the baby into a model of what haemodynamic response should be expected in the brain, taking into account delays in BOLD response.Thus, the final design matrix had 18 columns modelling the goal-establish; New Goal; Repeated Goal; baseline; invalid trials and non-attending time over the complete data recording session for each infant.For each of the 3 Hb measures, this design matrix was fit to the data using the general linear model as implemented in the SPM-NIRS toolbox.Beta parameters were obtained for each infant for each of the six regressors for the HRF and the temporal and spatial derivatives.The beta parameters were combined by calculating the length of the diagonal of a cuboid where the length of each side is given by one of the three beta parameters.This allows us to consider effects arising with a typical timecourse but also those with a slightly advanced or delayed timecourse or an atypical duration in a single model.The combined betas were used to calculated a contrast for the New Goal > Repeated Goal for each infant.This contrast was then submitted to statistical tests and plotted in the figures.As in previous infant NIRS studies, our analysis is based on changes in oxyHb.Whilst studies with adults typically find that increases in oxyHb are accompanied by a decrease in deoxyHb, studies with infants typically do not find any statistically significant deoxyHb changes.To ensure statistical reliability, we considered that 
activation at a single channel would be meaningful only if there is also significant activation at a spatially contiguous channel.Monte-Carlo simulations using this criterion on our dataset revealed that a per-channel threshold of p < 0.0292 gives a whole-array threshold of p < 0.05 for finding two adjacent channels activated by chance.Therefore, we only considered the effects present at p < 0.0292 in two adjacent channels to be significant results.We conducted t-tests on the HRF contrast for the effect of Goal at each channel.Several channels exhibited significant RS for the identity of the object goal approached by the red cone, but only two of these channels were contiguously located and met the p < 0.0292 channel threshold.Channels 8 and 9, found over the left anterior part of the parietal cortex, were significantly more active for New Goal than for Repeated Goal trials.A one-sampled t-test on data averaged over channels 8 and 9 revealed a significantly greater activation in response to viewing New Goal than Repeated Goal trials .We also conducted an equivalent analysis of RS for movement path but these analyses did not yield any significant findings.In the current study, we sought to identify regions of the infant brain that are involved in the representation of action goals.Employing a modified paired RS paradigm, we demonstrate that observation of an agent repeatedly performing an action on the same goal object results in suppression of the BOLD response in a region of the infant brain that is approximately located over the left anterior region of the parietal cortex, whilst observation of the agent approaching a new goal results in a release from suppression in this region."As we controlled for the trajectory of the action, we can isolate the agent's goal as the factor that modulated brain activity in this region.Thus, our results suggest that the left anterior parietal region is involved in goal representation in the infant brain.Notwithstanding the limitations of cortical localization estimation based on the methods used here, our results bear a strong resemblance to those previously reported in adults.Specifically, the response pattern and location of activation found in our study is similar to that found in previous work on the representation of immediate goals in adults.When adults observe a human hand reaching for a previously chosen goal object versus a novel goal object, the left anterior intraparietal sulcus exhibits greater activation for the novel goal event than the repeated goal event."Thus, both 9-month-old infants and adults appear to recruit similar brain regions when representing others' goals.Whilst our study with infants used animated shapes rather than human hands, data from adults show that the left aIPS is engaged both when the agent is a human hand and when it is an animated shape, suggesting that both may be processed in a similar manner.One further limitation of our interpretation is that we were unable to analyze data in channel 27, the right hemisphere equivalent of channel 8 which showed the strongest effects on the left.Whilst adult data suggests that the immediate goal of an action activates the left aIPS, a role for the right aIPS in goal understanding has also been identified.Specifically, when adults observe repeated action outcomes, the right aIPS exhibits RS.Thus, whilst the action goals that infants are presented with in our study mirror those that have resulted specifically in left aIPS activation in adults, an absence of data at channel 27 
means that we cannot argue conclusively that our effect is left lateralized.Finally, whilst we used the two-contiguous-channel criterion for accepting activation as statistically important, a number of isolated channels did reach statistical significance.Currently, there is no established consensus concerning the most appropriate way to analyse infant fNIRS data or to correct for multiple comparisons, and different authors have used different practices.Thus, it is possible that a different, or less conservative criteria, would have highlighted effects in single channels as statistically important."Furthermore, it is possible that the small and inevitable variation in channel placement depending on infants' head circumferences introduced noise in to the data which may have weakened activation in some channels that might have otherwise have formed a contiguous pair.Fig. 3a indicates a line of 3 channels for which there is more activity for New Goal than Repeated Goal trials.However, whilst channels 10 and 18 exhibit significant effects at the single channel level, the effect at channel 14 does not reach statistical significance and so, under our criterion, the effects at channels 10 and 18 are not interpreted as statistically important.Similarly, channel 38 located over right hemisphere exhibits greater activation for New than Repeated Goal trials."These temporal channels are likely to lie over the left and right posterior part of the superior temporal sulcus which has been implicated in various aspects of social processing, including the processing of information relevant to others' goals and is considered part of the Action Observation Network.However, previous RS studies comparing activation to novel and repeated goals have not identified the STS as being involved in encoding goals in adults.Nevertheless, one theoretical position holds that the focus of activation narrows over development as areas of the cortex become increasingly specialized for processing particular types of stimuli and recent data support this position in the domain of face processing.Thus, whilst our criterion for interpreting channel activation has highlighted the left anterior parietal region as important for goal processing in infancy as it is in adults, further studies are needed to establish whether other cortical regions might also be involved early in development.This study demonstrates RS for immediate goals in the left anterior parietal cortex in 9-month-old infants, a finding which mirrors that reported when adults view similar stimuli.This suggests that the left anterior parietal cortex is already specialized for goal representation in the first year of life and provides additional support for the interpretation of behavioural studies."Moreover, the fact that a region of the infant cortex appears specialized for goal representation provides an invaluable tool by which to investigate the cues and mechanisms by which infants are able to make sense of others' actions.Whilst numerous studies have demonstrated that human infants structure observed movements in terms of goals from an early age, there is much debate surrounding the mechanisms involved in early goal understanding.One view is that early goal understanding is based on first person experience performing goal-directed actions."This is based on a growing number of studies demonstrating a relationship between infants' action competence and their ability to interpret actions as goal-directed.This dependence on self-experience has been interpreted as 
evidence that the mechanism that underlies early goal understanding is one that maps observed movements on to a pre-existing motor representation of that action in the observer.However, many behavioural studies show that infants can represent the goals of non-human agents whose movements would not be possible to map on to any existing motor representation.The current results provide further evidence that infants can encode the goals of non-executable actions.Furthermore, they suggest that goal representation in the anterior parietal region is not dependent on matching observed actions to a corresponding motor representation.An alternative view is that infants are sensitive to various cues which indicate that an action is goal-directed.These proposed cues include repeated action on the same object, movement directed towards one object over another, and movement which is efficiently related to an outcome.However, there has been debate over the importance of these different cues for action interpretation, whether these cues lead to the same goal representation and it has often been assumed that some of these cues are important despite an absence of evidence.The finding that the anterior parietal region is involved in goal representation in infants provides an opportunity to elucidate the nature of infant goal representation.Future studies can test if this anterior parietal region, which our data demonstrates is responsive to the identity of the goal in 9-month-old infants, is equally responsive to different combinations of cues to the goal."For example, it is commonly held that repeated action on an object is a sufficient cue for goal attribution, yet repeated action on an isolated object does not seem to lead to an enduring goal representation once that object is paired with a new object, because according to these authors, the infant cannot continue to assume that that object is the agent's goal when they have no information about the agent's disposition towards this novel object.This view has implications for the role of goal attribution in action prediction."If goal attributions do not endure in the face of novel potential targets, then it implies that goal attributions are not a good foundation on which to predict others' behaviour since it is very likely that there will frequently be new potential targets for which the infant has no information concerning the agent's disposition.Currently we do not know whether infants are generating a goal attribution when they observe an agent acting on a solitary target, but fNIRS may provide a means of elucidating these issues."For example, would the anterior parietal cortex exhibit RS if the red cone repeatedly approached a solitary blue cube, suggesting that the blue cube is indeed represented as the agent's goal?",Or, would there be an absence of RS in this case, suggesting that such a scenario presents insufficient cues for goal attribution?,Finally, our demonstration that infants recruit similar cortical regions during the observation of an agent pursuing a goal adds credence to the interpretation of behavioural studies."Whilst many behavioural studies have concluded that infants do interpret others' actions as goal-directed, other authors have argued that such looking-time data only provide evidence for infants abilities to form statistical associations during the course of an experiment.Our data suggest that the infant brain not only shows a parallel pattern of repetition suppression to the adult brain, but also shows this pattern in 
equivalent brain regions.This suggests that 9-month-old infants are beginning to use adult-like parietal brain networks to encode the events they see in terms of action goals.
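The beta-combination step described above (the length of the diagonal of a cuboid whose sides are the betas for the canonical HRF and its temporal and spatial derivatives) reduces to a Euclidean norm across the three basis functions. The Python sketch below is a minimal illustration of that step and of the per-infant New Goal > Repeated Goal contrast; the array shapes, random example values and variable names are assumptions, and the authors' own analysis was carried out in Matlab with the SPM-NIRS toolbox rather than in Python.

```python
# Minimal sketch of combining GLM betas across the three HRF basis functions and
# forming the per-infant New Goal > Repeated Goal contrast. Example values are random.
import numpy as np

def combine_betas(beta_hrf, beta_temporal, beta_spatial):
    """Euclidean norm of the three betas, i.e. the diagonal of the cuboid whose
    sides are the canonical-HRF, temporal-derivative and spatial-derivative betas."""
    return np.sqrt(beta_hrf**2 + beta_temporal**2 + beta_spatial**2)

rng = np.random.default_rng(0)
n_channels = 30  # usable channels after exclusions (assumed)

# Hypothetical per-channel betas for one infant, per condition and basis function.
new_goal = {b: rng.normal(size=n_channels) for b in ("hrf", "temporal", "spatial")}
rep_goal = {b: rng.normal(size=n_channels) for b in ("hrf", "temporal", "spatial")}

combined_new = combine_betas(new_goal["hrf"], new_goal["temporal"], new_goal["spatial"])
combined_rep = combine_betas(rep_goal["hrf"], rep_goal["temporal"], rep_goal["spatial"])

contrast = combined_new - combined_rep  # submitted to the group-level channel t-tests
```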
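The two-contiguous-channel criterion (a per-channel threshold of p < 0.0292 yielding a whole-array false-positive rate of p < 0.05 for two adjacent channels) can be approximated by a Monte-Carlo simulation of the kind sketched below. Because the true neighbour structure of the 38-channel array is not given in the text, a hypothetical chain adjacency over 30 usable channels is assumed here, so the resulting threshold will not exactly reproduce the published value.

```python
# Sketch of the Monte-Carlo logic behind the contiguous-channel threshold.
# Adjacency is a hypothetical chain; the real array layout would define more neighbours.
import numpy as np

rng = np.random.default_rng(1)
n_channels = 30
adjacency = [(i, i + 1) for i in range(n_channels - 1)]  # assumed neighbour pairs

def familywise_rate(alpha, n_sims=20000):
    """Chance of at least one adjacent pair both 'significant' when every channel's
    null p-value is drawn independently from a uniform distribution on [0, 1]."""
    p = rng.uniform(size=(n_sims, n_channels))
    hit = np.zeros(n_sims, dtype=bool)
    for a, b in adjacency:
        hit |= (p[:, a] < alpha) & (p[:, b] < alpha)
    return hit.mean()

# Scan per-channel thresholds for one keeping the whole-array rate near 0.05.
for alpha in np.arange(0.010, 0.055, 0.005):
    print(f"per-channel alpha = {alpha:.3f} -> family-wise rate ~ {familywise_rate(alpha):.3f}")
```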
It is well established that, from an early age, human infants interpret the movements of others as actions directed towards goals. However, the cognitive and neural mechanisms which underlie this ability are hotly debated. The current study was designed to identify brain regions involved in the representation of others' goals early in development. Studies with adults have demonstrated that the anterior intraparietal sulcus (aIPS) exhibits repetition suppression for repeated goals and a release from suppression for new goals, implicating this specific region in goal representation in adults. In the current study, we used a modified paired repetition suppression design with 9-month-old infants to identify which cortical regions are suppressed when the infant observes a repeated goal versus a new goal. We find a strikingly similar response pattern and location of activity as had been reported in adults; the only brain region displaying significant repetition suppression for repeated goals and a release from suppression for new goals was the left anterior parietal region. Not only does our data suggest that the left anterior parietal region is specialized for representing the goals of others' actions from early in life, this demonstration presents an opportunity to use this method and design to elucidate the debate over the mechanisms and cues which contribute to early action understanding. © 2013.
180
Effect of genetic polymorphism of brain-derived neurotrophic factor and serotonin transporter on smoking phenotypes: A pilot study of Japanese participants
Recent genome-wide association studies have identified associations between common allelic variants and smoking phenotypes .Notably, the brain-derived neurotrophic factor Val66Met polymorphism has been found to be strongly related to smoking initiation, as reported by the Tobacco and Genetics Consortium .High levels of BDNF mRNA are expressed in dopaminergic neurons projecting from the ventral tegmental area to the nucleus accumbens .Regarding function, BDNF specifically potentiates dopamine release in the nucleus accumbens through activation of the dopaminergic nerve terminals .Several studies have investigated the associations of BDNF Val66Met with smoking behavior and nicotine dependence.Lang et al reported a significantly higher Met allele proportion among smokers than among never smokers in a Caucasian sample, although the association appeared to be male-sex specific .However, Montag et al were unable to replicate this positive association in large samples of Caucasians.Similar studies of Asian populations have yielded conflicting results.Following a study of Chinese male volunteers, Zhang et al suggested that BDNF Val66Met influences the age at which smoking is initiated but not smoking behaviors or nicotine dependence.In a study of Chinese smokers with schizophrenia, those carrying the Met allele had significantly higher scores of nicotine dependence relative to those with the Val/Val genotype .However, a study of Thai men concluded that BDNF Val66Met was unlikely to influence susceptibility to smoking .Accordingly, another Asian population study is needed to examine and confirm the association between BDNF Val66Met and smoking phenotypes.BDNF significantly interacts with serotonin in the context of brain functions .Specifically, BDNF promotes the survival and differentiation of serotonergic neurons, and in turn, serotonergic transmission exerts powerful control over BDNF expression.Serotonergic transmission itself is influenced by the serotonin transporter gene-linked polymorphic region, which is characterized by two common variants: a long and a short allele .Lerman et al first hypothesized that the S allele may exert a protective effect against smoking and evaluated the association of smoking behavior with the 5-HTTLPR genotype but failed to find a significant association.The S allele is associated with high neuroticism, and thus, individuals with an S allele find it more difficult to quit smoking .Gerra et al also demonstrated an effect of S allele on neuroticism in heavy-smoking adolescents.The observed associations between the S allele and neuroticism suggest that this allele may enhance neuroticism and thus mediate nicotine addiction.To the best of our knowledge, previous reports have not discussed the effect of the gene-gene interaction on smoking phenotypes.In the present study, we, therefore, investigated the hypothesis that BDNF Val66Met is associated with smoking cessation and initiation, nicotine dependence and the age of smoking initiation in Japanese participants.Given the lack of clarity surrounding this subject, we also examined the possibility that the 5-HTTLPR S allele could modify the effects of BDNF Val66Met on smoking phenotypes.Japanese healthy participants were recruited among the students, staff, and their siblings at Hokuriku University.The institutional review committee of Hokuriku University approved this study.All participants were informed of the aims and methods of the study and provided both verbal and written consent.Participants were categorized as 
former smokers if they had quit cigarette smoking at least 1 year prior to the interview.Never smokers were individuals who had never smoked a cigarette in their lifetime.Current cigarette smokers responded to the survey regarding the number of cigarettes smoked per day, time of the first cigarette of the day and the age at which smoking was initiated.Nicotine dependence was estimated using the Heaviness of Smoking Index, which was calculated by summing the two scores of the time to smoke the first cigarette of the day after awakening and the number of cigarettes smoked per day.The HSI score is based on a 0–6 scale, and individuals with high nicotine dependence received HSI scores ≥4.In Japan, individuals younger than 20 years cannot purchase cigarettes or tobacco products, and the distributors are legally forbidden to sell tobacco products to them.Therefore, the current smokers were subdivided according to whether they had begun smoking before or after 20 years of age.The DNA in buccal cells was extracted using a kit.BDNF Val66Met polymorphisms were amplified with sense and antisense primers using the MightyAmp DNA Polymerase ver.2; the PCR conditions were as follows: initiation at 98 °C for 2 min, 30 cycles of 10 sec of denaturation at 98 °C, 15 sec of annealing at 60 °C, and 20 sec of extension at 68 °C.The PCR products were then purified by the GenElute PCR Clean-Up Kit and were then genotyped by direct DNA sequencing with an inner antisense primer.To determine 5HTTLPR polymorphisms, PCR products were generated using the Tks Gflex DNA Polymerase with sense and antisense primers to yield 419 or 376 bp amplicons, which were resolved on 3.0% agarose gels; the PCR conditions were as follows: initiation at 94 °C for 1 min, 30 cycles of 10 sec of denaturation at 98 °C, 15 sec of annealing at 62 °C, and 15 sec of extension at 68 °C.The distributions of alleles and genotypes in current smokers, former smokers, never smokers and the whole study sample were tested for Hardy–Weinberg equilibrium (HWE).Genotypes associated with BDNF Val66Met or 5HTTLPR polymorphisms were classified according to Val allele homozygosity and Met allele presence or S allele homozygosity and L allele presence, respectively.We compared the BDNF Val66Met genotypic proportions among current, former and never smokers, individuals with high and low nicotine dependence and those who started smoking before or after 20 years of age in order to assess the effect of BDNF Val66Met on smoking cessation, nicotine dependence and the age at smoking initiation using the Mann–Whitney U test.In order to assess smoking initiation and cessation, smoking behavior was compared in ever smokers versus never smokers, and in current smokers versus former smokers, respectively, according to the procedure described by Munafò et al. Categorical variables were represented by a dominant model based on the BDNF Met allele, according to the report of Zhang et al.
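For illustration, the HSI scoring just described can be expressed as a small function. The text states only that the two sub-scores are summed on a 0–6 scale and that scores ≥4 indicate high dependence; the category cut-points used below are the commonly applied Heatherton bins and are therefore an assumption rather than a detail taken from this article.

```python
# Sketch of the Heaviness of Smoking Index (HSI). Cut-points are the commonly used
# Heatherton categories (assumed here); the article gives only the 0-6 scale and the >=4 rule.

def ttfc_score(minutes_to_first_cigarette):
    """Sub-score for time to first cigarette after waking (0-3)."""
    if minutes_to_first_cigarette <= 5:
        return 3
    if minutes_to_first_cigarette <= 30:
        return 2
    if minutes_to_first_cigarette <= 60:
        return 1
    return 0

def cpd_score(cigarettes_per_day):
    """Sub-score for cigarettes smoked per day (0-3)."""
    if cigarettes_per_day <= 10:
        return 0
    if cigarettes_per_day <= 20:
        return 1
    if cigarettes_per_day <= 30:
        return 2
    return 3

def heaviness_of_smoking_index(minutes_to_first_cigarette, cigarettes_per_day):
    hsi = ttfc_score(minutes_to_first_cigarette) + cpd_score(cigarettes_per_day)
    return hsi, ("high" if hsi >= 4 else "low")

print(heaviness_of_smoking_index(15, 25))  # -> (4, 'high')
```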
Ishikawa et al suggested that individuals with the S/S genotype are less prone to smoke compared to those with the L allele in a Japanese population.To verify the existence of an interaction between the BDNF Val66Met and the 5HTTLPR polymorphisms with respect to smoking behavior, we first assessed the 5-HTTLPR genotype proportions in individuals carrying the BDNF Met allele; these proportions were also subjected to the chi-squared tests to determine HWE.Fisher's exact test was performed to examine the associations of genotype with smoking status and nicotine dependence.A P value <0.05 and a 95% confidence interval that did not include a value of 1.0 were considered to indicate statistical significance.Associations were further expressed as odds ratios with a 95% CI.Statistical analysis was performed using Microsoft Excel and Easy R.A total of 148 Japanese participants were enrolled in our study, including 88 current smokers, 21 former smokers, and 39 never smokers.In the whole study sample, the average ages of current and former smokers were 31.73 years and 49.24 years, respectively.In the male subgroup, the average ages of current and former smokers were 31.53 years and 49.26 years, respectively.Both in the whole study sample and in the male subgroup, current and former smokers significantly differed with respect to age.No information was available as to the age of never smokers.Because female participants accounted for 14.2% of the study sample and only 8.3% of the ever smokers, the analyses were conducted, in parallel, in the whole study sample and in the male subgroup to address the skewed sex ratio of the study population.Regarding the HSI, a measure of the degree of nicotine dependence, the mean scores and standard deviations for current smokers in the whole study sample and the male subgroup were 2.22 ± 1.67 and 2.26 ± 1.66, respectively.Regarding the age at smoking initiation, the means and standard deviations of current smokers in the whole study sample and male subgroup were 19.23 ± 1.89 years and 19.16 ± 1.91 years, respectively.Table 2 shows the genotype proportions of the BDNF Val66Met polymorphism among the current, former and never smokers.The genotype proportions of the 5-HTTLPR polymorphism among the 66 current, 14 former and 27 never smokers carrying the BDNF Met polymorphism are shown in Table 3.The genotypic distributions of the BDNF and 5HTTLPR genes did not significantly deviate from HWE among current smokers, former smokers, never smokers, and the whole study sample.In addition, the allelic proportion of BDNF Val66Met in the whole study sample was similar to the proportions observed in previous Japanese population studies.A previous study reported respective 5-HTTLPR S and L allele proportions of approximately 80% and 20% in a Japanese population, similar to the results of our study.No association was observed between the BDNF Val66Met proportion and smoking cessation or smoking initiation in either the whole study sample or the male subgroup.Among BDNF Met carriers, the proportion of former smokers harboring the S/S polymorphism in 5-HTTLPR did not differ from that of current smokers in both analyses.Moreover, no statistically significant differences were observed in the proportion of 5-HTTLPR allelic variants between ever smokers and never smokers.Therefore, BDNF Val66Met may not affect smoking cessation and initiation, and 5-HTTLPR polymorphism was not found to influence those associations.Next, the HSI scores of current smokers were compared across Val66Met genotypes, both in the 
whole study sample and the male subgroup.Within the group of total smokers, the proportion of participants with a low HSI score was significantly higher among the BDNF Met carriers than among carriers of the Val/Val genotype.This difference was confirmed, albeit marginally significant, in the male subgroup of smokers.In the latter subgroup, the numbers of BDNF Met carriers with low and high HSI score were 53 and 10, respectively, whereas the numbers of Val/Val carriers with low and high HSI score were 13 and 9, respectively.Among smokers with the Val/Val genotype, the proportion of individuals with a high HSI score was higher than among Met allele carriers.However, among current smokers carrying the Met allele, no association was found between the 5-HTTLPR polymorphism and the HSI score.Consequently, these results indicate that BDNF Val66Met has an effect on nicotine dependence, but this effect is not interactive with 5-HTTLPR polymorphism.No significant direct association was found between BDNF Val66Met and the age of smoking initiation, either in the whole study sample or in the male subgroup.However, among the participants who began smoking before age 20, the proportion of BDNF Met carriers was higher than that of Val/Val carriers in both the whole study sample and the male subgroup.Moreover, as shown in Table 11, a significant correlation was observed between the 5-HTTLPR polymorphism and the age of smoking initiation in current smokers carrying the Met allele.Specifically, in the latter subgroup, a significantly higher proportion of early smokers was found among participants homozygous for the 5-HTTLPR S allele than among carriers of the L allele.Odds ratios for the association, in the BDNF Met carriers, between the S/S genotype and early smoking initiation, were as follows: whole study sample, OR = 3.29; 95% CI, 1.13–9.57; P value, 0.041; male subgroup, OR = 3.18; 95% CI, 1.08–9.37; P value, 0.041.Therefore, a combined action of the BDNF Val/Met and the 5-HTTLPR polymorphism may contribute to individual differences in the age of smoking initiation.Our study demonstrated an association between BDNF Val66Met and the HSI, a measure of the degree of nicotine dependence, in our participants.The proportion of smokers who carried the Met allele was significantly higher than that of smokers carrying the Val/Val genotype; the former group had lower HSI scores.However, no significant association between 5-HTTLPR polymorphism and HSI was observed among Met allele carriers.Accordingly, the BDNF Met allele may be associated with reduced nicotine dependence independently of 5-HTTLPR polymorphism.A previous study of European–American men also demonstrated a significant association between BDNF Val66Met and HSI scores.These results could be explained by the negative effect of this polymorphism on mature BDNF secretion.Egan et al. 
reported that BDNF Val66Met affects the intracellular trafficking and packaging of pro-BDNF, thus decreasing the secretion of mature BDNF.The binding of mature BDNF to the TrkB receptor activates multiple intercellular cascades to regulate neuronal development, plasticity and long-term potentiation .Nicotine reinforces smoking addiction by activating the dopaminergic nervous system that projects from the ventral tegmental area to the nucleus accumbens via nicotinic acetylcholine receptors, a process that is affected by the induction of long-term potentiation in these areas .In the current study, we speculated that Met allele carriers exhibit reduced mature BDNF secretion and decreased long-term potentiation relative to smokers with the Val/Val genotype.Consequently, smokers who carry the Met allele may have low nicotine dependence as a result of the effect of nicotine on the dopaminergic system activation.We did not detect any statistically significant association between BDNF Val66Met and smoking cessation and initiation.Furthermore, smoking cessation among BDNF Met carriers did not appear to be related to the 5-HTTLPR polymorphism.Notably, a significant association was found between the 5-HTTLPR polymorphism and the age of smoking initiation in carriers of the BDNF Met allele.Early-age smoking initiation was more frequent in smokers carrying the 5-HTTLPR S/S genotype than in those with the S/L or L/L genotype.Previously, Zhang et al noted that smokers who carried the Met allele began smoking significantly earlier than those carrying the Val/Val genotype, suggesting that BDNF Val66Met may influence the age at which smoking is initiated.A strong association of BDNF Val66Met with smoking initiation, but not cessation, was reported by Furberg et al. In contrast, Breetvelt et al suggested that BDNF Val66Met is associated with smoking cessation, but not smoking initiation.These authors demonstrated that genetic variation in BDNF could alter the reward mechanism by modulating the dopamine reward circuits after an initial nicotine exposure and could thus contribute to altered drug-related memories.In addition, functional magnetic resonance imaging of human participants demonstrated that the Met allele was associated with poorer episodic memory and abnormal hippocampal activation .Regarding 5-HTTLPR, Nilsson et al reported that in adolescents, the likelihood of a positive smoking status and a higher rate of nicotine dependence was based on the relationship between the 5-HTTLPR genotype and family environment.For adolescents, a variety of psychosocial factors contribute to smoking .Therefore, genetically vulnerable individuals, particularly adolescents, may begin to smoke tobacco impulsively in response to triggers, such as negative moods and environmental factors.Notably, acute immobilization stress has been associated with marked reductions in hippocampal BDNF and raphe nuclei 5HTT mRNA in rats .Hiio et al found that adolescent BDNF Met allele carriers along with the 5-HTTLPR S/S genotype had the lowest conscientiousness scores, suggesting a significant interacting effect of the 5-HTTLPR and BDNF Val66Met polymorphisms on conscientiousness.Therefore, we speculate that the addictive behaviors and personality traits associated with a high risk of smoking initiation may be consistent with a synergistic effect of the BDNF Met variant, which reduces BDNF secretion among individuals harboring the S/S genotype of 5-HTTLPR and is presumed to be impaired with regard to brain 5-HT transmission.Therefore, 
our finding suggests that the Met allele of BDNF Val66Met may promote the initiation of smoking behavior at an early age through interactions with the S/S genotype of 5-HTTLPR.This study has several limitations.One major concern pertains to our small sample size."Although Fisher's exact test was adapted to detect any type of significant association in this study, according to a statistical calculator , the required sample size would be around 2,000 participants if Pearson's chi-square test was performed with a statistical power of 80% and a small effect size, which would be expected for the gene's effect on smoking.Additionally, participants were subdivided according to several variables, such as smoking status and gender.None of our significant results survived the Bonferroni correction for multiple hypothesis testing.Furthermore, interpretations of positive results should account for the fact that the P values were not corrected for multiple testing.Therefore, the statistical power does not appear to be adequate, thus raising the possibility of false positivity.Another concern is that the small sample size may have caused selection and confounding biases.A significant difference was observed between the ages of current and former smokers in our analysis of the association between BDNF polymorphism and smoking behavior.Additionally, given the very low proportion of female smokers, the analyses were conducted, in parallel, in the whole study sample and in the male subgroup, to overcome the possible problem represented by the skewed sex ratio.No major differences were observed between the whole study sample and the male subgroup.However, the association between BDNF Val66Met and changes in the brain has been suggested to be sex-specific .We must also note that personality traits comprise a crucial trigger by motivating people to initiate and continue smoking.Munafò et al demonstrated that participants harboring the S/S and L/L genotypes of 5-HTTLPR differed significantly with respect to anxiety-related traits."We did not standardize the participants' personality traits, chronic illnesses, particularly neuropsychiatric diseases, or medications, which may have led to bias.Nicotine dependence was assessed on the basis of the HSI, whereas no other measures were employed, such as the Fagerstrom test, pack-year smoking, and smoking years.The smoking status of participants was exclusively based on self-reporting.Furthermore, only two polymorphic variants were assessed.Therefore, our results are of low generalizability, and our findings should be validated in studies with larger samples.Our findings indicated reduced nicotine dependence among current smokers who carried the BDNF Met allele relative to those homozygous for the Val allele.Moreover, among the Met allele carriers, current smokers homozygous for the 5-HTTLPR S allele displayed significantly higher rates of smoking initiation before age 20, as compared to those harboring the 5-HTTLPR L allele.BDNF Val66Met had no direct effect on smoking cessation or initiation, and no interactive effect of the BDNF Val66Met and the 5-HTTLPR polymorphisms on smoking cessation was detected.The present study thereby provides preliminary data suggesting potential associations of BDNF polymorphism with nicotine dependence and the age at smoking initiation due to an interacting 5-HTTLPR polymorphism in a small number of Japanese participants.Masanori Ohmoto: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; 
Wrote the paper.Tatsuo Takahashi: Conceived and designed the experiments; Wrote the paper.This work was supported by a general grant to the Faculty of Pharmaceutical Sciences, Hokuriku University.The funding source was not involved in the collection, analysis, interpretation of the data, preparation of the manuscript or the decision to submit the manuscript for publication.The authors declare no conflict of interest.No additional information is available for this paper.
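To make the statistical approach described above concrete, the short sketch below shows how a Fisher's exact test on a genotype-by-phenotype contingency table, with a Bonferroni-adjusted significance threshold, is typically run. This is an illustration only: the table counts are hypothetical placeholders rather than data from this study, and the number of comparisons used for the Bonferroni adjustment is likewise an assumption.

```python
# Hypothetical illustration of the Fisher's exact test / Bonferroni approach;
# the table below is placeholder data, not counts from this study.
from scipy.stats import fisher_exact

# Rows: BDNF Val/Val vs Met carriers; columns: heavy vs light smokers (HSI).
table = [[20, 15],
         [10, 25]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")

n_tests = 8                      # assumed number of comparisons performed
bonferroni_alpha = 0.05 / n_tests
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}, "
      f"significant after Bonferroni: {p_value < bonferroni_alpha}")
```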
Purpose: This study investigated whether a gene polymorphism causing a Val66Met substitution (rs6265) in brain-derived neurotrophic factor (BDNF) is associated with smoking initiation, smoking cessation, nicotine dependence and age of smoking initiation, in Japanese participants. Additionally, this study examined whether the S allele of the serotonin transporter gene-linked polymorphic region (5-HTTLPR) is associated with the BDNF Val66Met polymorphism on smoking phenotypes. Patients and methods: The genotypic proportion of the polymorphism responsible for BDNF Val66Met was determined in 148 participants including 88 current smokers, 21 former smokers, and 39 never smokers, and Fisher's exact test was used to investigate the relationship between this polymorphism and smoking cessation and initiation as well as the association between the genotypes of current smokers with a heavy smoking index (HSI) and the age of smoking initiation. In addition to the BDNF Val66Met polymorphism, the 5-HTTLPR polymorphism has also been evaluated in a specific subset of participants. Results: We found statistically significant correlations between the BDNF Val66Met polymorphism and the HSI, both in the whole study sample (P = 0.017) and in the male subgroup (P = 0.049). Moreover, the 5-HTTLPR polymorphism was associated with the age of smoking initiation in current smokers carrying the BDNF Met allele, in both the whole study sample (P = 0.041) and the male subgroup (P = 0.041). On the other hand, no association was observed between the BDNF Val66Met polymorphism, either alone or in combination with the 5-HTTLPR polymorphism, and the age of smoking cessation. Finally, no independent effects of the BDNF Val66Met genotype on the age of smoking initiation were detected. Conclusion: This pilot study provides preliminary findings regarding the influence of BDNF Val66Met on smoking phenotypes and the interacting effect of 5-HTTLPR on the association between BDNF Val66Met and smoking phenotypes in Japanese participants.
Evaluation of electrohysterogram measured from different gestational weeks for recognizing preterm delivery: a preliminary study using random forest
Preterm delivery, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality, and has long-term adverse consequences for fetal health .Accurate diagnosis of preterm delivery is one of the most significant problems faced by obstetricians.The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer (TOCO), ultrasound and fetal fibronectin testing.However, these are subjective, or suffer from high measurement variability and inaccurate diagnosis or prediction of preterm delivery .TOCO is often influenced by sensor position, the tightness of binding by the examiner and maternal movement.Short cervical length measured by transvaginal ultrasonography has been associated with an increased risk of preterm delivery, but its accuracy for predicting preterm delivery is not satisfactory because of a high false-positive rate.The fetal fibronectin test, which is performed like a pap smear, has not been shown to accurately predict preterm delivery in women who are at low risk or who have no obvious symptoms.In contrast, the electrohysterogram (EHG), which reflects the summed electrical activity of the uterine cells, can be recorded noninvasively from the abdominal surface.The parameters of EHG signals might therefore provide an effective tool for the diagnosis and prediction of preterm delivery .EHG is a reliable method for evaluating uterine activity and has also been used to analyze uterine activity in non-pregnant women .Many features have been extracted from EHG signals to recognize preterm delivery, and they can be grouped into three classes: linear features, nonlinear features and features related to EHG propagation .Time, frequency and time-frequency features, such as root mean square, median frequency, peak frequency and energy distribution, have been used to characterize EHG signals and distinguish between term and preterm delivery .In addition, nonlinear features, including correlation dimension , sample entropy , Lyapunov exponent , and multivariate multiscale fuzzy entropy have been applied to describe the nonlinear interactions between billions of myometrium cells .In recent years, the propagation velocity and direction of the EHG signals and intrinsic mode functions from empirical mode decomposition have been proposed as potential discriminators of the progress of pregnancy.However, the selection of EHG features was somewhat arbitrary in these published studies.A comprehensive analysis of feature differences between preterm and term delivery would therefore be clinically and physiologically useful.Machine-learning algorithms have been investigated to recognize preterm delivery using EHG signals .Conventional classifiers include K-nearest neighbors (K-NN), linear and quadratic discriminant analysis (LDA and QDA), support vector machine (SVM) , artificial neural network (ANN) classifiers , decision tree (DT) , penalized logistic regression, rule-based classifiers and stacked sparse autoencoder (SSAE) .However, the K value of the K-NN classifier is set subjectively, LDA and QDA are affected by the sample distribution, ANN and SSAE have high computational complexity , and SVM requires additional steps to reduce the dimension of the extracted features .Published studies have reported that ANN, SSAE, Adaboost, DT, SVM, logistic and polynomial classifiers achieve better performance in recognizing preterm delivery.However, these classifiers were evaluated on different databases using different EHG features, and it has therefore not been possible to identify the most
significant features for predicting preterm delivery.Random forest is an ensemble learning method for classification.DT is the base learner in RF, which has been employed in data mining and feature selection .Classification accuracy could be improved by growing an ensemble of trees and letting them vote for the most popular class.Ren et al. reported that RF with simpler structure achieved the same accuracy as ANN for classifying preterm delivery with EHG signals .Idowu et al. also indicated that RF performed the best and robust learning ability.The main aim of this study was to evaluate the EHG signals recorded at different gestational weeks for recognizing preterm and term delivery using RF.Meanwhile, the importance of EHG features for predicting preterm delivery would be ranked.The overview flowchart of the proposed method in this study is shown in Fig. 1.Briefly, EHG signals from 300 pregnant women were divided into two groups depending on whether the EHG signals were recorded before or after 26th week of gestation.Thirty-one linear and nonlinear features were then derived from each EHG signal and fed to a RF classifier for automatic identification of term and preterm delivery, and the importance of features was ranked by DTs.The performance of RF for recognizing preterm delivery was then evaluated and compared between EHG signals recorded at different gestational weeks.The details of each step are presented in Fig. 1.EHG signals in our study were from the open access term-preterm EHG database developed in 2008 at the Faculty of Computer and Information Science, University of Ljubljana, Ljubljana .Three channels of EHG signals were recorded from the abdominal surface using four electrodes, as shown in Fig. 2.Three-channel EHG signals were measured between the topmost electrodes, the leftmost electrodes, the lower electrodes separately.The recording time was 30 min with the sampling frequency of 20 Hz.A previously published research has confirmed that the EHG from channel 3 was regarded as the most distinguishable signals for classifying preterm and term delivery .Therefore, as a pilot study, channel 3 was selected for further analysis.EHG signals from 300 pregnant women were divided into two groups depending on when the signals were recorded: i) preterm and term delivery with EHG recorded before the 26th week of gestation, and ii) preterm and term delivery with EHG recorded during or after the 26th week of gestation.Table1 shows the number of EHG recordings in PE and TE group and in PL and TL group.Fig. 
3 shows four typical examples of EHG segments from each group.The main frequency component of EHG signal ranges between 0 and 5 Hz .The EHG signals preprocessed by the band-pass filter of 0.08−4 Hz were selected from the TPEHG database, in which the interferences from fetal and maternal electrocardiogram, respiratory movement, motion artifacts and 50/60 Hz power noise had been removed .Furthermore, the first and last 5 min of EHG segments were abandoned to avoid the transient effects due to filtering process , and the remaining 20 min EHG signals were used for further analysis.Thirty-one features were extracted with time domain, frequency domain, time-frequency domain and nonlinear analysis as follows.The mean ± SD of the derived EHG features were calculated across all the cases in the PE and TE group, and PL and TL group.Non-parametric t-test was performed using SPSS 22 to assess the difference of EHG features between PE and TE, and between PL and TL.A p-value below 0.05 was considered statistically significant.TPEHG dataset is not balanced in term of the sample size between term delivery and preterm deliveries.Classifiers are often more sensitive to the majority class and less sensitive to the minority class, leading to biased classification .ADASYN was employed in this study to oversample the minority class to balance the term and preterm samples .Therefore, the sample size of PE increased from 19 to 135 cases, and PL increased from 19 to 111 cases.In total, there were 278 cases in PE and TE group, and 230 cases in PL and TL group.31 features/case☓278 cases from PE and TE group, and 31 features/case☓230 cases from PL and TL group were respectively divided into subset 1 to n and entered to the base learner DT randomly.The value of n was determined by the number of features.The number of features in each subset was chosen randomly but not exceeding the preset maximum.The value of m is the number of base learner DT.The depth d determines the maximum layer each tree can reach.A DT, which is applied to select features, is formed by randomly selected subset of features.The feature importance is ranked based on its influence on the DT prediction results indicated by out-of-bag index.With the ranked features, all DTs in the forest would vote for the most popular class .Six-fold cross validation method was applied to evaluate the RF performance for classifying preterm and term delivery, independently for the PE and TE group and for the PL and TL group.The PE and TE group, and the PL and TL group were randomly partitioned into six subsets respectively, five of which were employed to train the RF, the other was used to test the RF.The cross-validation process was repeated six times, with each of the six subsets used once as test data.The accuracy, sensitivity, specificity from the six-fold cross validation were averaged to evaluate the performance of RF classification results, independently for the PE and TE group, and for the PL and TL group.The area under the curve from the receiver operating characteristic curve was also calculated and compared between the PE and TE group, and the PL and TL group.Table 3 shows the 15 key features which were identified as the best features for recognizing preterm delivery both in PE and TE group, and PL and TL group.The feature importance accounted for less than 0.1 % were a2, SM3, SV3, SV4, SS3 in PE and TE group, and a2, a3, SE5, SM5, SV4, SV5 in PL and TL group.It was noticed that SampEn, MDF, MNF, SE4, SM2 and SM4 played important roles on the classification 
of preterm and term delivery in both PE and TE, PL and TL groups.In particular, SampEn accounted for nearly 70 % of the importance for recognizing preterm delivery.ROC curves for classifying preterm delivery in PE and TE group, and PL and TL group are shown in Fig. 7.There was no significant difference between the two AUCs from the ROC curves.As shown in Table 4, RF achieved the ACC of 0.92, sensitivity of 0.88, specificity of 0.96 and AUC of 0.88 for PE and TE group, and ACC of 0.93, sensitivity of 0.89, specificity of 0.97, and AUC of 0.80 for PL and TL group.Table 4 summarizes the performance of RF model in this study in terms of ACC, sensitivity, specificity and AUC, in comparison with the previously published papers using TPEHG database .All the studies achieved over 80 % ACC and sensitivity.In this study, RF classifiers were developed using EHG signals recorded before and after the 26th gestational week to recognize the preterm delivery.Among the extracted EHG features, SampEn, MDF, MNF, SE4, SM2 and SM4 were more important for classification of preterm and term delivery whether early or later recorded.With RF classifier, the classification results in PE and TE group were similar to the results in PL and TL group.Compared with other studies using TPEHG database, the current study extracted EHG features including 27 linear and 4 nonlinear features more comprehensively.RF classifier which did not require computational complexity, performed a promising result without additional step of pre-selected features in a wider band pass filter of 0.08−4 Hz.The feature importance was ranked by RF based on classification accuracy.After the importance of different features was ranked by DT, SampEn was found to be the most important feature for recognizing preterm delivery.The previous studies concluded that nonlinear methods such as sample entropy , approximate entropy and Shannon entropy can provide better discrimination between pregnancy and labor contractions compared to linear methods .It is probably because entropy reflects the complex and nonlinear dynamic interactions between myometrium cells .SampEn was considered to be particularly suitable for revealing EHG changes in relation to pregnancy progression and labor .RF classifier could obtain the promising results as the previous studies illustrated .The performance of recognizing preterm delivery was influenced by the cut-off frequency of filter and the extracted features.Jager et al. 
got the highest classification ACC of 100 % with features from the frequency band of 0.08∼5 Hz when using the entire records of TPEHG database.Most of studies used the specific features or selected features for prediction of preterm delivery, while RF utilized the extracted features without additional feature selection algorithm.Similar to the other studies in Table 4, the current study extracted features from the entire records because there were no annotated contraction intervals or even no contraction during early recordings.Recently, various features and classifiers have been proposed to recognize uterine contraction with Icelandic 16-electrode database .As UC detection is necessary for monitoring labor progress, some studies extracted features from EHG bursts and achieved reliable results of UC detection by machine learning and deep learning algorithms .A multi-channel system for recognizing uterine activity with EHG signal has also been developed in clinical research .They also provided important ways for recognition of preterm delivery with UC.ADASYN technique was applied to solve the problem of unbalanced data in our study, though synthetic minority oversampling technique algorithm has been employed in the previous studies .Compared with ADASYN technique, the synthetic samples generated by SMOTE algorithm may increase the likelihood of data overlapping which will not provide more useful information .ADASYN achieved better results for classification of preterm delivery in current study.The present work has the following limitations.The synthetic data generated by ADASYN is less convincing than the clinically collected EHG data.More clinical EHG signals are essential, in particular from preterm delivery.A comprehensive study has been conducted on various EHG features, however, sixteen of which were from wavelet decomposition coefficients.Therefore, AAR model , EMD technique , multivariate multiscale entropy features and combination of multi-channel EHG signals could be investigated to improve the prediction of preterm delivery .Nevertheless, as a pilot study, the positive results from using channel 3 was the first step for evaluating the effectiveness of a RF model.Furthermore, comparison of different classifiers for recognizing preterm delivery could be considered in future study.In current study, sample entropy played the most important role on recognizing preterm delivery among the 31 extracted features.RF classifier was a promising method without additional steps of selecting features.EHG signals recorded before the 26th week of gestation achieved the similar results to those after the 26th week.This study is of great helpful in the early prediction of preterm delivery and early clinical intervention.Jin Peng, Hongqing Jiang designed the analysis method and classifiers; Lin Yang and Mengqing Du assisted with signal preprocessing; Xiaoxiao Song and Yunhan Zhang assisted with data curation; Jin Peng analyzed the results and wrote the original draft; Dongmei Hao and Dingchang Zheng reviewed the draft,The analysis of this database was approved by the Research Ethics Committee of the Faculty Research Ethics Panel of Beijing University of Technology and Anglia Ruskin University.All the authors declare that they have no conflict of interest.The database used in this study is available to access via the link: http://lbcsi.fri.uni-lj.si/tpehgdb/ or https://www.physionet.org/physiobank/database/tpehgdb/.
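The feature-extraction step described above can be illustrated with a short sketch. This is a minimal example rather than the authors' implementation: it computes only four of the 31 features (root mean square, median frequency, peak frequency and sample entropy), and the sample-entropy parameters (embedding dimension m and tolerance r) are assumptions chosen here for illustration, since they are not stated in this excerpt. The 20 Hz sampling rate and the 0.08−4 Hz band-pass filtered input follow the TPEHG description above.

```python
# Minimal sketch of EHG feature extraction (not the authors' code).
# Assumes a 20 Hz, band-pass filtered (0.08-4 Hz) EHG segment as input.
import numpy as np
from scipy.signal import welch

FS = 20.0  # sampling frequency of the TPEHG recordings (Hz)

def rms(x):
    """Root mean square amplitude of the segment."""
    return np.sqrt(np.mean(x ** 2))

def spectral_features(x, fs=FS):
    """Median frequency (MDF) and peak frequency from the Welch spectrum."""
    f, pxx = welch(x, fs=fs, nperseg=1024)
    cum = np.cumsum(pxx)
    mdf = f[np.searchsorted(cum, cum[-1] / 2.0)]
    peak = f[np.argmax(pxx)]
    return mdf, peak

def sample_entropy(x, m=3, r_factor=0.15):
    """Plain O(N^2) sample entropy; m and r_factor are illustrative assumptions."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_matches(dim):
        # Count template pairs whose Chebyshev distance is within r.
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def extract_features(segment):
    mdf, peak = spectral_features(segment)
    return np.array([rms(segment), mdf, peak, sample_entropy(segment)])

# Example with a short synthetic segment standing in for a real recording
# (real segments are 20 min long; this naive SampEn is quadratic, so a
# faster implementation would be preferred for full-length signals).
demo = np.random.default_rng(0).standard_normal(int(2 * 60 * FS))
print(extract_features(demo))
```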
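The classification step — ADASYN oversampling of the preterm minority class, a random forest, six-fold cross-validation, and the accuracy/sensitivity/specificity/AUC metrics — can likewise be sketched in a few lines. This is a hedged illustration rather than the authors' pipeline: the feature matrix X and labels y are assumed to hold 31 features per recording and a preterm/term label, the hyperparameters (number of trees, maximum depth) are illustrative, the impurity-based importances stand in for the paper's out-of-bag ranking, and ADASYN is applied here within each training fold to avoid leaking synthetic samples into the test folds, whereas the paper balances the groups before splitting.

```python
# Sketch of the balancing + random forest + 6-fold CV evaluation.
# X: (n_recordings, 31) feature matrix; y: 1 = preterm, 0 = term.
import numpy as np
from imblearn.over_sampling import ADASYN          # adaptive synthetic sampling
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate_rf(X, y, n_trees=100, max_depth=None, seed=0):
    skf = StratifiedKFold(n_splits=6, shuffle=True, random_state=seed)
    acc, sen, spe, auc = [], [], [], []
    importances = np.zeros(X.shape[1])

    for train_idx, test_idx in skf.split(X, y):
        # Oversample the preterm minority class on the training fold only.
        X_tr, y_tr = ADASYN(random_state=seed).fit_resample(X[train_idx], y[train_idx])
        rf = RandomForestClassifier(n_estimators=n_trees, max_depth=max_depth,
                                    random_state=seed)
        rf.fit(X_tr, y_tr)

        prob = rf.predict_proba(X[test_idx])[:, 1]
        pred = (prob >= 0.5).astype(int)
        tn, fp, fn, tp = confusion_matrix(y[test_idx], pred, labels=[0, 1]).ravel()

        acc.append((tp + tn) / (tp + tn + fp + fn))
        sen.append(tp / (tp + fn))          # sensitivity to preterm delivery
        spe.append(tn / (tn + fp))          # specificity
        auc.append(roc_auc_score(y[test_idx], prob))
        # Impurity-based importances as a stand-in for the OOB-based ranking.
        importances += rf.feature_importances_

    importances /= skf.get_n_splits()
    ranking = np.argsort(importances)[::-1]  # most important feature first
    return dict(acc=np.mean(acc), sensitivity=np.mean(sen),
                specificity=np.mean(spe), auc=np.mean(auc)), ranking

# Example call with random placeholder data sized like the PE/TE group
# (19 preterm vs 143 term recordings before oversampling):
rng = np.random.default_rng(1)
X_demo = rng.standard_normal((162, 31))
y_demo = np.array([1] * 19 + [0] * 143)
metrics, ranking = evaluate_rf(X_demo, y_demo)
print(metrics, ranking[:5])
```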
Developing a computational method for recognizing preterm delivery is important for timely diagnosis and treatment of preterm delivery. The main aim of this study was to evaluate electrohysterogram (EHG) signals recorded at different gestational weeks for recognizing the preterm delivery using random forest (RF). EHG signals from 300 pregnant women were divided into two groups depending on when the signals were recorded: i) preterm and term delivery with EHG recorded before the 26th week of gestation (denoted by PE and TE group), and ii) preterm and term delivery with EHG recorded during or after the 26th week of gestation (denoted by PL and TL group). 31 linear features and nonlinear features were derived from each EHG signal, and then compared comprehensively within PE and TE group, and PL and TL group. After employing the adaptive synthetic sampling approach and six-fold cross-validation, the accuracy (ACC), sensitivity, specificity and area under the curve (AUC) were applied to evaluate RF classification. For PL and TL group, RF achieved the ACC of 0.93, sensitivity of 0.89, specificity of 0.97, and AUC of 0.80. Similarly, their corresponding values were 0.92, 0.88, 0.96 and 0.88 for PE and TE group, indicating that RF could be used to recognize preterm delivery effectively with EHG signals recorded before the 26th week of gestation.
Immunohistochemistry on a panel of Emery–Dreifuss muscular dystrophy samples reveals nuclear envelope proteins as inconsistent markers for pathology
Emery–Dreifuss muscular dystrophy typically presents in early childhood with slow progression, though adult onset also occurs ."Three defining features of this disorder include early contractures of the elbows and Achilles' tendons in the absence of major muscular defects, progressive wasting of the lower leg and upper arm muscles and cardiac conduction defect .All these features are variable in clinical presentation: while typical patients remain ambulatory, severe cases require wheelchairs.Likewise, cardiac defects do not always present, but complete heart block can occur in the most severe cases.Conduction defects can also present in the absence of prior muscular involvement and female carriers of the X-linked form can develop cardiac problems .Even within the same family, the same mutation can yield highly variable clinical presentation amongst family members .With this clinical variability it was not surprising to find that EDMD is also genetically variable.Mutations in 8 nuclear envelope proteins account for ~47% of patients.The vast majority of mutations are X-linked in EMD and autosomal dominant in LMNA though more rare autosomal recessive LMNA mutations also occur .Lamin A is a nuclear intermediate filament protein that lines the inner surface of the nuclear envelope while emerin is a nuclear envelope transmembrane protein.Roughly 3% of patients are linked to mutations in 5 other NETs: TMEM43, SYNE1, SYNE2, SUN1 and SUN2 .The remaining 3% of known mutations are linked to FHL1 .FHL1 has many splice variants that have multiple cellular localisations including muscle z-bands and the nucleus, but FHL1B targets also to the nuclear envelope .FHL1 is also linked to other myopathies such as X-linked myopathy with postural muscle atrophy and deletion in mice leads to muscle hypertrophy .The strong nuclear envelope links for nearly half of all cases raises the possibility of a common pathway at the nuclear envelope affected in EDMD.The principal mechanisms proposed to explain how nuclear envelope disruption can yield pathology are genome misregulation, mechanical instability and failure of stem cell maintenance – all potentially leading to impaired differentiation .However, it is unclear how mutations in these widely expressed proteins can cause this muscle-specific disorder.One proposed model is that muscle-specific partners that function in complexes with these widely expressed nuclear envelope proteins might mediate the muscle-specific pathologies.Several candidates were identified by proteomics of muscle nuclear envelopes .WFS1, Tmem214 and Tmem38A/TRIC-A were identified only in muscle out of several tissues separately analysed by proteomics for nuclear envelopes .NET5/Samp1 was found in nuclear envelopes from other tissues, but has a muscle-specific splice variant .Several of these are candidates for mechanical functions due to implied connections to the cytoskeleton: NET5/Samp1, WFS1 and Tmem214 localise to the mitotic spindle and NET5/Samp1 knockdown dissociates centrosomes from the NE .As the centrosome organises microtubule networks and cell polarity, disrupting its association with the nuclear envelope could result in contractile defects in myofibres.Tmem214 additionally tracked with microtubules on the nuclear surface and thus could influence nuclear rotation and migration to the edges of the myofibres.WFS1 also has a separate function shared by Tmem38A/TRIC-A in genome organisation and regulation of gene expression during myogenesis and knockout of these two muscle NETs together 
with a third with the same function completely blocked myotube differentiation .Tmem38A/TRIC-A separately contributes to the regulation of calcium ion transport and thus could affect either muscle contraction or signalling at the nuclear envelope.That some of these muscle-specific NETs had overlap in their functions further supports the possibility of their working in a common pathway towards EDMD pathophysiology.We postulated that if a central mechanism at the NE underlies EDMD pathology through disruption of a functional complex then components of that complex might redistribute away from the NE.Early studies reported that emerin depends on lamin A for its localisation to the nuclear envelope and that lamin EDMD mutation L530P and mutation R377H from a family with dilated cardiomyopathy combined with specific quadricep muscle myopathy similarly yield a notable loss of emerin at the nuclear envelope in tissue culture cells .Emerin also redistributed away from the NE in fibroblasts from a patient with an EDMD mutation in nesprin, another NET.The single nesprin 2β T89M mutation resulted in redistribution of emerin to the cytoplasm while this same mutation combined with a nesprin 1α V572L mutation resulted in redistribution of emerin to the polar cap .The nesprin double mutation yielded a slightly different redistribution of emerin to the plasma membrane in actual muscle sections .Correspondingly, nesprin redistributed away from the NE in fibroblasts from a patient with an emerin EDMD g.631delTCTAC mutation that results in loss of exon 6 .Interestingly, in one of these studies two cardiomyopathy lamin A mutants studied had variable emerin mislocalisation phenotypes while a lipodystrophy mutation exhibited no altered localisation , suggesting that NET mislocalisation might be a specific feature of nuclear envelope linked muscle disorders.No study has systematically tested for the mislocalisation of the wider range of EDMD-linked proteins in a panel of patients covering the genetic spectrum of EDMD.Here we stained a wide panel of EDMD muscle biopsy sections and cultured myoblast/fibroblast cultures from biopsies with a panel of antibodies to the EDMD-linked proteins.To investigate potential muscle-specific NET involvement in mechanisms to generate the pathology of the disorder, we also stained these samples with antibodies against the muscle-specific NETs NET5/Samp1, WFS1, Tmem214 and Tmem38A.We find that neither emerin nor lamin A nor any of the other NETs are uniformly altered in all patient samples.However, nesprin 1, SUN2, and several muscle-specific NETs exhibited unusual distribution patterns in a subset of samples.These findings indicate that there are likely to be multiple pathways leading to EDMD pathology and suggest the possible involvement also of these muscle-specific NETs in the disorder.Primary human myoblast/fibroblast cultures and muscle biopsies for sectioning were obtained from either the Centre for Inherited Neuromuscular Disease in Oswestry through C.S., the MRC Centre for Neuromuscular Disorders Biobank in London, or the Muscle Tissue Culture Collection at the Friedrich-Baur-Institute.All control and patient materials were obtained with informed consent of the donor at the CIND, the CNDB or the MTCC.Ethical approval for this particular study was obtained from the West of Scotland Research Ethics Service with REC reference 15/WS/0069 and IRAS project ID 177946.Primary human myoblast/fibroblast cultures obtained from patient biopsy were maintained in skeletal muscle cell 
growth medium.Cells were kept from reaching confluency to avoid differentiation.For myoblast differentiation into myotubes, the primary human myoblasts in the cultures were differentiated using a matched differentiation medium.C2C12 cells were maintained in DMEM with 20% Foetal calf serum and antibiotics.All cells were maintained at 37 °C in a 5% CO2 incubator.Antibodies were obtained from multiple sources and used at several different dilutions.Tmem38A and NET5/Samp1 were affinity purified against the protein fragment/peptide used in their generation.The antibody baits were dialysed out of their storage buffer into PBS and coupled to Affi-Gel matrix.Antibodies were bound to the column from serum, eluted with 200 mM Glycine pH 2.3 and the buffer was immediately exchanged using spin concentrators to PBS containing 25% glycerol.All secondary antibodies were donkey minimal cross-reactivity Alexafluor-conjugated from Invitrogen except for those used for Western blot, which were also donkey minimal cross-reactivity IRDye®-conjugated from LI-COR.Protein samples were separated by SDS–PAGE, transferred onto nitrocellulose membranes and blocked 30 min in Western blot blocking buffer: 5% milk powder and 0.05% Tween-20 in TBS.Membranes were incubated with primary antibodies in Western blot blocking buffer overnight at 4 °C.Six washes in TBS-0.05% Tween-20 were then followed by incubation with the secondary antibodies for 60 min at room temperature.After another 6 washes in TBS-0.05% Tween-20 antibody signals were detected on a LI-COR Odyssey Quantitative Fluorescence Imager.Adherent cells grown on uncoated coverslips were washed in PBS prior to fixation with −20 °C 100% methanol and immediately stored at −20 °C.Methanol fixation was used because it improved epitope accessibility for the antibodies and should precipitate membrane proteins at location rather than wash them away.Prior to staining cells were incubated 10 min in TBS-0.1% Tween-20.Coverslips were blocked in 1X immunofluorescence blocking buffer for 20 min at RT and incubated with primary antibodies.Following 3 washes in TBS-0.1% Tween-20, coverslips were incubated with secondary antibodies and 4 µg/ml 4,6-diamidino-2 phenylindole, dihydrochloride.Coverslips were extensively washed in PBS or TBS-0.1% Tween-20 over 30 min and mounted with VectaShield.Muscle biopsies were mounted on cork in OCT mounting medium and frozen in isopentane cooled in liquid nitrogen.10 µm sections were cut using a Leica CM1900 cryostat and collected on SuperFrost Plus slides, placed immediately on dry ice and stored at −80 °C.Sections were brought to RT before staining.Using a PAP hydrophobic marker pen, a working area was drawn around each section and the sections were washed in immunofluorescence blocking buffer for 30 min.Sections were incubated in primary antibodies overnight at 4 °C in a humidified chamber and then washed 3 × 5 min using TBS-0.1% Tween-20.Secondary antibodies were applied for 1 h, then removed gently by blotting with tissue paper and DAPI applied for 10–15 min.Sections were then washed 3 × 10 min in TBS-0.1% Tween-20 and 1 × 10 min in TBS.Excess liquid was blotted off carefully using tissue paper, a drop of Vectashield added and a coverslip applied.Images were acquired on a Nikon TE-2000 widefield microscope using a 1.45 NA 100× objective, Sedat quad filter set, PIFOC z-axis focus drive and a CoolSnapHQ High Speed Monochrome CCD camera run by Metamorph image acquisition software.Widefield images are mostly shown, but for Fig. 
5 deconvolved images are shown.For these, z-stacks acquired at intervals of 0.2 µm from the 1 µm above to 1 µm below the imaged nucleus were deconvolved using AutoQuant X3.Several earlier reports presented data showing that emerin, nesprins and lamin A/C staining, normally concentrated at the nuclear envelope, was aberrant variously in lamin A knockout cells and cells expressing certain EDMD lamin A, emerin and nesprin mutations .However, typically only a single patient mutation was tested and only lamin A/C and a few NETs were tested for any given sample, though EDMD has now been linked to 8 different nuclear envelope proteins.To determine if any particular one of these proteins is recurrently defective in its intracellular distribution we stained a panel of 3 control and 8 EDMD patient myoblast/fibroblast cultures for emerin, lamin A/C, nesprin 1, nesprin 2, SUN1, SUN2, and FHL1.Although images are likely to contain a mixture of myoblasts and fibroblasts, we expect that the majority of cells are likely to be myoblasts as staining cultures for 4 of the patients with desmin antibodies revealed 78%, 56%, 100% and 78% of DAPI-stained nuclei in desmin positive cells respectively for patients P1, P5, P6 and P7.All stainings were done in parallel and all images were taken with the same exposure times and microscope software settings.This panel included patients with lamin A/C-, emerin-, and FHL1-linked disorder.Surprisingly, emerin, despite previous reports of its aberrant distribution, exhibited strong nuclear envelope staining with a crisp rim of fluorescence at the nuclear perimeter in all patient cells indistinguishable from the control cells.Patient P3 was a female with a heterozygous truncation mutation in the X chromosomal gene encoding emerin.Though unusual for a female carrying an emerin mutation to have a muscle phenotype, the affected father also carried the emerin mutation."Patient P3 expressed full-length emerin in a subset of cells, excluding uneven X-inactivation, and, together with the father's earlier presentation than his affected uncles, this possibly indicates an additional unknown mutation .Here this subset of emerin-positive cells exhibited a moderately weaker staining compared to that in other patients.While some emerin accumulation in the ER appeared in patients P2 and P5, it was not more than for control C2, and this control had more ER accumulation than other EDMD patient cells.Thus, any minor differences in emerin distribution were within the same breadth of such differences exhibited by the control group.No visible differences were observed for lamin A/C staining between the patient and control cells and even within each set, unlike emerin where both some control and some patient cells exhibited minimal ER accumulation.The image selected for control C3 was chosen because the cell was smaller and had more nucleoplasmic lamin staining, likely due to being at an earlier cell cycle stage.Cells shortly after mitosis characteristically have larger nucleoplasmic lamin pools because the lamins remaining from the previous cell cycle have not fully reassembled and this pool disappears as nuclear volume increases.None of the larger or smaller cells from the patients had more nucleoplasmic lamin accumulation than this control, further underscoring the fact that any minor visible differences can be discounted.For the other NETs and FHL1 there was greater variation amongst samples, but in nearly all cases a similar range of variation was observed for the controls.For example, in 
multiple controls nesprin 1 staining was variable in intensity at the nuclear membrane compared from cell to cell in the same field.Also some control cells exhibited spotty intranuclear staining while others did not, with similar variation observed also in the patient cells.Analysis of this intranuclear staining in z-series indicated that it reflects invaginations of the nuclear membrane.Roughly half of the control cells also exhibited some punctate staining in the nucleoplasm, most likely due to invaginations, but possibly also soluble splice variants, was generated to full-length nesprin1-α).Within the patient population similar variation was observed in overall intensity, relative rim intensity and punctate areas.However, patients P3 and P4 exhibited minor staining in the ER that was not observed for either the controls or the other patients.Although this is a different specific mutation, the P3 staining is consistent with the previous report of nesprin mislocalisation with emerin EDMD mutation g.631delTCTAC resulting in loss of exon 6 .This is a new observation for the P4 LMNA p.R545C mutation, but notably other lamin and the FHL1 mutant myoblast/fibroblast cultures did not exhibit similar ER accumulations; thus, this difference is not a general characteristic of EDMD.SUN2 also exhibited some ER accumulation in myoblast/fibroblast cultures from two patients, but these were different patients with lamin mutations and some ER accumulation was also observed in the control myoblast/fibroblast cultures.In general SUN2 and FHL1 exhibited the most variable staining patterns, but as variability was also observed in the controls this may reflect effects of the cell cycle or differentiation state.This latter issue of differentiation state is likely the reason for the poor staining of nesprin 2, which is stained well by this antibody in differentiated myofibres .Notably, the one patient with clear rim staining, P6, had the appearance of multiple nuclei lined up in a myotube while the weak rim staining for P7 appears to reflect a senescent cell by its extremely large nucleus and spread cytoplasm.Therefore we also stained for nesprin 2 after induction of differentiation in reduced serum differentiation medium.Not all patient cells differentiated efficiently into fused myotubes, perhaps due to myoblast passage number in culture or different amounts of contaminating fibroblasts.Nonetheless, a distinct rim-staining pattern could be observed in both the C1 control and all EDMD patient cells tested.As the EDMD-linked NETs are all widely expressed and known to have many binding partners, we considered that their failure to exhibit aberrant distribution patterns uniformly through the set of patient samples might reflect redundancy in the partners to retain them at the nuclear membrane.As mutations in widely expressed nuclear envelope proteins cause a much wider range of tissue-specific disorders including also lipodystrophy, dermopathy, neuropathies and bone disorders, it has been proposed that tissue-specific binding partners might mediate the tissue-specific pathologies .Therefore, we postulated that muscle-specific partners might contribute to the pathology of the disorder, have fewer binding sites and be more likely to be disrupted in their distribution in patients.Antibodies were obtained for Tmem38A, NET5/Samp1, WFS1 and Tmem214 and tested for their specificity.C2C12 cells were transduced with lentiviruses encoding GFP fusions to these NETs, fixed, and stained with the NET antibodies.In all cases 
the GFP-signal co-localised with the NET antibody signal.Notably, for NET5/Samp1 the endogenous rim staining was sufficiently stronger than the GFP-fusion that an even more pronounced rim was observed in the antibody stained sample than for the GFP signal.This is particularly apparent because some of the overexpressed exogenous GFP-fusion protein accumulated in the ER, most likely due to saturation of binding sites at the nuclear envelope.The antibodies were also tested by Western blot from lysates generated from additional cells from the same transfections.In all cases the band recognised by GFP antibodies for the muscle NET–GFP fusion was also recognised by the muscle NET antibody.Hence, all antibodies recognise the target NET.Because the antibodies were to be used to stain human cells, they were also tested on a lysate from a human control muscle biopsy.This yielded strong staining principally for just one band for the Tmem38A, NET5/Samp1 and WFS1 antibodies, indicating that they should each specifically recognise their target protein for the immunofluorescence images in subsequent figures.The Tmem214 antibody was much less clean than the other NET antibodies and so it should be understood that nuclear envelope redistributions could reflect additional proteins that it recognises as well as the Tmem214 protein.Tmem38A and WFS1 are induced during muscle differentiation and a muscle-specific isoform of NET5/Samp1 has been reported .Therefore patient myoblast/fibroblast cultures were induced to differentiate the myoblasts into myotubes for staining.These cells were co-stained with myosin as a marker for differentiation to distinguish cells that may have poorly differentiated due to the EDMD mutation and contaminating fibroblasts.The necessity of performing this analysis in differentiated cells was highlighted in all cases by the lack of rim staining in cells lacking the red myosin signal.A clear rim with some punctate areas inside the nucleus was observed in the C3 control for the Tmem38A antibody.Similar staining was observed for P5 but a significant loss of rim staining and strong increase in the punctate areas was visually clear for the other lamin and emerin mutations.NET5/Samp1 exhibited clear nuclear rim staining in all differentiated cells for both the control and EDMD patient myotubes; however, a visible relative increase in ER staining was observed for the P5 and P3 patient samples.For Tmem214 a weak rim could be discerned in all samples except for EDMD patient sample P4 while no WFS1 rim could be discerned in EDMD patient sample P5 though much stronger ER staining was observed for patients P6 and P4.Thus, none of the muscle-specific NETs yielded a uniform redistribution phenotype in all patient samples; however, each yielded different aberrant distribution patterns in cells from distinct subsets of patients.There are many different aspects of cultured cell growth that could potentially contribute to protein redistribution through stress effects.Some of these are difficult to control for such as pH changes and nutrient availability due to differences in growth rates between different patient cultures.Others, such as differing passage numbers from patient myoblasts/fibroblasts and thus progress towards senescence, are often unknown.Therefore, we sought to confirm these results in skeletal muscle biopsies from EDMD patients.As muscle sections contain other cell types, these were co-stained for dystrophin to delineate the plasma membrane of muscle cells and some images were chosen 
specifically to show that some NETs clearly only stain in the muscle nuclei and not the nuclei of these other cell types in muscle sections.For example, in Fig. 5A both controls and all but patient P10 have nuclei outside fibres that are negative for Tmem38A.In Fig. 5B it is interesting that patient P8 has a nucleus outside the fibre that is negative for WFS1 while patient P11 has one outside that is positive.All images were taken at the same microscope settings and later levels were adjusted.For Tmem38A the controls C4 and C5 exhibited crisp nuclear rim staining with weaker distribution through the sarcoplasmic reticulum.Crisp nuclear rim staining could be observed in all patient sections; however, the relative intensity of nuclear rim to sarcoplasmic reticulum staining was notably diminished compared to the controls.Unlike differences in the cultured cells that were patient-mutation specific, this difference was observed generally.For Tmem214 a nuclear rim stain could be observed in all samples, both control and patient; however, this time differences in the relative and absolute intensities varied between patient samples so that no generalised difference could be observed.Notably, the nuclear rim staining for this NET was much more crisp and clear than in the cultured myotubes.In patients P6 and P9 a nucleus for a cell in the space between the myofibres as delineated by dystrophin staining, possibly a capillary nucleus, had a much stronger nuclear rim staining than the nuclei in the muscle fibres in contrast with Tmem38A staining where nuclei outside the muscle fibres were completely negative.NET5/Samp1 stained the control nuclei very strongly against a weak background in the sarcoplasmic reticulum and this was the same for most patients.Moreover, some staining could be observed at the plasma membrane co-localised with the dystrophin membrane marker in the controls and most patients, but this was not present in patients P6 and P7.Finally, WFS1 exhibited weak staining at both the nuclear rim and sarcoplasmic reticulum in all fibres.Taking all images using the same settings the intensity of staining varied much more than for other muscle NETs, but this could reflect accessibility in the different sections as when the intensity of staining was equalised in the enlarged region boxes the character of staining was quite similar between patients.Thus in summary, Tmem38A generally appeared to have more accumulation in the sarcoplasmic reticulum in all the patients and both Tmem214 and NET5/Samp1 appeared to exhibit some differences from the controls in different subsets of patients.These results indicate that the previous finding of emerin redistributing away from the nuclear envelope with loss of lamin A or lamin A EDMD mutation L530P and mutation R377H from a family with dilated cardiomyopathy combined with specific quadricep muscle myopathy is not a general characteristic of AD-EDMD.Only a few patients had been tested for this before, but by comparing a wider panel of EDMD mutations it is now clear that the emerin redistribution effects are only characteristic of a subset of mutations.Notably, the use of 3 separate controls revealed that to some extent emerin redistribution can occur in cells even in the absence of EDMD mutations.Thus, the relevance of this redistribution to EDMD pathology is unclear even in the patients where it was observed.One recent study suggested a link between emerin cytoplasmic accumulation and pathology in that emerin-p.P183T assembles into oligomers that perhaps 
cannot pass through the peripheral channels of the nuclear pore complexes .Nonetheless, unlike this particular case, most reported emerin mutations result in a loss of protein.While the specific mutations analysed in this study and earlier studies differed, another aspect that may have contributed to redistribution phenotypes previously reported is the use of complete knockout or mutant over-expression and the use of rapidly dividing cancer cell lines.Two of the earlier studies focused on lamin knockout or loss , but most lamin EDMD mutations are dominant, total lamin levels generally appear normal where tested, and the point mutations by prediction should not block targeting and integration into the lamin polymer.The lamin mutations analysed here included mutations in the N-terminus, the rod, the Ig fold, the edge of the Ig fold and the unstructured region after the Ig fold.These should all yield different effects on the protein.Lacking the rod domain, the N-terminal deletion should act like a null, though it might dominant-negatively interfere with head-to-tail assembly.The rod p.E358K mutation has been tested before, yielding conflicting results in assembly studies with one reporting no disruption of filaments and the other reporting deficient assembly in vitro, more soluble protein in the nucleoplasm and reduced mechanical stability .In contrast the Ig fold mutation is on the surface but with the backbone buried so that it should still enable the beta sheet, that it is a part of, to form, but push it out relative to the adjacent beta sheet.p.R545C is in a basic patch and so might change charged interactions and p.T582K is hard to predict as it is in an unstructured region.Other studies showing redistribution used mutant over-expression in tissue culture cells of the L530P and R377H mutations , which may have influenced results.In these cases the cells used were mouse embryonic fibroblasts, lymphoblastoid cell lines and standard cancer cell lines as opposed to the myoblast/fibroblast cultures, myotubes and patient muscle tissue sections used here.In the study where the emerin g.631delTCTAC mutation and nesprin 2β T89M and 1α V572L/2β T89M combined mutations were found to respectively affect the localisation of the other protein patient cells and muscle sections were used; however, muscle sections were presumably only available from the patient with the combined nesprin 1α V572L/2β T89M mutations that exhibited a more striking phenotype in cultured cells than other individual mutations tested in their study.Nonetheless, in keeping with their results, we did find more intense relative nesprin 1 staining in the ER in the patient with an emerin mutation.However, for the EDMD-linked proteins we found that none exhibited a consistent redistribution phenotype throughout the wider collection of patient mutations analysed here.Moreover, by analysing a wider range of controls than most other studies, we observed considerable variation within the control population that was as strong or stronger than that observed for all NET stainings except for that of nesprin 1 and SUN2.It is noteworthy also that many of the reports using over-expressed mutant proteins in cancer cell lines or dermal fibroblasts in culture highlighted defects in nuclear morphology and blebbing.In contrast, here using patient myoblast/fibroblast cultures and myotubes at relatively early passage number and skeletal muscle sections we observed very little nuclear morphology defects or blebbing.This argues that aspects of 
2-dimensional tissue culture, rapidly dividing cancer cell lines and senescence of dermal fibroblasts probably underlie these phenotypes.Such changes, particularly senescence, could also have influenced previous reports of aberrant distribution of EDMD-linked proteins.While we did not observe notable shared differences for any of the EDMD-linked proteins, we did observe many differences for the muscle-specific NETs in myotubes.In tissue culture these tended, like the nesprin 1 and SUN2 effects, to be observed only in distinct subsets of patient cells.The redistribution of Tmem38A to the sarcoplasmic reticulum was observed in all but one of the patient in vitro differentiated myotubes and was observed in all patient skeletal muscle tissue sections, though it was not sufficiently striking to be used effectively diagnostically.Differences were also observed in both in vitro differentiated myotubes and muscle tissue sections for Tmem214 and NET5/Samp1, though as for nesprin 1 and SUN2 these were only observed in subsets of patients.NET5/Samp1 is particularly interesting because it also interacts with lamin B1 and SUN1 and its mutation affects the distribution of SUN1, emerin and lamin A/C .Samp1 also associates with TAN-lines that are important for nuclear migration .This provides it with a function that could underlie the pathology of the disorder and a molecular network that parallels that of the nesprins .WFS1 and Tmem38A are also interesting because they are important for proper muscle gene expression and for muscle differentiation .In fact, disruption of three muscle-specific NETs participating in this function together almost completely blocked myogenesis, though knockdown of each alone had little effect .Thus, these NETs are prime candidates to mediate EDMD pathology because muscles appear to develop normally and then exhibit defects when they begin to be more heavily used, i.e. 
gene expression defects that prevent the muscle from fully functioning make for a reasonable explanation of pathophysiology.Tmem38A could also influence Ca2+ regulation , especially considering its increase in the sarcoplasmic reticulum relative to that in the nuclear envelope in patients compared to controls.Though much still needs to be done to prove their participation in EDMD pathophysiology, the finding of stronger redistribution effects for these muscle-specific NETs across a panel of EDMD patient mutations than for that in the already linked proteins raises the strong possibility of their involvement as new players in EDMD.Taken together this might suggest the hypothesis that the clinical variability of EDMD is also mirrored on a cellular level.Although we do not have sufficient clinical details to make a clear statement about this, it is interesting that patient P3 was reported to have a mild clinical severity score and cells from this patient exhibited no relative increase in cytoplasmic staining compared to that in the nuclear envelope for any antibody compared to controls except for nesprin 1.However, patients P1, P2, P4 and P5 were all graded as moderate severity and variously exhibited distribution defects with between 2 and 6 different antibody stainings.Thus more work will be needed to determine if a specific mutation and distribution defect could be diagnostic of severity or clinical progression.Several different proteins at the NE can be affected to varying degrees, yet many of them exhibit interactions that suggest their co-functioning in a larger network.In addition to WFS1 and Tmem38A co-functioning in myogenic genome regulation and the NET5/Samp1 partners, redundancy of functions is observed for emerin and MAN1 and for SUN1 and SUN2 .That we show several different NETs can be affected to varying degrees in EDMD muscle further clarifies EDMD as a NE disorder and indicates that many different pathways to disrupt NE organisation yield a similar muscle phenotype.This work was supported by an MRC PhD studentship to P.L.T., Wellcome Trust Senior Research Fellowship 095209 to E.C.S. and the Wellcome Trust Centre for Cell Biology core grant 092076.
Reports of aberrant distribution for some nuclear envelope proteins in cells expressing a few Emery–Dreifuss muscular dystrophy mutations raised the possibility that such protein redistribution could underlie pathology and/or be diagnostic. However, this disorder is linked to 8 different genes encoding nuclear envelope proteins, raising the question of whether a particular protein is most relevant. Therefore, myoblast/fibroblast cultures from biopsy and tissue sections from a panel of nine Emery–Dreifuss muscular dystrophy patients (4 male, 5 female) including those carrying emerin and FHL1 (X-linked) and several lamin A (autosomal dominant) mutations were stained for the proteins linked to the disorder. As tissue-specific nuclear envelope proteins have been postulated to mediate the tissue-specific pathologies of different nuclear envelopathies, patient samples were also stained for several muscle-specific nuclear membrane proteins. Although linked proteins nesprin 1 and SUN2 and muscle-specific proteins NET5/Samp1 and Tmem214 yielded aberrant distributions in individual patient cells, none exhibited defects through the larger patient panel. Muscle-specific Tmem38A normally appeared in both the nuclear envelope and sarcoplasmic reticulum, but most patient samples exhibited a moderate redistribution favouring the sarcoplasmic reticulum. The absence of striking uniform defects in nuclear envelope protein distribution indicates that such staining will be unavailing for general diagnostics, though it remains possible that specific mutations exhibiting protein distribution defects might reflect a particular clinical variant. These findings further argue that multiple pathways can lead to the generally similar pathologies of this disorder while at the same time the different cellular phenotypes observed possibly may help explain the considerable clinical variation of EDMD.
Family transfers and long-term care: An analysis of the WHO Study on global AGEing and adult health (SAGE)
Lower fertility and mortality rates have resulted in population ageing globally, presenting most countries with the challenge of sustaining economic growth while supporting their older adult populations.About two-thirds of the global population of adults aged 60 years and older reside in low- and middle-income countries.Meeting the needs of ageing populations in these contexts may be particularly difficult when doing so requires not only resources, but also the establishment of social welfare systems that ensure the needs of older adults are met, for example through the provision of long-term care or health insurance and pensions.In many LMICs, welfare systems have only partially and insufficiently adapted to the needs of an ageing population, although there are exceptions, such as Namibia, which provides a universal, non-means tested pension to people older than 60 years.Where public transfer systems are weak, their role is often filled by private intergenerational family financial transfers.This may be particularly so in LMICs where norms of filial obligation to support parents are stronger than in western high-income countries.A major source of costs among ageing populations is the need for long-term care due to ill health or disability.The health needs of ageing societies are dominated by chronic, non-communicable diseases, which often impair ability to perform activities of daily living and tend to require long-term care and medical treatment.Ageing populations are also at higher risk of chronic condition multi-morbidity, exposing them to even higher long-term care costs.In LMICs, where health care coverage is typically not universal or comprehensive, costs associated with long-term care for chronic illness or disability incurred by households can lead to catastrophic spending and impoverishment.Previous research has suggested that, when faced with costs of care for chronic illnesses, households may cope by accepting financial or in-kind transfers from family outside the household.The extent to which households with long-term care needs rely on family transfers is likely to vary from country to country, depending on the social welfare system and the burden of costs of care borne by families.For example, a 16-country wide study in Europe showed that greater pension entitlement reduced inequality in unmet health care need among older adults, especially in countries with health systems funded largely by out-of-pocket payments.Our aim was to test for a relationship between long-term care needs and family transfers in LMICs with different health and social welfare systems.Family transfers are also likely driven by other socio-demographic factors that are related to health and long-term care needs.Transfers are likely to vary with age, economic, and education status of the household, as well as with unemployment, when labour income is no longer available to meet material needs."They are also likely affected by gender, for example in cases where widows are unemployed, no longer collecting their husbands' pension, and more likely to rely on others to finance health care expenditure.Transfers to finance long-term care needs are also likely to vary by health insurance status, although this relationship may depend on the comprehensiveness of insurance benefits, and how much households are left to pay out-of-pocket.Other socio-cultural factors have been put forth as possible determinants of family transfers, such as values, traditions and practices.While we cannot operationalize these with the 
data at our disposal, we expect some of these to be captured by the inclusion of six different LMICs, as well as the inclusion of an urban/rural variable in our analysis.Research has shown that norms of filial obligation may be stronger in rural areas, and that health care needs and utilization vary by place of residence.In the long term, coping with health care costs using family transfers may negatively affect the economic security of the household and their extended families.Furthermore, since the economic well-being of different family generations can be highly correlated, this form of financing health and long-term care needs will likely exacerbate existing inequalities in quality of life and life expectancy.Understanding patterns of family transfers in LMICs, and their association with the long-term care needs of households may provide insights into the extent to which the financial burden of long-term care needs of ageing populations is borne by families.However, existing research on family transfers is often limited to high-income countries, or fails to quantify the extent of family transfers and their association with long-term care needs of households.We set out to describe patterns of family transfers to older households in six LMICs and to estimate the association between these transfers and the long-term care needs of these households, including in our analysis other potential determinants of family transfers.We used data from the World Health Organization’s Study on global AGEing and adult health Wave 1.The methods used in WHO SAGE have been described previously.Briefly, SAGE is a longitudinal study of ageing and health that includes nationally representative samples of individuals aged 50 years and older, and smaller comparison samples of younger adults.For Wave 1, face-to-face interviews were conducted in China, Ghana, India, Mexico, the Russian Federation and South Africa.Households were sampled using multi-stage cluster sampling and household enumerations were carried out for the final sampling units.One household questionnaire was completed per household.In households selected to have an individual interview with a member aged 50 and older, all those aged 50 and older were invited to participate.Proxy respondents were identified for selected individuals who were unable to complete the interview for health or other reasons.Household-level analysis weights and person-level analysis weights were generated for each country.Household wealth quintiles for WHO SAGE were generated using an asset-based approach through a multi-step process.The assets were derived from the household ownership of durable goods, dwelling characteristics, and access to services such as improved water, sanitation and cooking fuel.Durable goods included number of chairs, tables or cars, and if, for example, the household had electricity, a television, fixed line or mobile phone, or washing machine.A total of 21 assets were included with overlaps and differences in the asset lists by country.Resulting data were reshaped so that a pure random effect model could be used to generate specific thresholds on the latent income scale, and country-specific “asset ladders”.A Bayesian post-estimation method was used to then convert the asset ladder so that raw continuous income estimates could be transformed into quintiles.The process of deriving country-specific asset ladders and transforming these into wealth quintiles has been explained in detail previously.The income quintile variable was generated from the 
unweighted data – so distributions will shift when accounting for the complex survey design.For this study, data from the household roster were used to identify households where all members were aged 50 years or older.This resulted in an analytical sample of n = 8700 households.The mean household size of households aged 50 years or older in each country was: China 1.72; Ghana 1.22; India 1.70; Mexico 1.90; Russia 1.51; South Africa 1.40.Among these households, the proportion of household members requiring long-term care was estimated.A household member was defined as requiring long-term care if the household informant answered positively to the following question for that household member: “Does need care due to his/her health condition, such as a long-term physical or mental illness or disability, or because he/she is getting old and weak?”.The net mean total monetary value of transfers received by households was estimated in each of the six countries.Financial transfers were defined by self-reported estimates of cash received by the household from family outside the household, net of transfers to family living outside the household, over the 12 months preceding interview.In-kind transfers were defined by the self-reported estimate of the value of any non-monetary goods received/provided by the household from/to family outside the household.These were then divided by the number of household members to provide a per capita transfer estimate.Local currency was converted to $USD using World Bank average exchange rates for the year of data collection in each country.Weighted logistic regression models were used to estimate the association between a net positive or net negative or zero per capita family transfer and the proportion of household members requiring long-term care.We used net transfers rather than transfers in to account for the sometimes large financial or in-kind flows out of older households to their adult children, which also vary by household characteristics.As discussed above, family transfers may be influenced by household characteristics such as economic status and age.To account for the possible confounding effect of these characteristics on the relationship between long-term care needs and family transfers, the model controlled for: i) wealth quintile; ii) urban or rural place of residence; iii) mean household age; iv) proportion of the household currently working; v) highest level of education attained in the household; vi) proportion of the household that is male; and, vii) proportion of the household not covered by health insurance.Finally, weighted linear regression models were used to estimate the association between the financial value of per capita family transfer received, among those households that received net positive transfers, and the proportion of household members requiring long-term care, controlling for the same variables as the above model.All regression models were run separately for each country.The income quintile distribution, as well as the mean age, proportion needing long-term care, proportion with net positive and net negative family transfers, and mean net per capita family transfer received are shown in Table 1.In China, the proportion of household members requiring long-term care was highest among the poorest income quintile, as was the proportion of households with net positive transfers.In Russia as well, the proportion of the household needing long-term care was higher in the poorest two income quintiles than in the richest, but there was 
no pattern between income quintile and proportion receiving net positive transfers. In India, the proportion of households receiving net positive transfers appeared to decrease with income, with 40.6% of the poorest income quintile receiving net positive transfers, compared to only 19.3% in the richest quintile. In all the countries included, except for South Africa, the proportion of households with net negative transfers was highest in the richest or second richest income quintile. The per capita value of transfers received, as a share of total annual per capita household income, is shown in Fig. 1. Overall, this share appeared smallest in Russia and South Africa, relative to other included countries. In China, Ghana and India, the transfers occupied the largest share of income in the poorest income quintiles. Table 2 shows the results of the logistic regression, estimating the association between the proportion of household members requiring long-term care and net negative or net positive transfers, adjusted for covariates. Odds ratios above 1.00 indicate an increase in odds of net positive transfers vs. net negative transfers associated with a one-unit increase in that variable. As shown in Table 1, the proportion of household members requiring long-term care was low in South Africa, with many households reporting 0%. As a result, there was very little variation in the explanatory variable. Regression analyses using this variable generated unstable estimates, and therefore results for South Africa are not included here. One possible explanation for the low reported need is that care may often be provided informally within households, which respondents may not define as long-term care, as opposed to formal care in health facilities, which has been historically less accessible to the majority of the population. After controlling for income quintile, place of residence, mean household age, proportion male, proportion working, proportion with no health insurance, and highest level of education attained, the proportion of household members requiring long-term care in each country was associated with increased odds of receiving net positive transfers. This result was statistically significant in China, Ghana, and Russia. Odds of net positive transfers were also statistically significantly associated with some income quintiles and place of residence in China and Russia, with mean household age in China, Ghana, India and Mexico, with proportion male in China and Ghana, with proportion working in China, Ghana, India, and Russia, and with highest education level in China. The results of our linear regression to estimate the association between the proportion of household members needing long-term care and amount of family transfer received, among households with net positive transfers and adjusting for socio-demographic variables, are shown in Table 3. Once conditioned on having received net positive transfers, the proportion of household members requiring long-term care did not have a statistically significant effect on amount of transfer received in any country, except for Mexico. Statistically significant effects were observed for some income quintiles in China and Ghana, for mean household age in India, proportion male in China, proportion working in Ghana and some levels of education in Ghana and India. As societies in LMICs become older, research is required to understand how costs for increased long-term care needs are being met, and specifically what burden of long-term care expenditure rests on families. To the best of our knowledge, this is the first study that investigates family transfers to older households across
several LMICs and estimates the relationship between receiving transfers and requiring long-term care. Our findings suggest that a high proportion of households aged 50 years or more in China, Ghana, India, Mexico and Russia received net positive financial or in-kind transfers from family outside the household. In South Africa, the proportion of households with net negative transfers was higher, which may be in part due to social protection structures in this country. These include a means-tested, non-contributory pension scheme providing income for 75% of the older adult population in retirement, and higher access to basic health care for older adults than in the other five SAGE countries. There is also evidence that pensions of older South Africans can be an important source of income for some extended families; when given to women, there is evidence that these pensions have a positive impact on the health of their grandchildren. However, further research is required to understand the higher proportion of households with net negative transfers in this country. The share of household income occupied by net positive transfers was highest in the poorest households in China, Ghana and India. While social security systems in these three countries continue to evolve, problems with inequitable coverage and benefits remain. In China in particular, in 2008, the national pension scheme covered only 7.8% of the rural population and, while those below the poverty line were eligible for public transfers, these were limited in amount. Our regression results point to a relationship between receiving net positive transfers and requiring long-term care in all included countries, with a statistically significant relationship in China, Ghana and Russia. However, the proportion of those insured in the household was not associated with receiving net positive transfers. Together, these results suggest that in these countries, requiring long-term care leads to increased expenditure which households are unable to meet with their own resources, requiring transfers from family members outside the household, and that being insured alone may not be as important as the type and extent of the health insurance benefits. These results are consistent with recent research from China, for example, that suggests social security benefits, and thus the affordability of care, vary considerably among disabled older adults in different socio-economic groups, with the poorest least likely to be able to afford long-term care. Transfers from adult children in China to their parents have also been found to be responsive to parents' demand for health services. Our results may also be explained by the persistence of a culture of filial obligation in some countries included in this study. It is therefore surprising that the relationship between receiving net positive transfers and requiring long-term care was not statistically significant in India, where the obligation of sons to meet the long-term care needs of their parents has been cited as one reason for son preference in the country. One possible explanation for this is that transfers from adult children to ageing parents in India are common regardless of long-term care needs, and therefore unaffected by the extent of these needs, or if perceived as an obligation, are not perceived as ‘transfers’. The relationship was also not statistically significant in Mexico, perhaps due to expanding coverage of the pension, health insurance, and poverty alleviation programmes in that country. There
were other common factors influencing receipt of net positive transfers across the included countries. Receiving net positive transfers was significantly negatively associated with the proportion of household members currently working, in every country except Mexico. These results suggest that in these countries, social security systems are not sufficiently meeting the financial needs of those who are no longer in formal or informal employment. Our results also point to an important negative relationship between the proportion of the household that is male and odds of receiving net positive transfers in China and in Ghana. More research on access to public transfers from a gender perspective is needed in these countries, but our results are consistent with evidence from Nigeria suggesting that female-headed households are more likely to depend on others to finance health care costs, whereas male-headed households are more likely to have health care costs subsidised. Finally, mean household age was also associated with increased odds of receiving net positive transfers in China, Ghana, India and Mexico. It has been previously asserted that net family transfers increase with age in Asia and Latin America; our results suggest this may be the case in African countries as well. While household need for long-term care appeared to influence the net flow of resources in the family, we did not find any relationship between the proportion of the household needing care and the amount of transfer received, except in Mexico. These results may suggest that there are limits to the ability of extended families to transfer resources. In other words, while the extended-family network may be able to provide at least some economic support to cope with older family members’ long-term care needs, the amount of resources that can be mobilized is more likely to be driven by the network’s own resources than by the care needs of the affected household. This interpretation is supported by our results in China, which suggest that the transfer amount is highest in the richest income quintile. However, we cannot assume that there is a correlation between the income of the recipient and source households. There is no statistically significant relationship between income quintile and amount of transfer received in the other countries. The positive effect in Mexico may be explained by the fact that, despite the expansion in social security provided to older Mexicans, when health-related expenditure does occur in this population, it tends to be higher than in the average population and is largely driven by costly hospitalizations rather than ambulatory care. Our findings should be interpreted in light of some limitations. First, our analysis relies on cross-sectional, self-reported financial data, which have been shown to be vulnerable to recall bias. Secondly, we cannot know for certain whether the family transfer received was used for long-term care expenditure, or for some other expense. Nevertheless, the strong significant positive relationship between needing long-term care and receiving positive transfers, after accounting for other covariates, suggests that a relationship between these two is likely. Future studies may consider collecting data that can identify which household expenditures are financed by family transfers. Finally, our analysis does not account for other means of support provided by adult children to ageing parents, such as co-residence and care-giving, which if accounted for would increase the observed amount of support
provided.We also acknowledge that, while our paper focuses on family support received by older households, many older households also likely provide significant support to their adult children, for example through taking care of grandchildren, and for a comprehensive assessment of the economic impact of population ageing this support must be accounted for.Despite these limitations, our study presents strong evidence that, in selected LMICs, receiving family transfers is common among older households and is associated with requiring long-term care.Our results suggest that more comprehensive health care insurance and pensions for ageing populations are needed to protect extended families from the burden of costs associated with long-term care in LMICs.However, further research is needed within these countries, to better understand the drivers of the observed associations and to identify ways in which the financial protection of older adults’ long-term care needs can be improved.
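As an illustration of the modelling approach described in the methods above, the following minimal Python sketch fits a household-level weighted logistic regression of receiving a net positive transfer on the proportion of household members requiring long-term care. Column names (net_transfer_pc, prop_ltc, hh_weight and so on) are hypothetical, and passing the survey weights as frequency weights reproduces weighted point estimates but not design-corrected standard errors, so this is only a simplified approximation of the SAGE analysis rather than the authors' exact procedure.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_transfer_model(df: pd.DataFrame):
    """Weighted logistic regression of net positive transfers on long-term care need.

    Assumes one row per household with hypothetical columns:
      net_transfer_pc  - net per capita transfer received (USD, may be negative)
      prop_ltc         - proportion of household members needing long-term care
      wealth_q         - wealth quintile (1 = poorest ... 5 = richest)
      urban            - 1 if urban, 0 if rural
      mean_age, prop_working, prop_male, prop_uninsured, highest_educ
      hh_weight        - household analysis weight
    """
    # Binary outcome: net positive transfer vs. net negative or zero
    y = (df["net_transfer_pc"] > 0).astype(int)

    # Covariates, with wealth quintile and education entered as categorical dummies
    X = pd.get_dummies(
        df[["prop_ltc", "wealth_q", "urban", "mean_age",
            "prop_working", "prop_male", "prop_uninsured", "highest_educ"]],
        columns=["wealth_q", "highest_educ"], drop_first=True, dtype=float,
    )
    X = sm.add_constant(X)

    # Survey weights passed as frequency weights (approximation; no design-based SEs)
    model = sm.GLM(y, X, family=sm.families.Binomial(), freq_weights=df["hh_weight"])
    result = model.fit()

    # Odds ratios comparable in form to those reported in Table 2
    return np.exp(result.params), result

# Hypothetical usage: run the model separately for each country, as in the paper
# for country, sub in households.groupby("country"):
#     odds_ratios, res = fit_transfer_model(sub)
#     print(country, odds_ratios["prop_ltc"])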
Background: Populations globally are ageing, resulting in increased need for long-term care. Where social welfare systems are insufficient, these costs may fall to other family members. We set out to estimate the association between long-term care needs and family transfers in selected low- and middle- income countries. Methods: We used data from the World Health Organization's Study on global AGEing and adult health (SAGE). Using regression, we analysed the relationship between long-term care needs in older households and i) odds of receiving net positive transfers from family outside the household and ii) the amount of transfer received, controlling for relevant socio-demographic characteristics. Results: The proportion of household members requiring long-term care was significantly associated with receiving net positive transfers in China (OR: 1.76; p = 0.023), Ghana (OR: 2.79; p = 0.073), Russia (OR: 3.50; p < 0.001). There was a statistically significant association with amount of transfer received only in Mexico (B: 541.62; p = 0.010). Conclusion: In selected LMICs, receiving family transfers is common among older households, and associated with requiring long-term care. Further research is needed to better understand drivers of observed associations and identify ways in which financial protection of older adults’ long-term care needs can be improved.
184
MoniThor: A complete monitoring tool for machining data acquisition based on FPGA programming
Today’s machine tools and automated facilities need to control production, scheduling and projects in near real time, in order to maintain precision, quality and productivity rates. Each event must be responded to in order to adapt discrete-event manufacturing to the actual status. All decisions must be made based on multiple and sound sources of information. From the perspective of the production engineer, attention is paid to the machine-tool environment. Computer-aided design and manufacturing play a major role within product design. These areas of technical knowledge must be basic parts of engineering and industrial courses. Regarding manufacturing, some authors proposed a complete architecture for an integrated, modular machine tool simulator. In that work, they applied their simulator to a 2-axis lathe. Using Matlab© and GUI capabilities, Urbikain et al. addressed the simulation of variables such as cutting forces, surface roughness or power consumption as well as dynamic problems such as chatter in turning and milling systems. They even developed a package specially designed for industrial applications using Microsoft Visual. CAM, numeric control or mechatronics learning are other typical educational tasks in machining. Radharamanan and Jenkins described how the manufacturing laboratory facilities and design/automation hardware and software at Mercer University School of Engineering were integrated to teach CAD, CAM, integration of CAD/CAM and robotics. Yao et al. proposed virtual machining technology to improve students’ skills in NC programming. Shyr presented a graphical human interface technology to be used in mechatronics learning. Using this tool, students can design a graphical monitor program and can perform mechatronics experiments in the laboratory. Concerning learning robotics, Temeltas et al.
provided a Hardware-In-the-Loop approach to simulate robots. Under this approach, different kinematical configurations can be tested and users may remotely access the test-bed. Other authors used RoboAnalyzer, which is based on 3D models of robots, to improve students’ learning in robotics; in particular, they applied it to better visualize Denavit–Hartenberg parameters. Monitoring of machining processes, in particular, is a hot topic in machining. Indeed, monitoring the machining processes and tool condition is critical to achieve better product quality, higher productivity and, thus, lower costs. The integration of sensors inside the machine is a key issue in reducing failures and in detecting failures before parts are broken. Sensing the system structure is therefore crucial for any machine tool in order to track and monitor the relevant process variables. There are a number of commercial solutions for monitoring machining processes. Marposs© created the Genior system, which is a modular tool for recording torque, strain, power and noise from the main spindle and feed drives. Montronix© systems use PC based open architecture CNC and intelligent algorithms to measure force, power or vibration. The University of Nantes in collaboration with Airbus created Smartibox©, which analyzes and protects the spindle. Harmonizer© determines the chatter frequency using acoustic emissions and finds the optimum spindle speed. Other authors used a sensor and an actuator to compensate for chatter. All these tools are very useful but are often focused on only one or two key magnitudes. From this perspective, a group of teachers at the Department of Mechanical Engineering at the University of the Basque Country started revising the way the Manufacturing subject was taught and the students’ competences evaluated. Based on those discussions, the authors presented a special tool for simulation purposes to be used in practical lessons. The experience was very stimulating and satisfactory. Now, with the aim of closing the circle between theory and practice, we present a complementary tool for data acquisition. One of the main advantages of such an approach is that the proposed tool allows the inclusion of a variety of heterogeneous machining magnitudes: forces, accelerations, sound pressure, spindle speed, displacements, temperatures, etc. The software has a double purpose. On one hand, it can be used for teaching and training specialized engineers in metal cutting mechanics. Students put the tool to work in a real environment as they will do in the near future. In this way, they are introduced to real engineering. On the other hand, the package is also being prepared for industrial transfer. In this way it can be used for decision-making, once the recorded magnitudes are conveniently post-processed. The LabVIEW FPGA system is composed of the LabVIEW FPGA Module, the NI-RIO driver, and an NI-RIO device, also known as a RIO target. The LabVIEW FPGA Module compiles the LabVIEW VI to FPGA hardware. Behind the scenes, the graphical code is translated to text-based VHDL code. Finally, Xilinx compiler tools from National Instruments© synthesize the VHDL code. The data processing starts with the bitfile. This contains the main programming instructions for LabVIEW. When we run the application, it is loaded to the FPGA chip and used to reconfigure the gate array logic. The FPGA chip can also be controlled through a host application. When using the FPGA, the host-side driver does not need to be as full-featured as DAQmx; the host driver only provides the interface functions to the hardware. So, end users define
both the application running in Windows or a real-time operating system, and the application running on the FPGA.
– LabVIEW FPGA Target: although R Series Multifunction RIO devices provide analog and digital acquisition and control, in this case we use a compactRIO, which is a modular, reconfigurable control and acquisition system designed for embedded applications. Due to this, and because we need a modular system with built-in signal conditioning and direct signal connectivity, this was the preferred option.
– Software: LabVIEW 32 bits, RT and FPGA modules: the application software necessary to build the project tree. Special licenses are negotiated and offered for several departments of the University UPV/EHU.
– C-series modules: they are designed as autonomous measurement systems. All A/D and D/A conversions are performed inside the module before data reach the chassis.
Table 1 shows the references from National Instruments©: Kistler’s Type 5171A charge amplifier module to record the cutting forces; NI 9234, used with a triaxial accelerometer and a microphone. The capabilities of the system for tailored solutions are almost unlimited. For instance, slots NI 9239 and NI 9244 could be applied to power consumption measurements and to gauges and thermocouples, respectively. The project tree includes the following levels. Communication between the PC and the c-RIO system uses Ethernet and serial protocols. The c-RIO system needs the correct IP address. Before running LabVIEW, the logical connection between the c-RIO and the laptop must be established using the DAQmx application. At the center of the platform, we find the Real-time controller. The files and programs, which are embedded in the project structure under the system, will run on this controller. Fig. 3 shows some parts of the real-time programmed code. The FPGA Target corresponds to the real FPGA module, which is fitted in the chassis and communicates directly with the modules. Each of the module drivers in the FPGA processor will run at this level, allowing operations to be undertaken at high speed. All files embedded in the project structure under the FPGA Target can control the modules and swap data with one another. The drivers on the FPGA write data to a FIFO data stack. In this way, data are continuously read from the Real-time controller and processed by the programs at the real-time level. This structure enables data monitoring at the most primitive step. Although the presented modules for data acquisition are designed to work together in the c-RIO chassis, these modules are quite different from one another. As described above, we use both on-demand modules and delta-sigma-based modules. The former are modules that use the Loop Timer VI to time their loop rate. They can be digital modules, multiplexed modules or analog modules with a successive approximation register (SAR) analog-to-digital converter. The other type is the delta-sigma C-series modules, which are used for high speed dynamic measurements. These devices include the modulation circuit of a delta-sigma converter, which compares the running sum of differences between the desired voltage input and a reference voltage. Timing and synchronization of both on-demand and delta-sigma-based modules is also a key issue when building the platform. For instance, vibration or sound measurement applications require a high level of synchronization between channels. To synchronize different channels and modules of the on-demand class, we placed all the channel reads or updates in the same FPGA I/O node. Analog modules have one ADC per channel and acquire
data with almost no deviation between channels. Additionally, to synchronize delta-sigma modules in c-RIO hardware, we need to physically share the oversample clock and start triggers between all of the modules. One of the modules is selected as master for the others. Communication between the FPGA and the host is also a key issue. In this case, DMA FIFOs were created to transfer data by selecting Target to Host. An Invoke Method node was used for the interaction between the host and the DMA FIFO. For each of the DMA FIFOs created, the procedure was as follows: open a reference to an FPGA VI or bitfile; add the Invoke Method node; select the FIFO. If the DMA FIFO Read is configured to read x elements and times out before x or more elements are available, the DMA FIFO Read returns an error. To prevent this error, the user must check whether the targeted number of elements is available before reading. Additionally, overflow needs to be controlled. It happens when the DMA FIFO is filled as a result of data being written faster than it is read. This may lead to data losses. To avoid overflow, it is necessary to check whether the DMA FIFO is full when the DMA FIFO Write is invoked. Some alternatives are: to reduce the rate at which data are written to the DMA FIFO; to increase the number of elements read on the host; or to increase the FPGA and host buffer sizes. Recovery is done by resetting the FPGA VI. Data storage was also specifically planned. Decisions for data saving are often made arbitrarily, which increases the cost of the project architecture if new capabilities are included in the future. For data storage, a variety of format options are available. Here, we adopt the Technical Data Management Streaming (TDMS) file format because of the deficiencies of other data storage options commonly used in test and measurement applications. The binary TDMS file format is an easily exchangeable file format. It is structured in three hierarchical levels: file, group, and channel. The file level can contain an unlimited number of groups, and each group can contain an unlimited number of channels. In this way, we choose how to organize the data to make it easier to understand. The TDMS format can be used by all the tools in LabVIEW©. The simplest option is to work with the Write to Measurement File Express VI. This Express VI offers the most common configuration features but sacrifices data processing performance. For this application, more flexibility is required, so the TDMS primitive VIs were chosen. By using this storage system, the generated archive can cover long recording times without problems and can be post-processed with the Diadem© tool into suitable formats as historical data. Given the importance of in-process data acquisition for evaluating the behavior of the machine and the machining processes, the tool can be used both in industrial workshops and for teaching purposes. Here, the second case is presented. In this practice lesson, the students were invited to use the tool to study chatter and vibration problems in milling systems. Briefly, one practice consists of the following steps: (1) explanation of the devices required for data collection: the LabVIEW environment, c-RIO, input/output modules and measuring devices, and the BNC cables to connect sensors and modules; (2) experimental machining tests using the following two configurations. 2a. Test 1, rigid machining: a rigid prismatic piece is prepared on a Kistler© dynamometer table and the accelerometers are placed. Machining passes are done and forces and accelerations are measured using the
developed application. 2b. Test 2, flexible machining: a thin-walled piece is prepared and the accelerometers are placed; in this case, the accelerometer is placed in the middle of the thin wall. Machining passes are done and forces and accelerations are measured as in 2a. The program is simple to use. First, communication is established by entering the IP address of the c-RIO. Then, the archive location and name are set and the tool is ready to work. The recording process starts by pressing the ‘Start’ button and ends by pressing the ‘Stop’ button. The file is automatically saved in the requested folder on the hard disk. In this way, the milling system is monitored. For industrial purposes, we can imagine a machine workshop full of machine tools, turning and milling centers, with our portable diagnosis tool easily moved from one machine to another according to the process requirements. The application saves useful machining data, which are then studied to identify the error sources and to correct the machining parameters. The Fast Fourier Transform is then applied and the two configurations are compared to draw conclusions. This is a personal tool developed by the teachers at the University of the Basque Country. It has a double purpose: measuring machining data in industrial environments, and instruction of future engineers. We consider its maturity level sufficient to share it with the scientific community. One important feature of this software is that it gives machinists, practitioners and others a fast and simple way of recording heterogeneous machining data during long recording sessions. The tool is currently being used in research projects developed by the University of the Basque Country. In particular, the presented tool is conceived for acquiring a large amount of information: 7 channels at high sampling rates, three for the cutting forces, three for the triaxial accelerometer and one for a microphone. It can be used to study diverse problems such as tool wear, spindle speed behavior or chatter. At the instruction level, the application responds to the needs of the students through a novel, original and state-of-the-art platform. Since future engineers will deal with monitoring and diagnosis tasks and will develop and design improved platforms and tools for data acquisition, using such platforms during their instruction is highly recommended. This work presents a personal tool specially oriented towards machining applications and associated problems. The application is aimed at specialized students in Advanced Machining as well as at industrial practitioners. After the practical use of this application, some social and technical conclusions can be drawn.
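As an illustration of the post-processing step mentioned above, in which the Fast Fourier Transform is applied to the recorded signals and the rigid and flexible configurations are compared, a minimal Python sketch is given below. It assumes the acceleration signals have already been exported from the saved archive as NumPy arrays; the sampling rate and variable names are hypothetical assumptions, and the sketch is not part of the LabVIEW application itself.

import numpy as np

def acceleration_spectrum(signal: np.ndarray, fs: float):
    """Return the one-sided amplitude spectrum of an acceleration signal.

    signal : samples (e.g. one channel of the triaxial accelerometer)
    fs     : sampling rate in Hz (hypothetical value used in the example below)
    """
    signal = signal - np.mean(signal)          # remove DC offset
    window = np.hanning(len(signal))           # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(signal * window)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, spectrum

def dominant_frequency(freqs, spectrum, fmin=50.0):
    """Frequency of the largest spectral peak above fmin (Hz), e.g. a chatter candidate."""
    mask = freqs >= fmin
    return freqs[mask][np.argmax(spectrum[mask])]

# Hypothetical usage: compare Test 1 (rigid part) with Test 2 (thin wall)
# fs = 25600.0
# f1, s1 = acceleration_spectrum(acc_rigid, fs)
# f2, s2 = acceleration_spectrum(acc_flexible, fs)
# print("Rigid peak:    %.1f Hz" % dominant_frequency(f1, s1))
# print("Flexible peak: %.1f Hz" % dominant_frequency(f2, s2))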
Today, the requirements of machine-tool users point towards knowing more about the machining processes. In a globalized and competitive market such as manufacturing, future engineers will be urged to develop transversal skills in diverse science domains such as Mechanics, Mechatronics or Electrics. Monitoring of machining processes makes it possible to detect errors and deviations from the desirable conditions (aged machine-tools, bad machining conditions, most energetically / cost-effective conditions) as well as to save historical data of the workpieces manufactured by the same machine. In order to strengthen students’ skills, the Department of Mechanical Engineering of the University of the Basque Country (UPV/EHU) developed a monitoring tool using Labview© programming. The application, built from the combination of reconfigurable Input/Output (I/O) architecture and Field Programmable Gate Arrays (FPGA), was applied to practical classes in the machine shop to improve students’ skills.
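The DMA FIFO precautions described earlier (checking that enough elements are available before reading, checking for a full buffer before writing, and recovering from overflow) are specific to LabVIEW, but the underlying producer/consumer logic can be illustrated with a short, language-agnostic Python sketch based on a bounded queue; the buffer size, block size and data values below are arbitrary assumptions and do not correspond to the actual c-RIO configuration.

import queue
import threading
import time

BUFFER_SIZE = 4096                       # analogous to the host-side FIFO depth
fifo = queue.Queue(maxsize=BUFFER_SIZE)
producer_done = threading.Event()
overflow_count = 0

def producer(n_samples: int):
    """Writer side: drop (and count) samples when the buffer is full, as an overflow would."""
    global overflow_count
    for i in range(n_samples):
        try:
            fifo.put_nowait(i)           # like a DMA FIFO Write with a 'buffer full' check
        except queue.Full:
            overflow_count += 1          # data loss: reduce write rate or enlarge buffers

def consumer(block_size: int, results: list):
    """Reader side: only read when a whole block is available, to avoid read time-outs."""
    while True:
        if fifo.qsize() >= block_size:   # like checking 'elements remaining' before a read
            block = [fifo.get() for _ in range(block_size)]
            results.append(sum(block) / block_size)
        elif producer_done.is_set():
            break                        # any leftover partial block is discarded in this sketch
        else:
            time.sleep(0.001)            # nothing to do yet

averages = []
t = threading.Thread(target=consumer, args=(256, averages))
t.start()
producer(100_000)
producer_done.set()
t.join()
print(f"blocks processed: {len(averages)}, samples lost to overflow: {overflow_count}")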
185
Aqueous batteries as grid scale energy storage solutions
Due to climate change and the depletion of fossil fuel reserves, governments have started to re-evaluate global energy policy. Therefore, we are experiencing an increasing demand for energy from renewable sources such as solar and wind power, and the majority of countries face challenges in the integration of an increasing share of energy coming from these intermittent sources. Renewable sources are changing the energy market and they may displace significant amounts of energy that are currently produced by conventional means; this is, for example, a staggering 57% of the total demand for electricity in Denmark by 2025, around 15% of the total UK energy demand by 2015 and almost 16% in China by 2020. Energy storage technologies are required to facilitate the move towards the supply of low carbon electricity, and are particularly useful when exploiting intermittent energy sources. Incorporating energy storage has been shown to be beneficial to various sectors of the electricity industry, including generation, transmission and distribution, while also providing services to support balancing and manage network utilization. In large-scale energy storage systems operational safety is of prime importance, and characteristics such as energy and power density, which are major drivers in the development of devices for mobile applications, are of lesser concern. Other desirable characteristics for large scale energy storage systems are a low installed cost, long operating life, high energy efficiency and that they can be easily scaled from several kWh to hundreds of MWh. Different battery chemistries demonstrated for use at this scale include lead-acid, lithium-ion and sodium-based batteries. Lithium-ion batteries exhibit very high round trip efficiencies, energy densities in the range of 100–200 Wh kg−1 and can typically withstand 1000 cycles before fading. Sodium-based batteries operate at temperatures in the region of 300–350 °C and are characterized by a round trip efficiency of 80%, energy densities up to 150 Wh kg−1 and lifetimes in excess of 3000 cycles. There have been several recent incidents where lithium-ion cells and sodium–sulfur batteries have failed, which has resulted in the release of toxic materials; this aspect has raised serious safety concerns over the application of these batteries to large-scale energy storage. Not only are there safety concerns with these chemistries, but these technologies are also associated with high costs due to the materials used, manufacturing processes and auxiliary systems required for their operation. As a result of these considerations, the inherent safety and potential low cost offered by the aqueous-based electrochemical energy storage devices discussed in the following sections suggest that they can contribute positively to large-scale energy storage applications. At present, lead-acid cells are the most recognizable aqueous-based battery system and represent a major proportion of the global battery market. For example, it was reported that during 2010 lead acid batteries were used in a staggering 75% of all new photovoltaic systems in China; likewise, during 2008, lead acid technology held 79% of the US rechargeable battery market share. This paper is focused on aqueous electrolyte based electrochemical energy storage technologies suitable for large-scale applications and discusses some of the challenges faced in the development of viable systems. The list of systems discussed herein is not exhaustive, but intended to give a brief overview of
the area. These technologies have the potential to be integral components in future electricity supply systems, provided that substantial reductions in cost can be achieved and that safe and reliable operation can be assured. The oldest example of a practical rechargeable battery was developed by Gaston Planté using metallic lead and sulfuric acid as electrolyte in 1859. Lead acid batteries may have different arrangements depending on the application. Starting batteries are widely used in automotive applications for engine starting, lighting and ignition, where peaks in current are demanded intermittently. This is achieved by using thinner electrodes and separators, resulting in lower internal resistance than that of regular lead acid batteries. In such applications the depth of discharge is kept low to maintain device longevity. On the other hand, if the desired application requires constant discharge at relatively low rates, or when powering small vehicles, the deep-cycle or ‘marine’ battery is used. This device architecture incorporates thicker electrodes to allow for a much greater depth of discharge to be utilized. Finally, Pb2+ ions are precipitated in the form of PbSO4 (Pb2+ + SO42− → PbSO4). Fig. 1 provides a schematic representation of a lead acid cell. Lead acid batteries are well known for their low rate of self-discharge, complicated production process, low cost of raw materials, recyclability, and good performance over a wide range of operating temperatures. Significant developments in performance have been achieved by the introduction of a valve-regulated system and also by the addition of carbon material to the anode. The use of carbon in the Pb anode improves both cell efficiency and cycle life due to reduced PbSO4 accumulation. Most of the problems that are usually encountered with lead acid batteries are strongly dependent upon the negative electrode. Sulfation, the formation of nearly insoluble crystals of lead sulfate, is by far the most common ageing phenomenon inherent to lead acid batteries. During charging these crystals regenerate only to a minimal extent, adversely affecting cell efficiency and lifetime. One solution to this challenge has been to incorporate a highly conductive additive into the negative electro-active electrode to prevent sulfation while maintaining high electronic conductivity. It has also been reported that nano-structuring the negative electrode improves electrode performance. The replacement of the lead anode with a carbon electrode has also been explored. This assembly is similar to an asymmetric supercapacitor and resulted in a battery with longer cycle life. Another promising design is the use of a double anode, containing a foil of metallic Pb and a second foil of carbonaceous material; such designs allow the battery to operate at high power, due to supercapacitor-like behavior, for an extended period in partial state of charge operation. Although the corrosion of the positive electrode has always been regarded as a major concern in lead-acid battery technology, the corrosion of the negative electrode has drawn increased attention recently. The importance of corrosion control through the optimization of the electroactive materials has recently been reported. In order to enhance the performance of the lead-acid battery, low antimony grids are commonly used. Unfortunately, low antimony grids are prone to develop a passivation film between the grid and the electroactive material of the electrode. In addition, the corrosion of the positive electrode is well known to
have a detrimental effect on the performance of the lead-acid battery. Therefore, the production of new formulations based on lead oxides, and of different additives, plays a pivotal role in controlling corrosion and preventing passivation of positive electrodes for lead acid batteries. Finally, it is important to highlight the role of the current collectors for these electrodes. In order to improve the energy density of lead-acid batteries, development has focused on reducing the redundant weight in cells by optimizing the electrode composition and the structure of the collector grid. Improvements in the manufacture of lighter grids have been realized by electro-depositing layers of lead on highly conductive and low specific gravity substrates such as copper, aluminum, carbon, barium, indium, etc. Broadly speaking, there are two main types of grid used at the positive terminal: lead–antimony and lead–calcium based grids. Unfortunately, lead–calcium grids are unsuitable for deep-discharge applications. Likewise, lead–antimony grids are associated with a reduced hydrogen overpotential, which results in considerable amounts of hydrogen being evolved during charging. It has been reported that elements such as strontium, cadmium, silver and the majority of rare earth elements can be used to produce lead–antimony or lead–calcium based alloys with enhanced performance. Vitreous carbons coated with lead have also been proposed as suitable electrode grids; however, due to oxygen evolution these are not well suited for use in positive plates. Lead acid batteries are known for their low energy density, about 30 Wh kg−1, which represents only 25% of the value associated with lithium-ion batteries. Other major challenges faced by this chemistry are limited cycle life, toxicity, and relatively low charge/discharge efficiency. Nickel–iron batteries were successfully developed and commercialized in the early 20th century. Nickel–iron or ‘NiFe’ cells are secondary batteries that fell out of favor with the advent of cheaper lead acid cells. There is renewed interest in these batteries due to their environmental friendliness, longevity, and tolerance to electrical abuse. It is believed that this technology could provide a cost effective solution for large-scale energy storage applications, particularly where only a relatively low specific energy is required. The relative abundance of the raw materials required to produce NiFe cells is another aspect favoring their use. Nickel and iron are among the most abundant elements in the Earth's crust, and less abundant elements included in the cell are used in relatively small proportions; therefore NiFe cells have the potential to be manufactured at relatively low cost. Fig.
2 provides a schematic representation of a NiFe cell. Mitigation of electrolyte decomposition in NiFe cells has traditionally been achieved either by modification of the iron electrode formulation or by the use of electrolyte additives, in such a way that the activation energy for electrolyte decomposition is increased. The positive electrode in NiFe cells is based on the nickel hydroxide/oxyhydroxide couple used in nickel–cadmium and nickel-metal hydride cells. Two polymorphs of Ni(OH)2 exist, α-Ni(OH)2 and β-Ni(OH)2; they can be transformed into γ-NiOOH and β-NiOOH, respectively. However, due to the low stability of α-Ni(OH)2 in alkaline media, β-Ni(OH)2 is usually used as a precursor material in alkaline batteries. NiFe cells use strongly alkaline solutions of potassium and lithium hydroxide and selected additives to prevent electrolyte decomposition. Typically, the mitigation or prevention of hydrogen evolution during charging has been achieved either by modification of the anode or by the addition of electrolyte additives that increase the hydrogen overpotential. Other electrolyte additives, such as wetting agents, long chain thiols and organic acids amongst others, have been investigated. As discussed above, a major challenge facing NiFe batteries is the evolution of hydrogen, which results in low charge/discharge efficiencies and low specific energy. Another consideration with this chemistry is the toxicity of nickel, which significantly influences manufacturing costs. The concept of intercalation electrodes, used in lithium-ion cells, has inspired research into similar systems that replace organic solvents with aqueous-based electrolytes. This enables the use of much lower cost materials with increased ionic conductivity. Cells using intercalation electrodes are also known as ‘rocking-chair’ batteries, as ions are inserted into and removed from electrodes during charge and discharge. An aqueous Li-ion cell was first reported where a VO2 anode and a LiMn2O4 spinel cathode in 5 mol l−1 LiNO3 solution exhibited an energy density of around 75 Wh kg−1. This is significantly higher than that seen in lead-acid and nickel-based cells; however, this system exhibited a poor cycle life. A clear limitation of aqueous electrolytes is their restricted electrochemical stability as, under standard conditions, electrolysis of H2O occurs at 1.23 V and involves H2 or O2 gas evolution. The energy density in aqueous-based systems has been increased by expanding the operating potential. For example, Hou et al.
reported values around 342 Wh kg−1 at an average discharge voltage of 3.32 V when the lithium anode was covered by a polymer and a LISICON film. These layers acted as a protective coating, preventing the formation of lithium dendrites and separating the lithium metal from the aqueous electrolyte. To gain deeper insights into the intercalation mechanisms occurring in such cells, similar electrode materials to those employed in non-aqueous batteries have been considered. However, an additional consideration in these systems is pH, as this influences the H2 and O2 evolution potentials as well as the stability of the electroactive material. In the case of cathode materials, which are quite stable in aqueous solutions, protons or water molecules inserted into the host structure compete with lithium-ion insertion, which reduces capacity due to obstructed transport pathways. The structure of the host material is very important, as each structure behaves differently throughout the insertion processes. As an example, structures such as spinel Li1−xMn2O4 and olivine Li1−xFePO4 cannot host H+, whereas layered structures show substantial H+ incorporation into the framework in acidic media. Moreover, dissolution of the electrode material in the electrolyte is another limiting factor in terms of long-term cyclability. The addition of a protective surface coating onto the electrode has been shown to improve cycle life. Increasing demand for lithium and its relatively low natural abundance have resulted in a search for suitable alternatives. Sodium is the most promising candidate to replace lithium as it exhibits similar chemical behavior and has a similar ionic radius. It has been shown that sodium can be reversibly inserted into the tunnel structure Na0.44MnO2, delivering a capacity of 45 mAh g−1 at 0.125 C. It was later reported that large format hybrid/asymmetric aqueous intercalation batteries using λ-MnO2 as the cathode material and activated carbon in a Na2SO4-based electrolyte could be operated over a wider range, but with a lower energy density. This was further developed by replacing some of the activated carbon with NaTi2(PO4)3, and has been shown to withstand thousands of cycles without significant capacity loss. This technology has been commercialized by “Aquion Energy”, which offers a range of systems from 2 kWh units for residential use to off-grid applications and grid services. Recently, Chen et al.
have shown that the use of Li+/Na+ mixed-ion electrolytes results in good stability. In these systems, at one electrode Li+ is exchanged between the electrolyte and the electrode whereas at the other electrode Na+ is exchanged, with the concentration of Na+/Li+ remaining constant upon cycling. “Rocking chair” chemistries could emerge as a potential alternative in the development of safer and higher energy density batteries in comparison with Pb-acid cells; however, the secondary reactions present in all aqueous systems restrict their performance and cycling life. The main challenges associated with aqueous ‘rocking chair’ systems have been identified as electrolyte decomposition evolving H2 and O2, side reactions with water or evolved gases, proton co-intercalation into the host electrode, and the dissolution of electrode materials. Exploiting the same positive-electrode reaction as NiFe cells are devices with alkaline aqueous electrolytes that use metal hydride or cadmium based negative electrodes. These chemistries may be familiar, as they have been employed for many years in the consumer electronics sector, and were integral to the development of electric vehicles in the 1990s. This represents a mature battery technology that has been identified as suitable for power quality applications and grid support. Research efforts in this area are focused on improving their energy density and cycle life alongside preventing the reactions that result in self-discharge. An example of this technology is the system developed by “GVEA” that uses almost 14,000 NiCd cells providing backup power of 27 MW for up to 15 min. This system has been in operation since 2003. Recent improvements to cell architecture have focused on increasing the power density of NiMH cells, and this chemistry remains a viable choice for use in light rail vehicles. NiMH cells are characterized by energy densities in the region of 250–330 Wh l−1, a specific energy of up to 100 Wh kg−1 and are limited to around 1000 charge–discharge cycles. By comparison, NiCd cells can perform roughly twice the number of cycles but are associated with a lower energy density. Not only is the toxicity of nickel and cadmium a major drawback of this technology, but it has been recently identified that NiCd cells are associated with substantially higher CO2 and SO2 emissions during production, when compared with lithium based cells. Primary zinc-air cells are a fairly mature technology that has found commercial applications in the medical and telecommunications sectors. As with other metal-air cells, a major driver for development is their outstanding theoretical energy density. The compatibility of zinc with an aqueous alkaline electrolyte allows for substantially reduced manufacturing costs in comparison with non-aqueous based cells. The development of electrically rechargeable zinc-air cells has been hindered by the propensity of zinc to form dendrites upon repeated charge–discharge cycling and by their low output power. A further drawback of aqueous alkaline electrolytes is that carbon dioxide can be absorbed by the solution, producing insoluble, electrode blocking compounds that decrease electrolyte conductivity and impede cell performance. As a consequence, the process of air purification needs to be considered alongside new cell engineering designs. Improvements in performance require the identification of suitable, robust catalysts and electrolyte additives. Zinc-air cells have been proposed as a suitable alternative to lithium-ion for use in electric vehicles and were successfully
demonstrated by “Electric Fuel” in 2004. Currently, “Eos Energy Storage” is developing a grid scale zinc-air system using a hybrid zinc electrode and a near neutral pH aqueous electrolyte. An alternative cell chemistry that has received attention of late is the iron-air cell, which also operates in an aqueous alkaline electrolyte. Iron-air cells do not exhibit the stripping/redeposition problem seen in zinc-air cells, but their theoretical energy density of 764 Wh kg−1 is lower than that of Zn-air batteries, although higher than that of Pb-acid or NiFe batteries. In addition, the electrically rechargeable cells exhibit relatively low energy efficiencies. As with zinc-air cells, the development of more efficient oxygen electrodes is required. Another noteworthy technology utilizing aqueous electrolytes is the rechargeable copper–zinc battery under development by “Cumulus Energy Storage”. This technology is based on processes used in metal refining, and the project aims to create safe, low cost battery systems with capacities between 1 MWh and 100 MWh. For large scale electrochemical storage to be viable, the materials used need to be low cost, devices should be long lasting, and operational safety is of utmost importance. Energy and power densities are of lesser concern. For these reasons, battery chemistries that make use of aqueous electrolytes are favorable candidates where large quantities of energy need to be stored. Table 1 lists selected figures of merit for various aqueous battery technologies to allow for easy comparison. It is clear that certain chemistries display desirable characteristics but are hindered by poor performance in other areas. Large scale energy storage does not demand high efficiency, nor does it require very high energy densities; the capital and operating costs of the system are more crucial design parameters. Moreover, non-aqueous batteries require the implementation of sophisticated safety systems to prevent hazardous situations. In addition, the cost and relative abundance of the reactants and raw materials required to build non-aqueous batteries remain a concern when such systems are proposed for use on the large scale. Table 2 summarizes some of the advantages and disadvantages of the aqueous batteries presented in the previous sections. Section 2.2 presented several reasons favouring the use of NiFe batteries, but also discussed some of the challenges associated with this chemistry. A major challenge preventing NiFe batteries from wider adoption is their low coulombic efficiency, which mainly occurs due to electrolyte decomposition during charging. Consequently, we have investigated several aspects of the behavior of iron based electrodes in such cells, and have developed NiFe batteries exhibiting coulombic efficiencies reaching 95%, whereby electrolyte decomposition has been virtually prevented.
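Since coulombic efficiency is the figure of merit used throughout the remainder of this section, a minimal Python sketch of how it can be computed from cycler output is given below, together with the galvanostatic current implied by a C/5 rate for a given iron loading. The column names are hypothetical and the theoretical capacity is that of the two-electron Fe/Fe(OH)2 couple; this is an illustration of the calculation, not the authors' in-house C/C++ and Python/R tooling.

import pandas as pd

FARADAY = 96485.0        # C per mol of electrons
M_FE = 55.845            # g per mol of iron

def theoretical_capacity_fe() -> float:
    """Theoretical specific capacity of iron for the 2-electron Fe/Fe(OH)2 couple, in Ah/g."""
    return 2 * FARADAY / (3600.0 * M_FE)          # roughly 0.96 Ah/g

def c_rate_current(mass_fe_g: float, rate_hours: float = 5.0) -> float:
    """Galvanostatic current (A) for a C/rate_hours test of an electrode containing mass_fe_g of iron."""
    return theoretical_capacity_fe() * mass_fe_g / rate_hours

def coulombic_efficiency(cycles: pd.DataFrame) -> pd.Series:
    """Per-cycle coulombic efficiency (%) from cycler data.

    Assumes one row per half-cycle with hypothetical columns:
      cycle     - cycle index
      step      - 'charge' or 'discharge'
      capacity  - capacity of that half-cycle in mAh
    """
    by_cycle = cycles.pivot_table(index="cycle", columns="step", values="capacity")
    return 100.0 * by_cycle["discharge"] / by_cycle["charge"]

# Hypothetical usage: a 0.2-0.25 g iron loading corresponds to a C/5 current of roughly 38-48 mA
# print(1000 * c_rate_current(0.2), 1000 * c_rate_current(0.25))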
Briefly, electrodes were produced by mixing varying amounts of Fe, FeS, Cu, Bi and Bi2S3 with PTFE. Strips of nickel foam were coated with the electrode materials and then hot-pressed at 150 °C and 10 kg cm−2 for 3 min, such that 0.2–0.25 g of iron powder was loaded on an area of approximately 1 cm2. Once produced, electrodes were tested in different electrolyte systems with the aim of increasing coulombic efficiency. As with the development of electrolyte systems, experimental design was used to facilitate the improvement of electrode formulations. Data extraction was automated using an in-house developed C/C++ program that interrogates all files produced by the battery cycler, and data analysis was carried out using Python and the R statistical software. The basic electrolyte used in NiFe battery development is an aqueous solution of potassium hydroxide, typically at a molarity of 5.1 mol l−1. Additives investigated in electrolyte formulations include K2S, LiOH, mucic acid, CuSO4, and selected thiols. Deionized water was produced using an Elix 10-Milli-Q Plus water purification system. Iron-based electrodes were tested in a three-electrode cell with potentials measured against a mercury/mercury oxide (Hg/HgO) reference electrode. Nickel electrodes, obtained from a commercial nickel iron battery, were employed as counter electrodes. Electrodes were cycled from −0.9 to −1.4 V vs. Hg/HgO at a rate of C/5, a standard procedure for testing iron electrodes under galvanostatic conditions, using an Arbin SCTS battery cycler. Galvanostatic charge–discharge experiments were performed at room temperature until steady state was reached. Formation and stabilization of the electrodes was typically found to be complete by the 30th cycle of charge and discharge. Once assembled, NiFe cells were cycled as explained in Section 3.2. Experimental results indicate that iron based electrodes must reach a stable configuration before a NiFe battery attains a steady capacity. Fig. 3 illustrates that electrodes require such a conditioning period irrespective of the electrolyte used. It can be clearly seen that in the early stages coulombic efficiency is always very poor; however, this issue disappears with cycle number and, in general, after the 30th cycle batteries have not only increased their coulombic efficiency but have also reached steady state. In our previous paper, we reported that the use of lithium hydroxide seems to have a marginal incidence on cell performance. Fig. 3 confirms this observation.
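As a concrete illustration of the data-analysis step described above, the sketch below computes per-cycle coulombic efficiency from a cycler export and summarises the post-conditioning behaviour (cycle 30 onwards). It is a minimal example only: the file name and column headers are assumptions, not the format produced by the Arbin cycler or by the in-house extraction program.

```python
import pandas as pd

# Assumed file and column names -- purely illustrative.
df = pd.read_csv("cell_A_cycles.csv")   # columns assumed: cycle, charge_mAh, discharge_mAh

# Coulombic efficiency = discharge capacity / charge capacity for each cycle.
df["ce_pct"] = 100.0 * df["discharge_mAh"] / df["charge_mAh"]

# Formation/conditioning is reported to be complete by roughly the 30th cycle,
# so steady-state performance is summarised over the later cycles only.
steady = df[df["cycle"] >= 30]
print(f"steady-state coulombic efficiency: {steady['ce_pct'].mean():.1f} %")
print(f"steady-state discharge capacity : {steady['discharge_mAh'].mean():.1f} mAh")
```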
It is noteworthy that the efficiencies of electrolytes A and B do not exhibit meaningful differences. It has long been recognized that the use of lithium hydroxide as an electrolyte additive would benefit the long-run operation of the iron electrode. However, our experimental tests were not long enough to either confirm or refute this claim; longer testing would reveal the usefulness of LiOH as an electrolyte additive for NiFe cells. Potassium sulfide is seen to have a positive effect on battery performance. The efficiencies of formulations C, D and E are markedly higher than those of cells using electrolytes that did not contain potassium sulfide, and the greatest efficiency is observed at a K2S molarity of 0.2 mol l−1; increasing the molarity of K2S to 0.3 mol l−1 results in a significant reduction in efficiency, indicating the presence of an optimal molarity. Traditionally, iron electrodes have been manufactured utilizing different additives, and in particular the use of iron sulfide in concentrations not exceeding 15% has been reported to be beneficial to battery performance; in fact, battery performance tends to decrease with iron sulfide content in the region between fifteen and forty percent FeS. Although most NiFe papers tend to focus on electrode formulations below 20% FeS, we investigated the entire composition space, from pure iron electrodes to pure iron sulfide electrodes, on a binder-free basis. Fig. 4 illustrates the variation of capacity and coulombic efficiency with changing iron sulfide content. At concentrations greater than 50% FeS, coulombic efficiency rises with FeS content; surprisingly, coulombic efficiencies of 90–95% were reached when the iron sulfide content exceeded 80%, indicating that electrolyte decomposition had been prevented. However, Fig. 4 also shows that at high concentrations of iron sulfide the capacity of the battery is drastically reduced in comparison with that achieved at low FeS concentrations. Fig. 4 thus highlights the existence of a compromise in battery design: at low concentrations of FeS, coulombic efficiencies are low and capacities large; conversely, at high FeS concentrations, coulombic efficiency tends to be high but capacity low.
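One way to make this capacity-versus-efficiency trade-off explicit is to identify the formulations that are not outperformed on both metrics at once. The sketch below is a generic illustration of that idea rather than the analysis used in the study; the inputs would be the steady-state quantities plotted in Fig. 4.

```python
import numpy as np

def pareto_optimal(capacity, efficiency):
    """Indices of electrode formulations for which no other formulation is strictly
    better in both discharge capacity and coulombic efficiency."""
    capacity = np.asarray(capacity, dtype=float)
    efficiency = np.asarray(efficiency, dtype=float)
    keep = []
    for i in range(len(capacity)):
        dominated = np.any((capacity > capacity[i]) & (efficiency > efficiency[i]))
        if not dominated:
            keep.append(i)
    return keep

# Usage: pass one steady-state capacity and one coulombic-efficiency value per FeS
# fraction tested; the returned indices are the formulations worth carrying forward
# to cycle-life testing.
```

Because capacity and efficiency move in opposite directions across most of the composition range, several formulations typically survive such a screen, which is consistent with the two-regime picture described below.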
An additional problem with batteries utilizing large amounts of iron sulfide is their reduced cycle life; it was observed that cells utilizing large amounts of FeS faded after only 100–150 cycles. It remains a challenge to maintain the improved coulombic efficiency of NiFe batteries that utilize large fractions of FeS while improving capacity and cycle life. By pursuing the development of cost effective iron sulfide based NiFe cells, we have identified two main electrode formulation regions. At low concentrations of iron sulfide, cells exhibit low coulombic efficiencies and relatively large capacities. Conversely, at large concentrations of iron sulfide, cells exhibit very large coulombic efficiencies and very low capacities. The experimental approach used in this project has facilitated and accelerated the development of secondary NiFe batteries. From our experimental findings, we can conclude that there is a link between electrode performance and (i) electrode composition, in the form of iron sulfide content, and (ii) electrolyte composition. A strong correlation between cell performance and electrolyte system was found, with potassium sulfide identified as a key additive for improving cell performance. Lithium hydroxide, on the other hand, was found to have a limited effect on cell performance; however, no long-run testing was done, and it would be unwise to rule out the importance of this additive, which is present in nearly all NiFe cell formulations, so extended testing is recommended as future work. For large-scale applications, the safety and low installed costs of aqueous-based batteries make them desirable propositions if some of their limitations can be overcome. Due to existing manufacturing capacity, lead acid cells are likely to remain a viable option for many applications; however, as this is a mature technology, only incremental advances in performance are likely. More substantial improvements in performance with respect to efficiency, longevity and cost are likely to be seen in other aqueous-based chemistries, and these technologies have the potential to be integral components in future electricity supply systems. Finally, the ideal aqueous battery would be one that combined the longevity of a NiFe cell with the specific energy of a metal-air battery and the environmental friendliness of a ‘rocking chair’ battery: developmentally daunting, but a worthwhile project requiring Manhattan Project-scale investment. The authors declare that there is no conflict of interest regarding the publication of this paper. The authors would like to acknowledge the U.K. Engineering and Physical Sciences Research Council for supporting this work. VLM thanks FAPESP for fellowship support. The information and views set out in this article are those of the authors and do not necessarily reflect the official opinion of Elsevier or Renewable and Sustainable Energy Reviews.
Energy storage technologies are required to make full use of renewable energy sources, and electrochemical cells offer a great deal of flexibility in the design of energy systems. For large scale electrochemical storage to be viable, the materials employed and device production methods need to be low cost, devices should be long lasting, and safety during operation is of utmost importance. Energy and power densities are of lesser concern. For these reasons, battery chemistries that make use of aqueous electrolytes are favorable candidates where large quantities of energy need to be stored. Herein we describe several different aqueous based battery chemistries and identify some of the research challenges currently hindering their wider adoption. Lead acid batteries represent a mature technology that currently dominates the battery market; however, there remain challenges that may prevent their future use at the large scale. Nickel–iron batteries have received a resurgence of interest of late and are known for their long cycle lives and robust nature; however, improvements in efficiency are needed in order to make them competitive. Other technologies that use aqueous electrolytes and have the potential to be useful in future large-scale applications are briefly introduced. Recent investigations into the design of nickel–iron cells are reported, with it being shown that electrolyte decomposition can be virtually eliminated by employing relatively large concentrations of iron sulfide in the electrode mixture; however, this comes at the expense of capacity and cycle life.
186
LiDAR, UAV or compass-clinometer? Accuracy, coverage and the effects on structural models
Virtual outcrops are an important source of information, from which a wide variety of geological data can be derived.These detailed 3D reconstructions of outcrop geology are applied to a broad range of studies, including sedimentology and stratigraphy, reservoir modelling, and structural geology.Light Detection and Ranging has been the principal acquisition technique for deriving virtual outcrops in the last decade, though acquiring this type of detailed 3D spatial data requires expensive instrumentation and significant knowledge of processing workflows.The recent availability of ready-to-use small Unmanned Aerial Vehicles and the advent of Structure from Motion i.e. digital photogrammetry software, has opened-up virtual outcrops to a growing number of Earth scientists.This technique provides the ability to use unreferenced, overlapping images of a structure to semi-automatically generate a 3D reconstruction, easily and without the expense and specialist knowledge required for LiDAR acquisition and processing.The combination of these factors coupled with the advantage of synoptic aerial survey positions and efficient, rapid surveying afforded by UAVs has seen these techniques gain much popularity in the Earth Science in recent years.The aim of this paper is to assess virtual outcrop generation methodologies, the accuracy and reliability of these reconstructions, and the impacts for structural analysis when using these digital datasets, based on a single case study.The validity of using virtual outcrops to make predictions of the subsurface has been addressed by a number of workers.The accuracy and precision of virtual outcrops was found to be critical if geological models derived from them are to make reliable predictions and decisions.As such, a number of studies have addressed the efficacy of LiDAR derived surface reconstructions in Earth science applications, across a multitude of scales.These studies found that LiDAR provides high accuracy reconstructions, resolvable to mm scales.Similarly, work has been done to test SfM reconstructions with reference to geological problems, generally against detailed 3D LiDAR and differential Global Positioning Systems datasets.These studies found that SfM yields acceptable surface reconstructions, albeit with less consistency than those generated by LiDAR.Inaccuracies in SfM datasets include point-cloud ‘doming’, greater inaccuracies at model edges, and failure of automated feature matching due to lack of discernible features in imagery.The specific focus of this study is whether the known accuracy of LiDAR data automatically results in a better, more reliable geological model when compared with SfM, using data acquired at a single site.Furthermore, we assess whether greater coverage afforded by UAV compensates for instances of lower accuracy and reconstruction reliability if the entire virtual outcrops, rather than isolated patches, are used as a primary source of data for model building.Thus the specific aims of this study are: to assess the efficacy of terrestrial LiDAR, terrestrial SfM and aerial SfM to accurately reconstruct geological surfaces over an entire 3D outcrop, testing the validity of using virtual bedding surfaces for extracting structural data; to evaluate the effects of acquisition method and survey design on the accuracy and reliability of surface reconstructions and coverage of virtual outcrops; to quantify the influence of these factors on geological model building and along-strike predictions of geometry.Manual and digital 
compass-clinometer data were used to compare virtual outcrops generated using terrestrial LiDAR, TSfM and ASfM at a fold structure in SW Wales. Using our direct measurements and derived orientation data from the three virtual outcrops, separate geological models were built and compared. Finally, dGPS data were used to determine the effects of direct georeferencing on the spatial accuracy of the datasets. This paper highlights potential sources of error and inaccuracies introduced during data acquisition and processing, and their effects on the reliability of resultant structural measurements. Based upon these results, the impact of the various acquisition methods on along-strike prediction of fold geometry and hinge placement is discussed. Consequently, the implications for survey design and quality checking of data are considered when creating virtual outcrops for structural analysis and prediction. The classic Stackpole Quay syncline, a photograph of which appeared in the first issue of this publication, is situated on the southeastern edge of the Pembroke Peninsula, West Wales. The structure is composed of folded Visean carbonates of the Pembroke Limestone Group and lies close to the northern limit of Variscan deformation in Britain. The syncline is an upward-facing fold with a sub-vertical axial surface and a shallowly ENE-plunging fold axis. Stackpole Quay was chosen for this study as it has a number of features suitable for a comparison of surveying technologies and methods. Primarily, the 3D nature of the outcrop and the continuity of bedding around it allow bedding orientation measurements to be made across the syncline hinge and limbs at multiple locations along strike. The limited outcrop size also enables a rapid and comprehensive survey of the structure using the chosen acquisition methods. Notwithstanding its coastal setting, convenient vantage points exist for viewing the structure, and direct access onto the outcrop is possible for collection of structural measurements. The five methodologies used for primary data acquisition are: (1) traditional compass-clinometer for direct measurement of bedding dip and dip azimuth; (2) a digital compass-clinometer app on a portable tablet for direct measurement of bedding dip and dip azimuth; (3) terrestrial LiDAR scans with simultaneous acquisition of digital images for virtual outcrop generation and texturing; (4) terrestrial acquisition of digital photographs for SfM reconstruction and virtual outcrop generation; and (5) aerial acquisition of images by UAV for construction of virtual outcrops using SfM. In addition to these five methods, a differential Global Positioning System was used to acquire precisely located points around the outcrop, allowing co-registration of datasets and calibration of models. LiDAR scan station positions, TSfM acquisition points and dGPS measurement locations are provided in Fig.
2a.A summary of the data acquisition, processing and pre-interpreted datasets is provided in Table 1.Orientations of bedding surfaces on the structure were collected using a traditional handheld Suunto compass-clinometer.Experienced users of handheld compass-clinometers of this type are reckoned to achieve measurement accuracy within 1° on compass bearings and 2° on dip measurements.These values fall well within the range of variability of bedding orientations on single surfaces at Stackpole, and as such this method was assumed to be the ‘accurate’ base dataset against which the virtual outcrop bedding plane dip and dip azimuths were compared.The locations of orientation measurements were derived from map reading and recorded on a 1:500 paper version of the digital Ordnance Survey map of the study area.Subsequent to the data collection, the orientation data and their 3D coordinates were digitized to allow for compilation and comparison with the digital datasets.FieldMove, a digital compass-clinometer and notebook package for portable devices by Midland Valley on iPad Air, was used for digital acquisition of bedding orientations."Automatic positioning of measurements was achieved within the app using the iPad's integrated GPS unit, directly onto preloaded satellite imagery.Accuracy of iPad measurement positioning was generally estimated to be within 3 m of location, in agreement with published data.On occasion the app mislocated data points and these were manually corrected, in the app, during field acquisition.The accuracy of FieldMove and other digital compass-clinometer apps on Android devices has been addressed in recent work, but no systematic study of measurement accuracy is available for FieldMove 1.2.2 on iPad Air.According to Midland Valley, FieldMove compass-clinometer measurements on iPad tablets have accuracies to within 5° for compass bearings and ‘very good’ clinometer accuracy, with results from Apple devices displaying greater reliability than Android or Windows counterparts.During the digital compass-clinometer survey, measurements were regularly monitored and compared with traditional compass-clinometer measurements, in accordance with recommended workflows.A fully portable, tripod mounted RIEGL VZ-2000 laser scanner was used to scan the study site.Attached to the laser scanner was a Nikon D80 DSLR with 50 mm fixed focal length lens for digital image acquisition to texture the 3D LiDAR model.LiDAR scanning followed normal methodologies applied to Earth science applications, with consideration given to sufficient scan overlap and correct scanner positioning to minimize scan occlusion.The VZ-2000 has a range of over 2 km and acquisition rates of up to 400,000 measurements per second, with each data point assigned local, Cartesian x,y,z coordinates.Ten scans from positions around the outcrop were performed, in conjunction with the acquisition of images using the calibrated Nikon D80.Quoted range measurement precision and accuracy of the RIEGL VZ-2000 are 5 mm and 8 mm respectively at 150 m range.GPS survey information to initialise absolute and relative LiDAR scan registration was provided by an integrated single channel GPS receiver and inclination sensors on-board the VZ-2000 scanner, allowing coarse registration of point-clouds during processing.A 24 Megapixel, Nikon D3200 camera was used for terrestrial image acquisition, at a fixed focal length of 26 mm.Images were acquired in auto mode to account for changes in lighting and distance from the structure: exposure times in 
the image dataset range from 1/100 s to 1/500 s, with ISO values of 100–400.As this survey encompassed Stackpole syncline and the adjacent Stackpole Quay, the original dataset comprised 724 images from 27 camera positions.Only those that included Stackpole syncline or the immediate surroundings were selected for virtual outcrop construction.This procedure resulted in a reduced dataset of 446 images from 20 camera positions used for SfM alignment and virtual outcrop generation.Local topography around the study site and the presence of open sea on the east side of the structure limited the availability of stable camera positions and thus did not allow full coverage of the outcrop in terrestrially acquired images.Camera positions were, however, selected during fieldwork to maximise coverage, synoptic viewpoints and convergent imagery of the study site, where possible, in accordance with workflows laid out by a number of authors.Average ground-pixel resolution for the 402 images utilized for SfM reconstruction was 7.48mm/pixel.This survey was conducted with a DJI Phantom 3 Advanced UAV.No upgrades or modifications were made to the UAV prior to the field campaign.Flights were piloted manually, without the use of a flight planning package.Automatic acquisition of digital images at 5 s intervals was performed with the on-board, 12 Megapixel camera at automatic exposure levels.A single 15-min flight over the study area, generated 202 digital images of the outcrop with average image ground-pixel resolution of 6.24mm/pixel.The on-board GPS receiver provided approximate coordinates of each image acquisition point during the survey flight.A spatial survey was conducted with a Leica Viva dGPS system with Real Time Kinematic corrections received by radio link, allowing measurement of point locations with 3D accuracy, in this survey, of 0.009–0.015 m. Setting up a GPS base station 20 m from the structure and conducting the survey with a rover unit enabled achieving this accuracy.The justifications for including a dGPS in our survey methodology are threefold:To precisely define Ground Control Point positions in the survey area, and incorporate them into the TSfM processing workflow.GCPs were located during fieldwork according to protocols set out by a number of authors to ensure an accurately georeferenced virtual outcrop.To ease compilation of all data into a single oriented geometrical framework.To test the efficacy of direct georeferencing of point-clouds and virtual outcrops using on-board GPS instruments for terrestrial LiDAR and ASfM surveys.This latter analysis was achieved by recording 108 positions on the structure, specific to individual bedding surfaces.This protocol enabled an assessment of the positional accuracy of virtual outcrops generated without GCPs.Following acquisition of field data, the pre-processed LiDAR point-clouds and digital images required a number of processing steps to generate a final virtual outcrop.Buckley et al. 
provide a detailed review of the processing procedure for LiDAR datasets acquired for use in the Earth Sciences. Absolute registration and co-registration of the point-clouds generated from the individual scan stations were achieved using RiScan Pro, with coordinates provided by on-board GPS measurements during acquisition. Subsequent to coarse, manual co-registration of the point-clouds, filtering and decimation of the point-cloud data were performed to reduce point redundancy from overlapping scans and the adverse effects caused by the presence of vegetation. An iterative closest points (ICP) algorithm was then used in RiScan Pro to automatically align the point-clouds after filtering and decimation. A least-squares fitting calculation was performed to align a total of 9246 points from the 10 point-clouds. The overall standard deviation of distances between all used pairs of tie-points was quoted in RiScan Pro as 0.0095 m.
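For readers unfamiliar with the ICP step, the sketch below shows a minimal point-to-point variant in Python (NumPy/SciPy): each point of the moving scan is matched to its nearest neighbour in the fixed scan, a least-squares rigid transform is solved by SVD, the transform is applied, and the loop repeats until the mean residual stabilises. This is an illustration of the general algorithm only, not the implementation inside RiScan Pro, which works on filtered multi-scan data and reports the tie-point statistics quoted above.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B (Kabsch / SVD)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(source, target, max_iter=50, tol=1e-6):
    """Minimal point-to-point ICP: match nearest neighbours, solve, apply, repeat."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)          # nearest target point for every source point
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:        # stop once the mean residual stabilises
            break
        prev_err = err
    return R_total, t_total, err
```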
The fundamental process of SfM is a feature-detection-and-description algorithm by which common features or textures in overlapping images are identified. After detection, matched features are assigned 3D coordinates and the software iteratively and automatically constructs a 3D network of matched points, or tie-points. Subsequent to the creation of the 3D tie-point network, a dense cloud is generated to populate the space between tie-points by a multi-view stereo (MVS) algorithm. This algorithm essentially functions by searching pixel grids within images, selecting the best matches and generating points in 3D space. The estimated achievable precision of SfM-derived point coordinates is controlled by: the number of overlapping images in which the feature of interest appears; the mean distance from camera to target; the distance between camera centres relative to the object of interest, i.e. the angle of image convergence on the scene; the principal distance of the camera, a measurement similar to focal length; and the precision of image measurements and reconstruction parameters. Generation of tie-point networks and dense point-clouds to derive virtual outcrops was performed using Agisoft PhotoScan Professional 1.1.6, following protocols set out by a number of authors. Processing parameters in PhotoScan for the ASfM and TSfM datasets were identical, to allow direct comparison. Images were aligned at ‘high’ accuracy, with generic pair pre-selection enabled, and default key and tie point limits of 40,000 and 1000 respectively. Dense cloud construction was set at ‘high quality’ with depth filtering set to ‘aggressive’, improving the accuracy of automatically estimated point coordinates. A dense point-cloud of 1.8 × 10^5 points was generated from a raw dataset of 446 terrestrial images using the stated parameters in PhotoScan. As the locations of camera positions were recorded using a consumer-grade GPS, this information was not included in the processing workflow in PhotoScan. High precision dGPS measurements of GCPs were, however, defined during processing, following established procedures, to allow precise georeferencing of the dataset. The 202 images taken in the single 15-min UAV flight were automatically geotagged during acquisition. PhotoScan automatically utilized this information during the SfM processing to enhance the image matching process and constrain estimates of camera locations. Processing time was consequently reduced, and direct georeferencing of the point-clouds was enabled using the global coordinate system WGS 1984. This direct georeferencing can be overridden or supplemented during processing by defining GCPs, as in our terrestrial photogrammetry dataset. However, to allow estimation of the accuracy of auto-registration, this protocol was not used. The efficacy of direct georeferencing by PhotoScan and the necessary post-processing corrections are addressed in Section 4.4.1. To obtain spatially homogeneous point-cloud data, each point-cloud created by the three methods was subsampled using CloudCompare, an open-source software package for 3D point processing. Semi-automated triangulation and generation of 3D mesh surfaces from the processed point-clouds were done using Innovmetric PolyWorks. Any obvious meshing artefacts and holes were manually corrected in PolyWorks. Identical procedures were followed for each point-cloud in PolyWorks to allow comparison of the final outcomes. Mesh face counts are provided in Table 1. Subsequent to meshing, texturing by projection of images was carried out within RiScan Pro and Agisoft PhotoScan for the LiDAR and photogrammetric datasets, respectively. As the focus of this study is the geometrical accuracy of virtual outcrops rather than the quality of texturing and photorealism, this step was performed at a relatively low resolution to allow easier handling of data during interpretation. Finally, the three separate virtual outcrops were co-registered in the digital environment using the spatial data collected by dGPS and by the on-board LiDAR scanner and UAV instruments. Table 1 provides a summary of the pre- and post-processed point-clouds and resultant meshes. Prior to any structural analysis of the virtual outcrops, or of the directly collected field data, the datasets needed to be co-registered into a single oriented geometrical frame. First, individual compass-clinometer dip and dip azimuth data were digitized manually in their recorded map positions. Digital compass-clinometer measurements, automatically georeferenced in the FieldMove app and occasionally corrected in the field, required no further corrections. GCPs defined during TSfM processing enabled the generation of a spatially correctly referenced virtual outcrop that did not require any further corrections. These three datasets were combined into an oriented geometrical frame in Move 2016.1 by Midland Valley. Direct georeferencing of the ASfM point-cloud by use of geotagged imagery in PhotoScan significantly reduced user input time in the processing workflow. To estimate the efficacy of this process, after importing the ASfM virtual outcrop into Move, known points on the virtual outcrop were picked and compared to corresponding dGPS points on the TSfM virtual outcrop, acquired during the TSfM survey. This comparison enabled an approximate estimation of inaccuracies in scale, orientation and position. Table 2 provides a summary of the post-processing corrections applied to the directly georeferenced ASfM virtual outcrop. Calculations from the applied corrections give an estimate of less than 1 degree of rotation and a scaling ratio of 1.006 of the virtual outcrop relative to control measurements. For the purposes of this study, this variation was well within accepted ranges of error, given deviations in bedding orientations and user/instrument error in dip and dip azimuth measurements, irrespective of the method used. As such, further investigations into the accuracy and precision of direct georeferencing were not required for this study. After compiling the data into a single georeferenced framework, differences between the five datasets were identified by: targeted comparisons of structural measurements from defined control surfaces around the outcrop; assessment of differences in point distributions through a single cross-section slice of the LiDAR, TSfM and ASfM dense point-clouds; and coverage and ‘completeness’ comparisons of the three virtual outcrops.
To perform a quantitative comparison of our data collection methods and the accuracy of surface reconstruction in the virtual outcrops, a number of bedding surfaces around the syncline structure were analyzed. This targeted approach provided six defined areas of the outcrop for measurement comparison. ‘Control surfaces’ were chosen to represent the range of bedding surface characteristics on the syncline, taking aspect, elevation, size and structural position into account. As such, bedding planes from both the limbs and the hinge of the syncline were chosen for analysis. Overhanging and upward-facing planes, planes near the base and higher up on the structure, and planes visible from different compass directions were also important considerations when selecting control surface locations. Accounting for these factors, we selected six control surfaces that fulfilled all criteria for quantitative comparison of methodologies. Multiple direct dip and dip azimuth measurements were collected on each surface using a traditional compass-clinometer. This process was repeated with the FieldMove app on iPad, with a similar number of measurements made for each control surface. After processing and georeferencing of the LiDAR, TSfM and ASfM virtual outcrops, the six control surfaces were manually identified in each virtual outcrop. Patches containing multiple mesh triangles on each control surface were selected and analysed for dip and dip azimuth. The dip and dip azimuth of each mesh triangle in the selected patch is plotted as a pole on an equal angle stereonet for comparison with our control data. Spherical statistical methods using an orientation matrix and eigenvectors were then used to calculate the mean principal orientations of bedding planes and the vector dispersion of poles to bedding. These provided mean principal dip and dip azimuth data for the mesh triangle populations on each of the six control surfaces for the remotely acquired data, and corresponding orientation measurements collected in the field.
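The orientation-matrix approach referred to above is compact enough to sketch. Given the dip and dip azimuth of every mesh triangle in a picked patch, the poles are converted to unit vectors, accumulated into a 3 × 3 orientation matrix, and the eigenvector with the largest eigenvalue is taken as the mean pole; the corresponding eigenvalue fraction is one simple measure of dispersion. This is a generic illustration under an east–north–up convention, not the exact routine used in the study (which may, for example, weight triangles by area).

```python
import numpy as np

def poles_from_dip_dipazimuth(dip_deg, dipaz_deg):
    """Upward unit normals (east, north, up) to planes given dip / dip azimuth in degrees."""
    d, a = np.radians(dip_deg), np.radians(dipaz_deg)
    return np.column_stack((np.sin(d) * np.sin(a),
                            np.sin(d) * np.cos(a),
                            np.cos(d)))

def mean_principal_orientation(normals):
    """Mean dip / dip azimuth of a patch of mesh-triangle poles via the orientation matrix."""
    T = normals.T @ normals                 # 3x3 orientation (scatter) matrix, sign-invariant
    eigval, eigvec = np.linalg.eigh(T)      # eigenvalues in ascending order
    v = eigvec[:, -1]                       # largest eigenvalue -> mean pole of a cluster
    if v[2] < 0:                            # report the upward-pointing normal
        v = -v
    dip = np.degrees(np.arccos(np.clip(v[2], -1.0, 1.0)))
    dipaz = np.degrees(np.arctan2(v[0], v[1])) % 360.0
    clustering = eigval[-1] / eigval.sum()  # approaches 1.0 when poles are tightly clustered
    return dip, dipaz, clustering
```

Calling mean_principal_orientation(poles_from_dip_dipazimuth(dips, dipazimuths)) on a control-surface patch returns a single dip / dip-azimuth pair directly comparable with a compass-clinometer reading of the same surface.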
Digital compass-clinometer control surface data display a dip deviation of up to 4° with respect to traditional compass-clinometer measurements. Deviation of dip azimuth, however, is more pronounced. This is especially apparent for CS5 and CS6, which display respective deviations of 10 and 15° from the mean principal orientation of the traditional measurements. In addition, digital compass-clinometer stereonet poles display greater dispersion than traditional measurements on the majority of control surfaces. Measurement drift was observed on a number of occasions during fieldwork using FieldMove on the iPad, and is particularly pronounced with respect to dip azimuth orientations. User monitoring, censoring of orientation readings and gyroscope updating are required at regular intervals, and calibration against a traditional compass-clinometer is recommended. Restarting the app, and hence updating the gyroscope, is generally sufficient to rectify any measurement drift before resuming fieldwork. The poor agreement of dip azimuth data with traditional measurements for CS5 and CS6 is thought likely to be due to measurement drift and insufficient recalibration. LiDAR-derived reconstructions display the greatest consistency with compass-clinometer measurements for the six control surfaces. A maximum deviation from control data of five degrees in mean principal orientation was calculated for control surfaces 2 and 6. High point-cloud densities and detailed LiDAR-mesh reconstruction result in the conservation of bedding plane asperities, and thus greater dispersion of poles to bedding than in the control data. Despite the greater pole dispersion, the good agreement of data with control measurements and the statistical robustness afforded by large sample numbers mean that this conservation of bedding plane asperities does not negatively affect the mean principal orientations of control surfaces. The general agreement of mean principal orientations with control data demonstrates the fidelity of LiDAR-derived mesh reconstructions to true outcrop geometries. TSfM generally yielded the poorest results across all datasets, along with the greatest deviation from control. TSfM reconstructions of CS1, CS2 and CS3 display good agreement with control data, with deviation values similar to the LiDAR dataset, albeit with greater dispersion of plotted poles. Data from CS4, CS5 and CS6, however, display high deviation in mesh triangle orientations, greater dispersion of plotted poles and less reliable reconstructions when compared to the compass-clinometer measurements. The greatest deviation from compass-clinometer, of 70°, occurs at CS6. Also, the mean principal orientations of some bedding surfaces have a dip azimuth direction roughly opposite to that of the other datasets. A full discussion of the inaccuracies and variable reliabilities in TSfM reconstructions is provided in Section 7.2. Mesh surfaces derived from ASfM were generally of greater accuracy than those derived from TSfM. Reconstructions of CS5 and CS6 display increased deviation from control data, though to a lesser extent than TSfM. Maximum deviations of mean dip and dip azimuth across the control surfaces are 25° and 40° respectively. Tighter clustering of poles for ASfM reconstructions indicates greater homogeneity in mesh triangle orientations compared to TSfM, and consequently greater confidence in the mean principal orientations. While these data display some deviation from control, the reliability of the surface reconstructions is greater than that of the TSfM dataset. A detailed view of CS5 reveals the discrepancies between the surface reconstructions of the three virtual outcrops. The TSfM surface reconstruction has a mean principal dip azimuth in the opposite dip quadrant to that of the other datasets, and visual comparison with a field photograph of the same area shows the mesh surface reconstruction to be erroneous. Though the LiDAR reconstruction for CS5 contains a greater number of mesh triangles than its TSfM and ASfM equivalents, and hence more orientation measurements, we do not attribute the discrepancies in bedding orientations to mesh or measurement density. Inspection of the respective CS5 reconstructions reveals smoothing of the stepped profile and flattening/inversion of the dip panel in the TSfM virtual outcrop. Inspection of the identically subsampled point-clouds from which the mesh reconstructions were derived reveals the differences between the datasets. LiDAR point distributions faithfully represent the stepped profile of the structure and the angular nature of bedding edges. In addition, surfaces unsampled by LiDAR are apparent at a number of places through the point-cloud cross-section, where outcrop geometry and low-elevation survey positions placed bedding planes in scan shadow. In contrast, TSfM point distributions display a distinctly smoothed trend where topographic recesses are pronounced. Surfaces unsampled by LiDAR are populated by TSfM data. Thus, TSfM points are more continuously and evenly distributed across the section
than those generated by LiDAR, but are not geometrically representative of the outcrop, particularly where the outcrop profile is angular.ASfM point distribution displays a trend for greater continuity and regularity of point distribution across the section, and a number of surfaces unsampled by terrestrial methods are well sampled by this method.At some locations across the section minor smoothing across bed edges is apparent in ASfM point distributions, but to a lesser degree than observed in the TSfM data.Greatest agreement between datasets is observed on large planar or ‘simple’ surfaces, whereas areas of stepped weathering and recessed bedding surfaces result in the greater deviation in point-cloud distributions.As all datasets were meshed using identical parameters, the three separate dense point-clouds underwent the same process of data reduction and point interpolation during processing.Point distribution is the fundamental control on surface reconstruction accuracy, and results from Section 5.2 clearly shows that poor surface reconstructions are coincident with significant point smoothing over areas of stepped outcrop profile.An appraisal of control surface results reveals that the LiDAR dataset is of greater accuracy than TSfM and ASfM counterparts, and as the datasets were meshed using identical parameters, point distribution is the controlling factor.Given that TSfM and LiDAR data in this study were both collected from terrestrial survey stations, point sampling and distribution were expected to be roughly coincident, with occlusion occurring at similar locations.Fig. 6, however, displays marked differences in respective point distributions, the causes for which are addressed in Section 7.Virtual outcrop reconstructions of 69%, 78% and 100% were achieved of the Stackpole structure by the respective remote acquisition techniques.A visual comparison of the three dip azimuth-coloured virtual outcrops highlights the differences in coverage and completeness of the reconstructions.Patchy coverage and mesh surfaces with a large number of holes is characteristic of the LiDAR data, while the extent of TSfM is similar, but more complete, in agreement with observations from Fig. 
6.This effect is due to the lack of suitable survey stations at the seaward side of the structure, and a lack of available synoptic vantage points for terrestrial LiDAR scanning and acquisition of images for TSfM around the outcrop.The ASfM virtual outcrop, however, displays complete coverage of the structure, particularly toward the top and eastern end of the outcrop, due to the ability to gain elevated survey positions from the UAV platform.Colouring the mesh surface for dip azimuth highlights corresponding surfaces across the three virtual outcrops datasets that do not have the same orientation attributes.This difference is clear on the southeastern part of the mesh reconstructions, where NE-dipping surfaces are faithfully represented by LiDAR and ASfM, but TSfM counterparts appear to be SW-dipping.This highlights the trend in the TSfM dataset for erroneous mesh reconstructions, as observed on CS5.Direct comparisons of surface reconstructions and directly acquired data provide an insight into the limitations of contrasting surveying methods and potential sources of error.Part of the aim of this study was to ascertain how these factors ultimately impact structural interpretation and model building.To test this, we calculate fold-projection vectors based on established methods to predict the along strike position of the syncline hinge and the fold geometry.In addition to the data acquired for control surface comparisons, orientation data were collected across the entire structure during fieldwork by compass-clinometer, and subsequently, from virtual outcrops.As with field collection of data, the LiDAR, TSfM and ASfM virtual outcrops were examined separately, and only surfaces that were deemed as representative of bedding were targeted for data collection and analysis.Mean principal orientations were calculated from these mesh triangle patches and used to populate stereonets for fold axis calculations.Each plotted data point in Fig. 
8 represents the mean principal orientation of a picked bedding patch on the respective LiDAR, TSfM or ASfM virtual outcrops.Of the models generated in this study, the most complete virtual outcrop was afforded by the UAV platform and provided the greatest number of measureable bed surfaces.Most critically, top bedding surfaces on the southern face of the structure were picked with much greater confidence than on the terrestrially acquired counterparts.Virtual outcrop holes associated with low-elevation LiDAR acquisition and errors in TSfM surface reconstructions were common, particularly on higher, upward facing beds.Consequently, picking LiDAR bed reconstructions was difficult over some parts of the outcrop, and a number of TSfM surfaces were rejected, based on quality checking of orientation data.Overhanging surfaces provided the bulk of the measurements on the southern side of the outcrop from terrestrially acquired datasets.Due to the data acquisition angle, these overhanging surfaces provided the most reliable reconstructions of true bedding.The numbers of poles to bedding on fold projection stereonets reflect the number of ‘patches’ confidently picked for data extraction, from which mean principal orientations were calculated.Following extraction of bed orientation data from the virtual outcrops, each dataset, including those directly collected, were plotted as poles to bedding for the purpose of structural analysis.High-density CS data points were not included at this stage to ensure statistically representative samples.Best-fit great circles were calculated for each dataset.Derived π-axes were plotted and recorded to predict fold axis orientation.This method was chosen as a means of contrasting the effects of different data acquisition methods on along-strike prediction from structural models.In addition, field measurements of bedding-fracture intersection lineations, interpreted as fold-axis parallel, were included in Fig. 
8 for comparison with the calculated values. Poles to fold limbs, π-girdles and π-axes for each of the five datasets display similar trends on first appraisal. On closer inspection, the pole distributions reveal some important differences, particularly with respect to pole density. The pole-to-bedding distribution is similar for the traditional compass-clinometer, digital compass-clinometer and ASfM datasets, with a fairly even distribution of points on and around the π-girdle, including in the hinge zone of the syncline. The TSfM and LiDAR data reveal a paucity of poles in the hinge and a roughly bimodal distribution. Greater dispersion of poles is evident in the TSfM dataset throughout both limbs and hinge of the syncline, with points occasionally falling far outside the range represented by the other datasets. Of the remotely acquired data, the ASfM and LiDAR poles display greater similarity to those from direct measurements, although with differences in the azimuth and plunge of the calculated fold axes. Fold axis calculations from the two directly measured datasets show the least deviation from each other, of 1 and 3° for dip and dip azimuth respectively. Finally, the TSfM fold axis shows the greatest deviation from the traditional compass-clinometer. Projection of the predicted fold geometry onto serial cross sections allowed a quantification of the along-strike deviation of the fold hinge using the fold axes calculated from bedding orientations for each dataset. To begin with, a single bedding surface, identifiable on both limbs and the hinge at the western end of the structure, was mapped and digitized as a polyline in 3D space. This feature was detected in all three virtual outcrops and was clearly visible in the field. Because of the limited extent of the Stackpole structure, the fold was assumed to be cylindrical, and the calculated fold axes were used as projection vectors onto 12 serial cross sections striking 10° NNE, spaced 5 m apart. The projection of the same single polyline onto the cross sections removed interpretational bias and allowed a direct comparison of the data acquisition methods. Polyline node deviation was calculated using a 2D Root Mean Square Error on the y and z plane, adapted from calculations for horizontal map accuracy using remotely sensed data. The calculated fold profiles from poles to bedding, and the resultant projections of polylines onto cross sections, highlight the rapid divergence of predictions over a short distance along strike. RMS calculations of fold hinge nodes for the different acquisition methods show deviations from control of 2.1 m for digital compass-clinometer, 4.3 m for ASfM, 6.4 m for LiDAR and 6.9 m for TSfM predictions respectively, at 60 m projection distance from the polyline interpretation.
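The projection and error metric described above can be sketched in a few lines: the π-axis is the eigenvector of the orientation matrix with the smallest eigenvalue (the pole to the best-fit great circle through poles to bedding), each digitized polyline node is carried along that axis until it intersects a cross-section plane, and the scatter between a given dataset's projection and the control projection is summarised as a 2D root mean square error. The sketch assumes the coordinates have been rotated so that the section normal is the x axis (the sections here strike 10° NNE, so a small rotation would be applied first); it is a generic illustration, not the exact routine used in the study.

```python
import numpy as np

def pi_axis(normals):
    """Fold (pi) axis as the pole to the best-fit great circle through poles to bedding."""
    T = normals.T @ normals
    _, eigvec = np.linalg.eigh(T)
    axis = eigvec[:, 0]                     # smallest eigenvalue -> pole to the girdle
    return axis if axis[2] <= 0 else -axis  # report a downward-plunging line

def project_onto_section(nodes, axis, section_origin, section_normal):
    """Project 3-D polyline nodes along the fold axis onto a planar cross-section."""
    t = ((section_origin - nodes) @ section_normal) / (axis @ section_normal)
    return nodes + t[:, None] * axis        # intersection points lying in the section plane

def rmse_2d(projected, control):
    """2-D RMSE between corresponding nodes within the section plane."""
    d = projected[:, 1:] - control[:, 1:]   # assumes x is the section-normal coordinate
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))
```

In use, rmse_2d would be applied to the nodes projected from each acquisition method against those projected from the traditional compass-clinometer interpretation, reproducing the style of comparison reported above.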
Terrestrial LiDAR derived surface reconstructions were of greater accuracy than their SfM equivalents and show greater agreement with direct measurements. Given these high accuracy reconstructions, the derived along-strike predictions were expected to be most similar to those derived from the control data. Examination of these along-strike predictions, however, shows that, with the exception of TSfM, LiDAR predictions diverge from control to a greater degree than the other measurement methods. The relatively poor coverage of the LiDAR virtual outcrop, and the associated lack of surfaces or patches from which to extract orientation data during interpretation, resulted in under-sampling of parts of the syncline structure. This resulted in a paucity of measurements in this structurally important area, and thus greater deviation from control in along-strike prediction. This issue arises primarily because the structure is an upright, ENE-plunging synform, and as such bedding planes dip toward the centre of the structure and gently seaward. Survey positions above, and to the seaward side of, the outcrop are thus best at this site, particularly for bedding planes in the hinge of the syncline. Terrestrial survey positions did not provide sufficient elevation to reconstruct the hinge zone, even though, in an attempt to reduce the effects of occlusion, elevated positions were chosen during field acquisition. As such, predictions using the LiDAR dataset were less well constrained than those from ASfM and direct measurements, particularly in the hinge zone of the syncline. The schematic presented in Fig. 11a highlights the high accuracy of LiDAR reconstructions where bedding planes are visible to terrestrial scan stations, but the limiting effects on reconstruction where they are occluded. It should be noted that, in contrast to SfM, single LiDAR scans of surfaces provided sufficient data to generate highly accurate surface reconstructions, whereas at least two aligned images, but preferably more, are required for SfM reconstructions. This consideration is an important one, and a factor that significantly affected the accuracy of the TSfM reconstructions. TSfM-derived along-strike predictions reflect the negative effects of occlusion in two ways. First, as with the LiDAR virtual outcrop, large parts of the outcrop are not represented by reconstructions, and thus bedding surfaces were not sampled, particularly in the hinge zone of the syncline. Second, where bedding planes were partially occluded to TSfM camera positions, surfaces are reconstructed erroneously. Given the improved accuracy of SfM reconstructions with higher overlapping image counts, this partial occlusion is a major factor to be considered during acquisition. An appraisal of image overlap numbers reveals that between 2 and 9 overlapping images were used for generating each tie-point during TSfM processing. An example of this effect is provided by the poor TSfM surface reconstruction of CS5. While this bedding plane was not entirely occluded during TSfM acquisition, inspection of the terrestrially acquired imagery reveals that CS5 appears in only two images. Point coordinate estimation, therefore, was performed on the minimum number of images required for this part of the outcrop. The pixel resolution of images used for tie-point matching influences the precision of point matching and must be considered if close range, in-focus imagery is to be achieved. Appraisal of TSfM survey positions highlights the relatively large distances of terrestrial camera positions from the outcrop compared to those achieved by UAV. These terrestrial positions around Stackpole syncline are fundamentally controlled by local topography, with semi-synoptic viewpoints only available ∼60 m from the structure. This increased distance from the outcrop resulted in a reduced ground-pixel resolution for TSfM images compared with the ASfM dataset, in spite of greater camera resolution. This low-resolution imagery is likely to have compounded the negative effects of partial occlusion and further impacted the accuracy of surface reconstructions. This distance-to-outcrop effect did not significantly affect the terrestrial LiDAR dataset, however, given the quoted accuracy, precision and range of the instrument used in this study. These factors had important effects on the accuracy of the final TSfM virtual outcrop and the derived along-strike predictions. Full occlusion to terrestrial camera positions at
the eastern end of the structure resulted in fewer bedding planes being reconstructed in the hinge of the syncline, and partial occlusion in some places resulted in erroneous bedding reconstructions.The negative outcomes were reduced availability of bedding planes from which to extract structural data, and increased inaccuracies, along with weaker picking confidence, in extracted structural data.These factors likely explain the rapid along-strike deviation from control.ASfM achieved along-strike prediction closest to those derived using direct measurements.While this dataset still displays along-strike divergence, it is significantly less than for the other two remote acquisition methods.On individual control surfaces, however, ASfM did not achieve as high control surface accuracy as LiDAR.The ability to move around the study site from a UAV platform allowed close range imagery to be acquired of the outcrop from a number of angles.This manoeuvrability, and the requirement of accurate SfM reconstructions for convergent imagery is a key advantage of a UAV platform over terrestrial image acquisition at this location.The greater coverage afforded by this method allowed a much greater number of reliably reconstructed surfaces to be sampled.Not only were more bedding planes reconstructed, particularly toward the top and eastern part of the outcrop, but the increased image overlap, improved ground-pixel resolution and convergence of acquisition stations afforded by the manoeuvrable UAV platform meant that reconstructions were more accurate than TSfM.Between 6 and 12 overlapping images were used to automatically estimate each ASfM tie-point location and UAV image acquisition points averaged a distance of 13 m from the structure, improving ground-pixel resolution.Bedding planes were also picked and sampled with greater confidence on the ASfM virtual outcrop, such that the entire syncline could be sampled, through both limbs and hinge, as well as along strike, resulting in a better fold axis prediction than either of the other remote acquisition techniques.The ‘fullness’ of the dataset is likely the cause of the better results and greater agreement with directly collected data, notwithstanding some minor inaccuracies in surface reconstruction.A schematic representation of the advantages of UAV acquisition of images for SfM reconstruction is provided in Fig. 
10c.This case study at Stackpole Quay demonstrates how different acquisition techniques and outcrop morphology have important effects on structural analysis, not only through the accuracy of derived structural data, but also the amount and distribution of the data that each method provides.Accurately reconstructed bedding planes by terrestrial LiDAR did not automatically provide accurate along-strike predictions in this case, primarily because structurally important, data-rich parts of the outcrop were not sampled.Similarly, the Terrestrial Structure from Motion dataset suffered from the negative effects of occlusion, where large parts of the outcrop were un-reconstructed.This outcome is an effect of the characteristic morphology of Stackpole Quay, and the lack of close range, synoptic survey points around the structure.The ability to survey the entire outcrop aerially, however, provided the camera positions and angles to fully reconstruct the outcrop and thus derive structural data from any chosen bedding plane.This clear advantage of UAV acquisition has particular influence on structural models from Stackpole, given the fact that the structure is an upright syncline, with inward-dipping beds.The availability and accuracy of structural data proved important for along-strike predictions.Where a paucity of data existed, as in the LiDAR and TSfM reconstructions, structural predictions were poorly constrained.Where bedding planes were partially occluded to terrestrial camera positions, erroneous reconstruction of surfaces led to greater variability of structural data and difficulties in picking bedding planes for data extraction with confidence.The quality of virtual outcrops is thus critical if reliable structural data are to be extracted from them.This requirement implies that careful consideration should be given to every stage of the process, from survey planning, data acquisition and processing to the extraction of structural data and the ways in which predictions are made.During survey planning and acquisition of data for virtual outcrop construction by SfM, attention should be paid to the principles of this technique, and survey design should account for the requirement of SfM for close range, convergent, overlapping imagery.Differences in lighting of surfaces or features displaying low contrast can negatively impact SfM reconstructions through the inability to match features and generate tie-points, and as such timing of surveys should be considered.Irrespective of the acquisition technique, quality checking of models is prudent to ensure reliable results are obtained.The collection of ground-truth data, in the form of dGPS control points and measured bedding orientations at the study site enables calibration of remotely acquired data, and can aid in analysis and interpretation.Data assessment during SfM processing is critical: consideration should be given to overlapping image numbers, tie-point image counts, accuracy of tie-point estimations, and relative point densities.Chosen processing parameters are likely to be determined by the specific requirements of the study, but where this primary data do not meet predetermined thresholds, derived structural measurements and predictions should be treated with circumspection.These quality checks are important to improve virtual outcrop accuracy, reduce uncertainty and ultimately make better geological predictions using modern techniques.This case study from Stackpole Quay highlights the relative merits and shortcomings of modern versus 
traditional techniques for structural measurement and along-strike prediction.UAV image acquisition coupled with SfM software generated a virtual outcrop that provided better along-strike predictions than terrestrial LiDAR and terrestrial SfM counterparts.While LiDAR surface reconstructions proved more accurate than either SfM dataset, the greater coverage afforded by UAV allowed improved characterisation of the structural geometry of the study site, and thus provided a better predictor of along strike structure.This result reflects the morphology of the study site and the level of accuracy required for predictive geological modelling.Irrespective of the characteristics of the individual study site or the methodology used, careful survey planning, data processing, and quality checking of this data is critical for robust structural analysis and accurate geological models.
Light Detection and Ranging (LiDAR) and Structure from Motion (SfM) provide large amounts of digital data from which virtual outcrops can be created. The accuracy of these surface reconstructions is critical for quantitative structural analysis. Assessment of LiDAR and SfM methodologies suggest that SfM results are comparable to high data-density LiDAR on individual surfaces. The effect of chosen acquisition technique on the full outcrop and the efficacy on its virtual form for quantitative structural analysis and prediction beyond single bedding surfaces, however, is less certain. Here, we compare the accuracy of digital virtual outcrop analysis with traditional field data, for structural measurements and along-strike prediction of fold geometry from Stackpole syncline. In this case, the SfM virtual outcrop, derived from UAV imagery, yields better along-strike predictions and a more reliable geological model, in spite of lower accuracy surface reconstructions than LiDAR. This outcome is attributed to greater coverage by UAV and reliable reconstruction of a greater number of bedding planes than terrestrial LiDAR, which suffers from the effects of occlusion. Irrespective of the chosen acquisition technique, we find that workflows must incorporate careful survey planning, data processing and quality checking of derived data if virtual outcrops are to be used for robust structural analysis and along-strike prediction.
187
Status of vaccine research and development for norovirus
Noroviruses cause acute, debilitating gastroenteritis characterized by vomiting and diarrhea.The US Centers for Disease Control and Prevention estimate that it is the most common cause of acute gastroenteritis in the United States with 21 million cases each year and an estimated 70,000 hospitalizations and 8000 deaths nationwide .NoVs have also emerged as an important cause of gastroenteritis worldwide.These infections can occur in all age groups and commonly result in significant morbidity and mortality, particularly in the very old and very young.A recent systematic review estimated NoV prevalence to be at 14% and found that rates of NoV are higher in community-based and outpatient health care settings compared to hospital-associated cases.While this may seem to suggest that NoV causes less severe cases than other causes of diarrheal disease, the sheer frequency of illness results in a larger burden of severe NoV disease overall .It is estimated that up to 200,000 children die from complications of NoV infection worldwide annually .In addition, NoV illnesses and outbreaks exact a significant socioeconomic toll on businesses, hospitals, schools, and other closed settings such as dormitories, military barracks, and cruise ships.However, there are current gaps in the epidemiology of NoV, particularly for lesser-developed countries where advanced molecular diagnostics have been limited.Global regions such as Africa and Southeast Asia are not well represented by data, and case definitions have not broadly included the full spectrum of case presentations, including vomiting as the predominant symptom.Thus, global NoV incidence is likely underestimated and additional high-quality studies are needed.The Norovirus genus is divided into five genogroups, with GI, GII and GIV causing human infections.Each genogroup is further subdivided into genotypes based on analysis of the amino acid sequence of its major viral capsid protein VP1.Norwalk virus, the prototype human NoV species, is classified as a GI virus.Over 80% of confirmed human NoV infections are associated with genotype GII.4.Serotyping—as commonly done for viruses through neutralization assays—is impossible for NoV, as the virus cannot be cultured in vitro.Therefore, the true biological significance of these classifications is unknown.Pathogenesis is thought to be dependent on binding of the virus to human histoblood group antigens on the epithelium of the small intestine.HBGAs are glycans found on the surface gut epithelium.The expression of these glycans has been shown to affect the susceptibility to infection with certain NoV, namely in human challenge studies where only individuals who have a functional glycosylase enzyme and consequently express certain HBGAs are susceptible to infection with Norwalk virus .Studies have also described resistance to infection from other NoV genotypes due to a non-functional glycosylase.The inability to culture NoV hampers research on pathogenesis, vaccine development, and diagnostics.Although molecular diagnostics are available, the fact that NoV can be shed at low levels for long periods of time after infection makes disease attribution difficult.Recent attempts have been made to rigorously define the burden of acute enteric diseases in the developing world.The Global Enteric Multicenter Study used a conventional multiplex real-time polymerase chain reaction study for the detection of several enteric RNA viruses, including NoV .GEMS attributed moderate-to-severe diarrheal disease to NoV in only one of 
its seven sites across Africa and Asia. However, the case-control design used in GEMS, in which control selection may not have excluded healthy individuals with norovirus infection in the preceding month, may not have been able to differentiate NoV positivity in acute disease from that in asymptomatic controls; it is therefore likely to have had poor sensitivity and specificity and to have underestimated the role of NoV as a cause of diarrhea, especially in high-transmission settings. In fact, recent results from the multicenter MAL-ED birth cohort study, which controlled for the longer duration of shedding in controls, found that NoV GII was responsible for the most cases of diarrheal illness among children overall, and particularly in countries where rotavirus vaccine had been introduced. The MAL-ED observation is consistent with the finding that NoV rates are higher in community-based studies than in hospital-based studies, indicating that the disease often has a milder presentation. However, the fact that NoV is the leading cause of clinical diarrhea in the United States also suggests that NoV could be a cause of severe disease among children in lesser-developed countries, a notion furthered by the WHO Foodborne Epidemiology Reference Group, which has identified norovirus as one of the most important pathogens transmitted by food in the world. In order to attribute pathogen etiology more accurately at the inpatient, outpatient, and community levels, future epidemiological studies should consider including quantitative diagnostics, frequent sampling, and well-considered control subjects in their design. Furthermore, to attain high-quality global data that can influence policy decisions and generate global commitment, a surveillance network similar to that established in advance of rotavirus vaccine introduction may be needed. NoV is also highly transmissible, requiring a very low infectious dose of <10 to 100 virions, and causes an acute illness of fever, nausea, vomiting, cramping, malaise, and diarrhea persisting for two to five days. The disease is mostly self-limiting, although severe outcomes and longer durations of illness are more likely to be reported among the elderly and the immunocompromised. Because immunity after infection is limited in duration and appears strain-specific, all age groups are susceptible. Apart from supportive care such as oral rehydration, there are no treatments currently available to decrease the severity of NoV-induced illness. In countries where sustained universal rotavirus vaccination has been introduced, NoVs have become the main cause of gastroenteritis in children. There are currently no licensed vaccines for NoV. While current estimates of under-five mortality rank NoV below rotavirus and enteropathogenic E. coli, NoV is above enterotoxigenic E.
coli .Additionally, the high estimated morbidity attributed to NoV, which occurs in all age groups in both developed and developing countries, suggests that the global health value of a NoV vaccine may rank equivalently with other enteric vaccines under development when evaluating both disability and mortality measures.A recent review on NoV vaccine development explored the factors complicating vaccine design .These include the lack of appropriate model systems to explore pathogenesis and vaccine target efficacy, unknown duration of protective immunity, antigenic variation among and within genogroups and genotypes, and unknown effects of pre-exposure history.Preclinical development is challenging due to the lack of relevant models—currently limited to a chimpanzee model, which has been halted due to ethical restrictions on the use of nonhuman primates, and a gnotobiotic pig model—and the lack of NoV cell culture.The inability to culture NoV has obviously limited any traditional whole-cell vaccine approaches, but recombinant technology—more precisely, NoV-recombinant virus-like particles produced by the expression and spontaneous self-assembly of the major capsid protein VP1—has played a major role in generating the current body of knowledge and leading approaches in NoV vaccine development.Efficacy trials will be essential in answering the issues raised above, including duration of vaccine-induced immunity, implications of antigenic diversity and drift on vaccine-induced protection, and the consequence of pre-existing immune responses.Despite the limitations, vaccine feasibility has been convincingly demonstrated with the development of a vaccine candidate based on a recombinant approach using a self-assembling virus-like particle that has shown protection against disease in two human challenge efficacy studies.Currently, it is being developed as a bivalent GI.1/GII.4 vaccine administered intramuscularly.NoVs have extensive antigenic and genetic diversity, with more than 25 genotypes recognized among the three genogroups containing human viruses.This has best been documented with the GII.4 genotype, though GI.1 and GII.2 isolates have also demonstrated stability over the past 30 years.While significant variation is known to occur with the epitopes responsible for seroresponse, there is evidence to suggest that more conserved domain epitopes across groups and strains may serve as a protective antigen in an adjuvanted vaccination regimen.There are also preclinical and clinical data that support broadened activity beyond the vaccine VLP strains.More encouragingly, although there is a lack of correlation of pre-existing serum antibody with protection from infection, the presence of serum antibodies that block binding of NoV virus-like particles to HBGAs have been associated with a decreased risk of infection and illness following homologous viral challenge.This blocking assay could play a critical role in facilitating further development and optimization of this vaccine.If a vaccine based on this bivalent approach is developed and licensed, future studies will need to determine whether the broadly protective immune responses elicited by the vaccine remain effective as strain variation occurs naturally.Modification of the formulation may be required if non-vaccine strains emerge for which the vaccine does not induce functional antibodies.Based on modeling of epidemiological data, protective immunity after natural NoV infection may persist between four and eight years.Early human 
challenge/rechallenge studies have observed protection from six months to two years.Thus, duration of protection is still unknown and the rapid incubation period from infection to illness onset may challenge the timing of effective memory response activation.The influence of multiple exposures and pre-existing immunity may also complicate the vaccine approach and immune responses.From a low- and middle-income-country perspective, there are unique issues to consider for vaccine development and feasibility.Dose number and schedule are important, and circulating strains may be different compared to those found in developed countries.Furthermore, the current vaccine approach for an indication in adults is based on an intramuscular injection of the vaccine with an effective immune response that is thought to rely in part on the boosting of memory from previous natural infection.As this vaccine would likely be targeted to younger children, the effectiveness of such a vaccine in a naïve infant could be less.It will be most interesting to learn about the priming of functional antibody responses from current phase 2 trials in these settings which are currently underway.If necessitated, an alternative mucosal priming-parenteral boost and/or adjuvant may be necessary, but this would complicate the development of a vaccine for use in a resource-limited setting.From a developed-country perspective, there are a number of target populations and indications for which a norovirus vaccine could be developed and bring substantial public health value.These include not only travelers to lesser-developed countries and military personnel on deployment, but also healthcare workers and food handlers in developed countries who are at higher risk given the frequency and impact of NoV outbreaks in these populations.Elderly populations in group and institutional settings also experience frequent outbreaks and tend to have more severe disease and associated mortality, but may present a challenge to induce effective immune responses.While more high-quality data on the epidemiology of NoV across ages and geographic areas are needed, it is clear that NoV has a global distribution that causes significant morbidity and mortality.A recent systematic review on mortality due to diarrhea among children less than five years of age found that NoV was associated with hospitalization in approximately 14% of cases, behind rotavirus and enteropathogenic E. 
coli .There is a dearth of data from Africa, where the effects of NoV-associated gastroenteritis may be more severe.Furthermore, because NoV affects all ages, it likely contributes to additional global disease burden beyond mortality.Forthcoming results from the World Health Organization Foodborne Epidemiology Reference Group should provide better estimates for relative disease burden across age strata to consider.In summary, based on available epidemiological data, it appears that NoV has a similar epidemiology to rotavirus in incidence among children less than five years of age, with multiple infections and the highest incidence rates occurring in the first two years of life.Changing dynamics in strain variation and circulation could likely also extend the high incidence beyond two years of life.A successful vaccination strategy would therefore need to target children at the earliest opportunity in order to have maximum public health benefit.Favorable advances have been made with a bivalent VLP-based vaccine.Firstly, proof of concept for efficacy has been recently demonstrated in a human challenge model with 50 vaccine and 48 placebo recipients, which observed 52% protection against all severity levels of disease and 68% and 100% protection against moderate-to-severe disease and severe disease only, respectively .Furthermore, evidence for a correlate of protection is emerging based on induction of antibodies which bind to HBGAs .HBGAs are hypothesized to serve as attachment factors for noroviruses, and data from clinical trials show that vaccinated subjects with higher HBGA-blocking antibodies had lower levels of infection and less severe disease.Furthermore, HBGA-blocking antibodies found among placebo recipients at pre-challenge were associated with protection.An important concern about NoV is that strains change over time, and immune response to natural infection does not provide effective neutralizing antibodies for heterologous strains.However, a recent report describes broadly reacting HBGA-blocking antibodies in the recipients of a candidate bivalent VLP vaccine for diverse strains and genotypes not included in the vaccine, suggesting broadly protective capability of the vaccine .While these studies are encouraging, translation to low- and middle-income countries will present challenges, including adequate immune responses to a variety of circulating strains, substantiation of a correlate of protection that is also present in developed-country settings, and integration of a NoV vaccine into a crowded EPI immunization schedule.Limited data from cohort studies suggest that the timing of a vaccine may be important.In one birth cohort study in Peru it appears that norovirus has a similar epidemiology to rotavirus in incidence under 5 years of age occurring within the first year of life though highest incidence and multiple infections occur in the first two years of life .However, changing dynamics in strain variation and circulation could likely extend the high incidence beyond 2 years of life.Additionally, more research is needed on virus-host associations and epidemiology to ensure adequate strain coverage of the current bivalent vaccine approach.Specifically, there is a need for rigorous studies that can accurately describe NoV prevalence and disease status in developed and developing world settings through the use of more sensitive and specific indicators of genotype-specific NoV exposure from serum.For an adult traveler vaccine indication, further Phase 2 clinical development is 
being planned for the bivalent VLP parenteral vaccine being developed by Takeda Vaccines.These types of studies can be challenging due to the lack of predictability of disease outbreaks, the common finding of copathogenicity of diarrheal disease, frequent mismatch of vaccine strains with virus strains encountered, as well as field study challenges in following and collecting specimens from travelers.Despite these challenges, travelers’ diarrhea vaccine studies have successfully been performed in the past.Trials among elderly in nursing homes could also be considered, although lack of predictability of outbreaks and adequate immune responses may present similar challenges.Favorable results from trials in adult travelers or other high-risk populations in developed countries would support the effectiveness of such a vaccine approach in low- and middle-income country populations.For a developing-country vaccine indication, plans for a multisite, pediatric, age-descending Phase 2 study in Colombia, Panama, and Finland are currently underway.After these introductory clinical trials, rotavirus vaccine development pathways have demonstrated that there are sufficient field sites, experience, and regulatory pathways to take a NoV vaccine through large-scale safety studies and pivotal trials in low- and middle-income countries.Although acute gastroenteritis clinical endpoints for rotavirus infection have been defined and accepted for developing-country populations, these types of clinical endpoints may not adequately address outcomes for mild-to-moderate disease, which is more common with NoV compared to rotavirus.However, the incidence of NoV disease is high enough to support such a trial with reasonable numbers—assuming there is broad enough coverage against circulating and emerging strains.Another important consideration for a NoV vaccine is that, in order to achieve acceptability in developing countries, the vaccine formulation should ideally be non-cold-chain-dependent and low cost.In addition, the Expanded Programme on Immunization vaccine schedule is already quite crowded, and appropriate integration of another vaccine into the schedule would need to be navigated with EPI decision-makers.A combination vaccine with parenteral rotavirus vaccine is conceivable, given the similarities in disease indication and early age at which introduction is needed, and could be a more acceptable solution for low- and middle-income countries.Table 1 outlines the NoV vaccine candidates currently under development.The most advanced candidate is a recombinant VLP capsid protein vaccine, which has been formulated with Aluminum Hydroxide and Monophosphoryl lipid Adjuvants.It is given in a two-dose series separated 28 days apart .The candidate has completed proof-of-concept in two human challenge studies, where protection against disease—particularly, against severe disease—was achieved .Takeda Vaccines is sponsoring the development of this vaccine with significant academic collaboration and involvement with the US Department of Defense.The vaccine is modeled in part on the success of the currently licensed human papillomavirus VLP vaccines.An alternative VLP candidate is being developed at Arizona State University using the same construct as the Takeda VLP, but their candidate is produced in a plant-based vector and has not yet entered clinical development.A NoV VLP-rotavirus protein combination vaccine candidate is also currently in preclinical development, and clinical trials may begin in the near term .The 
vaccine is being codeveloped by the University of Tampere and UMN Pharma. The trivalent combination consists of NoV capsid-derived VLPs of GI.3 and GII.4 and recombinant rotavirus VP6, a conserved and abundant rotavirus protein. The components are expressed individually in the baculovirus expression system and then combined. Preclinical studies in mice demonstrated strong, high-avidity NoV and rotavirus type-specific serum IgG responses, and cross-reactivity with heterologous NoV VLPs and rotaviruses was elicited. Blocking antibodies were also described against homologous and heterologous norovirus VLPs, suggesting broad NoV-neutralizing activity of the sera. Mucosal antibodies of mice immunized with the trivalent combination vaccine inhibited rotavirus infection in vitro. Most recently, the developers have described an adjuvant effect of the rotavirus VP6 capsid protein on the norovirus VLP response, which would be advantageous if it obviates the need for an exogenous adjuvant in the target population. The University of Cincinnati has conducted preclinical immunological studies on a NoV P particle vaccine candidate. This candidate is derived from the protruding (P) domain of the NoV VP1 capsid protein. P particles can be produced easily and at high yield in E. coli expression systems and thus could offer a manufacturing advantage through a relatively low cost of goods. Recent preclinical research using a gnotobiotic pig model supports heterologous cross-reaction of the intranasal P particle vaccine as well as intestinal and systemic T cell responses. The heterologous protective efficacy of the P particle vaccine in pigs was comparable to that of the VLP vaccine and to the homologous protective efficacy observed in humans. Clinical development plans for this vaccine are unknown. Finally, a novel construct that experimentally combined VLPs of norovirus GII.4 and enterovirus 71 was recently reported by developers from the Institut Pasteur of Shanghai and the Chinese Academy of Sciences. In a mouse study they were able to demonstrate functional antibodies to both viruses without evidence of interference. Such a combination vaccine may offer additional value in areas of the world where both diseases are prevalent; however, further work is needed to understand the valency requirements for adequate coverage of the six EV71 genogroups, which are evolving and geographically distinct around the world. Currently, Takeda Vaccines is predominantly funding development of its VLP-based candidate, though the US Department of Defense has also provided some support to Ligocyte, which Takeda bought in 2012. While industry funding would likely be able to take a vaccine to the developed-country market, further development for low- and middle-income country markets would require funding from a range of sources, including vaccine-manufacturing partners in potential target markets, state governments, and global health nonprofit organizations. Furthermore, one should not discount the considerable value opportunity that public and private markets in emerging economies bring to the development and introduction of new vaccines. As has been observed with rotavirus vaccine introduction in many emerging economies, a norovirus vaccine may make economic and public health sense for many countries that can afford to introduce it. Gavi, the Vaccine Alliance, has indicated an interest in enteric vaccines, including one for NoV, though its strongest preference would be for a combined vaccine, such as the
NoV VLP-rotavirus fusion candidate. Alternatively, combination with, or expression by, another enteric pathogen could also enhance uptake of a NoV vaccine. Because a NoV vaccine is expected to have a dual market in both developed and developing countries, there would likely be scale-up advantages for commercial development and global distribution. M.S.R. is an employee of the U.S. Government and a military service member; this work was prepared as part of official duties. Title 17 U.S.C. §105 provides that "Copyright protection under this title is not available for any work of the United States Government," and Title 17 U.S.C. §101 defines a U.S. Government work as a work prepared by a military service member or employee of the U.S. Government as part of that person's official duties. Conflicts of interest: Mark Riddle – Takeda Vaccines: advisory board and named investigator on a Cooperative Research and Development Agreement with the US Navy; Richard Walker – no relevant conflicts of interest to disclose.
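As a brief illustration of how the protection figures quoted from the challenge studies above are typically derived, the sketch below computes vaccine efficacy as one minus the ratio of attack rates in the vaccine and placebo groups. The case counts used here are hypothetical placeholders chosen only to reproduce a figure of roughly 52%; they are not the actual counts from the trial.

```python
def vaccine_efficacy(cases_vaccine, n_vaccine, cases_placebo, n_placebo):
    """Vaccine efficacy = 1 - (attack rate in vaccinees / attack rate in placebo)."""
    attack_rate_vaccine = cases_vaccine / n_vaccine
    attack_rate_placebo = cases_placebo / n_placebo
    return 1.0 - attack_rate_vaccine / attack_rate_placebo

# Hypothetical example: 50 vaccine and 48 placebo recipients, with 10 and 20
# cases of gastroenteritis respectively, gives an efficacy of about 52%.
print(round(vaccine_efficacy(10, 50, 20, 48) * 100))  # -> 52
```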
The global health community is beginning to gain an understanding of the global burden of norovirus-associated disease, which appears to have significant burden in both developed- and developing-country populations. Of particular importance is the growing recognition of norovirus as a leading cause of gastroenteritis and diarrhea in countries where rotavirus vaccine has been introduced. While not as severe as rotavirus disease, the sheer number of norovirus infections not limited to early childhood makes norovirus a formidable global health problem. This article provides a landscape review of norovirus vaccine development efforts. Multiple vaccine strategies, mostly relying on virus-like particle antigens, are under development and have demonstrated proof of efficacy in human challenge studies. Several are entering phase 2 clinical development. Norovirus vaccine development challenges include, but are not limited to: valency, induction of adequate immune responses in pediatric and elderly populations, and potential for vaccine-strain mismatch. Given current strategies and global health interest, the outlook for a norovirus vaccine is promising. Because a norovirus vaccine is expected to have a dual market in both developed and developing countries, there would likely be scale-up advantages for commercial development and global distribution. Combination with or expression by another enteric pathogen, such as rotavirus, could also enhance uptake of a norovirus vaccine.
188
Glass transition temperature versus structure of polyamide 6: A flash-DSC study
Polyamide 6 is an engineering polymer, commonly known as “Nylon 6”, which is used in the field of load-bearing applications where mechanical performance and lifetime are keywords.Applications such as under-the-hood components and sport items are often exposed to demanding conditions like high load, challenging temperature regimes and elevated relative humidities.However, the performance of PA6 depends strongly on micro-structural details that, to a great extent, are determined during processing.Therefore, it is obvious that an investigation of the influence of processing on properties is required to predict and improve the performance.PA6 belongs to the family of aliphatic polyamides.Its monomer has two polar groups; the amide and carbonyl groups.These polarities can form hydrogen bonds between chains, leading to high strength .However, this polar character causes a crucial characteristic of polyamide 6; hygroscopicity .Indeed, if exposed to a humid environment, PA6 absorbs water till reaching a saturation level which is dependent on temperature and relative humidity .If this occurs, part of the hydrogen bonds are broken and new H bonds are formed with the water molecules .This phenomenon is called plasticization and results in a depression of the glass transition temperature .The plasticization causes a considerable deterioration of the mechanical properties, as shown by a number of authors in literature .Therefore, it is clear that the glass transition temperature is an important parameter for PA6 properties.However, the glass transition temperature of PA6 not only depends on the hydration level; it also changes with crystallinity , amorphous orientation and, as more recently reported, with the rigid amorphous phase content .An illustrative example of the influence of processing on the glass transition is given in Fig. 1, where the storage modulus is displayed as a function of temperature for three samples produced with three different thermal histories.The drop in the modulus due to the glass transition is different for the three samples, and also the value of Tg changes.In order to understand how processing affects the glass transition, we will use a model that describes a semi-crystalline polymer as a system composed by three phases: the crystalline phase, the rigid amorphous and the mobile amorphous phase.The main difference is in the molecular mobility, the highest mobility belongs to the mobile amorphous phase, the lowest to the crystalline domains and the rigid amorphous phase has an intermediate mobility .The rigid amorphous phase, as assumed in this model, is a sort of inter-phase between the crystalline and the mobile amorphous phase.This concept was employed also to other systems .Many experimental techniques are used, but the most common one is the traditional differential scanning calorimetry, focusing on the change of specific heat capacity due to the glass transition.From this it is possible to estimate the amount of amorphous phase that did not transform to the rubbery state during the glass transition.This excess of amorphous phase is defined as the rigid amorphous phase.Fast scanning calorimetry allows us to investigate a wide range of cooling rates and cooling procedures that are not achievable by traditional DSC.This gives the possibility to a) get completely amorphous samples, b) perform real isothermal crystallization and c) suppress cold crystallization upon heating.The difference in Fig. 
1 are typically caused by processing-induced differences in structure.An important characteristic of PA6 is its polymorphism; there are two important crystal forms with respect to melt processing: the most stable α-phase for low under-cooling or high isothermal crystallization temperature, and the less stable γ-mesophase for high under-cooling or medium to low isothermal crystallization temperatures.In case of very fast cooling also a completely amorphous sample can be obtained .Mileva et al. , have shown that melt-crystallization at low under-cooling is connected with the formation of lamella, also for the γ-mesophase, while high under-cooling leads to γ-mesophase with a nodular morphology for which no lamella can be observed.The aim of this work is to investigate the structure formation in PA6 for a wide range of cooling procedures and conditions mainly by means of fast scanning calorimetry, in order to determine the relation of the glass transition temperature with the structure, i.e. the amount of crystalline phase, MAF and RAF.This will form the basis for future works on mechanical properties for which Tg is a crucial parameter.The material employed in this work was polyamide 6 provided by DSM.This PA6 has a viscosity-average molar mass of about 24.9 kg/mol.Two different DSC apparatus were used."A traditional DSC and two flash-DSC's.The conventional DSC was a Mettler-Toledo 823e/700 module with a Cryostat intra-cooler; one flash-DSC was a Mettler Toledo flash-DSC 1 equipped with a Huber TC100 intra-cooler, the other one was a special flash-DSC setup for in situ X-ray.Experiments were carried out using 50 μl aluminum pans and UFS1 sensors for DSC and flash-DSC respectively."Moreover, in all DSC's experiments were performed under a constant flow of dry nitrogen.In the case of conventional DSC, the samples were obtained by cutting pellets in pieces with a mass between 5 and 10 mg that were placed in the aluminum pans.For the ultrafast DSC, tiny pieces of approximately 100 ng were cut and put onto the sensor by the use of an eyelash.In this work, conventional DSC was used solely for the determination of flash-DSC sample mass.Two different solidification methods were investigated, namely continuous cooling and isothermal crystallization.In order to get a well defined and comparable Tg, the standard continuous cooling is actually replaced by a so called “two step continuous cooling” where the cooling rate with which Tg is passed is kept constant.This is explained in detail in Appendix A.In order to have an absolutely identical condition from which the experiment could start, every experiment was preceded by a condition cycle, see Appendix B.In order to study the case of isothermal crystallization with space filling lower than 1, isothermal crystallization were performed at 90 and 180 °C varying the duration of the isothermal segment.The glass transition temperature was determined by estimating the midpoint in proximity of the change in specific heat.Lines tangential to the Cpsolid, Cpliquid and the transient in between were drawn, see Fig. 3a.The intersections between Cpsolid, Cpliquid and transient are defined as Tg onset and Tg end-set respectively.The midpoint of the segment between Tg onset and Tg end-set was defined as the glass transition temperature, as shown in Fig. 
3a.Unfortunately, estimating the crystallinity is not always possible.In fact, despite high heating rate, when starting from γ-mesophase in the solid state, partial melting and re-organization in the more stable α-phase will take place.Thus, the measured melting area would not be related to only the pre-existing crystallinity but also to the heat necessary to melt the transformed fraction of crystals.This fraction cannot be estimated from the DSC measurements.Therefore, also in situ wide angle x ray diffraction experiments were carried out on samples on the flash-DSC sensor for several solidification procedures.The crystallinity obtained by WAXD was subsequently used for the structural analysis melting occurs only for the slow cooled samples and b) the glass transition temperatures are different.Fig. 4b shows the results for samples that are crystallized isothermally.Again a clear difference in melting behavior is observed.This is related to crystallization of different crystalline phases as demonstrated by, for example, van Drongelen et al. .Again, differences in glass transition temperatures are found.The crystallinity values are derived from the patterns shown in Fig. 5 applying the X-ray analysis explained in Appendix C and Section 2.3.The WAXD analysis also gives information about the crystallographic phase obtained by several solidification procedures.As far as the two step continuous cooling is concerned, a completely amorphous sample was obtained by very fast cooling at 200 °C/s; at 75 °C/s a weak γ-form characteristic reflection was found; with a cooling rate of 5 and 2.5 °C/s a predominance of the γ-form reflection with α-phase shoulders was observed; for very slow cooling, only α-phase reflections were observed, see Fig. 5a.In the case of isothermal crystallization, at low temperature only the γ-form reflection was found, for temperatures between 120 and 170 °C the γ characteristic reflection is predominant and rather weak α-phase shoulders were observed; at 180 °C only the α-phase characteristic reflections were found, see Fig. 5b.First the influence of cooling rate and isothermal temperature on the a) crystallinity and b) glass transition temperature will be presented.In Fig. 6a, the overall crystallinity obtained by two step continuous cooling is reported as a function of the initially applied cooling rate.As expected, cooling rates higher than ≈100 °C/s lead to a completely amorphous sample while for cooling rates lower than 100 °C/s, a rapid increase of crystallinity is observed up to a maximum value of about 37%.Upon isothermal crystallization for temperatures between 90 °C and 180 °C, a rather slight increase in crystallinity is found from about 30% at the lowest Tiso of 90 °C up to 36% at the highest Tiso of 180 °C, see Fig. 6b.Plotting the glass transition temperature as a function of applied cooling rate a minimum is found for the higher cooling rates where the samples are completely amorphous, and a maximum for the lower cooling rates where crystallinity is the highest.Similar to crystallinity, Tg also increases with the decreasing cooling rate.In the case of isothermal crystallization, the glass transition shows a monotonic decrease.Next, the correlation between structural parameters and Tg is investigated.In Fig. 
8 the glass transition is plotted as a function of crystallinity.In the case of continuous cooling, the highest Tg corresponds to the highest crystallinity and the increase of glass transition is close to proportional with crystallinity.In contrast, the isothermal crystallization displays a totally different scenario; the glass transition temperatures measured in the case of isothermal crystallization is opposite to what is observed from continuous cooling.This is the first clear sign that crystallinity is not a determining parameter for the glass transition temperature, in contrast to what previously proposed by .In the following we will present a concept that explains these results and, moreover, a rather simple model that can capture them and predict the Tg as a function of structural parameters that, again, are determined by the thermal history.Next, a phase composition analysis was performed in order to investigate the influence of the solidification procedure on the RAF, MAF and crystallinity content.In Fig. 9a the fraction of rigid amorphous phase, mobile amorphous phase and crystallinity are presented as a function of applied cooling rate.When completely amorphous samples are obtained, the rigid amorphous fraction and crystallinity are both zero while the mobile amorphous fraction is stable at 1.For cooling rates between about 10 and 90 °C/s, both crystallinity and RAF increase rapidly, while MAF decreases.For even lower cooling rates, the rigid amorphous fraction appears to be rather stationary while the mobile amorphous fraction keeps on decreasing slightly.Fig. 9b shows the case of isothermal crystallization.In this case, only temperatures leading to complete space filling were selected.In fact, the maximum MAF estimated is about 0.5 for crystallization at 180 °C where the crystallinity is the highest and RAF the lowest.Moving towards lower isothermal temperatures, crystallinity slowly decrease and, at the same time, the mobile amorphous fraction decreases and the rigid amorphous increases till a maximum at 90 °C.By comparing Figs. 7 and 9, it is observed that the glass transition follows a similar trend as the rigid amorphous fraction; opposite to the one observed for the mobile amorphous fraction.This observation is made for both, continuous cooling and isothermal crystallization.Fig. 10a shows the glass transition temperature as a function of mobile amorphous fraction for samples solidified by continuous cooling and by isothermal crystallization while in Fig. 10b the glass transition temperature is plotted as a function of the rigid amorphous fraction.These two figures demonstrate that, although not perfectly, both solidification procedures give a similar trend for Tg if plotted as a function of RAF or MAF.This is, an indication that structural investigation are crucial to understand the glass transition in PA6.In order to understand the role of crystallinity for a given crystallization temperature, isothermal crystallization at 90 and 180 °C were performed varying the isothermal segment duration.In Fig. 11, the glass transition of samples crystallized at different Tiso and tiso is plotted as a function of crystallinity.Remarkably a lower isothermal temperature lead to a higher Tg for the same fraction of crystallinity.Therefore, a phase composition analysis was performed and shown in Fig. 12.The fractions of RAF, MAF and crystallinity are plotted as functions of crystallization time, for the crystallization temperature at 90 and 180 °C.From Fig. 
12a and b, it becomes clear that these differences can be rationalized in terms of the RAF content.At lower crystallization temperature the resulting RAF content is higher.In Fig. 13a and b, comparisons between Tiso 90 °C and 180 °C are given.In this case, MAF and RAF are plotted as functions of crystallinity; it is observed that RAF increase linearly with crystallinity.Two different slopes are observed for isothermal crystallization at 90 °C and one for 180 °C.Some rather straightforward arguments are presented to capture the coupling between the different phases in a semi-quantitative way.This will help to understand and interpret the experimental results presented before.Starting point is the idea that RAF is a layer of changing mobility adjacent to the crystalline lamella, i.e. the amount of RAF is proportional to the face areas of the lamella, see Fig. 14.First, we will discuss some direct consequences and trends resulting from this model.For a constant crystallinity Xc, the fraction of RAF goes up and for MAF goes down with decreasing lamellar thickness lc.Moreover, this increase is stronger for higher Xc-values.For isothermal crystallization the lamellar thickness lc is directly related with the crystallization temperature, i.e. a higher crystallization temperature leads to higher MAF values at equal Xc-values.In the same way, using the expression for the RAF content, it is found that, for isothermal crystallization, the RAF content is higher for lower crystallization temperatures at equal Xc-values.For isothermal crystallization with completed space filling, see Fig. 9b, the final crystallinity X∞ varies with crystallization temperature; a higher Tiso means more mobility and thus more perfect crystals and a higher crystallinity.From Eqs. it follows, when considering crystallinity only, that MAF and RAF should decrease and increase, respectively.However, the opposite is observed.This is due the dominant role of the varying lamellar thickness.It is clear that knowledge of the lamellar thickness is crucial and, therefore, this was separately measured for the case of isothermal crystallization only by means of ex situ SAXS.Results are shown in Fig. 15a. For the non-isothermal experiments the crystallization-peak was defined as the crystallization temperature, see Fig. 15b. Using interpolation of the isothermal results for lc versus Tc, the lc-values for the non-isothermal experiments were estimated; see again Fig. 
15a.As mentioned in Section 1, crystallization of γ-mesophase at high under-cooling leads to a non-lamellar morphology; thus, lc values obtained at temperatures below 130 °C should be interpreted as crystal thickness rather than lamellar thickness .With a full model for the crystallization kinetics, including secondary crystallization, and the relation between crystallization temperature and lamellar thickness it would be possible to predict the MAF and RAF content for a given thermal history.However, the formulation and experimental validation of such a model is outside the scope of this paper.Instead we will use the experimentally measured space filling and the estimates for the final crystallinity levels to give predictions for the MAF and RAF contents for the whole range of experimental conditions.The results from isothermal crystallization for two crystallization temperatures and different crystallization times are discussed first.Values given below are estimates from these figures.Three regimes can be observed: regime I) for too short times no observable crystallinity is found, regime II) crystallization with growing space filling and RAF-fraction.The crystallinity X∞ at space filling ξ = 1 estimated to be ≈0.15 and ≈0.3 for Tiso = 90 and 180 °C, respectively.Finally, regime III) crystallinity increases further while the RAF becomes approximately constant, i.e. X∞ increases to its maximum that is in the order of ±0.3.The average lamellar thickness for these two crystallization temperatures is ≈1.8 and 3.4 nm, respectively.Using the numbers at the end of regime II) in Eq. it follows that the thickness of the RAF layer is about 1–1.15 nm where the higher value is found for the lower temperature.This small difference in RAF layer thickness could be related to the freezing of lower mobility at a lower crystallization temperature.More important is the observation of a much higher RAF content for Tiso = 90 °C compared to the results for Tiso = 180 °C, while the crystallinity values Xc show the opposite.This is rationalized by Eq.: the large difference in the lamellar thickness over-compensates the difference in crystallinity.Moreover, in regime III crystallinity can still grow via crystal perfectioning, i.e. lamellar thickening, while RAF does not hardly change since X∞ and lc are proportional, see Eq.This is most clear for Tiso = 180 °C while for Tiso = 90 °C, where less perfect crystals are grown, also some new lamella could be created and thus some new surface for RAF.However, this effect is small.The lamellar thickness was assumed to be constant, at lc = 1.7 nm for 90 °C and lc = 3.4 nm for 180 °C.Fig. 17a and b give the results for lR = 1.1 nm.Clearly the trends are captured quite well but the final levels of the MAF and RAF fractions are off.By adjusting the RAF-layer thickness lR = 0.8 nm the results look much better, see Fig. 18a and b. 
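The geometric argument used above can be written out explicitly. The sketch below is a minimal illustration that assumes each lamella of thickness lc carries a rigid amorphous layer of thickness lR on both faces, so that X_RAF is approximately Xc * 2 * lR / lc and X_MAF = 1 - Xc - X_RAF; this simple form and the input values are assumptions chosen to reflect the ranges discussed in the text, not a reproduction of the paper's equations.

```python
def phase_fractions(xc, lc, lr):
    """Three-phase composition of a lamellar stack.

    xc : crystalline fraction
    lc : lamellar (crystal) thickness in nm
    lr : assumed thickness of the rigid amorphous layer on each lamellar face, in nm
    """
    x_raf = xc * 2.0 * lr / lc          # RAF scales with the lamellar face area
    x_maf = 1.0 - xc - x_raf            # the remainder is mobile amorphous
    return x_raf, x_maf

# Indicative values only (lR taken as 1.1 nm as in the text):
for t_iso, xc, lc in [(90, 0.30, 1.8), (180, 0.36, 3.4)]:
    raf, maf = phase_fractions(xc, lc, 1.1)
    print(f"Tiso = {t_iso} C: RAF = {raf:.2f}, MAF = {maf:.2f}")
```

With these numbers the lower crystallization temperature gives the larger RAF fraction and the smaller MAF fraction despite its lower crystallinity, reproducing the trend discussed above: the thinner lamellae formed at high under-cooling over-compensate the difference in crystallinity.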
However, for the longer crystallization times the same lR as for the isothermal experiments with varying Tiso and tiso = 180 s should be used.Next, continuous cooling results are analyzed.These non-isothermal crystallization cases are rather complex.Crystalline structures are formed in a range of temperatures leading to a more broad distribution of lamellar thickness, new nuclei may occur during the cooling history and the spherulite growth rate varies.Moreover, similar to the varying isothermal crystallization time experiments, a range of cooling rates did not lead to complete space filling.Therefore, we use the auxiliary function) again, but now with the cooling rate as the argument.As mentioned before, the enthalpy-peak during crystallization was defined as average crystallization temperature, see Fig. 15b.This is only possible for a limited number of intermediate cooling rates.The corresponding lamellar thickness are plotted in Fig. 15a. Clearly, full space filling, i.e. ξ = 1, is obtained at a crystallinity higher than 0.2, from where on the RAF fraction is constant.The X∞ was chosen equal to the maximum crystallinity reached at the lowest cooling rate, X∞ = 0.36, from which the crystallinity is obtained with Eq.As already discussed, the lamellar thickness could not be estimated along the full range of cooling rates; hence, the lc values were obtained by interpolation over the full range of cooling rates, see Fig. 19b. Also in this case, the RAF thickness is fixed at 1.1 nm.Fig. 19a gives the MAF and RAF fractions as a function of the crystallinity for all continuous cooling rates.The results for which a crystallization temperature could be defined are indicated by solid symbols.The results shown in Fig. 20 are quite reasonable; again, the trends are captured well but the final levels of the MAF and RAF fractions can be improved by varying one of the parameters.The RAF-layer thickness is not measured directly and the lamellar thickness is an estimated average value.The results improve by varying of lR from 1.1 nm to 1.25 nm, see Fig. 21a, or decreasing lc by multiplying with a factor 0.85 and keeping lR = 1.1 nm, see Fig. 21b.These results show the potential of this model.However, the model could be improved by a further investigation of the non-isothermal crystallization case.In particular, the lamellar thickness distribution, which is expected to be broader in non-isothermal conditions; in fact, the low thickness part of the population would contribute to a higher RAF.Another aspect to investigate further is the RAF thickness.This effect is mimicked by decreasing lc, see Fig. 21b.It seems reasonable to think that lR is not really a constant but varies with the temperature.However, these parameters are not easily achievable, in particular in the case of flash-DSC samples.The value for Tg,MAF is obtained from the experimental results while the Tg,RAF is used as the only fitting parameter, which correspond to 91 °C.Note that this is not necessarily the actual Tg,RAF, whatever that might be, since RAF is not really a phase but rather represents a layer with a gradient in the reduced mobility.Fig. 22a–c show the results; the model captures the experimental Tg results quite well.The standard deviation is 1.15 °C.Note that a model for the crystallization kinetics that includes the evolution of the distribution of the lamellar thickness is available, see for example Caelers et al. 
. Connecting such a model with the one presented in this study would make it possible to predict Tg, and this prediction can be used as input for the deformation kinetics modeling employed for lifetime predictions. In this work, the influence of thermal history on the glass transition was investigated. The glass transition temperature was found not to depend directly on crystallinity, whereas a clear relation with the phase structure was established. The contents of crystalline, rigid amorphous and mobile amorphous phase were measured for the different cooling procedures, and a marked relation between the glass transition and the RAF–MAF content was found; in particular, Tg scales directly with the RAF content and inversely with the MAF content. This finding is also in agreement with the mobility picture: the glass transition is a physical parameter strictly related to mobility, so a high content of low-mobility amorphous phase leads to a high Tg, whereas a fully mobile amorphous sample can only exhibit the minimum value of the glass transition temperature. The structural investigation ruled out any strict correlation between the RAF–MAF content and the crystalline unit cell, as supported by the X-ray experiments. Moreover, a model able to predict the structure development was proposed. Finally, an equation able to predict the glass transition temperature as a function of the RAF–MAF content was presented.
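Since the excerpt does not reproduce the final Tg equation, the sketch below assumes the simplest form consistent with the behaviour reported above: the measured Tg is taken as a weighted average of Tg,MAF and the fitted Tg,RAF over the amorphous fractions only, so that a fully amorphous sample sits at Tg,MAF and RAF-rich structures are pulled towards Tg,RAF (91 °C). Both this functional form and the Tg,MAF placeholder of 57 °C are assumptions for illustration, not the paper's actual equation or data.

```python
def predict_tg(x_raf, x_maf, tg_maf=57.0, tg_raf=91.0):
    """Estimate Tg as a fraction-weighted average over the amorphous material.

    tg_maf : Tg of the fully mobile amorphous phase in deg C (placeholder value)
    tg_raf : apparent Tg assigned to the rigid amorphous layer in deg C
             (the single fitting parameter quoted in the text, 91 deg C)
    """
    amorphous = x_raf + x_maf            # crystals do not contribute to the Cp step
    return (x_maf * tg_maf + x_raf * tg_raf) / amorphous

print(predict_tg(0.0, 1.0))              # fully amorphous sample -> Tg = Tg_MAF
print(round(predict_tg(0.37, 0.33), 1))  # RAF-rich structure -> Tg shifted upwards
```

Coupled with a crystallization-kinetics model that supplies the crystallinity and lamellar thickness distribution for a given thermal history, such a relation would close the chain from processing conditions to a predicted Tg.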
The glass transition temperature (Tg) is a crucial parameter for understanding the mechanical behavior of polyamide 6. It depends mainly on two aspects: hydration level and processing, i.e. the thermal history and the flow conditions. In this work, the effect of the thermal history on Tg was investigated by means of fast scanning calorimetry (flash-DSC). Two different solidification procedures were studied; isothermal crystallization and continuous cooling were performed at different temperatures and rates respectively. The procedures have led to two contradictory trends of glass transition evolutions when related to their crystallinity fraction. The concept of rigid amorphous phase is used. This is considered as a part of the amorphous phase with a lower mobility, present at the inter-phase between crystals and bulk amorphous (mobile amorphous fraction). The analysis leads to the conclusion that the thermal history affects the ratio between rigid and mobile amorphous phases and it is this ratio that determines the glass transition temperature of dry polyamide 6.
189
Present and future of glass-ionomers and calcium-silicate cements as bioactive materials in dentistry: Biophotonics-based interfacial analyses in health and disease
The interaction between restorative dental materials and tooth tissue encompasses multiple aspects of dental anatomy and materials science.Until relatively recently, many adhesive dental restorative materials were thought to have a passive hard tissue interaction based on simple infiltration with the enamel or dentin upon which they were placed.However, there is increasing interest in mapping the interactions between materials and tooth tissue, where the former has a more aggressive interaction with the latter, while promoting ‘bioactivity’; it can be argued that materials such as glass ionomer cements have had such interactions for over thirty years, but only recently have mechanisms of adhesion and adaptation been elucidated indicating the potential for chemical interactions with both resin and water-based cement-like materials .Bioactivity can be defined as ‘materials that elicit a specific biological response at the interface between tissues and the material, which results in the formation of a bond’.In this paper we will concentrate on two classes of water-based cement-type restorative materials, glass ionomer and calcium-silicate based cements, and examine their interactions with the hard tissues in health and disease, in particular examining their potential for bio-mineralization in dentin.These materials are promoted as dentin replacements, mimicking many of the physical properties of this composite biological material, but they do not, as yet, have the wear resistance and mechanical properties to make them suitable as long-term enamel replacements.Glass ionomer cements were first introduced to dentistry in 1975 and since then they have been used in a wide range of clinical applications.Conventional GICs are dispensed in a powder form supplied with its own liquid.The powder is formed of fluoro-aluminosilicate glass, while the liquid is an aqueous solution of a polyalkenoic acid, such as polyacrylic acid, although in later formulations, the acid may be added to the powder in a dried polymer form .Strontium has been added in some commercial GICs, such as GC Fuji IXGP, to substitute calcium due to its radiopaque properties ."This substitution does not have any effect on the setting products or cement's remineralizing capability .Upon mixing, an acid–base reaction takes place between the polyalkenoic acid and the ion-leachable glass particles , which occurs in two phases.The first phase is a dissolution phase, in which the acid attacks the surface of the glass particles to release ions such as aluminum, fluoride, and calcium or strontium.Following ion release, the polyacid molecules become ionized and adopt a more linear form."This renders the polyacid's carboxylic groups more accessible for the ions and facilitates their cross linking in the later stage of gelation .Eventually, the set cement will be formed of a composite of un-reacted glass particle cores, encapsulated by a siliceous gel and embedded in a polyacid–salt matrix which binds the components together .Within the polyacid–salt matrix of set GICs, water is distributed in two forms; loose water, which can be removed through desiccation, and bound water which is chemically locked into the matrix .Water plays an essential role during the maturation of the cement as well as the diffusion of ions .Further material developments have included a newly named material: ‘Glass Carbomer®’ which is claimed to contain nano glass particles, hydroxyapatite/fluorapatite nano particles and liquid silica.The nanocrystals of calcium 
fluorapatite may act as nuclei for the remineralization process and initiate the formation of FAp mineral as well as nanocrystals of hydroxyapatite.The glass has a much finer particle size compared to conventional GICs, giving properties that are thought to aid its dissolution and ultimate conversion to FAp and HAp.However, using “magic angle spinning” nuclear magnetic resonance spectroscopy, Zainuddin et al. have shown that the HAp in the powder is consumed during the cement formation process in this material and so may in fact have reduced availability for bio-mineralization.When a freshly mixed GIC is placed on wet dentin, an interaction between the two materials takes place, in the form of an ion exchange .Aluminum, fluoride, and calcium or strontium leach out of the cement as the glass is being dissolved by the polyacid, while calcium and phosphate ions also move out of the underlying dentin as a result of the self-etching effect of the setting cement on mineralized dentin .This ion exchange process creates an intermediate layer composed of ions derived from both substrates .The release of fluoride and calcium/strontium ions has provided GICs with the potential for remineralization of carious tissues , where ion exchange could replenish the demineralized tissues’ ions, thus tipping the balance in favor of apatiteformation.Calcium-silicate based cements were first introduced to dentistry in 1993 when Torabinejad developed a formula based on ordinary Portland cement to produce the mineral trioxide aggregate, or the gray MTA .This material was principally composed of tri-calcium silicate, di-calcium silicate, tri-calcium aluminate, and tetra-calcium aluminoferrite, in addition to calcium sulphate and bismuth oxide, added as a radiopaquer for clinical applications .In 2002, a white MTA version was developed, which was identical to the gray form but lacked the tetra-calcium aluminoferrite and had reduced aluminate levels .Since their introduction, MTAs have been principally used for endodontic applications such as repairing perforated roots, apexification, or pulp capping due to their relatively long working and setting times.In 2011, Biodentine™, a quick-setting calcium-silicate based dental cement, was introduced by Septodont.Henceforth this commercial name will be used for representation and brevity.Biodentine™ was developed as a dentin replacement material, a novel clinical application of this family of materials, intending it to function as a coronal restoration.The relatively short setting time, can enable the use of this cement for restorative procedures; impossible with MTAs that achieve an initial setting 3–4 h .Biodentine™ is principally composed of a highly purified tri-calcium silicate powder that is prepared synthetically in the lab de novo, rather than derived from a clinker product of cement manufacture.Additionally, Biodentine™ contains di-calcium silicate, calcium carbonate and zirconium dioxide as a radiopacifer."The di-calcium and tri-calcium silicate phases form around 70% of the weight of Biodentine's de-hydrated powder, which is close to that of white MTA and white Portland cement .Unlike MTA, Biodentine does not contain calcium sulphate, aluminate, or alumino-ferrate.The powder is dispensed in a two part capsule to which is added an aliquot of hydration liquid, composed of water, calcium chloride, and a water reducing agent.Despite similar constituents, there is significant variation in calcium-silicate dental cement manufacturing processes.This affects the purity of 
their constituents and hydration products, as well as their behavior .Studies have shown that cements such as ProRoot MTA and MTA Angelus share almost the same composition as white OPC, except for the addition of bismuth oxide to the MTAs, for radiopacity .However, the MTAs also include impurities and contaminating heavy metals such as chromium, arsenic, and lead .This suggests their manufacture is similar to OPCs but less segregated and refined as the particle sizes also vary more widely .On the other hand, other calcium silicate based dental cements, such as Biodentine™ and MTA-bio have been produced under more stringent production conditions from raw materials, in an attempt to avoid any potential contamination of the basic constituents, and to avoid the incorporation of aluminum oxide .Similar to OPC, calcium-silicate based dental cements set through a hydration reaction .Although the chemical reactions taking place during the hydration are more complex, the conversion of the anhydrous phases into corresponding hydrates can be simplified as follows: 2Ca3SiO5 + 7H2O → 3CaO·2SiO2·4H2O + 3Ca(OH)2 + energy (in shorthand, C3S + water → CSH + CH), and 2Ca2SiO4 + 5H2O → 3CaO·2SiO2·4H2O + Ca(OH)2 + energy (C2S + water → CSH + CH).This setting reaction is a dissolution–precipitation process that involves a gradual dissolution of the un-hydrated calcium silicate phases and formation of hydration products, mainly calcium silicate hydrate and calcium hydroxide.The CSH is a generic name for any amorphous calcium silicate hydrates which includes the particular type of CSH that results from the hydration of tri- and di-calcium silicate phases .The CSH precipitates as a colloidal material and grows on the surface of un-hydrated calcium silicate granules, forming a matrix that binds the other components together, gradually replacing the original granules.Meanwhile, calcium hydroxide is distributed throughout water filled spaces present between the hydrating cement's components .Compared with OPC, the hydration of MTA is affected by the bismuth oxide, which forms 20% of its weight and acts as a radiopacifier .Hydrated MTA and tri-calcium silicate cements were found to release more calcium ions than hydrated OPC .For Biodentine™, the setting reaction is expected to be similar to the hydration of pure tri-calcium silicate cement and is therefore expected to produce the same hydration products as WMTA and OPC, except for the absence of alumina and gypsum hydration products.The zirconium oxide present in the Biodentine™ powder as a radiopacifier also acts as an inert filler and is not involved in the setting reaction , unlike the bismuth oxide present in MTA .The hydration reaction starts with the fast dissolution of the tri-calcium silicate particles, which therefore can explain the fast setting of Biodentine™ , in addition to the presence of calcium chloride in the liquid, which is known to speed up the hydration reaction , and the absence of calcium sulphate that acts as a retarder.Despite similar contents, the manufacturing variations and absolute constituent ratios influence clinical behavior within the materials group.Therefore, it is essential to study and characterize these cements individually, in order to understand their nature and clinical behavior.Such studies have been limited for Biodentine™ .Therefore, further characterization of Biodentine™ and its setting reaction is required.However, a number of studies have been conducted on the interaction of Biodentine™ with dental tissues and pulpal tissues .The interfacial properties
of Biodentine™ and a glass-ionomer cement with dentin have been studied using confocal laser scanning microscopy, scanning electron microscopy, micro-Raman spectroscopy, and two-photon auto-fluorescence and second harmonic-generation imaging by Atmeh et al. ."Their results indicated the formation of tag-like structures alongside an interfacial layer called the “mineral infiltration zone”, where the alkaline caustic effect of the calcium silicate cement's hydration products degrades the collagenous component of the interfacial dentin.This degradation leads to the formation of a porous structure that facilitates the permeation of high concentrations of Ca2+, OH−, and CO32− ions, leading to increased mineralization in this region.Comparison of the dentin–restorative interfaces shows that there is a dentin-mineral infiltration with the Biodentine™, whereas polyacrylic and tartaric acids and their salts lead to the diffuse penetration of the GIC; consequently a new type of interfacial interaction, “the mineral infiltration zone”, is suggested for these calcium-silicate-based cements.For remineralization of dentin, different approaches have been applied, which can be classified as classical and non-classical , or top-down and bottom-up approaches .In the classical approach , dentin remineralization is based on the epitaxial growth of residual crystallites, which act as nucleation sites for the calcium phosphate minerals to precipitate when dentin is stored in a solution rich with calcium and phosphate ions .However, recent studies have indicated that such an approach would result in an incomplete and non-functional remineralization of dentin ."The classical remineralization approach results in extra-fibrillar remineralization of the collagen matrix of dentin, without the mineralization of the collagen's intra-fibrillar compartments .This is due to the lack of control on orientation and size of the apatite crystals formed during this process.The non-classical approach was suggested as an alternative in vitro remineralization system, which attempted to achieve a hierarchical biomimetic remineralization of the organic matrix of dentin .This approach involves the use of synthetic substitutes for certain dentin matrix proteins that play an essential role during the bio-mineralization process.Two types of analogs were suggested: the first is a sequestration analog, which requires polyanionic molecules such as polyacrylic acid to allow the formation and stabilization of amorphous calcium phosphate .These nano-aggregates of ACP are thought to form flowable nano-precursors which can infiltrate the water filled gap zones in dentinal collagen fibrils, where they precipitate as polyelectrolyte-stabilized apatite nano-crystals .This precipitation is guided by the second analog, which is a dentin matrix phosphoprotein substitute .This analog is usually a polyphosphate molecule, such as sodium metaphosphate, which acts as an apatite template, encouraging crystalline alignment in the gap zones , leading to a hierarchical dentin remineralization .In the biomimetic bottom-up remineralization approach, calcium silicate based cements, such as MTA, were used as the calcium source .Upon hydration, these cements release calcium in the form of calcium hydroxide over a long duration .In addition to calcium provision, these cements release silicon ions into underlying dentin .Silica was found to be a stronger inducer of dentin matrix remineralization compared to fluoride .Furthermore, calcium silicate cements have the 
advantage of high alkalinity, which favors apatite formation and matrix phosphorylation , thereby providing a potential caustic proteolytic environment which could enhance dentin remineralization .The non-classical approach is thought to be advantageous over the classical, as the former provides continuous replacement of water, which occupies the intra-fibrillar compartments, by apatite crystals.The non-classical approach also provides a self-assembly system that does not require the presence of nucleation sites .However, the idea of using matrix protein analogs could be challenging clinically and difficult to apply.Recent developments have attempted to take this concept a further step into clinical application .Therefore, the classical approach remains a closer and more applicable approach to the clinical situation.Many different techniques have been used to evaluate dentin mineralization.Scanning and transmission electron microscopy , Fourier transform infrared spectroscopy , Raman spectroscopy , X-ray diffraction , energy dispersive X-ray spectroscopy , microradiography , micro-CT scanning , and nano-indentation .However, they do not enable high-resolution observation of this process within the organic matrix of dentin.The capability of these techniques in detecting mineralization at high resolution is limited to surface characterization, whether morphological, chemical, or mechanical.Hence, they do not enable the observation of the mineralization process and its morphological features deeply within the structure of the remineralizing organic matrix and may not allow a return to the same sample position over a period of time.For this purpose, combining a microscopic technique with the capability of deep tissue imaging, such as two-photon fluorescence microscopy, with a selective mineral labeling fluorophore, such as Tetracycline, could provide a useful method .Tetracyclines are polycyclic naphthacene carboxamides, composed of four carboxylic rings, which facilitate binding of the molecule to surface available calcium ions of the mineralized tissue.This binding process, helpfully enhances the fluorescence of Tetracycline after binding to calcium in these tissues .Therefore, this labeling agent has been widely used for the study of mineralization process in bone , and teeth : it is normally excited in the deep blue excitation range.Two-photon excitation fluorescence microscopy is an advanced imaging technique that enables high-resolution observation of deep tissues due to the reduced scattering of high-intensity pulsed infrared laser light compared with conventional confocal microscopy .Our home-built two-photon lifetime fluorescence microscope uses a femtosecond Ti-Sapphire laser to excite the sample and a time-correlated single-photon counting card to record the lifetime of the emitted fluorescence, providing extra intensity-independent imaging contrast.Measurement of the subtle fluorescence lifetime variations across a dentin–restoration interface allows the sensitive detection of newly-formed Tetracycline-bound minerals.In a study reported by Atmeh et al. 
1.5 mm thick demineralized dentin disks were apposed to set calcium silicate dental cement: Biodentine™.The samples were stored in PBS solution with Tetracycline and kept in an incubator at 37 °C.Disks of demineralized dentin were kept separately in the same solution but without the cement to be used as control samples.After 6 weeks, one half of each sample was examined using a two-photon fluorescence microscope, illuminating at 800 nm and with a 40× objective.A highly fluorescent Tetracycline band was detectable beneath both the apposed and under-surface of the disks.Small globular structures were also noticed in the tubular walls adjacent to highly fluorescent substances within the dentinal tubules.With the enhanced understanding of the dental caries process and improvements in both dental materials and diagnostic devices, more interest has been directed toward minimally invasive approaches for the treatment of dental caries.Such approaches eventually aim to minimize the excavation of dental tissues and instead, to encourage their recovery and repair.Dentin caries results from a bacteriogenic acid attack followed by enzymatic destruction of the organic matrix.These lesions can be classified into caries-infected and -affected dentin based on the extent and reversibility of the damage induced.In the caries-infected dentin, the organic matrix is irreversibly damaged, while the deeper caries affected dentin is hypomineralized with sound organic matrix, which could be repaired and remineralized.The slow progression of caries allows a reparative intervention, which can restore the mineralized architecture after excavating the infected layer.Carious dentin remineralization has been studied extensively using different in vitro models.In these models, the dentin caries process has been simulated by partial demineralization of sound dentin using pH cycling or short duration application of phosphoric acids or ethylenediaminotetraacetic acid and in some studies total demineralization and .These methods were actually aiming to simulate the caries affected “hypomineralized” dentin, which has the potential to be remineralized.However, these models all show shortcomings in that they do not model the effect of the pulpal response to the carious lesion and the tissue fluid dynamics/re-mineralization phenomena within the dentin tubules.The challenge of using real carious dentin as a substrate is its inherent variability, so methods of characterizing the extent of caries removal and the position of the tooth restoration interface within the treated lesion would be beneficial for introducing some degree of consistency to a variable substrate.The use of confocal endoscopic fluorescence detection methods can aid in this depth determination and discrimination , while other optical techniques such as Raman spectroscopy and fluorescence lifetime imaging may also help to characterize the dentin caries substrate .Recent work in our laboratory has investigated the capability of Biodentine™ and GICs to induce re-mineralization in caries affected dentin.In order to give a reasonably reliable indicator of caries excavation endpoint , caries infected dentin was excavated chemo-mechanically using Carisolv® gel in seven carious extracted teeth.Excavation was performed after preparing occlusal cavities to access the carious lesions.On one of the proximal walls of the cavity, a diamond cylindrical bur was used to prepare a flat proximal wall to the cavity to create a right angle with its floor; this angle was used later as 
a fiducial reference for the imaging.Two additional sound teeth were used as controls, in which occlusal cavities were prepared and left without restoration.For each tooth, the root was cut and the crown was sectioned vertically through the middle of the cavity into two halves using a water-cooled wafering blade.Sectioned samples were subsequently polished using 600, 800, and 1200-grit carborundum papers and cleaned in an ultrasonic bath with deionized water for 10 min.The halves of five caries-excavated sectioned cavities were filled with two different restorative materials; one with Biodentine™ and the other with the glass ionomer cement Fuji IXGP.Before applying the cements, each sectioned tooth was mounted using a specially designed small vice, with which the sectioned surface of the tooth was tightened against a rigid plastic matrix to prevent any leakage of the cement.Cements were applied directly after mixing as per manufacturer's instructions using an amalgam carrier for the Biodentine™ and adapted with a plastic instrument.Samples were stored for 1 hour in an incubator at 37 °C and 100% humidity.For aging, samples were stored in a 0.015% Tetracycline solution and phosphate buffered saline at 37 °C for 8 weeks.Solutions were replaced every 2 days.Two carious teeth and two sound teeth were not restored and used as negative controls; one half of each tooth was stored in Tetracycline-containing media, while the other half was stored in Tetracycline-free solution.All samples were stored separately in glass vials containing 7.0 ml of the storage media.A two-photon fluorescence microscope was used to image the samples using a ×10/0.25 NA objective lens, 800 nm excitation wavelength, and 500 ± 20 nm emission filter.Using both fluorescence and fluorescence life-time imaging, the dentin–cement interface of each sample was imaged at six points before aging and the XY coordinates of each point were saved.After 8 weeks of aging, samples were lightly polished with a 2400-grit carborundum paper and cleaned in an ultrasonic bath for 10 min to remove any precipitates that may have formed on the section surface; the same points were re-imaged after re-locating the sample at the previous position using the fiducial marks.For the fluorescence lifetime measurements, data were analyzed using TRI2 software.The decay curves were well fitted using a double exponential model.The analysis was conducted in the same manner as for the FI, where an area of interfacial dentin was selected while another area was selected away from the interface to represent the sound dentin.The change in the FLT after aging was calculated using the above equation, and values were averaged for each sample and each group.Representative fluorescence intensity and lifetime images of each group before and after aging are shown in Fig.
4.The blue color in the FLIM images represents the drop in the FLT.The reduction in the FLT of the dentin could be explained by the adsorption or incorporation of Tetracycline, which has a very short FLT compared with the FLT of dentin.However, in the Biodentine™ group the area of reduced FLT appeared in the form of a well-defined band underneath the dentin–cement interface, which also appeared in the GIC filled samples.On the contrary, in the samples without restoration, the reduction in the FLT was generalized.The appearance of the bands indicates that Tetracycline incorporation was mainly concentrated in these areas, which could be explained by active mineral deposition and remineralization of the matrix.The effect of the aging of caries-affected dentin or sound dentin on the FI and FLT when stored in phosphate-rich media with or without the Biodentine™ or GIC is presented in Fig. 5.The graph in Fig. 5a shows an increase in the FI in all of the samples.Biodentine™-filled samples exhibited the highest increase, while in the GIC samples the increase was around.The FI increase in the restoration-free carious samples stored in Tetracycline-free media was found to be higher, while it changed minimally in the sound samples stored in the same media or in the carious sample stored with Tetracycline.The FLT graph in Fig. 5b shows a reduction in the normalized FLT in all of the groups except for the restoration-free sound dentin stored Tetracycline-free PBS solution.The reduction in the Biodentine-filled samples was the highest, followed by the GIC-filled samples compared to the minimal changes in the control samples.In this study, all of the teeth that were used were selected with deep carious lesions.However, since these lesions occurred naturally, it was impossible to fully standardize caries excavation, which may have led to a variation in the quality of the tissues left at the end of the caries excavation.Under or over excavation of carious dentin could have left caries-infected or sound dentin respectively.If infected dentin was left, fluorophores originating from the cariogenic bacteria or from the host might interfere with the FLIM analysis.Moreover, residual infected dentin could interfere with the remineralization process, where no remineralization is expected to occur in this structure-less substrate.On the other hand, over-excavation would remove the caries affected dentin and leave a sound substrate, with no potential sites for remineralization.Normalization of the FI and FLT measurements of the interfacial dentin to similar measurements obtained from areas of sound tissue away from the interface made the results more comparable, where the sound dentin could be considered as an internal control for the same sample, and samples of the same group.Increased fluorescence intensity and shortened fluorescence lifetime of the interfacial dentin are both related to the incorporation of Tetracycline fluorophore into the dentin at these areas.Such incorporation could be mediated by the formation of calcium containing minerals such as hydroxyapatite, which is expected to form in the presence of calcium and phosphate under high pH conditions.Such conditions were available in the Biodentine™ group, in which the hydrated cement was the source of calcium and hydroxyl ions, and the phosphate buffered saline was the source of the phosphate ions.This explains the change in the FLT in this group.While in the absence of free calcium ions, as in the restoration-free samples, there was almost no change in 
the FI and FLT before and after aging.In the caries-excavated samples stored in Tetracycline-free media, bacterial growth over or within the samples cannot be excluded in the absence of Tetracycline, which could otherwise have played an antibacterial role in the aging media; this growth might therefore have resulted in an increase in the FI after storage, with minimal changes in the FLT.With the sound dentin samples, as expected there were no changes in the samples lacking Tetracycline.However, a higher increase in FI was found in the sound samples stored with Tetracycline, which could be attributed to the Tetracycline conditioning effect on sound dentin.This role of Tetracycline was observed in periodontal studies using Tetracycline for dentin conditioning to facilitate dental tissue healing .In the Biodentine™ and GIC samples, the increase in the FI of interfacial dentin after aging indicated the formation of Tetracycline-incorporated minerals, which could indicate remineralization.The difference in the FI between the two groups could be attributed to the nature of the minerals that have formed, as the pH conditions were not the same between the two groups.In the Biodentine™ filled samples, the storage solution developed a high pH that favors the formation of hydroxyapatite.In the GIC filled samples, however, the pH was much lower, which would be less favorable for apatite formation.This could also explain the difference in the FLT change between the two groups.This study indicates that fluorescence lifetime and fluorescence intensity imaging and measurements can produce a useful insight into mineralization processes, recorded over time, within the same carious lesion next to bioactive restorative materials.The future challenge will be to develop imaging and measurement techniques that can assess the interface of the whole carious lesion, both infected and affected dentin, with materials that are capable of encouraging repair and replacement of lost tooth structure.This paper has reviewed the underlying chemistry and the interactions between glass ionomer cements and calcium silicate cements with tooth tissue, concentrating on the dentin–restoration interface.The local bioactivity of these materials can produce mineralization within the underlying dentin substrate.The advantage of this for the minimally invasive management of carious dentin is self-evident.We also report a new high resolution imaging technique that can be used to show mineralization within dental tissues, while also allowing monitoring of samples over time.There is a clear need to improve the bioactivity of our restorative dental materials and these cements offer exciting possibilities in realizing this goal.The author declares no potential conflicts of interest with respect to the authorship and/or publication of this article.Ethics approval was granted: reference number 12/LO/0290 NRES Committee London for the use of extracted teeth.
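For readers who wish to prototype the lifetime analysis described above, the following is a minimal illustrative sketch, not the authors' TRI2 workflow, of fitting a double-exponential decay and expressing the post-ageing change at the interface relative to sound dentin from the same section. The function names, the amplitude-weighted mean-lifetime definition and the normalization are assumptions made for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    # Double-exponential fluorescence decay model (times in ns).
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def mean_lifetime(t, decay):
    """Fit a double-exponential model and return the amplitude-weighted mean
    lifetime (one common summary statistic; other definitions exist)."""
    p0 = (decay.max(), 0.5, decay.max() / 2, 2.0)  # rough initial guess
    (a1, tau1, a2, tau2), _ = curve_fit(biexp, t, decay, p0=p0, maxfev=10000)
    return (a1 * tau1 + a2 * tau2) / (a1 + a2)

def normalised_change(interface_before, interface_after, sound_before, sound_after):
    """Hypothetical normalization: interfacial value expressed as a ratio to
    sound dentin from the same section, before and after ageing (percent)."""
    before = interface_before / sound_before
    after = interface_after / sound_after
    return 100.0 * (after - before) / before

# Synthetic decay sampled every 50 ps over 12.5 ns, with Poisson counting noise
t = np.arange(0.0, 12.5, 0.05)
decay = biexp(t, 1000.0, 0.4, 400.0, 2.5) + np.random.poisson(5, t.size)
print(f"amplitude-weighted mean lifetime: {mean_lifetime(t, decay):.2f} ns")
```

Using sound dentin from the same section as an internal reference, as in the study, makes the before/after comparison insensitive to sample-to-sample variation in baseline fluorescence.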
Objective Since their introduction, calcium silicate cements have primarily found use as endodontic sealers, due to long setting times. While similar in chemistry, recent variations such as constituent proportions, purities and manufacturing processes mandate a critical understanding of service behavior differences of the new coronal restorative material variants. Of particular relevance to minimally invasive philosophies is the potential for ion supply, from initial hydration to mature set in dental cements. They may be capable of supporting repair and remineralization of dentin left after decay and cavity preparation, following the concepts of ion exchange from glass ionomers. Methods This paper reviews the underlying chemistry and interactions of glass ionomer and calcium silicate cements, with dental tissues, concentrating on dentin-restoration interface reactions. We additionally demonstrate a new optical technique, based around high resolution deep tissue, two-photon fluorescence and lifetime imaging, which allows monitoring of undisturbed cement-dentin interface samples behavior over time. Results The local bioactivity of the calcium-silicate based materials has been shown to produce mineralization within the subjacent dentin substrate, extending deep within the tissues. This suggests that the local ion-rich alkaline environment may be more favorable to mineral repair and re-construction, compared with the acidic environs of comparable glass ionomer based materials. Significance The advantages of this potential re-mineralization phenomenon for minimally invasive management of carious dentin are self-evident. There is a clear need to improve the bioactivity of restorative dental materials and these calcium silicate cement systems offer exciting possibilities in realizing this goal. © 2013 Academy of Dental Materials.
190
capture and simulation of the ocean environment for offshore renewable energy
Improved fundamental understanding of oceanic and coastal processes, across spatial scales from centimetres to kilometres, and particularly in areas of complex inter-process interaction, is required to accelerate the sustainable exploitation of our seas as an energy resource.Recognition of this requirement has led to multiple UK and international research projects being conceived, funded and executed.Focusing on programmes of work in the UK, this paper provides research highlights of four major projects, conducted between 2009 and 2018, which have made progress against this broad research challenge.A combination of published and new unpublished research related to progress in the field is presented.The work of ReDAPT1 , FloWave , and multiple components of the UKCMER SuperGen Marine programme2 are discussed.These works are strongly interlinked in terms of their motivation and scope, in no small part due to the involvement and leadership of the late Professor Ian Bryden, whose research interests ranged across the wave and tidal sectors, and from scale testing to full scale deployment.The wide scope of these projects reflects this.The research activities discussed here span from new methods of measuring & characterising fluid flows to the design and build of a combined wave-current test facility to enable recreation of these captured dynamics at scale."The connection between the work in the sea, at the European Marine Energy Centre, and FloWave was remarked upon by Professor Bryden in 2015 upon his departure from the EMEC board, where he noted that “the world's best full and mid-scale test facility working with the world's best laboratory test facility represents a powerful opportunity for the marine sector” .Many technologies are being developed to extract clean renewable energy from the vast resource available in the global oceans.Offshore Renewable Energy offers well-established societal and commercial benefits from the generation of low-carbon electricity.This can assist towards meeting stringent renewable energy and emissions targets required to reduce the impact of climate change.Understanding the complex marine environment allows engineers, with sufficient tools and processes, to replicate key elements and processes in tools used to quantify forces on structures, ultimately helping to develop improved devices and technologies, research on which forms the basis of this paper.This approach is outlined in Fig. 
1.It is an iterative process, building on previous knowledge and lessons learnt.A starting point is measurement of environmental parameters at sea, which typically have a high temporal resolution but with limited duration and poor spatial coverage, due primarily to technical and economic constraints.Therefore hydrodynamic models, calibrated and validated using in-situ measurements, are used to expand the temporal and spatial range of environmental data.Despite often consisting of phase averaged metrics, the extended spatial and temporal coverage provides significantly greater insight into the range and nature of environmental conditions.Validation of these models using site-data, is however of critical importance prior to using the model outputs.Before engineering tools can be utilised, processing of the input data is required.This includes characterisation of the wave, tidal, and combined conditions; then potential simplification prior to implementation in the chosen simulation tool.There are a wide range of tools used for the design and operation of electro-mechanical systems for ORE, predominantly grouped into physical and numerical simulation.This paper focuses on physical simulation in test tanks.Results obtained from tank testing can also be used to validate numerical models, from relatively simple models such as those based on Blade Element Momentum Theory , to complex Computational Fluid Dynamics considering detailed fluid-structure interaction .Recent works feature two-way coupling of the fluid environment and the electro-mechanical system .Wave and tidal energy devices are both designed to extract useful power from the energetic marine environment.Despite significant commonality, and the interaction between waves and the tides, there are also major differences between these technologies and the conditions experienced.Different measurement and replication techniques are typically required for conditions at wave and tidal energy sites, so they are primarily dealt with separately in this paper.A focus is given to the recreation of the conditions that ORE devices must operate in, both complex directional wave conditions and combined waves and currents.Increasing the realism of testing improves understanding of performance and helps de-risk device development.The remainder of the article is laid out as follows.Motivation for these works, their position in the wider industrial and research landscape, and background on the established engineering tools and analysis techniques are covered in Section 2.Marine datasets and their underlying measurement techniques appropriate for understanding the complex ocean environment are detailed in Section 3.Section 4 deals with recent progress in converting these captured conditions into useful characterisations and subsequent replication at scale.Whilst the underlying fluid domains overlap, these three sections separately assess environmental condition replication by wave and tidal energy applications to aid clarity.The discussion considers the aggregate impact of all these related projects and summarises the new insights and tools whilst revealing many ongoing challenges and gaps, with conclusions offered in Section 6.There are many well established techniques for characterising the marine environment, with the resulting metrics serving as the input to simulations, both numerical and physical.These methods are often codified into guidance and standard documents, especially in the case of wave characterisation.While this approach has produced 
standardised tools essential for the commercial assessment of technologies by developers and certification bodies alike, it does tend to preclude the application of the most recent characterisation and replication techniques.The requirement to replicate the marine conditions is driven by needs of the ocean energy sector, and enabled by the abilities of researchers and facilities to measure, characterise, and reproduce the marine environment.Detailed in the sections below are the motivations for extracting and replicating the detail of the marine environment, as applicable to the wave and tidal energy sectors in particular.The interaction of ocean energy technologies with the environment is complex, and the desire to accurately replicate behaviour in a research environment has driven advances in facilities and modelling technologies.Multi-directional wave generation has been implemented in facilities world-wide extending the capability and scale first demonstrated in the University of Edinburgh Wide Tank in the 1970s , while several basins now incorporate the ability to produce waves in combination with current."The ability to generate directionally complex combined wave-current sea states was driven forward, in no small part due to Professor Bryden's support and expertise, with research and design efforts into designs for a round wave tank with the ability to generate and absorb waves from 360 degrees with current from any relative angle .The University of Edinburgh Round Tank, as it was known at this stage, was constructed in 2011–2014 and now operates as the FloWave Ocean Energy Research Facility within the School of Engineering, Fig. 2.While FloWave and other facilities provide the tools to produce complex sea conditions, their performance can only be as good as the data available to them.The physical measurement aspects of characterising the marine environment are discussed in depth in Section 3, but for this data to be useful it must be characterised in a manner that is practical for replication in a facility or model.The importance and detail of this process is explored in Section 4, but it is informative to first consider the engineering impact and requirements that motivate the measurement, characterisation and replication of the marine environment.The broad variables examined in this paper are summarised in Table 1, and the applications and motivations for more advanced replication techniques are explored below.Standard practices and guidance for testing ORE devices over the full range of technology readiness levels was reviewed as part of the MaRINET2 project,3 which also included a gap analysis of published guidance .For tank testing, this mainly builds upon guidance for testing for ships and offshore structures, which are designed not to resonate with the waves and typically avoid highly energetic tidal currents.This study also identified tank testing in combined wave-current conditions as one of the key limitations in published guidance.The inclusion of site-specific energy–frequency distributions, current velocity, and directional characteristics for power performance testing/modelling can be challenging.High-fidelity data is required as a basis, and capable test facilities or numerical models are required for replication.In addition, to capture the range and likely combinations of these parameters within a practically implementable test program, advanced data reduction methods may be required.Recent advances in the characterisation and replication of these complex 
site-specific features in a practical manner are detailed in Section 4.3.As discussed in Section 2.2.1 a bivariate description of the wave climate neglects a number of important features, namely: spectral shape, directional characteristics, and the presence of and magnitude of current.In the context of extreme load estimation it is the largest load experienced in the identified extreme sea states that will inform the structural design; which is classically associated with the largest wave in a given sea state.The aforementioned parameters can significantly alter the nature of these extreme events, by affecting the kinematic and dynamic properties of the waves.Spectral width will alter the steepness of extreme wave events, and current velocity will alter the size, shape and velocities of waves significantly .Directional spreading will generally serve to reduce peak pressures associated with a given extreme event, yet directionality, for certain device types, may induce forces and moments at angles more damaging to key components .Indeed, freak waves have often been attributed to the presence of current and the famous Draupner wave has been suggested to have been a result of two crossing directional spread wave systems .Understanding the true site-specific nature of extreme events is therefore important to properly de-risk device development.It is possible to include some of these features, by including additional statistical parameters in extreme value analysis; as demonstrated in for the multivariate extreme value analysis of wave height, wave period, and wind speed.A similar approach could be adopted for the inclusion of statistics related to spectral shape, current speed, and wave/current directionality.This is not currently a commonly adopted approach for testing ORE devices however.Additionally, the site-specific spectral form cannot be considered in such approaches, omitting complexities associated with real extreme events.For the replication of individual extreme conditions expected to give rise to peak loads, there are two dominant approaches.The first is to generate long-run irregular sea states with the desired extreme statistics and to rely on extreme events occurring in the sea state realisation.This approach is demonstrated in Section 4.3.1 for the generation of directional site-specific extreme wave conditions.The alternative approach is to generate focused wave groups, as in the NewWave approach , whereby the most likely expected extreme wave event is generated.The latter has the advantage of only requiring short test lengths, however, is a questionable approach for floating dynamic systems as peak loads are often only weakly correlated to statistics such as the largest wave.This focused wave group approach is detailed in Section 4.4.2 for the generation of extreme wave events in fast currents for assessing peak loads on seabed mounted tidal turbines.Similar to wave energy, the expected yield from tidal energy converters can be significantly affected by site-specific attributes of the flow.Turbulent fluctuations are important, with power output shown to be directly linked to the level and nature of the turbulence .The inclusion of site-specific velocity fields in tank tests can be challenging, not least due to the requirement of obtaining data in highly energetic tidal environments.The subsequent replication can also be problematic particularly in relation to turbulent spectra.Recently developed facilities and approaches enable some of these key characteristics to be 
recreated, with two example approaches outlined in Section 4.4.Large unsteady loads are imposed on tidal turbines, resulting from a combination of turbulence, shear, and wave orbital motions .The precise nature of these conditions, and combinations thereof, will determine the peak and fatigue loads experienced by blades and other key components.Peak wave induced loading has been suggested to be most significant of these unsteady loads, and can be several orders of magnitude larger than ambient turbulence .Recent experimental work carried out as part of the FloWTurb project has demonstrated peak wave-induced thrust loads over double those obtained at rated speed in current alone.The reader is referred back to Section 2.2.2 where the site-specific nature of waves and extreme wave events, along with their effect on wave kinematics and dynamics are investigated.Analogous multivariate extreme analysis approaches can be utilised to infer extreme conditions at tidal sites.The magnitude and relative direction of the current together with directional and spectral characteristics of the wave conditions will determine the wave-induced velocities, and hence subsequent loads on the turbine rotor and structure.With the development of advanced test facilities it is now possible to recreate complex combined wave-current environments.Recent progress, building on the work of multiple predecessor projects, related to this area is detailed in Section 4.3 and 4.4.Although wave-current combinations may dominate peak loads experienced by tidal turbines, their variability and intermittency means that other factors largely dominate fatigue loading.As the fatigue life of components is determined primarily by the number and magnitude of loading cycles, persistent unsteady loads introduced by shear and turbulence are of major importance.As mentioned, the precise replication of these phenomena can be challenging as technical constraints of flow generation techniques constrain levels of environmental condition replicability.Nevertheless it is important to characterise and understand the nature of turbulence and shear, both at sea and in experimental facilities, to enable quantification of likely discrepancies.The instrument capability and configuration will largely determine the information available for characterisation, and may comprise of simple scale-invariant metrics such as TI or may consider more detailed characteristics such as flow coherence.This is discussed further in Section 3.3.The collection of meteorological and physical oceanographic data has a long history across many sectors including naval and maritime engineering.These metocean datasets play a key role throughout the major tidal energy research projects ReDAPT, PeraWaTT,4 and X-Med5 as well as for test facilities such as the European Marine Energy Centre and FloWave.There are a multitude of measurement technologies and techniques to collect this metocean data.Interestingly, the technical challenges of data acquisition vary by application, with deep water posing problems due to the large pressures for example, whereas relatively shallow tidal energy sites feature highly oxygenated waters and cyclic loading leading to accelerated corrosion .Furthermore, the replication of representative sea states for the ocean energy industry places specific requirements on the type and fidelity of the datasets upon which the simulation relies.The datasets and challenges explored here are not comprehensive, but summarise the inputs for the replication and 
characterisation techniques described in Section 4.In particular, the challenges of collecting metocean data for the specific needs of the offshore renewable energy industry are discussed with reference to industry-standard as well as novel and advanced measurement techniques.When collecting data to understand deployment conditions for wave energy converters, it must be considered that metocean datasets vary dramatically in detail and duration.The advanced replication techniques explored in this paper use directional wave spectra in high fidelity, with representative sea states derived over multi-year periods to give confidence in extreme sea characterisation and seasonality.This places demanding requirements on the datasets used for the characterisation process, a challenge that was a consideration in the EU FP7 funded EquiMar project, which ran from 2008 to 2011 .Some of the most expansive datasets are associated with large national wave buoy deployments, such as the French CANDHIS, Spanish Puertos del Estado, and Italian RON coastal networks.In the US the National Buoy Data Centre has to date archived 52,287 buoy months.While these datasets may be useful for geographical scale resource assessment, the site specific advanced replication techniques explored here are reliant on data local to probable deployment sites.It is also notable that many deployments provide good long term duration datasets, but detailed directional wave spectra are either non-existent or difficult to access.Essentially, established metocean datasets struggle to provide the detail and/or the durations required for advanced site replication for the ocean renewables sector at the locations of interest.Several open sea test sites have been established worldwide, with varying levels of infrastructure and support.The wave characterisation work presented here arose from a collaboration with EMEC in Orkney, UK, which provides grid connected berths for both the wave and tidal sector.A comprehensive list of open sea test sites is given by Ocean Energy Systems , with notable sites including Wave Hub, SEM-REV, BIMEP, the US Navy's Wave Energy Test Site and MERIC.The development of a test site will typically include a site characterisation measurement programme, with the advantage that detailed directional wave spectra are archived.Not only do these detailed spectra allow for a better estimate of the power available for a particular sea state, but they also allow the directional detail to be carried over to a directionally capable laboratory.The remaining challenge has been obtaining datasets of sufficient duration to characterise the resource, but with sites such as EMEC having operated since 2003, datasets in excess of 10 years are now available.Surface waves, fundamentally, can be measured from either a fixed or free-floating sensor as noted by MS Longuet-Higgins, and since his pioneering work wave buoys have been deployed for the measurement of gravity surface waves .In-Situ Wave Measurement.Wave buoys have been traditionally deployed for marine weather forecasting for the maritime industries and provide good quality wave height, period and often direction measurements whilst suffering from poor spatial coverage.Directionality information can be obtained by either measuring the same parameter at multiple points or by measuring different parameters at the same point, with the latter technique used in directional wave buoys through the co-located measurement of body heave, pitch and roll.
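As an illustration of how summary parameters are typically derived from such heave, pitch and roll records, the sketch below estimates the heave spectrum, significant wave height, energy period and a first-order mean direction per frequency from cross-spectra, broadly following the Longuet-Higgins style formulation used by buoy networks. The sampling rate, frequency band and sign conventions are assumptions, and proprietary buoy processing differs between manufacturers.

```python
import numpy as np
from scipy.signal import welch, csd

fs = 1.28       # assumed buoy sampling rate (Hz)
nperseg = 256   # segment length for spectral averaging

def wave_parameters(heave, slope_x, slope_y):
    """Spectral moments and first-order mean direction from buoy heave and the
    two surface-slope (pitch/roll-derived) signals."""
    f, Czz = welch(heave, fs=fs, nperseg=nperseg)
    _, Cxx = welch(slope_x, fs=fs, nperseg=nperseg)
    _, Cyy = welch(slope_y, fs=fs, nperseg=nperseg)
    _, Pzx = csd(heave, slope_x, fs=fs, nperseg=nperseg)
    _, Pzy = csd(heave, slope_y, fs=fs, nperseg=nperseg)

    band = (f > 0.04) & (f < 0.5)                      # gravity-wave band only
    f, Czz, Cxx, Cyy = f[band], Czz[band], Cxx[band], Cyy[band]
    Qzx, Qzy = np.imag(Pzx[band]), np.imag(Pzy[band])  # quadrature spectra

    m0 = np.trapz(Czz, f)
    m_1 = np.trapz(Czz / f, f)
    hm0 = 4.0 * np.sqrt(m0)        # significant wave height
    te = m_1 / m0                  # energy period

    denom = np.sqrt(Czz * (Cxx + Cyy))
    a1, b1 = Qzx / denom, Qzy / denom                  # first Fourier coefficients
    theta1 = np.degrees(np.arctan2(b1, a1))            # mean direction (convention-dependent)
    return hm0, te, f, theta1
```

In practice the slope signals are themselves derived from buoy pitch, roll and heading, so the directional estimates inherit the sensor fusion and mooring response of the particular buoy.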
Historically, time-series of buoy motions are processed on-board, before summary statistics of a selected period are transmitted via radio telemetry.These time-averaged wave parameters may prove insufficient for ORE applications where access to the time series is required in near real-time for, e.g., control applications and deterministic wave models.The quality of buoy-derived time series, for ORE applications, has been noted to be variable and in some cases can require extensive post processing .In addition, it is to be stressed that as buoys do not typically measure current velocity, the presence and subsequent effect of any current on the wave field will be unknown.This results in data contamination: sea state power and steepness will be misinterpreted as the predicted wavelengths and group velocities will not reflect the true ocean conditions.By residing on the surface and therefore having access to through-air communications, buoys have an advantage over submerged sensors at the expense of exposure to damage caused by human activity and storms.Small diameter shallow-water buoys provide more dynamic response than their large open-ocean counterparts whose ability to track the moving surface is hindered by higher inertias and mooring influences .In addition, advances in computational power, data storage and low-power, low-cost telecommunications permit live streaming of raw measurements of buoy motion.In summary, buoys are the standard method for providing spectral parameters including directional information, are recommended for use in offshore renewable resource assessment and are routinely used at leading European ORE test sites.Whilst proven technology, wave buoys remain expensive to build, deploy and operate, and their use can be limited in regions of strong currents.Furthermore, for complex coastal environments large degrees of spatial variation in the wave field are present, necessitating high resolution models to augment point measurements.Whilst measuring the Doppler shift of suspended particles in a water column and inferring the surrounding fluid's velocity was originally intended for use as a tool in current flow measurement, the technique has been extended to measure waves.Combining velocity profiles with routinely installed co-located pressure sensors and echo-location of the air-water interface enables multiple techniques to ascertain wave climate from a single instrument.Sensing from deeper water, however, leads to a diminishing ability to capture higher frequency waves and affects wave direction estimates, whilst direct echo location, which can provide near-direct time series of elevation, provides no directional information.Separately, pressure measurements can be transformed to surface elevation via linear wave theory but again suffer from poor results as depth increases.In addition, breaking waves entrain large volumes of air, which affect the performance of acoustic-based instruments.Examples of these measurement methods, and their spatial extents scaled against a real-world tidal turbine, are shown in Fig. 3.This depicts a seabed-mounted divergent beam acoustic Doppler profiler, also commonly referred to as an acoustic Doppler current profiler, and two turbine-mounted single beam ADP sensors orientated horizontally on the turbine hub and rear.A vertically-orientated SB-ADP provides vertical velocity profiles and echo-location of the air-sea interface.These instrument variants are discussed further in Section 3.3.2.
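To make the pressure-to-elevation transformation referred to above concrete, the sketch below solves the linear dispersion relation and applies the usual linear-theory attenuation correction to convert a subsurface pressure-head spectrum into a surface-elevation spectrum. The cut-off frequency, sensor height and variable names are illustrative assumptions, and this is a generic sketch rather than the processing used in ReDAPT.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def wavenumber(f, h):
    """Solve the linear dispersion relation (2*pi*f)**2 = g*k*tanh(k*h)
    for k by Newton iteration (assumes f > 0)."""
    omega = 2.0 * np.pi * np.asarray(f, dtype=float)
    k = omega**2 / g                                   # deep-water first guess
    for _ in range(50):
        fk = g * k * np.tanh(k * h) - omega**2
        dfk = g * np.tanh(k * h) + g * k * h / np.cosh(k * h) ** 2
        k = k - fk / dfk
    return k

def elevation_spectrum(f, S_p, h, z_sensor, f_cut=0.25):
    """Convert a pressure-head spectrum S_p (m^2/Hz) measured at height
    z_sensor above the bed, in water depth h, to a surface-elevation spectrum.
    Frequencies above f_cut are discarded because the correction there mainly
    amplifies sensor noise."""
    k = wavenumber(f, h)
    Kp = np.cosh(k * z_sensor) / np.cosh(k * h)        # linear-theory attenuation
    S_eta = S_p / Kp**2
    S_eta = np.where(f > f_cut, 0.0, S_eta)
    return S_eta
```

The rapid growth of 1/Kp² with frequency and depth is exactly why the text notes that pressure-based estimates deteriorate as sensing depth increases.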
In Fig. 3 a post-processing technique developed during ReDAPT provides online time series of elevation which has been overlaid on the same vertical scale as the ReDAPT tidal energy converter which featured a 20 m rotor plane.Remote Sensing.A promising but as yet not widely deployed remote sensing technique uses commercially available X-Band radar and operates on the principle of measuring the backscatter of radar energy from the ocean surface.These systems offer massive spatial coverage improvements over wave buoys with a typical system being able to cover a swept area of radius 2 km at a spatial resolution down to tens of metres.For spectral sea-state parameters, estimates obtained from X-Band radar measurements have been shown to agree well with those obtained using wave buoys .Beyond spectral information, data processing using linear wave theory as the basis of an inversion technique can provide surface elevation estimates and work is ongoing on post-processing techniques .If further improvements can be delivered these systems would provide ORE farms with wide area coverage of wave climate, along with surface currents for use in device design and operation and maintenance activities.Flow characterisation of a tidal energy site centres on gaining information on water velocity over a range of spatial and temporal scales that have been targeted a-priori for use in various engineering tools.These potentially include information varying from annual and seasonal time scales down to fluctuations in velocity at time-scales of seconds and below.Sites often exhibit significant spatial variation at scales in the order of several rotor diameters, important for array studies and impacting measurement and modelling campaign specification.Since no single technique currently exists to provide high resolution data for sufficient duration across a wide spatial extent, measurement campaigns must be designed that provide metrics that are obtainable, reliable and representative and moreover that are appropriate for the engineering tools that will use them as model inputs.The ReDAPT project conducted, to the authors' knowledge, the most comprehensive metocean measurement campaign centred around an operating tidal turbine, the Alstom 1 MW commercial prototype DeepGEN-IV.The measurement campaign provided calibration and validation data to three separate simulation tools: BEMT models , site-wide hydrodynamic models and blade-resolved CFD .Additional measurement requirements stemmed from the need to produce accurate machine power predictions , which are publicly available .It is notable that to meet the requirements of these different engineering applications a suite of off-the-shelf sensors, in varying configurations, was required, along with the in-project development of novel systems.For example, whilst the BEMT software operated using inputs of depth profiles of velocity and turbulence intensity values, the CFD required information on mid-depth turbulence length scales in the stream-wise, transverse and vertical directions: parameters not available from the standard equipment.
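As a sketch of how two of the quantities just mentioned, turbulence intensity for the BEMT inputs and a stream-wise integral length scale for the CFD, can be estimated from a single point velocity record, the following illustrative calculation integrates the autocorrelation and applies Taylor's frozen-turbulence hypothesis. The detrending choice, record length and zero-crossing truncation are assumptions rather than the project's actual processing chain.

```python
import numpy as np

def turbulence_metrics(u, fs):
    """Turbulence intensity and integral time/length scales from a
    stream-wise velocity record u (m/s) sampled at fs (Hz)."""
    u_mean = np.mean(u)
    u_fluc = u - u_mean
    ti = np.std(u_fluc) / abs(u_mean)                  # turbulence intensity

    # Autocorrelation of the fluctuations, normalised to unity at zero lag
    acf = np.correlate(u_fluc, u_fluc, mode="full")[u_fluc.size - 1:]
    acf = acf / acf[0]

    # Integrate up to the first zero crossing (a common pragmatic choice)
    zero = np.argmax(acf <= 0.0) if np.any(acf <= 0.0) else acf.size
    t_int = np.trapz(acf[:zero], dx=1.0 / fs)          # integral time scale (s)

    # Taylor's frozen-turbulence hypothesis: advect the time scale by the mean flow
    l_int = u_mean * t_int                             # integral length scale (m)
    return ti, t_int, l_int

# Example with a synthetic 2.5 m/s record sampled at 4 Hz for 10 minutes
fs = 4.0
t = np.arange(0.0, 600.0, 1.0 / fs)
u = 2.5 + 0.25 * np.sin(2 * np.pi * 0.05 * t) + 0.1 * np.random.randn(t.size)
print(turbulence_metrics(u, fs))
```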
Hence up to ten turbine-installed, remotely-operable single-beam ADPs were specified and deployed at locations indicated in Fig. 3 along with a single deployment of five SB-ADPs configured as the first demonstrated field-scale geometrically convergent beam system .These configurations are discussed further in Section 3.3.2.Multiple seabed-mounted D-ADPs were successfully deployed between March 2012 and December 2014 at distances between two and five rotor diameters fore and aft of the turbine, accruing over 350 days of data across all seasons.These provided essential long-term stable ambient conditions upon which to base flow reduction and flow characterisation works.The data is available publicly, in archived format at the UK Energy Research Centre's Energy Data Centre and at the University of Edinburgh where analysis continues.6 It is interesting to note that the original ReDAPT programme only targeted the capture and simulation of tidal currents, excluding waves.After a winter deployment however, having assessed the impact of the wave climate on the TEC performance and the turbulence characterisations, this approach was changed and wave measurement became a focus for metocean data collection in the latter stages.The FloWTurb project builds upon ReDAPT experiences and focuses on wave-current interaction.In 2017 a measurement campaign was conducted to probe spatial variations in mean and turbulent flow conditions at tidal farm scales.In 2018 a dataset was also collected from the MeyGen tidal site in the Pentland Firth.Other notable projects include TIME,7 which deployed multiple D-ADPs in Scottish waters and trialled newly available 5-beam variants with improved wave measurement capabilities and faster sampling rates, and InSTREAM.8 InSTREAM has particular relevance to the replication of site conditions due to the commonality in instrumentation and analysis techniques deployed in the field and laboratory.A related and recently launched European project, RealTide,9 will further develop the outputs of these projects and install multiple sensors to the Sabella D10 TEC in the Fromveur Passage, France in September 2018.In RealTide flow characterisations with and without the presence of waves are being used to validate both tank tests and blade resolved CFD models, which also feature embedded tide-to-wire models.As discussed, no single instrument can provide the mean and turbulent metrics required for the commonly used engineering design tools, nor can it capture the information from every location of interest.Acoustic-based techniques, however, have the flexibility to be deployed in various modes and configurations to meet many of the data requirements.Even so, there remains a significant challenge in homogenising, comparing and combining the various data streams and analyses to provide a more holistic map of the flow field.Acoustic-based velocimetry.Acoustic Doppler Profilers, particularly in geometrically diverging configurations, are the most commonly used sensor for the measurement of offshore flow velocities due to their large sensing range, ability to operate for extended periods on batteries and unobtrusive flow measurements.They operate via measuring the Doppler shift from their backscattered ultrasonic acoustic emissions.ADPs have been used internationally at multiple well-known tidal channels and can be installed in fixed locations on the seabed, on moving vessels conducting site transects as well as on submerged buoyant structures.Conventional instruments emit acoustic signals from a number of diverging transducers in order to deduce a three-dimensional velocity measurement, relying on the underlying assumption of flow homogeneity, as discussed below.
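The homogeneity assumption enters through the beam-to-instrument coordinate transformation, and a minimal sketch for a four-beam Janus configuration is given below as context for the limitations discussed next. The beam numbering, sign conventions and error-velocity definition vary between manufacturers, so this should be read as illustrative rather than as any specific instrument's firmware.

```python
import numpy as np

def janus_beam_to_instrument(b1, b2, b3, b4, beam_angle_deg=25.0):
    """Convert along-beam velocities from a four-beam Janus ADP into
    instrument-frame u, v, w, assuming the flow is homogeneous across the beam
    spread. Beams 1/2 are assumed to oppose each other in the x-z plane and
    beams 3/4 in the y-z plane; signs depend on the beam convention."""
    theta = np.radians(beam_angle_deg)
    u = (b1 - b2) / (2.0 * np.sin(theta))
    v = (b3 - b4) / (2.0 * np.sin(theta))
    w = (b1 + b2 + b3 + b4) / (4.0 * np.cos(theta))
    # Difference between the two independent vertical estimates: a standard
    # (convention-dependent) "error velocity" check on the homogeneity assumption.
    err = ((b1 + b2) - (b3 + b4)) / (2.0 * np.cos(theta))
    return u, v, w, err
```

When the flow genuinely differs between the beams, as happens for turbulent structures smaller than the beam separation, these expressions fold spatial variability into the apparent velocity components, which is the limitation examined below.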
ADP variants and limitations.The transformation of the velocity components measured in beam coordinates to the instrument coordinate system assumes flow homogeneity, i.e., that the underlying flow velocity in the sampled region of each beam is identical for a given depth layer.This is often a reasonable assumption for mean flow velocities, which typically do not exhibit large inter-beam variation.In tidal channels, the instantaneous flow velocity is seen to vary over a wide range of time and length scales and this assumption is not reliable, particularly for coherent turbulent structures that are smaller than the beam separation distances, which increase with range from the transducer.The instrument processing technique also misrepresents the reporting of large scale eddies .Additionally, ADP measurements suffer from contamination by Doppler noise which, if not corrected, leads to overestimates of turbulence intensity, which in turn affects component selection for ORE applications.Correction techniques are available .A detailed description of underlying TI values, following data characterisation to remove time-series where waves are present, with and without correction, and for varying depth regions of a tidal channel is provided in .Large scale convergent-beam ADP systems are not routinely used and remain, at present, research systems under development in the UK and Canada .They do, however, offer the potential to provide 3D turbulent information, without applying flow homogeneity assumptions, from regions of interest to tidal developers, e.g., across the rotor plane.Fast sampling miniature C-ADPs, known as Acoustic Doppler Velocimeters, are routinely used in test-tanks and for marine boundary layer studies where they can capture small-scale turbulent processes.Their applicability to ORE problems has recently been extended through mounting on motion-tracked compliant moorings to reach sampling locations at significant distances from the seabed .Recent works have advanced the ability to secure accurate metocean data at the required resolution to return flow and wave metrics important to ORE applications.Challenges remain however.Instrument ease-of-installation and robustness need to be improved, with accelerated cable and connector degradation an issue.Long term bio-fouling and its impact on sensor performance requires assessment.Cost of sensor equipment is high in a sector with lower profit margins than, for example, the oil and gas industry.The standardisation of post-processing techniques could be improved and data could be shared more rapidly and openly.The further development of advanced sensors will provide improved information on turbulence, and the bridging of separate flow descriptions, through mathematical techniques and intra-instrument comparison, will provide a more complete map of the coherency in flows, enabling better inputs for numerical and physical modelling.
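One widely used correction for the Doppler-noise bias in turbulence intensity mentioned above subtracts an estimate of the noise variance from the measured variance before forming TI. A minimal sketch is given below; in practice the noise variance would come from the instrument specification or the high-frequency spectral floor, and the numbers used here are placeholders.

```python
import numpy as np

def noise_corrected_ti(u, noise_std):
    """Turbulence intensity with an estimate of the Doppler-noise variance
    removed. u is the along-stream velocity record (m/s); noise_std is the
    assumed instrument noise standard deviation (m/s)."""
    u_mean = np.mean(u)
    var_measured = np.var(u - u_mean)
    var_turb = max(var_measured - noise_std**2, 0.0)   # guard against negative values
    return np.sqrt(var_turb) / abs(u_mean)

# Placeholder example: 2 m/s mean flow with turbulence plus 0.05 m/s sensor noise
u = 2.0 + 0.2 * np.random.randn(4096) + 0.05 * np.random.randn(4096)
print(f"raw TI: {np.std(u) / np.mean(u):.3f}, "
      f"corrected TI: {noise_corrected_ti(u, 0.05):.3f}")
```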
modelling facilities. In order to generate at scale the sea states measured in Section 3, the selected facility will typically have to be capable of generating: multi-directional sea states; waves in combination with collinear or non-collinear current; and current with representative turbulence and velocity profiles. Other important elements of a facility are adjustable water depth and supporting infrastructure to allow for measurement and validation of the generated seas. Careful consideration of the facility scale is also of utmost importance, as tidal devices in particular are sensitive to non-representative Reynolds numbers associated with Froude-scaled velocities. As noted in Section 2.1, the FloWave facility was envisaged by Professor Bryden and others to replicate complex multi-directional wave spectra in combination with fast currents from any direction. The facility was established from the outset as a resource to support the ORE sector and meet the motivations outlined in Section 2. A primary requirement during the design process was the ability to simulate complex directionality, both in terms of wave spectra and the relative wave-current direction. The concept of a circular basin with wave and current generation from any direction was originally proposed by Professor Stephen Salter, and the design developed under Professor Bryden was inspired by this proposed configuration, with wavemakers forming the outer circumference of the basin and an underfloor recirculating flow drive system to generate current. The final design utilises 168 active absorbing wavemakers and 28 independently controlled 1.7 m diameter impellers located under the tank floor. This wavemaker layout removes the constraints on directionality, allowing waves to be generated and absorbed over a full 360 degrees. The flow drives are also arranged in a circle, and by operating in paired banks the flow is generated across the tank, being recirculated through the underfloor plenum chamber, as described and characterised in more detail in previous publications. A key consideration in the design process, and justification for FloWave, was ensuring that the facility allowed testing at a scale appropriate for ORE devices. The key parameters are driven by Froude scaling considerations, which tends to push towards larger-scale models with the aim of reducing the Reynolds number discrepancy. As discussed by Ingram et al., this resulted in a facility operating at a scale range of approximately 1:20–1:40, a Reynolds regime in which tidal turbine tip speed ratio and coefficient of power are comparable to full scale. The final scaling issue is one of depth. The FloWave facility has a depth of 2 m, which is somewhat shallower than many basins operating in this scale category. The advantage of this depth is that it scales well to tidal deployment sites, which typically have water depths of between 20 m and 70 m.
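To make the Froude-scaling arithmetic above concrete, the short Python sketch below works through the derived ratios for an assumed 45 m deep site reproduced in a 2 m deep basin; the depth values are illustrative only and are not taken from the test programmes described here. Velocities and wave periods scale with the square root of the geometric scale factor, which is what produces the Reynolds-number discrepancy noted above.

```python
import math

def froude_scale_factors(depth_full_m, depth_model_m):
    """Geometric scale and derived Froude-scaling ratios (full scale / model)."""
    lam = depth_full_m / depth_model_m          # geometric scale factor (lambda)
    return {
        "length": lam,                          # L_full = lam * L_model
        "velocity": math.sqrt(lam),             # U scales with sqrt(lambda)
        "time_period": math.sqrt(lam),          # wave periods scale with sqrt(lambda)
        "reynolds": lam ** 1.5,                 # Re = U*L/nu mismatch if the same fluid is used
    }

# Illustrative example: a hypothetical 45 m deep tidal site reproduced in a 2 m deep basin
factors = froude_scale_factors(depth_full_m=45.0, depth_model_m=2.0)
print(f"Geometric scale 1:{factors['length']:.1f}")                      # ~1:22.5
print(f"Velocity/period scale sqrt(lambda) = {factors['velocity']:.2f}")
print(f"Reynolds-number discrepancy ~ {factors['reynolds']:.0f}x")
```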
Ideally, the facility would incorporate variable depth, but this was not practical to implement in combination with the flow drive hardware. In some cases water depth cannot be correctly scaled within the constraints of a test programme, and this may introduce discrepancies in wavelength, group velocity, and wave power, an issue discussed and quantified in previous work. There is a long history of using scale models in wave tanks to determine properties of full-sized devices, primarily ships and offshore platforms. Guidance has been produced for this which, as covered in Section 2.1, is often conservative. More recently, devices to harness energy from the energetic ocean environment are being tested. Wave energy converters, in particular, are designed to resonate with the waves, so previous guidance is not always applicable. Additional parameters need to be considered, to accurately represent the complexity of ocean waves, in order to fully understand the potential for energy capture. To demonstrate this, three case studies showing how advanced wave climates can be replicated in a facility like FloWave are presented in the subsequent sections. This case study focuses on the replication of realistic sea states from buoy data, with a focus on preserving and reproducing the observed directional complexity. Four years of half-hourly data from the Billia Croo wave test site at EMEC were utilised, spanning from January 2010 to December 2013. This comprised a total of 64 974 sea states, after removal of those identified as poor by quality control processes. It is clearly impractical to replicate all of these sea states in tank tests, and as such a classification and data reduction procedure is required. The process of creating a validated set of representative directional wave conditions from buoy data is depicted in Fig. 4. In addition to key statistics, the half-hourly data available consist of spectra and directional Fourier coefficients. These allow estimation of half-hourly directional spectra, describing the energy distribution across both frequency and direction. From these spectra, all proxy statistics typically used for site classification and characterisation processes can be derived. There are a variety of approaches available for the reconstruction of directional spectra from directional Fourier coefficients. Several sources suggest that the Maximum Entropy Principle provides the most reliable estimates from single-point measurements. As such, it was utilised for the directional spectrum reconstruction in this work. The representative directional sea states resulting from this work were Froude scaled using the ratio between the depth of the site and that of the test facility. The 38 sea states which subsequently did not breach tank limits were generated using the single-summation method of directional sea state generation. The sea states were measured using a directional array of wave gauges, with incident and reflected directional spectra evaluated using the SPAIR method. The incident spectrum of each sea state was corrected to achieve the desired wave amplitudes in a single iteration. Two examples of the final measured frequency and directional distributions are compared with the target spectra in Fig. 7.
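The workflow above reconstructs non-parametric directional spectra from measured Fourier coefficients via the Maximum Entropy Principle. The sketch below is not that reconstruction; it is a simple parametric stand-in (a JONSWAP frequency spectrum multiplied by a cosine-2s spreading function) intended only to illustrate what a directional spectrum S(f, θ) represents and how a proxy statistic such as significant wave height is recovered from it by integration. All parameter values are illustrative.

```python
import numpy as np

def jonswap(f, hs, tp, gamma=3.3):
    """Parametric JONSWAP variance density S(f) [m^2/Hz], rescaled to match Hs."""
    fp = 1.0 / tp
    sigma = np.where(f <= fp, 0.07, 0.09)
    peak = gamma ** np.exp(-((f - fp) ** 2) / (2.0 * sigma**2 * fp**2))
    s = f ** -5.0 * np.exp(-1.25 * (fp / f) ** 4) * peak
    m0 = np.sum(s) * (f[1] - f[0])               # zeroth moment on a uniform grid
    return s * (hs / (4.0 * np.sqrt(m0))) ** 2   # scale so that 4*sqrt(m0) == hs

def cos2s(theta, theta_mean=0.0, s=10.0):
    """cos-2s directional spreading D(theta), normalised to integrate to one."""
    d = np.cos(0.5 * (theta - theta_mean)) ** (2 * s)
    return d / (np.sum(d) * (theta[1] - theta[0]))

f = np.linspace(0.04, 0.5, 400)          # frequency grid [Hz]
theta = np.linspace(-np.pi, np.pi, 181)  # direction grid [rad]
S_ft = np.outer(jonswap(f, hs=2.5, tp=10.0), cos2s(theta))  # S(f, theta)

m0 = np.sum(S_ft) * (f[1] - f[0]) * (theta[1] - theta[0])
print(f"Hm0 recovered from the directional spectrum: {4.0 * np.sqrt(m0):.2f} m")
```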
Mean errors in the directional spectra were around 10% for all sea states generated. These 38 sea states represent an extensive validated set of directional spectra which cover the range of conditions expected at the Billia Croo wave test site. They have been used to test wave energy devices in the FloWave tank, and demonstrate that increased directional and frequency spectral realism can be used in tank testing. Similar approaches may be employed on additional datasets, and the data reduction methodology adapted to suit the aims of the test programme and device sensitivity. This case study details recent work on the replication of directional sea states in the presence of the low current velocities typical of the open ocean. This was identified as an area of importance because low current velocities are often present at locations of interest to wave energy. The effect of these currents is typically ignored and they are seldom measured, yet they can have a significant influence on wave properties and hence device response. Fig. 8 demonstrates the effect of a current on the wave climate for a representative wave condition, highlighting that the available power and sea state steepness are significantly altered. If the current is unknown, assumed values of these parameters will have significant errors, owing to the incorrect computation of wavenumbers and group velocities. Both steepness and available power are of critical importance to device response and the assumed efficiency of the machine, and as such it is important to know the true range and nature of the wave-current conditions devices will operate in. In addition, it is important to test devices in such conditions so they may be better understood and optimised prior to full-scale deployment. This example, presented in detail elsewhere, demonstrates the simulation and validation of non-parametric directional spectra in the presence of current. A representative directional sea state resulting from the Billia Croo data reduction process was chosen and generated at five angles relative to a variety of current speeds. A correction procedure was implemented to ensure the desired component wave amplitudes were attained in the different current velocities, accounting for wave-current interaction in the tank. The resulting frequency spectrum, directional spectrum, and frequency-averaged Directional Spreading Functions for the 0.1 m/s case are depicted in Fig.
9.It is evident that the frequency and directional distributions are very close to that desired, demonstrating that the generation of complex realistic directional sea states in current is achievable and can aid in increasing the realism of tank testing outputs.When replicating tidal energy sites, turbulent flow parameters such as bulk flow, vertical flow profile TI, and lengthscales, are obviously important.In some flumes it is possible to change the vertical flow profile or to increase TI by introducing vortices using a grid; however this is still not replicating the tidal site-specific turbulence.It has also been suggested that peak loads induced by waves are most significant, and can be several orders of magnitude larger than ambient turbulence .Therefore the following examples from the SuperGen project showcase recent work on creating combined wave-current conditions in a large basin.This example focuses on the recreation of combined wave-current environments when the current is large, and hence demonstrates the replication of tidal energy sites which are exposed to waves.Site data collected as part of the ReDAPT project was utilised, obtained from the EMEC Fall of Warness grid-connected tidal test site.Full scale velocities of 1.2, 1.8, and 2.4 m/s were chosen, and wave cases both following and opposing the current were identified which were common in the site data.The combinations of current velocity, peak wave period, and significant wave height chosen are detailed in Table 2, noting that common wave heights opposing the current are larger due to the wave-current interaction.Due to a lack of detailed spectral and directional information from the available datasets, wave conditions are defined as uni-directional JONSWAP spectra.This assumption of uni-directionality is usually valid for tidal channels where the incident wave directions are constrained by the channel width.The combined wave-current conditions were Froude scaled according to the depth ratio.An iterative correction procedure was implemented to obtain the desired wave spectra when averaged over an array of wave gauges covering the location where tidal turbines are installed.The resulting normalised frequency spectra compared with desired are shown in Fig. 
10. It is noted that, although discrepancies are generally small, there was difficulty in obtaining the correct high-frequency part of the spectrum for the following-wave conditions in fast currents. This is a result of significant wave-current interaction causing a large reduction in the measured amplitudes, whereby the input amplitudes required to correct this exceed the wavemaker limits at these high frequencies. Focused wave groups are common practice for testing offshore structures; however, their generation in current is rarely documented. The authors are not aware of published work on their use for assessing peak loads on tidal turbines, but recently submitted work demonstrates this capability and their subsequent effectiveness in following conditions. This case study briefly details unpublished work on the creation of focused wave troughs in the presence of fast opposing currents: conditions expected to give rise to peak loads on tidal turbines when waves oppose the predominant current direction. Presented within this paper are recent advances in the measurement, characterisation, and subsequent replication of realistic directional wave and wave-current conditions. Advanced site replication enables ORE devices to be tested in conditions representative of their operating environment by bringing the complexities observed at sea into the laboratory. Challenges still remain to further advance this approach, both in terms of data collection and test tank capability. The demands of the ORE sector and the establishment of test sites such as EMEC in Orkney, UK have resulted in the production of datasets of high fidelity and long duration in representative deployment locations. This is most evident for wave measurement, with long-term directional wave data available for sites such as Billia Croo at EMEC. The opportunity and capability now exist to interrogate these wave datasets at a spectral level, as detailed in Section 4.3.1. This process provides a reduced dataset practical to realise in the tank, without the constraints of parametric spectral inputs. The resultant test matrices capture the potentially complex wave directionality present at an ORE deployment site, providing the opportunity to better replicate ORE energy generation, sea-keeping, and structural loads. These datasets have primarily been gathered with established point-measurement technologies, and as such spatial variability is difficult to characterise. Novel, or less-established, techniques deploying remote sensing have the potential to fill this gap. Datasets describing tidal energy deployment sites have largely been limited to short-term deployments of seabed-mounted acoustic Doppler instruments, typically providing a vertical profile with bins separated in the order of tens of centimetres. The characterisation of turbulence is key for the tidal energy sector due to its influence on structural design, but the effectiveness of the established Doppler instruments for this application is limited by assumptions of homogeneity across multi-metre scales. Novel measurement techniques deployed in the Fall of Warness, Orkney, UK under the ReDAPT project, as described in Section 3.3.2, reduce this assumed homogeneity scale by an order of magnitude and give improved confidence in the conditions required for laboratory replication. However, at present this converging-beam approach does not provide the depth profiling associated with conventional instruments, a feature which may be possible with future developments of the sensing technology.
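Relating to the Doppler-noise contamination discussed in the instrumentation section above, the snippet below is a minimal, generic illustration of the variance-subtraction idea behind such corrections (removing an assumed noise variance from the measured velocity variance before forming turbulence intensity). It is not the specific correction technique applied in the projects reviewed here, and the noise level used is a placeholder.

```python
import numpy as np

def turbulence_intensity(u, noise_std=0.0):
    """Streamwise turbulence intensity from a velocity time series u [m/s].
    Doppler noise is assumed uncorrelated with the flow, so its variance is
    subtracted from the measured variance before forming TI."""
    var_corrected = max(np.var(u) - noise_std**2, 0.0)  # guard against negative values
    return np.sqrt(var_corrected) / np.mean(u)

# Synthetic example: 2.0 m/s mean flow, 10% true TI, plus assumed instrument noise
rng = np.random.default_rng(0)
true_u = 2.0 + 0.2 * rng.standard_normal(60_000)
measured_u = true_u + 0.05 * rng.standard_normal(60_000)   # assumed noise std = 0.05 m/s

print(f"TI uncorrected: {turbulence_intensity(measured_u):.3f}")
print(f"TI corrected:   {turbulence_intensity(measured_u, noise_std=0.05):.3f}")
```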
The final issue to consider is that real-world tidal deployment sites experience significant wave action, and vice versa. The ReDAPT dataset offers information on both the wave and current conditions, yet is limited in duration and currently does not provide information on wave directionality. Measurement of currents is also limited for the Billia Croo wave test site. Indeed, it is often the case with site data that either a comprehensive set of parameters is not measured or the duration is limited, and the spatial range of site data is inherently limited. This requires the use of wide-area numerical models to enable data to be obtained over wide spatial areas and over long time frames. At present there exist only a small number of validated combined wave-current models. In future it is likely that numerical models validated by existing combined wave-current datasets will offer significantly increased potential in terms of site recreation capability. Site-specific simulation aims to reproduce observed site complexity in laboratory environments, capturing features including: complex wave directionality; representative tidal turbulence; temporal and spatial variation of the tidal flow; and combined wave-current conditions. The replication work presented in this paper was undertaken at the FloWave Ocean Energy Research Facility at The University of Edinburgh, a circular wave-current basin designed to deliver many of these capabilities. To date, site replication has delivered detailed non-parametric directional spectra, combined wave-current recreation, and tidal flows with site-representative turbulence levels. Despite advances in the hardware and facilities focused on the needs of the ORE sector, limitations and challenges remain, particularly for the recreation of the turbulent characteristics and spatial variability measured in the field. To make progress in this area it is likely that test facilities will need to consider the influence of bed topography, a factor that has been shown to significantly influence turbulent flow structures through numerical modelling and field studies. The production of site-specific turbulence spectra in the laboratory will require consideration of more complex and controllable flow generation systems, a considerable challenge when considered in the context of a large basin. Furthermore, measurement techniques for detailed turbulence characterisation at tank scale must be further developed to support validation against field data. The recreation of combined wave-current sea states has been demonstrated with both long irregular seas with low-velocity current, and energetic current with focused design waves. These approaches are aimed primarily at the wave and tidal sectors respectively, and illustrate the utility of a combined wave-current basin for ORE research. Certain elements of recreation remain challenging: reflections or excitation of tank-specific modes, e.g. cross waves, may arise and cause deviation from the desired sea state.
Absorbing beaches and/or active-absorbing paddles are used to minimise these. However, there will always be boundary effects not present in the real sea. Measurement and analysis techniques can be utilised to quantify and characterise the discrepancies in wave-only tests, yet for combined wave-current environments even understanding the discrepancy becomes challenging. At present, reliable reflection analysis can be implemented when flow speeds are low and turbulence fluctuations are small relative to group velocities. Progressing these techniques to more energetic flows will be an important step for expanded experimental analysis of wave influence on tidal devices with longer-duration test runs. This paper reviews the requirements for, and recent progress in, the simulation of the ocean environment for offshore renewable energy applications. In addition to summarising motivation and key considerations, this article presents highlights of a decade of flagship research covering the process of physical oceanographic data collection, classification, and eventual recreation in advanced experimental facilities. This work demonstrates significant evolution in the approaches and tools available, highlighting recent capability to physically simulate real-world ocean complexity. This progress has been made possible only through the collection and characterisation of high-fidelity ocean data and through the development of physical infrastructure able to emulate the conditions measured; areas in which Professor Bryden made extensive contributions. It is demonstrated that by exploiting this new capability, and replicating more of the true complexity of ocean conditions, offshore renewable energy technologies can be more appropriately tested and understood. This will support nascent wave and tidal energies in their quest for commercial viability, enabling key lessons to be learnt prior to costly full-scale deployment.
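The wave-current case studies above note that an unknown current leads to incorrect wavenumbers and group velocities, and hence to errors in sea-state steepness and available power. The sketch below is a generic, minimal illustration of that effect (not the correction procedure used in the tank work): it solves the current-modified linear dispersion relation for a depth-uniform, collinear current by Newton iteration, using illustrative wave period, depth and current values.

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def wavenumber_with_current(T, h, U=0.0, tol=1e-10, max_iter=200):
    """Solve the current-modified linear dispersion relation
    (omega - k*U)^2 = g*k*tanh(k*h) for wavenumber k, assuming a
    depth-uniform current U collinear with the waves (U < 0 opposing)."""
    omega = 2.0 * math.pi / T
    k = omega**2 / G                             # deep-water, no-current first guess
    for _ in range(max_iter):
        sigma = omega - k * U                    # intrinsic (relative) frequency
        f = sigma**2 - G * k * math.tanh(k * h)  # residual of the dispersion relation
        dfdk = (-2.0 * sigma * U
                - G * math.tanh(k * h)
                - G * k * h / math.cosh(k * h)**2)
        k_new = k - f / dfdk                     # Newton update
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

# Illustrative comparison for a 10 s wave in 45 m depth (values are placeholders)
for U in (0.0, +1.0, -1.0):                      # no current / following / opposing
    k = wavenumber_with_current(T=10.0, h=45.0, U=U)
    print(f"U = {U:+.1f} m/s  ->  wavelength = {2 * math.pi / k:.1f} m")
```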
The offshore renewable energy sector has challenging requirements related to the physical simulation of the ocean environment for the purpose of evaluating energy-generating technologies. In this paper the demands of the wave and tidal energy sectors are considered, with measurement and characterisation of the environment explored and replication of these conditions described. This review examines the process of advanced ocean environment replication from the sea to the tank, and rather than providing an exhaustive overview of all approaches it follows the rationale behind projects led by, or strongly connected to, the late Professor Ian Bryden. This gives an element of commonality to the motivations behind marine data acquisition programmes and the facilities constructed to take advantage of the resulting datasets and findings. This review presents a decade of flagship research, conducted in the United Kingdom, at the interfaces between physical oceanography, engineering simulation tools and industrial applications in the area of offshore renewable energy. Wave and tidal datasets are presented, with particular emphasis on the novel tidal measurement techniques developed for tidal energy characterisation in the Fall of Warness, Orkney, UK. Non-parametric wave spectra characterisation methodologies are applied to the European Marine Energy Centre's (EMEC) Billia Croo wave test site, giving complex and highly realistic site-specific directional inputs for simulation of wave energy sites and converters. Finally, the processes of recreating the resulting wave, tidal, and combined wave-current conditions in the FloWave Ocean Energy Research Facility are presented. The common motivations across measurement, characterisation, and test tank are discussed with conclusions drawn on the strengths, gaps and challenges associated with detailed site replication.
191
Urea-based fertilization strategies to reduce yield-scaled N oxides and enhance bread-making quality in a rainfed Mediterranean wheat crop
Nitrogen fertilization is essential to feed the increasing worldwide population through the enhancement of crop yields. On the other hand, N fertilizers can have a major impact on the environment, e.g. through the release of gases such as ammonia, nitric oxide or nitrous oxide to the atmosphere. Nitrous oxide is a powerful greenhouse gas and an ozone-depleting substance, while NO contributes to the formation of tropospheric ozone and acid rain. The main biochemical processes involved in the emissions of both N oxides are nitrification and denitrification. Urea is the most commonly used nitrogen fertilizer worldwide due to its low cost, high N content and ease of management during transport and storage. After application to the soil, it rapidly hydrolyses to release NH3 and carbon dioxide. The use of this NH4+-N based fertilizer can result in lower N2O emissions than NO3−-based fertilizers in agro-ecosystems where denitrification is a dominant soil process. However, greater N2O emissions can occur from urea under contrasting conditions, such as arable crops in semi-arid areas. This may be a result of the predominance of nitrification in semi-arid calcareous soils with low organic carbon content, which causes NH4+ oxidation to be the main N2O production pathway, even under irrigated conditions. According to Tierling and Kuhlmann, the accumulation of nitrite under non-denitrifying conditions plays a key role in the increase of N2O production, and this is favored at high soil pH. Therefore, it is essential to find mitigation strategies for N oxides in semi-arid calcareous soils, where emissions from the widespread urea fertilizer are significant. One possible approach is the use of nitrification and urease inhibitors (NIs and UIs) with urea. Dicyandiamide has been one of the most widely used NIs worldwide, but its use is currently under discussion because traces of this inhibitor were found in New Zealand milk products. Plant interception and uptake, followed by grazing by dairy cows, were potential routes for this contamination of milk. There is therefore interest in other NIs, such as the 2-(3,4-dimethyl-1H-pyrazol-1-yl) succinic acid isomeric mixture (DMPSA) or nitrapyrin. Neither of these two NIs has yet been commercialized in Europe, although nitrapyrin was introduced as a NI to the US market in the 1960s. The efficacy of nitrapyrin has been demonstrated in other areas of the world under a range of management and environmental conditions. DMPSA has been shown to reduce emissions of N oxides following calcium ammonium nitrate and ammonium sulphate applications to soil, mainly under irrigated or humid rainfed conditions. However, the efficacy of DMPSA in reducing emissions of N oxides has not yet been tested with urea fertilizer under rainfed semi-arid conditions. The use of NIs can result in negative trade-offs, e.g. increased NH3 volatilization.
Other possible mitigation strategies include the use of UIs such as N-butyl thiophosphorictriamide (NBPT), which delays the hydrolysis of urea, thus reducing NH3 emissions. Moreover, NBPT has also shown positive results in mitigating emissions of N oxides through the reduced availability of topsoil NH4+ and NO3−. However, its N oxides mitigation efficacy may be more dependent on environmental conditions, thus leading to a lower average performance than NIs. Therefore, it is important to evaluate the efficacy of NBPT in consecutive years in rainfed semi-arid areas, where high rainfall variability influences the effectiveness of N fertilizer mitigation strategies. In this sense, the use of double NI-UI inhibitors may lead to mitigation of both N oxides and NH3 emissions. The potential for the double-inhibitor approach to improve the mitigation efficacy of N oxides has yet to be proved under rainfed semi-arid conditions. Since NIs and UIs are known to improve the synchronization of the N applied with crop demand, other strategies such as splitting the N application should be compared against inhibitors. Previous studies have shown the potential of smaller, more frequent doses of fertilizer to enhance N recovery efficiency and decrease N losses. Some of these mitigation strategies can increase farm costs and decrease net margins for farmers, in comparison to the single application of urea without inhibitors. Consequently, possible improvements in farm benefits through increases in crop yield and quality, or improvements in NUE, which can be affected by inhibitors or by more split doses of fertilizer, must be evaluated together with N oxides emissions. A large number of previous studies have explored the effects of NIs and UIs on gaseous emissions and crop yields, but a complete overview including crop quality is lacking. Previous studies have shown the potential of inhibitors and N application timing to increase plant N concentrations and/or influence N remobilization or the protein composition of grain. In the case of bread-making wheat, an increase in grain protein content is closely linked to dough quality. In this context, a field experiment was established to compare several strategies based on urea fertilization, including the use of NIs and/or UIs, and split dressings of urea. We hypothesized that all of these strategies would improve the balance between mitigation of yield-scaled N oxides emissions, NUE, crop yield and bread-making quality, compared to conventionally managed urea application. Although their contribution to the GHG balance in semi-arid croplands is generally low, methane emissions, which also affect the GHG balance of agro-ecosystems, were also measured. The field experiment was located in the National Center of Irrigation Technology, "CENTER", in the Madrid region of Spain. According to the USDA Soil Taxonomy, the soil is a Typic Xerofluvent with a silt loam texture in the upper horizon. The main physico-chemical properties of the topsoil were: bulk density, 1.27 Mg m−3; water pH, 8.2; organic matter, 20.7 g kg−1; total N, 1.64 g kg−1; CaCO3, 8.16 g kg−1; extractable P, 28.4 mg kg−1; total K, 3.14 g kg−1. The site's mean annual air temperature and annual rainfall during the last 10 years were 14.1 °C and 393 mm, respectively. The average rainfall from November to July was 296 mm, while the mean soil temperature for this period was 11.8 °C. Data for daily rainfall and daily air and soil temperatures were obtained from the meteorological station located at the field site.
A field experiment was carried out from October 2015 to October 2017, including two wheat cropping seasons: year 1 and year 2. The same plots were used in the two years. A complete randomized block design with three replicates was used, with each plot covering an area of 64 m2. The application of fertilizers was adjusted to provide the equivalent of 120 kg total N ha−1 for all treatments during the cropping period. The different fertilizer treatments were: 1) urea applied in one dose (U); 2) urea+NBPT; 3) urea+DMPSA; 4) urea+NBPT+DMPSA (U+DI); 5) urea+nitrapyrin (U+NIT); 6) urea split in two applications (SU); and 7) a control with no N fertilization. The proportion of DMPSA in the fertilizers was 0.8% of the NH4+-N, whereas NBPT was applied at 0.13% of the ureic N. The NBPT- and DMPSA-based products were provided by EuroChem Agro in a granular form, and were homogeneously applied to the soil by hand. Nitrapyrin was applied at a rate of 0.35% of the applied N. The mixture U+NIT was obtained by dissolving U and nitrapyrin in water. The solution was then sprayed with a manual applicator. All fertilizers were applied to the soil surface at the tillering stage. In the case of the SU treatment, the second N fertilizer dose was applied at the beginning of stem elongation. Soil phosphorus and potassium concentrations were analyzed prior to the beginning of the experiment. No additional P or K fertilizer was applied, since the soil contents of both were sufficient for wheat production. The field was sown with winter wheat on 27th October 2015 and with Triticum aestivum L. 'Marcopolo' on 11th November 2016, at 210 kg ha−1. A cultivator pass was performed after sowing and also to incorporate the wheat residues after harvesting. The field was kept free of weeds, pests and diseases, following local practices. Because of the low rainfall during the spring of year 2, three 20 mm irrigation events were performed using sprinklers. During the first 30 days following each fertilizer application, sampling for gaseous emissions occurred 2–3 times per week, since this was considered the most critical period for high fluxes. Afterwards, the frequency of sampling was decreased progressively. The GHG fluxes were measured using the closed chamber technique, as described in detail by Guardia et al. One chamber per plot was used for this analysis. The chambers were hermetically closed for 1 h by fitting them into stainless steel rings, which had been inserted into the soil to a depth of 10 cm to minimize the lateral diffusion of gases and to avoid the soil disturbance associated with the insertion of the chambers into the soil. The plants were cut when their height surpassed that of the chamber. The rings were only removed during management events, and GHG measurements were always taken with wheat plants inside the chamber. Gas samples were taken at the same time of day in order to minimize any effects of diurnal variations in the emissions. Gas samples were taken via a septum in the chamber lid and placed in 20 ml pre-evacuated vials at 0, 30 and 60 min to test the linearity of headspace gas accumulation. The concentrations of N2O and CH4 were quantified by gas chromatography, using an HP-6890 gas chromatograph equipped with a headspace autoanalyzer, both from Agilent Technologies. HP Plot-Q capillary columns transported the gas samples to a 63Ni electron-capture detector to analyze the N2O concentrations and to a flame-ionization detector. The increases in N2O and CH4 concentrations within the chamber headspace were generally linear during the 1 h sampling period. In the case of nonlinear fluxes, linear
regressions were performed, since it has been described as the recommended option for three sampling points.NO fluxes were measured using a gas flow-through system on the same days as the N2O measurements.One chamber per plot was used for this analysis.In this case, the interior of the chamber was covered with Teflon® to minimize the reactions of NOx with the walls and the chamber had inlet and outlet holes.The nitric oxide was analysed using a chemiluminiscence detector.Air was passed through the headspace of the chamber, and the gas samples were pumped from the chambers at a constant flow rate to the detection instruments via Teflon® tubing.The ambient air concentration was measured between each gas sampling.As proposed by Kim et al., the NO flux was calculated from a mass balance equation, considering the flow rate of the air through the chamber and the increase in NO concentration with respect to the control when the steady state concentration was reached.Soil samples were taken concurrently with gas samples in order to determine the moisture content, NH4+-N and NO3–-N concentrations, and relate them to the gaseous emissions.Three soil cores were randomly sampled in each microplot and then mixed and homogenized in the laboratory.The soil NH4+-N and NO3–-N concentrations were analyzed using 8 g of soil extracted with 50 mL of KCl and measured by automated colorimetric determination using a flow injection analyzer with a UV–V spectrophotometer detector.Water-filled pore space was calculated by dividing the volumetric water content by the total soil porosity, assuming a particle density of 2.65 g cm−3.The gravimetric water content was determined by oven-drying soil samples at 105 °C with a MA30 Sartorius® moisture analyzer.The wheat was harvested on 21st June 2016 and 21st June 2017 with a research plot combine.Previous to this, the plants of one row were harvested to determine the total N content of grain and straw, which were measured using a TruMac CN Leco elemental analyzer.Grain proteins were sequentially extracted using the modified Osborne method.Samples were analysed using a Beckman ® 2100 P/ACE system controlled by a System Gold Software version 810.Proteins were detected by UV absorbance at 214 nm with a photo diode array detector.As described by Ronda et al., in order to reduce the lack of reproducibility usually obtained in electrophoretic analysis, the lys-tyr-lys tripeptide was used as internal standard.Cumulative gas emissions during the experimental period were calculated by linear interpolation between sampling dates.The global-warming potential of N2O and CH4 emissions was calculated in units of CO2 equivalents over a 100-year time horizon.A radiative forcing potential relative to CO2 of 265 was used for N2O and 28 for CH4.Greenhouse gas intensity and yield-scaled NO emissions were calculated as the ratios of GWP to grain yield and NO-N emissions to grain yield, respectively.The N2O and NO emission factors were calculated as the ratio of the cumulative emissions to the total synthetic N applied.The N2O and NO mitigation percentages were calculated using the EFs.The NUE was calculated as the ratio of the total N in aboveground biomass in each fertilized treatment to the total N applied through synthetic fertilizer.The N surplus of fertilized treatments was calculated as the N application minus the aboveground N uptake.Analyses of variance were performed using Statgraphics Plus 5.1.Data distribution normality and variance uniformity were previously assessed using the 
Shapiro-Wilk test and Levene’s statistic, respectively, and log-transformed when necessary.Means were separated by Least Significant Difference test at P < 0.05.For non-normally distributed data, the Kruskal–Wallis test was used on non-transformed data to evaluate differences at P < 0.05.Simple Linear Regression analyses were performed to determine the relationships between N2O-N, NO-N, and CH4-C fluxes with soil NH4+-N, NO3−-N, WFPS and soil temperature, as well as among some yield/quality variables.Average soil temperatures at 10 cm depth and rainfall distribution throughout both years are shown in Fig. 1.Total precipitation over the wheat cropping cycle was 319 mm and 272 mm for year 1 and 2, respectively.In the February-July period, the accumulated rainfall was 243 mm and 99 mm for year 1 and 2, respectively.Soil WFPS in the February-July period ranged from 11% to 60% and from 3% to 50%.Mineral N concentrations in the topsoil increased markedly after fertilization.During year 1, treatments containing NIs generally increased average soil NH4+ concentrations in comparison to U, particularly from the stem elongation stage.In agreement, average NO3− concentrations decreased in these treatments with respect to U, particularly during tillering and stem elongation.The U + NBPT treatment generally reduced the average NH4+ and NO3− concentrations, while SU reduced mineral N concentrations until the second fertiliser dressing, after which NH4+ and NO3- contents increased.In year 2, the treatments containing NIs generally resulted in greater soil NH4+ concentrations and lower NO3− concentrations.The main differences with respect to year 1 occurred for U + NBPT, which only decreased mineral N contents during the first period.Mineral N concentrations after flowering for year 2 were significantly higher than those of year 1.Specifically, the soil NO3− content after harvesting reached 43 mg N kg soil-1, much higher than that of the previous year.Emissions of N oxides from mid-February to the end of May, including fertilization events in each year, are shown in Figs. 
4 and 5. Nitrous oxide emissions ranged from -0.12 to 0.94 mg N m−2 d−1 and from -0.14 to 1.07 mg N m−2 d−1. In year 1, N2O peaked on 25th March, reaching 0.35 mg m−2 d−1 on average for the fertilized treatments. In year 2, the main increases were observed 15 days, 67 days and 76 days after fertilization. The U treatment resulted in the highest N2O EFs in both years. During the first year, U resulted in significantly higher cumulative N2O losses than the inhibitor-based treatments or SU. The N2O cumulative fluxes from SU were greater than those of U+DMPSA, but similar to those of U+NIT, U+DI or U+NBPT. In the second year, U+NBPT and SU did not decrease N2O cumulative losses in comparison to U. The U+DMPSA treatment was the most effective mitigation treatment of all inhibitor-based strategies, with significantly lower emissions than U+NIT. At the end of this second year, and after a rainfall event in mid-October, an increase in N2O emissions was noticed. This peak was concurrent with an increase in soil moisture. No significant differences between treatments were observed in this peak. On average, cumulative N2O fluxes were increased by 36% in this second year compared to those in year 1. Regarding the relationship of N2O fluxes with soil properties, in year 2 daily N2O fluxes were positively correlated with WFPS and negatively with soil NH4+. These significant correlations were not observed in year 1. Nitric oxide emissions ranged from -0.36 to 7.76 mg N m−2 d−1 and from -1.31 to 21.2 mg N m−2 d−1. Nitric oxide peaks were generally concurrent with those of N2O. The NO EFs ranged from 0.2% to 1.4% and from 0.0% to 1.6%, with the highest values corresponding to U. As in the case of N2O, U resulted in the highest NO cumulative emissions in year 1, while U+DMPSA led to the lowest emissions of the N-fertilized treatments, being significantly lower than those of U+NBPT, U+NIT and SU. In the second year, U+DMPSA was also the N fertilizer treatment that caused the lowest cumulative emissions, which were even lower than those of the control. The U+DI and U+NIT treatments also decreased cumulative NO fluxes with respect to U, but SU and U+NBPT did not decrease NO emissions in comparison to U, as a result of high emissions after irrigation events. A strong and positive correlation between N2O and NO fluxes was found. Regarding soil properties, cumulative NO fluxes correlated with mean soil NH4+ contents, while daily NO fluxes correlated with NO3− concentrations throughout year 2. Methane fluxes ranged from -1.70 to 2.60 mg m−2 d−1 and from -1.62 to 1.69 mg m−2 d−1. The soil acted as a CH4 sink on most sampling dates, although emission peaks of CH4 were observed during both years about one month after N fertilization. No significant differences in cumulative CH4 oxidation were reported between treatments in either of the two years. If the GWP is considered for both GHGs, CH4 uptake by the soil was only 4–13% of N2O emissions; therefore, the net result is the emission of 70.3 ± 3.6 kg CO2-eq ha−1 and 97.8 ± 4.7 kg CO2-eq ha−1 in years 1 and 2, respectively. Average grain yields were 2850 and 845 kg ha−1, while wheat straw yields were 6095 and 6610 kg ha−1, in years 1 and 2, respectively. Even though there were no significant differences between fertilized treatments for grain or straw yield during the first year, yields from the control plots were significantly lower than those of the fertilized treatments. During year 1, the average grain yield was slightly above the average regional value. Conversely, in the following wheat cropping season, grain yields were much lower than
in year 1.Only SU, U+DMPSA, control and U+NIT exceeded 1000 kg grain ha−1 and showed a positive response with respect to U.During year 2, straw production was similar for all treatments, including the control treatment, with the exception of SU which had the highest straw yield.Average grain N contents were 2.8% and 3.2% in year 1 and 2, respectively.The use of different wheat varieties in each year means that the differences in crop yields and quality between years were not only influenced by the meteorological conditions, but also by the different genetic characteristics of both cultivars.In the first year, grain from the control had the lowest protein content.U+DI significantly increased the grain protein content in comparison to U+DMPSA, U, SU and U+NIT, with U+NBPT having an intermediate value.In the second year, the results were similar, although U and U+NBPT had similar grain protein content to U+DI.Further, the control did not result in a lower grain protein content than the SU, U+NIT or U+DMPSA treatments.In this cropping season, grain yield and grain N content were negatively correlated.The U+DI treatment had the highest total gliadin and glutenin concentrations in both years, and was significantly higher than that of some of the other fertilized treatments.However, differences between treatments regarding gluten proteins composition were generally small.On average, gliadins accounted for 64.6% and 68.3% of total gluten proteins in years 1 and 2, respectively.The corresponding gliadin to glutenin ratios were 1.9 and 2.3.Grain N content was positively correlated with both gliadin and glutenin contents in both years.In the first year, U+DI led to the numerically highest NUE value, while SU decreased N efficiency compared to U+NBPT and U+DI.Regarding the N surplus, the treatments involving NBPT had the lowest surpluses while the highest value was reported for SU in year 1.In the second year, SU and U resulted in the highest and lowest NUEs, respectively, with the rest of the treatments having intermediate results.The average NUEs were 62.2% and 29.8% in years 1 and 2, respectively.In agreement, the average N surplus in the second year was higher than that in year 1.In year 1, GHGI and YSNO emissions followed a similar pattern as N2O and NO emissions, respectively.In the second year, U and U+NBPT led to the highest GHGIs, while SU significantly decreased this index.NI-based treatments resulted in the lowest GHGIs, but the double inhibitor increased the amount of CO2-eq emitted per kilogram of grain yield, compared to U+DMPSA.YSNO emissions were decreased in all fertilized treatments in comparison to U, with U+DMPSA being the most effective N-fertilized option.Average GHGI and YSNO were increased in year 2 with respect to year 1 by factors of 7.6 and 9.1, respectively.The new inhibitor DMPSA was consistently efficient in mitigating N2O losses in both year 1 and 2.High mitigation efficacies with the use of this NI, in comparison to those reported by the meta-analysis of Gilsanz et al., provides evidence of the importance of nitrification as a dominant processes generating N2O in these calcareous and low organic C content soils and the effectiveness of DMPSA.In the typical rainfall cropping season, the higher mitigation efficacy of U + DMPSA for NO than for N2O also supports the relevance of nitrification, which has been suggested as the main source of NO.The correlation between NH4+ depletion and N2O emissions and by the higher NO fluxes compared to those of N2O in both years are also 
in agreement with the importance of nitrification.Indeed, average NO emissions exceeded those of N2O by factors of 2.8 and 3.0 in years 1 and 2, respectively.Field studies measuring both N2O and NO emissions after DMPSA application are currently limited to irrigated conditions and the use of calcium ammonium nitrate, therefore, this study demonstrates that DMPSA is also effective in mitigating both N oxides from urea in rainfed crops.In comparison to urea only, the addition of nitrapyrin to urea significantly reduced cumulative N2O and NO emissions.Even though the meta-analysis of Thapa et al. reported similar mitigation efficacies for DMPP and nitrapyrin under a wide range of environmental conditions, in our study DMPSA always surpassed nitrapyrin regarding N oxides abatement.We hypothesize that the application of nitrapyrin as liquid solution may have enhanced the release of N oxides particularly after fertilization thus decreasing its efficacy, since soil moisture is a limiting factor for N oxides emissions, particularly in rainfed semi-arid crops.This effect was noticed in the first year, especially for NO fluxes, which were similar or even higher than those of urea in the first weeks after N application.However, both NIs maintained their mitigation efficacy consistently through cropping seasons for one to more than two months after N application, when the highest peaks of N oxides occurred.This was supported by the increments in average soil NH4+ concentrations and decreases in average NO3− contents after tillering, in comparison to urea alone.However, this was not observed for the N2O peak in October 2017, 8 months after fertilization.Therefore, we did not observe any significant residual effect of any of the NIs applied, contrary to some previous findings.The occurrence of N oxides pulses after the first rainfall events in autumn, arising from unusually high amounts of residual N has been previously described by other authors.Contrary to NIs, the effectiveness of the UI, NBPT, in mitigating N oxides emissions was greatly influenced by the meteorological conditions in each year.Indeed, the U + NBPT treatment significantly decreased N2O and NO emissions during the first year, compared to urea.These values were lower than those obtained by Abalos et al. 
under similar conditions, and also lower than those for DMPSA.However, during the second year the cumulative N emissions from U+NBPT were not significantly different from those from the urea treatment.In spite of the high NBPT effectiveness in reducing emissions prior to wheat flowering, we observed marked N oxides pulses after flowering because of the high substrate availability resulting from the lower wheat N uptake.When irrigation water was applied, this residual soil N resulted in marked increases in N oxides emission.Since these peaks occurred 67 and 76 days after N fertilization, the more temporary effect of NBPT had disappeared, in contrast to the NIs.Our results demonstrate that due to the climatic variability of rainfed semi-arid crops and the transient effect of NBPT, the potential of this strategy to mitigate N oxides emissions is uncertain and, in any case, lower than that of NIs.Therefore, the possible reduction of the efficacy of mitigation practices under a global change scenario should be considered for the implementation of cost-effective practices by stakeholders and policymakers.The mixture of UI and NI was developed to achieve a reduction in total N losses combining the beneficial effect of UIs on NH3 volatilization abatement and that of NIs on the reduction of emissions of N oxides and N leaching.In our study, U+DMPSA and U+DI generally resulted in similar N2O and NO emissions.This result and the observed tendencies are in agreement with the study of Zhao et al., which reported that the effectiveness of DMPP alone significantly surpassed that of DMPP+NBPT in a calcareous soil.They speculated that this effect could be driven by the decomposition of NBPT and by other side reactions when both inhibitors are mixed.The low N2O fluxes in this rainfed semi-arid crop masked this effect, so the significant reduction of N oxides together with the NH3 mitigation potential of NBPT should also be taken into account, since the amount of N loss is often greater than that of other gaseous forms.As for the U+NBPT treatment, our results indicated that SU only mitigated N oxides emissions during the first year.Therefore, its effectiveness is highly dependent on rainfall distribution after fertilization events.Our results suggest that in dry years with water scarcity during tillering and stem elongation, splitting the application of U may enhance the risk for N oxides pulses after subsequent rainfall/irrigation events, as a result of the inefficient N uptake by plants which leads to increased opportunities for microbial N transformations in soil.In addition, split applications under conditions of fast urea hydrolysis and subsequent nitrification may cause splitting N to be inefficient as a surface-scaled N2O mitigation strategy, as observed by Venterea et al. 
in a rainfed maize crop.Since splitting N can have a negative effect on N2O losses reduction but also enhance crop yields, the assessment of yield-scaled emissions, grain yield and quality, surplus and N efficiency is needed to gain a complete view of the sustainability of these urea-based strategies.The calculated N surpluses revealed that in year 1 the crop obtained more N from the soil, via net mineralization.The scale of the calculated N surpluses for both years are within the range of “no effect” on yield scaled-emissions as suggested by Van Groenigen et al.However, both GHGI and YSNO emissions were markedly increased in year 2, compared to year 1.The grain yields obtained in year 2 were visibly affected by severe drought conditions.The setting-up of the irrigation system occurred after stem elongation stage, so the scarcity of soil moisture during the stages of tillering and stem elongation, which are critical for N uptake, resulted in a devastating effect on grain yield and a high content of soil residual N after these stages, some of which was then lost to the environment.In this dry year, we observed a positive response of some N management strategies on grain yields, compared to urea.We hypothesize that under optimum conditions, the response of yield to fertilization management is often masked.However, under limiting conditions, responses to N management can be detected.However, this depends on the extent of the limiting conditions, which in year 2 resulted in very poor grain yields.Under extreme drought conditions, N fertilization was a useless and even counter-productive strategy, as shown by the similar grain yields in the control and the top-yielding treatments.Protein content was clearly influenced by the common negative relationship between grain N content and grain yield that we also noticed.In both years, the use of NBPT showed the potential to biofortify bread-making wheat through the enhancement of grain protein.Previous studies have shown that NH4+-N based plant nutrition increases protein content.We hypothesize that the slow-release effect of NBPT may prolong NH4+ availability, thus raising grain protein.In addition, the potential direct uptake of NBPT by the crop could have promoted N remobilization, as shown by Artola et al. 
and Cruchaga et al., leading to enhanced N content in the grain.Gluten proteins, which are related to bread-making quality, followed a similar tendency as total grain proteins, reaching maximum values in U + DI treatment.The increments in grain protein were not observed for DMPSA, in agreement with Huérfano et al.All of the NIs showed a neutral effect on the composition of gluten proteins and therefore rheological properties.Contrary to other authors, we did not observe any effect of splitting N fertilization on the composition of gluten proteins.The cost of purchasing inhibitors is one of the main barriers to their widespread use.Consequently, it is important to evaluate the effects of these products on N efficiency, crop yield and quality in order to obtain a complete view of their potential advantages, in addition to the documented public economic benefit of reducing the environmental impacts of N pollutants.In the typical rainfall year, the U + DI treatment gave the best balance between N oxides mitigation, NUE, N surplus and protein content, while no effect of N fertilized treatments was observed regarding grain yield.The high NUE values and enhanced protein content in NBPT-based treatments could be attributed to the abatement of NH3 losses as widely reported by previous studies, which are quantitatively more relevant, from an agronomic perspective, than those of N oxides.The U + DMPSA treatment resulted in the lowest GHGI and YSNO emissions, suggesting it is a promising mitigation strategy.In the second year, U + DMPSA was again the most effective GHGI and YSNO mitigation strategy, while high yields in the SU treatment offset partially the high surface-scaled emissions.The enhancement of grain yields with the tillering-stem elongation fractionation of fertilizer dose, which was also observed by López-Bellido et al., only occurred in the driest year in our experiment.In agreement with these authors, SU also increased NUE nearly doubling that of other fertilizer treatments, although differences were only significant when compared to urea alone.Our results showed that all alternative treatments decreased surface-scaled and yield-scaled GHG and NO emissions, compared to urea alone.In the first year, the use of the double NBPT + DMPSA inhibitor led to the best balance between mitigation of yield-scaled N oxides emissions, N efficiency, and crop yield and bread-making quality.In the following dry year, the grain yields did not respond positively to N fertilization, although NIs increased grain yield in comparison to urea only.During this second year, the urea + NBPT treatment was not an effective mitigation strategy since the main N oxides peaks occurred when its effect had run out.Splitting urea should be recommended rather than a single dressing application of urea, although its efficacy was lower in our study than that of the inhibitors.During dry years, splitting urea applications can improve grain yields but with the risk of increasing N oxides losses.The effectiveness of nitrapyrin in mitigating yield-scaled emissions was generally exceeded by that of DMPSA.The application method and rate of nitrapyrin application, which is still not used in Europe, should be improved to enhance its mitigation potential.The use of DMPSA with urea was the most effective yield-scaled emissions mitigation option, regardless of rainfall conditions.The use of NBPT showed the potential to biofortify wheat through the enhancement of grain protein, including those related to bread-making quality.Our results 
suggest that, in spite of the lack of any effect of the inhibitors on grain yield during typical rainfall years, the enhancement of NUE and/or grain quality, and also the increase in grain yield under drought conditions, can help offset the price of these inhibitors for farmers. Our results also showed that if the water supply is insufficient during the tillering and stem elongation stages, the application of N fertilizers can even be counter-productive, leading to higher yield-scaled emissions, N surpluses and low NUEs. These low N uptake efficiencies should be avoided to prevent severe gross margin penalties and pulses of N oxides emissions.
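As a worked illustration of the emission indices defined in the methods (GWP from N2O and CH4 using radiative forcing factors of 265 and 28, GHGI and yield-scaled NO as ratios to grain yield, emission factors relative to the synthetic N applied, NUE and N surplus), the short Python sketch below applies those formulas to placeholder numbers; the input values are hypothetical and are not data from this experiment.

```python
# Illustrative application of the indices defined in the methods section.
# All input values below are hypothetical placeholders, not measured data.

GWP_N2O, GWP_CH4 = 265, 28          # 100-year radiative forcing relative to CO2

def indices(n2o_kg_ha, ch4_kg_ha, no_kg_ha, grain_kg_ha,
            n_applied_kg_ha, n_uptake_kg_ha):
    gwp = n2o_kg_ha * GWP_N2O + ch4_kg_ha * GWP_CH4      # kg CO2-eq ha-1
    return {
        "GWP (kg CO2-eq/ha)": gwp,
        "GHGI (kg CO2-eq/kg grain)": gwp / grain_kg_ha,
        "YSNO (g NO-N/kg grain)": 1000.0 * no_kg_ha / grain_kg_ha,
        "N2O EF (%)": 100.0 * n2o_kg_ha / n_applied_kg_ha,
        "NO EF (%)": 100.0 * no_kg_ha / n_applied_kg_ha,
        "NUE (%)": 100.0 * n_uptake_kg_ha / n_applied_kg_ha,
        "N surplus (kg N/ha)": n_applied_kg_ha - n_uptake_kg_ha,
    }

example = indices(n2o_kg_ha=0.30, ch4_kg_ha=-0.50, no_kg_ha=0.90,
                  grain_kg_ha=2850, n_applied_kg_ha=120, n_uptake_kg_ha=75)
for name, value in example.items():
    print(f"{name}: {value:.2f}")
```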
Urea fertilizer applications to calcareous soils can result in significant nitrous oxide (N2O) and nitric oxide (NO) emissions, predominantly via nitrification rather than denitrification. To address this, we explored several mitigation strategies based on improved urea management in a rainfed winter wheat (Triticum aestivum L.) crop during two consecutive cropping seasons with contrasting rainfall quantities and distribution. The strategies we investigated included the split application of urea at top dressing, the use of nitrification inhibitors (e.g. 2-(3,4-dimethyl-1H-pyrazol-1-yl) succinic acid isomeric mixture, DMPSA, and nitrapyrin), the urease inhibitor N-butyl thiophosphorictriamide (NBPT), or the double inhibitor DMPSA + NBPT. Emissions of N2O, NO, methane (CH4), as well as measurements of grain and straw yield and bread-making quality (protein content, reserve protein composition: glutenins and gliadins) were measured. Nitrogen (N) use efficiency (NUE) and N surplus were also calculated. Results were affected by rainfall, since the first cropping season experienced typical rainfall quantity and distribution, whilst the second cropping season was very dry, thus increasing significantly the yield-scaled emissions and N surplus, and markedly decreasing the NUE. In comparison to the single application of urea without inhibitors, all treatments generally decreased surface-scaled and yield-scaled emissions, with urea+DMPSA being the most effective and consistent mitigation option. Split urea and NBPT did not mitigate surface-scaled emissions in the dry cropping season, because of the marked peaks in N oxides after flowering, caused by inefficient crop N uptake. In the typical rainfall cropping season, the use of the double NBPT+DMPSA inhibitor led to the best balance between mitigation of yield-scaled N oxides emissions, N efficiency, crop yield and bread-making quality (i.e. increments in total protein, gliadins and glutenins). We did not observe any effect of nitrification inhibitors on grain yield (except in the dry cropping season) or the composition of gluten proteins. Our results suggest that the use of DMPSA with or without NBPT should be recommended to mitigate yield-scaled emissions of N oxides in rainfed semi-arid crops.
192
Intergenerational differences in beliefs about healthy eating among carers of left-behind children in rural China: A qualitative study
Eating habits developed during childhood can persist into adolescence and adulthood, influencing individual growth, development, and health in later life.For children, their eating decisions are often made within the family context, which is the most influential aspect of the immediate social context."The family eating circumstances that caregivers provide during early childhood, including their feeding practices, their own eating patterns, and their beliefs and attitudes about healthy eating can directly and indirectly influence and shape children's eating habits. "Household living arrangements are an important feature of family circumstances that contributes to the formation of children's eating practices.Kinship care of children, especially by grandparents, is a common arrangement in developing countries.Grandparents therefore play a key childcare role in multigenerational and ‘skipped generation’ households.Being cared for by family members other than parents may take a toll on child development.For example, one qualitative study drawing on in-depth interviews with 12 parents and 11 grandparents in Beijing urban areas suggested that young children aged 3–6 years and cared for by grandparents tended to develop unhealthy eating habits.A study using the China Health and Nutrition Survey 2006 showed that adolescents living in extended families, where children live with their parents and grandparents were more likely to develop unhealthy food preferences compared to those living in nuclear families.These intergenerational differences in forming eating habits may be due to differences in generations across a range of socio-economic variables, for example, educational attainment, economic status and early life experiences involving food.Contemporary China has been undergoing unprecedented rural-to-urban migration, which has tremendously altered household living arrangements.Around 61 million children live apart from either one or both parents."About one fourth of left-behind children share a household with a parent and grandparents, while one third live with grandparents only.LBC cared for by grandparents in rural China tend to be subject to potential nutritional problems.For example, a large survey conducted in the rural areas of seven Chinese provinces showed that non-parent caregivers had relatively poorer nutrition knowledge regarding the intake of certain nutrients than parent caregivers of non-LBC."Another survey using a sample of LBC and non-LBC from 10 rural communities in 5 provinces across mid-south China showed that grandparents tended to pay less attention to children's diets and were unable to provide meals for LBC on time during farming seasons.Although these results suggest that LBC are prone to unhealthy eating, they fail to provide in-depth information about how caregivers understand and manage eating practices for LBC.We argue that intergenerational differences in beliefs about dietary-related behaviours and nutritional intakes, the focus of this work, may put children at different risks for nutritional deficiencies."The aim of the present study was to explore caregivers' beliefs about healthy eating for LBC in rural China.A qualitative study design was used to provide rich and in-depth information about the complex phenomenon of feeding practices and eating behaviours among children and their caregivers, and to advance an understanding of social and behavioural aspects of food and eating.We employed a social constructionist approach that viewed people as active agents who 
shape and create meanings and understandings within certain social, cultural and historical contexts.This perspective recognises that people construct subjective and complex meaning pertaining to food, eating and nutrition through their personal experiences and interactions with other people and their environments.Understanding of caregivers’ perspectives and experiences related to healthy eating could inform the development of nutrition-related polices and interventions for children in rural China."This study was conducted from September 2013 to February 2014 in a rural township in Henan Province, the most populous and traditionally one of the largest migration-sending central areas of the People's Republic of China.The annual per capita disposable income of urban residents in 2012 was 19,408 RMB and 7432 RMB for rural residents, which is lower than the national average.The agricultural township has a population of 55,000 and includes 30 villages under its administrative jurisdiction.Child participants were recruited from a rural primary school.Children who participated in this study were from 7 villages that are geographically close.To be eligible, LBC had to have at least one migrant parent living away for employment reasons.Non-LBC had to be children with both parents currently living at home.Both LBC and non-LBC were aged over 6 years old because this is the normal youngest age of commencement of primary education in China, and an age at which children have the cognitive and language capabilities to be interviewed.Children as young as 6 years old can demonstrate a basic understanding of the purposes of research and what is expected of them during the research process.The caregivers for eligible children were invited to take part.The sample was purposive in order to achieve maximum variation of age, gender, and family structures.The sample size was not fixed until saturation occurred, which was defined as ‘data adequacy’, meaning that recruitment stopped when no new information was obtained from additional participants.Ethical approval was obtained from the University of Manchester Research Ethics Committee.Child participants were included after gaining informed consent from their caregivers, as well as their own assent, either in writing or by verbal audio-recording from illiterate participants.Pseudonyms have been used to preserve participants’ anonymity.Face-to-face semi-structured interviews were conducted with caregivers, either individually or together with their partners, and their children.Prior to interviews, children were encouraged to keep diaries about their daily eating and activities for around one week.Diaries were used to establish rapport with children and their caregivers, and as a starting point for discussions between participants and the researcher."In addition, the diaries were used to facilitate discussions in further interviews with caregivers and to check consistencies and inconsistencies in caregivers' interview accounts regarding children's diets.Children were provided with pens and interesting and user-friendly notebooks, which may have rendered the children more predisposed to completing the diaries.The first author explained to the children how to complete the diaries using plain language and encouraged them to ask questions and/or air any concerns.She asked caregivers to encourage their children to complete the diaries, but not to influence their content.In addition, the first author visited the school twice a week to remind and encourage children to 
continue completing their diaries. The interviews took the form of informal conversations in which the interviewer asked open questions about children's food and diet using a topic guide. The topic guide contained open-ended questions designed to elicit caregivers' understanding of healthy eating for children, as well as the related feeding practices for the children under their care. The primary questions included: "What do you think of healthy eating/eating well for children?" (with probes such as "Can you name some food items that you think are healthy, and why do you think they are healthy?") and "How do you manage the food for the children you are looking after?" (with probes covering food preparation, meal places and meal times). Caregivers were encouraged to elaborate on their answers and to raise additional topics that they considered relevant. Interviews were conducted in Chinese. Each interview lasted approximately 1–1.5 h. All interviews were conducted by the first author in the participants' homes and were audio-recorded with their permission, and subsequently transcribed verbatim in Chinese by a different person outside the research team. The first author checked all the Chinese transcripts to minimise data loss. There were two sources of data: children's diaries and interviews. The primary data source was the interviews with caregivers and children. Children's diaries were not designed to collect reliable data about precise consumption but were used principally to build rapport with participants and to facilitate further interviews. However, they provided an overall picture of children's daily eating and were also compared with caregivers' accounts of children's diets during interviews. The food items and frequency of consumption were summarised to explore consistencies and inconsistencies between the food items that children mentioned in their diaries and caregivers' stated views about healthy food.
"As children's reports in diaries were consistent with interview accounts, the data were included in this paper to provide additional detail and context.Principles and procedures of the constant comparative methods guided data analysis, following transcription and entry into the qualitative analysis computer program Nvivo 10.Concurrent data collection and data analysis occurred with codes and categories being inductively developed from the data.Analysis involved identifying codes and their properties and dimensions, grouping these codes to create categories, systematically comparing and contrasting the codes and examining the connections between the categories and subcategories.Data analyses were initially conducted in Chinese in order to avoid misunderstanding and to minimise the risk of losing participants’ original meanings.Each transcript was read through carefully several times so that the first author could familiarise herself with the data.During this process, recurring themes were noted as part of the initial coding process.During the second stage, the first author re-read and coded each transcript line-by-line within the Nvivo software package.Data were fractured into segments and distinct codes.During this process, joint meetings were scheduled with co-authors to review the coding, discuss possible meanings and achieve consensus on categorisation and interpretation.Data related to emerging themes were translated into English to facilitate review and discussion with co-authors.The translated versions and the original Chinese versions were checked by an independent and bilingual person outside the research team.Then, relationships between categories were linked on a conceptual level rather than on a descriptive level, concerning conditions, context, action/interactions and consequences.Once all interviews had been individually analysed, the identified themes were integrated with subsequent comparative analysis.Twenty-six children aged between 6 and 12 years old were recruited: 21 LBC and 5 non-LBC; 12 of 21 LBC had been left behind by both parents, 3 had been left behind only by the mother, and 6 had been left behind only by the father.Twenty-four of the children kept diaries.Table 2 presents the characteristics of the caregivers.Of the 21 grandparents, 15 were aged over 60 years old, and 6 were aged over 70.Only two of them attended primary school and the rest had never attended to school.The other caregivers had up to a middle school education.‘Healthy eating’ and ‘eating well’ were broadly interpreted by participants and were used interchangeably during the interviews.Caregivers illustrated individual food items that were believed to be ‘good for children’ when asked about what they thought as eating well/healthy.The grandparent generation emphasised the importance of home-grown foods, such as starchy foods and vegetables.For the parent generation, the foods that were most valued were animal products from local markets, which were described as associated with household economic status.There were distinct differences in expectations and concerns regarding food between these two generations.The grandparent generation expected food as a means for avoiding hunger and they viewed meat as a special treat for special occasions, while the parent generation viewed food, especially meat, as a sign of economic status within their communities.Grandparents tended to be concerned about having enough food for the children they cared for, while the parent generation paid more attention to food 
safety and tended to express concerns about the lack of access to fresh milk for their children growing up in rural areas, compared to children growing up in urban areas."Food items and frequency of consumption, as noted in children's diaries, were grouped into three categories based on children's living arrangements. "Table 4 presented as contextual data for interpretation of caregivers' interview accounts regarding children's daily eating. "Food items reported in children's food diaries were broadly consistent with the descriptions in the interviews with caregivers.There were no distinct differences in the consumption of starchy foods and vegetables between children left in the care of the grandparent and parent generations.Protein-source foods including animal products, eggs and milk were mentioned more often for each meal among children left in the care of the parent generation, compared to children who were cared for by the grandparent generation.The food items most frequently mentioned by grandparents as ‘good for children’ were steamed buns, noodles, meat, rice and eggs."According to grandparents, starchy foods were more important than other food groups for children's growth.Starchy/staple foods are an essential part of Chinese dietary culture, which can include wheat and wheat products, rice and rice products, and foods of other grains.One paternal grandfather of a 10-year-old left behind boy believed that starchy foods could help children grow ‘taller and bulkier’:He is too thin for his age.We try our best to let him eat more staple food, like steamed buns and noodles.One of his playmates in this village can eat a couple of bowls of noodles at a time.So he is quite bulky.How could you possibly become taller and stronger without staple food?, "Similarly, one grandmother of a six-year-old left-behind girl believed that the intake of starchy foods was essential for meeting children's nutritional needs.She illustrated this by comparing her two grandchildren:She is not as tall and strong as her little brother.She is quite fussy about the food.She does not like steamed buns, rice and sweet potatoes.At least her little brother eats steamed buns.How can her nutritional needs be possibly met?,Four grandparents spontaneously referred to their past experiences during the Great Famine when asked about ‘good food for children’.The Great Famine took place in China from 1959 to 1961, leading to 30 million deaths.The province where this study was conducted experienced a severe reduction in grain, which was an essential part of the Chinese diet at the time."The experience of starvation during their grandparents' early years of life appeared to have shaped their values about food, which in turn may have influenced their feeding practices.One paternal grandfather mentioned that his left-behind grandson constantly complained that ‘the food was not nice’ and ‘we have noodles all day’.The grandfather argued for the importance of starchy foods and believed that modern food was much better than before:Modern people eat way better than before .Nowadays they have very nice steamed buns.But steamed buns alone are not good enough for them.They want more, such as delicious dishes.But you know in the old days, we did not even have salt.We were so hungry then that we had to steal wheat sprouts to feed our empty stomachs.,Steamed buns were the food item most frequently described by grandparents as ‘good for children.’, "They were considered to be ‘nice’ from the grandparents' point of view, as they were rarely 
available during the Great Famine:",When seeing others have nice steamed buns , I was wondering when I could possibly have one of my own.You know in the old days, even high-ranking officials had no access to nice steamed buns and dishes like there are today.I tried my best to live one more day just for one more steamed bun.,One key reason why grandparents emphasised the value of starchy foods was that it could prevent feelings of hunger:Even watery rice was not available at all during the Great Famine.It was all about water-boiled edible wild herbs in the communal kitchens.How could you not be hungry by only eating this?,Most of our grandparent participants experienced the Great Famine which was a period of severe food shortage and food consumption was a matter of survival."Grandparents argued for the value of staple or starchy foods regarding children's growth and persuaded their grandchildren to consume more of these items, placing a special value on one specific food that was often associated with limited access and/or survival.Starchy foods in our study were given greater emphasis by the grandparent generation, compared to other foods.The value given to these foods appeared to persist throughout their lives, even after they become widely accessible and abundant.Meat was one of the most frequently mentioned foods by grandparents as ‘good for children’, after starchy foods.Seven grandparents directly referred to meat.The key issue was affordability, especially for those LBC who received little remittances from their migrant parents.Therefore, meat was provided occasionally and seen as a special treat:We are unable to afford meat and eggs every day.I cook meat every ten days or so.I mince the meat into quite small pieces and mix them with some vegetables when served.But she only picks out the meat and never the vegetables.,For the grandparent generation, meat was considered a special treat."This finding is consistent with Lora-Wainwright's observations in southern rural China, which suggested that offering meat to household guests was proof of the value placed on the guests in question. 
"Animal-source foods are important to children's growth and development, but access to them is still limited for poor families in the developing world.Financial remittances from migrant parents were described as an important economic resource for improving the diet of LBC who were being cared for by grandparents.LBC whose parents sent little or no remittances tended to have limited access to meat, so that it appeared that a lack of remittances compromised the diversity of foodstuffs provided to some LBC."A paternal grandmother caring for a 12-year old boy and his younger sister said that she did not receive financial support from the LBC's migrant parents:",He always complains that we have noodles all the time.What else do you expect me to cook for you?,Your parents never send money back … I used to tell them , “You two poor things have to suffer now .You can eat whatever you want when you grow up and have the capacity to make your own money.,In the above case, LBC had to ‘suffer’, as stated by their paternal grandmother, because she was unable to afford foodstuffs other than noodles.This may support the idea that some LBC who received no remittances from parents have limitations in their diets.An 11-year old boy and his little brother with congenital heart disease had migrant parents who earned a low income and rarely sent back remittances.Their maternal grandparents, aged over 80, were struggling to prevent the children from going hungry and as such, nutrition itself was not a priority:How can we care about the nutrition?,We are trying our best not to end up being hungry.,A left-behind boy wrote in his food diary that he ate bean sprouts for almost every meal during the period when he was asked to record his meals.His maternal grandmother further elaborated on this:The bean sprouts were from our own field.We harvested soy beans around one month ago.Some were left in the fields and grew to bean sprouts.We collected them and then cooked them for meals.,According to maternal grandmother 1 and three other grandparents, their expectation of food was to avoid hunger and to meet basic survival needs.They were more concerned about not having enough food for the children, rather than nutrition.This may be partly due to their poor financial status, and also influenced by their experiences of hunger during the Great Famine.Amongst this generation, there was more diversity in food items than amongst the grandparent generation."Children's diaries showed that children who were cared for by the parent generation tended to have better protein intake including animal-source food, eggs and milk.Five caregivers from the parent generation mentioned ‘cooking different foods’ and ‘preparing different dishes’ for children.Only one mother mentioned about the Great Famine when she was promoted by her elderly neighbour.The parent participants in our study were mostly in their 30s- and 40s and had not experienced the Great Famine.They tended to hold different attitudes towards starchy foods:In the past, people used to be alright with just eating wheat flour, but this is not the case for modern people.How can children nowadays eat food only made of wheat flour?,By the way, almost half of the nutrients come from meals with vegetables, meat, or a mixture of both instead of staple foods.Eating only wheat flour is not sufficient at all., "The maternal aunt of an 11-year-old left-behind boy whose parents had migrated for employment said that her family's finances were good, as her husband had secured a permanent job in 
the local coal mine industry. "Despite receiving no remittances from the boy's migrant parents, she was still able to afford a variety of food for the boy and her own daughter:",Sometimes I make dumplings and noodles mixed with meat.My husband usually goes fishing, so we do not need to buy fish.Sometimes I make deep-fried cakes with melted sugar in it."The cooking oil is allocated by my husband's company so the quality is quite good.They find the food and snacks from local markets not as tasty, so I make these foods on my own for them.,A varied diet does not always mean a balanced or nutritious diet.In the above case, for example, the maternal aunt made her children home-made snacks using deep-fried cooking methods by using cooking oils, which has a high energy content, but very few nutrients.Frying generally implied more edible oils and cooking special foods, which to some extent are related to purchasing power in China.Like the grandparent generation, the parent generation frequently mentioned meat.Meat was seen as a sign of economic status by comparing access to it with peers in the same communities.Three mothers with LBC highlighted this fact, “we are not better off than other families …, how can we afford our children meat for three meals a day?,One neighbour in his 20s compared himself with two left-behind boys cared for by their paternal grandmother, who were only offered meat occasionally and claimed proudly that “my family eat meat every day because we are loaded”."As a symbol of economic status, offering sufficient meat was described as essential for meeting children's nutritional needs.For instance, the mother of an eight-year old non-left behind girl said:"I think we meet our children's nutritional needs as we often go to local restaurants compared to other families in this village and we provide as much meat as they want.,This particular household was financially better off than their village peers, since they were running ‘a profitable vegetable greenhouse,’ which allowed them to afford dining out at the local restaurants on a regular basis, and providing meat for their children.The mother interpreted what the children ‘want’ as their nutritional ‘needs.’,This may lead to offering children excessive amount of high-energy foods.The potential risk in this case was that some children were likely to over-consume meat because it was considered a sign of economic status, rather than part of a balanced diet."This may contribute to children's being overweight or in the long term, even obesity.However, none of the parent caregivers involved in our fieldwork showed any awareness of this.Chinese children in rural areas have been experiencing a dramatic shift in nutrition, from a traditional low-fat and high-carbohydrate diet to a high-fat/energy diet.This is especially true among children from relatively affluent families."The prevalence of being overweight/obesity has increased among rural children in China, which may be partly due to parents' attitudes about the symbolic representation of meat as a status symbol.Furthermore, the parent generation was concerned about barriers to healthy eating in rural communities, including food safety and lack of access to fresh food for rural children, compared to urban children."Compared to the grandparent generation, the parent generation's beliefs about healthy eating extended beyond survival, but suggested higher expectations of food and a growing demand for food quality.It should be noted that our fieldwork was conducted during the winter 
season, a time that is not optimal for rural people to cultivate vegetables on their own lands.Instead, they instead relied on local markets for supplies.One mother of two non-left-behind children, who herself used to migrate to cities for work, described the lack of variety regarding food choices in rural communities, compared to urban areas.She compared restricted food access for rural children to urban children, highlighting in particular ‘fresh vegetables’ and ‘fresh milk’:It is winter now.There are not many vegetables available from the local market as there are in the cities.The local market is not open every day.Sometimes I go there to buy enough fresh vegetables to last a few days.The problem is that they do not stay fresh for long.Urban kids have access to fresh vegetables all the time."They drink fresh milk every day, but rural kids don't.,The parent generation was concerned about food safety, which was described as a threat to healthy eating for rural children.However, none of the grandparent caregivers raised this issue during the open-ended conversations.Two mothers expressed their concerns about food safety in rural areas:We had food poisoning after eating frozen dumplings from the local market.We did not feel well after lunch.My two kids became very ill.We were sent to the central hospital in the city.After that, I no longer buy frozen dumplings from the local market.,You see, although bean sprouts from the local market do not look as good as before the additives were banned by the law, they are safer to eat.This is what it is like to live in rural areas.We have to worry about food safety., "The aim of this study was to explore caregivers' beliefs about healthy eating for LBC in rural China. "Intergenerational differences in beliefs about healthy eating showed that where the grandparent generation tended to emphasise the importance of starchy foods for children's growth due to their own past experiences during the Great Famine, the parent generation paid more attention to protein-source foods including meat, eggs and milk.Parents were also more likely to offer their children more high-energy foods, which was emphasised as a sign of economic status, rather than as part of a balanced diet.These could imply that the parent generation may have a different, but not necessarily better understanding of healthy eating for children, compared to the grandparent generation."This is inconsistent with previous studies suggesting that grandparents who cared for LBC had poorer nutritional knowledge and attitudes than the children's parents, and that they may not be able to provide a proper diet for LBC. 
"Our findings suggested that financial remittances from migrant parents were described by grandparents as an important source for being able to include higher cost foodstuffs such as meat in LBC's diet.Some grandparents suggested lack of remittances from migrant parents limited food choices they could offer LBC.Our previous work found that left-behind boys were less likely to receive resources in the form of remittances from migrant parents in a society where sons were culturally more valued than daughters."This was because their parents migrated to save up for their sons' adult lives, rather than for the lives of their daughters.Limited financial remittances for left-behind boys may reduce their food choices, which might restrict their growth and development during the early course of their lives."Another important finding of this study is how grandparents' past experience during the Great Famine influenced their expectations and concerns about food, as well as their beliefs about healthy eating for children. "From the interviews conducted with grandparents, a prominent theme that emerged in our data was grandparents' past experiences about food availability during the Great Famine, which occurred from 1959 to 1961.Despite rapid socio-cultural and economic changes in China during the past few decades, the experience of food shortage in early life stages persisted.Evidence shows that individuals exposed to famine in early life are at increased risk of adverse health outcomes in later life, which could be partially explained by persistent unhealthy food preferences across their life course.This is partly supported by evidence that exposure to the Dutch Famine in early gestation is associated with increased energy intake due to a fat preference."On the other hand, early-life experiences can shape people's attitudes towards food and healthy eating, thus directly and indirectly influencing the feeding practices of caregivers and eating habits of children. "Therefore, the influence of people's early-life exposures to food shortage may be passed on to their offspring through generations. "This has an important implication, because a substantial number of LBC who were left behind by their parent and were cared for by their ageing grandparents in rural China due to its large-scale rural-to-urban migration. 
"It is important to take into account caregivers' experiences and intergenerational effects when exploring LBC's health and well-being in rural China.Several limitations were encountered in this study.The key concepts of ‘healthy eating’ and ‘eating well’ were self-defined by our participants.These terms may have different meanings due to individual experiences.It may also have been unrealistic to ask participants to quantify food intakes during interviews and to note them in diaries.Additionally, we only asked about food items that were normally consumed.We expected and found that income was a sensitive issue and most participants were reluctant to disclose their incomes.Therefore, we were unable to compare income differences between the parent and grandparent generations quantitatively."However, our interviewees did volunteer information and suggested that remittances from migrant parents which they identified were reported as an important financial source to improve LBC's food and diet.Reflexivity is essential in qualitative research and it encourages the researcher to be part of, rather than separate from, the research.The first author initially approached the LBC through school channels, which may have left an impression that she was a friend of the school staff, thus imposing pressure on children to take part.On the other hand, as native Chinese, the first author shared the same cultural background with the participants.However, their personal experiences, preferences, as well as world views may have been different, which may have led to different interpretations of conversations.A qualitative study of this kind has some limitations in terms of its representativeness.The sample size was sufficient to reach saturation and it was not designed to be statistically representative of the community of caregivers for LBC.Despite these limitations, our study tentatively suggests potential for two separate types of nutritional risks for children, especially LBC in rural China: LBC cared for by aged grandparents, who had experienced the Great Famine, were likely to suffer from malnutrition, especially in the case of children lacking remittances from their parents.Although the parent generation placed greater emphasis on protein-source foods, the risk was that they also tended to provide their children with excessive high-energy food.This may contribute to children being overweight or obesity in the long term when they grow up."These findings suggest that research on LBC's nutritional health in rural China can benefit from exploring the beliefs regarding healthy eating and expectations about food of caregivers.It is important to identify intergenerational differences in beliefs of healthy eating for children and expectations to inform development of educational programmes and other interventions specific to the parent and grandparent generations.
China's internal migration has left 61 million rural children living apart from parents and usually being cared for by grandparents. This study aims to explore caregivers' beliefs about healthy eating for left-behind children (LBC) in rural China. Twenty-six children aged 6-12 (21 LBC and 5 non-LBC) and 32 caregivers (21 grandparents, 9 mothers, and 2 uncles/aunts) were recruited in one township in rural China. Children were encouraged to keep food diaries followed by in-depth interviews with caregivers. Distinct intergenerational differences in beliefs about healthy eating emerged: the grandparent generation was concerned about not having enough food and tended to emphasise the importance of starchy foods for children's growth, due to their past experiences during the Great Famine. On the other hand, the parent generation was concerned about food safety and paid more attention to protein-source foods including meat, eggs and milk. Parents appeared to offer children high-energy food, which was viewed as a sign of economic status, rather than as part of a balanced diet. Lack of remittances from migrant parents may compromise LBC's food choices. These findings suggest the potential for LBC left in the care of grandparents, especially with experience of the Great Famine, may be at greater risk of malnutrition than children cared for by parents. By gaining an in-depth understanding of intergenerational differences in healthy eating beliefs for children, our findings could inform for the development of nutrition-related policies and interventions for LBC in rural China.
193
Prediction of equilibrium isotopic fractionation of the gypsum/bassanite/water system using first-principles calculations
Gypsum is a common hydrous mineral on Earth and has also been shown to be abundant on Mars. The hemihydrate form of calcium sulfate, bassanite, is a precursor to gypsum formation, but is rarely found in natural mineral deposits on Earth and more often occurs as a mixture with gypsum and/or anhydrite. Nevertheless, the presence of bassanite, together with gypsum and other hydrous minerals on Mars, has generated considerable interest in how these minerals form and in their paleoenvironmental significance. Applying the isotopic composition of mineral hydration water as a paleoclimatic proxy requires detailed knowledge of the relevant mineral-water fractionation factors and of their dependence on environmental parameters, such as temperature and salinity. The first experimental measurements of the fractionation factors for 18O/16O and D/H in the gypsum-water system were performed by Gonfiantini and Fontes and Fontes and Gonfiantini, who reported values of 1.004 and 0.98 for α18Ogypsum-water and αDgypsum-water, respectively. These values agree with the results of more recent studies within analytical uncertainties. The α18Ogypsum-water was reported to be insensitive to temperature between 12 °C and 57 °C. Hodell et al. confirmed the temperature insensitivity of α18Ogypsum-water; however, they found a small positive temperature dependence for αDgypsum-water between 12 °C and 37 °C. More recently, Gázquez et al. empirically measured α18Ogypsum-water and αDgypsum-water more precisely to be 1.0035 ± 0.0002 and 0.9805 ± 0.0035 between 3 °C and 55 °C, respectively. The α18Ogypsum-water was found to decrease by 0.00001 per °C in this temperature range, whereas αDgypsum-water increases by 0.0001 per °C. These authors concluded that, in the temperature range under which gypsum forms in most natural environments, the dependence of the isotope fractionation factors on temperature is insignificant. However, temperature must be taken into account when applying fractionation factors to hydrothermal or cryogenic gypsum, especially for αDgypsum-water. Importantly, α18Ogypsum-water and αDgypsum-water at or below 0 °C are still unknown because experimental limitations prevent the precipitation of gypsum at temperatures close to the freezing point of water. Salinity has also been found to affect α18Ogypsum-water when the salt concentration exceeds 150 g/l of NaCl, with no significant effect at lower concentrations. The salt effect on αDgypsum-water is relevant even at relatively low salinities, so salt corrections are needed when dealing with gypsum formed from brines. However, many gypsum deposits do not form from brines per se, but rather from low-salinity waters. For example, the salinity of water rarely exceeds 5 g/l in many gypsum-precipitating lakes. Additionally, relatively low salt concentrations have been found in fluid inclusions of hydrothermal gypsum and in fluid inclusions of gypsum deposits formed in marine environments affected by freshwater. To our knowledge there is no reported value of α18O or αD for the bassanite-water system, mainly because of the difficulty in synthesizing pure bassanite. Bassanite can be synthesized, but isotope measurements of its hydration water and of the original solution are made difficult by isotope exchange with the reagents used for bassanite stabilization. Thus, prediction of bassanite-water fractionation from first principles offers a promising solution. The 17O-excess variability in natural waters on Earth is less than 150 per meg (relative to an analytical precision for 17O-excess that is better
than 10 per meg).Inaccuracy in the θgypsum-water value can lead to significant errors when reconstructing the 17O-excess of paleo-waters from gypsum hydration water.For example, a variation of 0.005 units in θgypsum-water produces a bias of 15 per meg in the reconstructed 17O-excess.Such a difference might lead to different interpretations when modelling and comparing 17O-excess values of paleo- and modern waters.Consequently, accurate determination of the θgypsum-water value is crucial for using triple oxygen isotopes to reconstruct past hydrological conditions.Gázquez et al. measured a θgypsum-water value of 0.5297 ± 0.0012 and concluded that this parameter is insensitive to temperature between 3 °C and 55 °C.This measured value is close to the greatest theoretical value of θ in any mass-dependent fractionation process of the triple oxygen isotope system, which ranges from 0.5200 to 0.5305.Herwartz et al. reported a slightly lower θgypsum-water value of 0.5272 ± 0.0019.Theoretical studies of isotopic fractionation are particularly useful in systems that are difficult to characterize experimentally, or when empirical data are rare or absent.Theoretical calculations can extend the temperature range over which fractionation factors can be used, which is especially relevant for low-temperature mineral-solution fractionation where isotopic equilibrium takes a long time to achieve.Theory also offers insights into the causes of isotopic fractionation, such as changing properties of vibration and chemical bonding.Theoretical calculations of high accuracy have been made for molecules in the gas phase.First-principles calculations have also been used for solid materials within the framework of density functional theory, including minerals such as quartz, kaolinite, brucite, talc, gibbsite, albite and garnet.Here we use DFT to determine the equilibrium α17Omineral-water, α18Omineral-water and αDmineral-water values for the gypsum-water and bassanite-water systems.We compare the theoretical calculations with published experimental results for gypsum-water.Building upon successful convergence of empirical and theoretical results for gypsum, we determine for the first time the bassanite-water fractionation factors for triple oxygen and hydrogen isotopes, and predict the theoretical dependence of these parameters with temperature.Lastly, we seek to understand the origin of the opposite direction of the fractionation factors for δD and δ18O in many hydrous minerals by using gypsum as a case study.Because the reported theoretical results depend on the experimental fractionation values α18Owater-vapor, the errors of α18Owater-vapor should be propagated to the theoretical results.We calculate the vibrational frequencies for a water molecule, gypsum and bassanite within the harmonic approximation.Water vapor is simulated by using an isolated H2O molecule in a 20×20×20 Å3 box with periodic boundary conditions.The partition function of the water molecule is composed of translational, rotational and vibrational contributions.The details of the methods used for calculating the translational and rotational contributions were described by Méheut et al.We carried out first-principles calculations based on density functional theory using the Siesta computer program with numerical atomic orbital basis sets.All calculations were done with the generalized gradient approximations, using the Perdew-Burke-Ernzerhof scheme for solids functional for electronic exchange and correlation.Other functionals, such as PBE and 
LDA, were also tested to determine functional dependence and find the functional that gives the structural and vibrational values closest to experimental data.The LO-TO splitting effect was also included in the vibrational calculations.Core electrons were replaced by ab initio norm conserving pseudopotentials, generated using the Troullier-Martins scheme.Because of the significant overlap between the 3p semi-core states and valence states, the 3p electrons of Ca were considered as valence electrons and explicitly included in the simulations.The Ca, S, O, and H pseudopotentials were generated using the ATOM software and tested thoroughly.The basis set functions for all the elements were also generated using diatomic models.The basis set and pseudopotential for Ca were thoroughly tested for CaO vibrations including LO-TO splitting.A plane-wave energy cutoff of 600 Ry for the real-space integration grid, and a k-grid cutoff of 10 Å for the Brillouin zone sampling were used for the gypsum and bassanite unit cells, as found in convergence tests.Structural relaxations were carried out by minimizing the total energy until the smallest force component absolute value was under 0.001 eV/Å, and the smallest stress component under 0.01 GPa.The tolerance of density matrix element change for self-consistency was 10−5.Supercells of 3×1×3 and 1×3×1 were used to calculate the force constants by finite displacements, to get the full phonon dispersion from finite differences.For the phonons, the Brillouin zone was sampled with 220 points for gypsum and 290 points for bassanite.The starting structures for relaxation of gypsum and bassanite were obtained from single-crystal X-ray results.The structures of gypsum, bassanite, and the water molecule were relaxed using DFT.Our results for the water molecule agree well with experimental data, with 1% greater value for the OH bond length, and a 0.03% smaller value for HOH angle as compared to experimental values.The optimized structures of gypsum and bassanite are shown in Fig. 
1.Gypsum has a monoclinic structure, whereas bassanite has an orthorhombic structure.The lattice parameters of the optimized structure of gypsum are 6.461 Å, 14.979 Å, 5.661 Å; for bassanite they are 12.023 Å, 6.917 Å, 12.611 Å.These lattice parameters are all in agreement with neutron powder diffraction results, being within 0.9% of the experimental values.Vibrational frequencies are needed to calculate the reduced fractionation factor β and eventually the fractionation factor α.Although they are all consistent with the experimental IR and Raman spectra, the values obtained from PBEsol are significantly closer to the ones obtained from experimental infrared and Raman spectra than those obtained from PBE.It is quite common to scale all frequencies with an empirical single scale factor to obtain a better match of the experimental spectra.However, the β and α fractionation coefficients in this paper were calculated solely from the raw theoretical harmonic frequencies obtained from DFT, as the better accuracy of PBEsol renders the scale factor unnecessary.Our calculated gamma phonons of gypsum are in agreement with the experimental values, with OH symmetric and asymmetric stretching at 3300–3500 cm−1, scissor bending at 1600 cm−1 and SO stretching of the SO42− group at 1000 cm−1.Our results also show that the OH stretching of bassanite as red-shifted compared to that of gypsum in agreement with the experimentally measured shift.We compare the theoretical α18Ogypsum-water with experimental results over the temperature range from 0 °C to 60 °C.Our theoretical results agree with the experimental fractionation factors with a maximum difference of only 0.0004.The LO-TO splitting is not included in the fractionation calculation because even when testing for the gamma phonon only, which is an overestimate of the real effect, there is only a 0.00002 difference for α18Ogypsum-water, which is negligible.The dependence of α18Ogypsum-water with temperature is well described by a third order polynomial, whereas the experimental data were fit by a linear equation because of the analytical errors associated with the measurements.The temperature dependence of α18Ogypsum-water is about −0.000012 per °C.The α17Ogypsum-water overlaps with experimental results, with maximum differences of 0.000098).Our theoretical result of θgypsum-water is 0.001–0.002 less than the value of 0.5297 ± 0.0012 reported by Gázquez et al., but in good agreement with the value of 0.5272 ± 0.0019 reported by Herwartz et al.).Our results show that θgypsum-water are well described using a third order polynomial with a weaker temperature dependence than the individual fractionation factors.This is because α18Ogypsum-water and α17Ogypsum-water change in a very similar fashion as a function of temperature, because of mass-dependent isotope fractionation.Over the temperature range from 0 to 60 °C, the mean α18Obassanite-water is 1.00346 ± 0.00018, almost the same as the α18Ogypsum-water).The temperature dependence is −0.000009 per °C, similar to α18Ogypsum-water.We found that α18Ogypsum-water and α18Obassanite-water are the same at ∼ 30 °C, whereas α18Ogypsum-water > α18Obasssanite-water at lower temperatures and α18Ogypsum-water < α18Obasssanite-water at higher temperatures, with differences not exceeding 0.00013.In general, our predicted αDgypsum-water values agree well with experimental results.Note that our theoretical results are extrapolated to cover the temperature range as low as −5 °C.At low temperatures, our theoretical 
αDgypsum-water is 0.9740 and 0.9748, respectively, with a deviation of about −0.004 from the experimental result of 0.978 at 3 °C.At temperatures greater than 25 °C, our theoretical results agree well with experimental data, with a difference of less than −0.001.For hydrogen isotopes, Gázquez et al. suggested that the measured experimental value for αDgypsum-water could be higher than the actual equilibrium value because of kinetic effects during gypsum precipitation.In contrast, our theoretical results for oxygen isotopes agree with their experimental values on α18O, suggesting that kinetic effects are small for α18O.Our results show that α18Ogypsum-water and α18Obasssanite-water are very similar in the temperature range of 0–60 °C, with differences of less than 0.0002.This is because the vibrational frequencies of bassanite and gypsum respond similarly to the replacement of 16O with 18O.Consequently, the partition function and reduced fractionation factors are similar.The average O − Ca bond lengths of bassanite and gypsum are almost the same.It is well known that shorter bond lengths correlate with stronger and stiffer bonds and, as a result, the value of β will be greater.In the case of gypsum and bassanite, very similar bond strengths lead to very similar β values.The temperature effect on α18Ogypsum-water is relatively small, with a variation of less than −0.0000122 per °C, which agrees with the experimental slope of −0.0000116 per °C.For α18Obasssanite-water, the temperature dependence is even smaller.A similar temperature effect has been found for α17O gypsum-water and α17Obasssanite-water.Our results demonstrate that the α18O and α17O for gypsum-water and bassanite-water are barely affected by temperature in the range relevant for most paleoclimatic studies.This lack of temperature dependence provides a distinct advantage when using hydration water to estimate the isotopic composition of the mother water from which the gypsum or bassanite precipitated.The αDgypsum-water shows that our theoretical results agree with experimental values, with a maximum deviation of ∼0.004 at low temperatures, which decreases above room temperature.This confirms the results reported by Gázquez et al., who found αDgypsum-water is relatively sensitive to temperature between 3 and 55 °C.However, in the temperature range of many paleoclimatic studies the variation in αDgypsum-water is only 0.002, leading to small differences of 2‰ when calculating δD values of the mother water.The temperature dependence of αDbassanite-water in the range of 0–60 °C, however, is much steeper than for gypsum-water.The temperature effect is greater at low temperature than at high temperature, indicating that the formation temperature of bassanite needs to be known in order to calculate accurate mother water δD values from the hydration water of bassanite.Salinity effects on oxygen and hydrogen isotopes during mineral precipitation have been reported for many different isotope systems.Gázquez et al. 
determined experimentally that the salt effect on α18Ogypsum-water is negligible for salinities below 150 g/L, but it becomes significant for greater salt concentrations.Moreover, it was found that αDgypsum-water shows a linear relationship with salinity even at relatively low ionic concentrations, so a salt correction needs to be considered when dealing with gypsum formed from brines.The dependence of the triple oxygen isotope system with salinity has not been tested experimentally; however, given that the activity ratios of 18O/16O and 17O/16O are both controlled by the cations in solution, θ should not be affected to a significant extent.Therefore, the α17Ogypsum-water for different salinities may be calculated from α18Ogypsum-water and θ at a given temperature.Why is the α18Ogypsum-water > 1, whereas αDgypsum-water < 1?,This observation sounds counterintuitive because if the vibrational modes for a chemical unit are stiffer in one system than in the other, the fractionation of all the species in that chemical unit should show the same trend.Although this effect has been known since the early 1960s, no clear theoretical basis has been offered apart from some ad hoc explanations involving relationships between fractionation factor, bond length and cation/anion chemistry in aqueous solution.A general assumption has developed in the literature that the opposite fractionations may reflect the fact that the different species correspond to different chemical units.This is not the case, however, because both H and O are replaced in the same hydration water molecule, and their respective fractionations are opposite.The correct explanation requires an examination of how different modes, with different participation in different species, become stiffer or softer when comparing two different systems.In this case, modes with a predominant H character in a water molecule may be stiffer in one of two systems, whereas modes with predominant O character are stiffer in the other one, albeit both being modes of the same molecule.This, of course, will be weighted by their relative participation in the partition functions, which we explore further below.In order to explain this observation in more detail, we consider gamma phonons only for simplicity.Every possible single 18O and D replacement site in gypsum hydration water is tested to ensure that inferences about the direction of isotopic fractionation are consistent with the overall fractionation factor.In the isolated H2O, there are three vibrational modes at 1590 cm−1, 3661 cm−1 and 3778 cm−1, which decrease to 1583 cm−1, 3653 cm−1 and 3762 cm−1 respectively when 16O is replaced with 18O.The difference in frequency of each mode is very small, with the greatest difference being only 15 cm−1 for the highest frequency, the symmetric stretching.For gypsum, the OH stretching at 3500 cm−1 is softer than the OH stretching mode at 3653 cm−1 of the mother water, and it decreases by 70–90 cm−1 when replacing 16O with 18O.Thus, the frequency shift with heavy oxygen isotopic substitution is greater for the hydration water than for the mother water.Eq. 
describes an exponential relation between the partition function and frequency. The reduced fractionation factor β18Ogypsum, with a larger Q18O/Q16O ratio, will be greater than β18Owater, with a smaller Q18O/Q16O ratio; thus, α18Ogypsum-water will always be greater than 1.0. A similar explanation applies to αDgypsum-water < 1, but it is complicated by the fact that the two hydrogen atoms in the hydration water contribute to the partition function differently because of their different orientations, which means they need to be considered separately. The three vibrational modes of H2O, 1590 cm−1, 3661 cm−1 and 3778 cm−1, now decrease to 1394 cm−1, 2700 cm−1, and 3722 cm−1, with differences of 196, 960 and 56 cm−1, respectively, when H is replaced by D. These shifts are much greater than those caused by the substitution of 16O with 18O, which is expected given the relative mass differences. For gypsum, there are two different configurations depending on the orientation of the hydrogen in the molecule with respect to the matrix of the host. One of the two hydrogens in the hydration water molecule connects to an oxygen in a SO42− group in the same layer by weak hydrogen bonding. The second hydrogen atom connects to a SO42− group oxygen in the next layer. In the first case, there are only two vibrational frequencies that change significantly when replacing H with D: the mode at 1585 cm−1 shifts by 162 cm−1 to 1423 cm−1, and the mode at 3264 cm−1 shifts by 876 cm−1 to 2388 cm−1. The two corresponding frequency shifts of the H2O molecule are larger than those of gypsum hydration water, which means the D/H partition function ratio of the water molecule is larger than that of gypsum hydration water. This implies that the reduced fractionation factor βD of gypsum hydration water is smaller than that of the mother water. As a result, αDgypsum-water will be less than 1. In the second case, there are three vibrational frequencies that shift significantly when replacing H with D. The mode at 1585 cm−1 shifts down by 143 cm−1 to 1442 cm−1, the mode at 3264 cm−1 shifts down by 805 cm−1 to 2459 cm−1, and the mode at 3356 cm−1 shifts down by 74 cm−1 to 3282 cm−1. For the first two vibrations, the frequency shift for the water molecule is still greater than that of gypsum, by 53 and 155 cm−1. In contrast, the third mode shift of H2O is less than that of gypsum, by 18 cm−1. This is a much smaller difference than for the other two modes; thus, its effect on the partition function is minimal when the other two are taken into account. By considering the effect of these two different kinds of shifts on the partition function, we conclude that the D/H partition function ratio of water is greater than that of gypsum for the second H position as well, similar to the case of the other hydrogen discussed previously. As a result, regardless of which hydrogen is considered, the conclusion is the same: αDgypsum-water is less than 1. We carried out first-principles calculations of oxygen and hydrogen isotopic fractionation factors between free water and the hydration water of gypsum and bassanite. Our theoretical fractionation factors, α18Ogypsum-water and αDgypsum-water, agree well with experimental values. The temperature dependence of α18Ogypsum-water is insignificant for most paleoclimate applications using gypsum hydration water, but the dependence of αDgypsum-water on temperature may be significant for some applications. For hydrogen isotopes, the formation temperature must be considered in order to ensure accurate fractionation factors are used, especially for hydrothermal or cryogenic gypsum deposits. The α18Ogypsum-water and αDgypsum-water at 0 °C are predicted to be 1.0038 and 0.9740, respectively. The predicted αDgypsum-water values at temperatures below 25 °C are consistently lower than the experimental observations. This can be explained by kinetic effects affecting αDgypsum-water at lower temperatures because of fast gypsum precipitation under laboratory conditions. The triple oxygen isotope parameter θgypsum-water is 0.5274 ± 0.0006 in the temperature range of 0–60 °C, which roughly agrees with previous experimental results. This theoretical θgypsum-water value is probably more accurate than that derived from empirical results. We also explain why α18Ogypsum-water > 1 and αDgypsum-water < 1 from a first-principles harmonic analysis, without resorting to an explanation involving different species being associated with different water sites in the mineral structure. We calculate fractionation factors between free water and hydration water in bassanite for the first time. The α18Obassanite-water is similar to α18Ogypsum-water in the 0–60 °C temperature range, whereas αDbassanite-water is 0.009–0.02 lower than αDgypsum-water in the same temperature range.
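The harmonic argument above lends itself to a small numerical illustration. The sketch below is not the paper's code: it evaluates the standard harmonic (Bigeleisen–Mayer) reduced partition function ratio for the D/H pair using only the mode pairs quoted in the text, whereas the paper's factors come from full DFT phonon calculations. The output should therefore be read only for its sign (β for free water exceeds β for gypsum hydration water, hence αDgypsum-water < 1), not for its magnitude.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e10    # speed of light, cm/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def u(freq_cm, T):
    """Dimensionless vibrational energy h*c*nu/(kB*T) for a frequency in cm^-1."""
    return H * C * freq_cm / (KB * T)

def beta(light_freqs, heavy_freqs, T=298.15):
    """Harmonic reduced partition function ratio (heavy/light) from paired mode lists."""
    b = 1.0
    for nu_l, nu_h in zip(light_freqs, heavy_freqs):
        ul, uh = u(nu_l, T), u(nu_h, T)
        b *= (uh / ul) * np.exp((ul - uh) / 2.0) * (1 - np.exp(-ul)) / (1 - np.exp(-uh))
    return b

# Mode pairs quoted in the text (cm^-1): H-isotopologue vs D-substituted isotopologue.
water_H, water_D = [1590, 3661, 3778], [1394, 2700, 3722]
gyp_H, gyp_D = [1585, 3264], [1423, 2388]   # first hydrogen position; the third mode barely shifts

beta_water = beta(water_H, water_D)
beta_gypsum = beta(gyp_H, gyp_D)

# beta_water > beta_gypsum is the harmonic-theory reason why alphaD(gypsum-water) < 1.
print(f"beta_D(free water, toy)       ~ {beta_water:.2f}")
print(f"beta_D(gypsum hydration, toy) ~ {beta_gypsum:.2f}")
print("=> alphaD_gypsum-water = beta_gypsum / beta_water < 1 "
      "(direction only; the magnitude requires the full phonon spectrum)")
```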
The stable isotopes (18O/16O, 17O/16O and 2H/1H) of structurally-bound water (also called hydration water) in gypsum (CaSO4·2H2O) and bassanite (CaSO4·0.5H2O) can be used to reconstruct the isotopic composition of paleo-waters. Understanding the variability of the isotope fractionation factors between the solution and the solid (α17Omineral-water, α18Omineral-water and αDmineral-water) is crucial for applying this proxy to paleoclimatic research. Here we predict the theoretical equilibrium fractionation factors for triple oxygen and hydrogen isotopes in the gypsum-water and bassanite-water systems between 0 °C and 60 °C. We apply first-principles calculations using density functional theory within the harmonic approximation. Our theoretical results for α18Ogypsum-water (1.0035 ± 0.0004) are in agreement with previous experimental studies, whereas αDgypsum-water agrees only at temperatures above 25 °C. At lower temperatures, the experimental values of αDgypsum-water are consistently higher than the theoretical values (e.g. 0.978 and 0.975, respectively, at 3 °C), which can be explained by kinetic effects that affect gypsum precipitation under laboratory conditions at low temperature. We predict that α18Obassanite-water is similar to α18Ogypsum-water in the temperature range of 0–60 °C. Both α18Ogypsum-water and α18Obassanite-water show a small temperature dependence of ∼0.000012 per °C, which is negligible for most paleoclimate studies. The theoretical relationship between α17Ogypsum-water and α18Ogypsum-water (θ = ln α17Ogypsum-water / ln α18Ogypsum-water) from 0 °C to 60 °C is 0.5274 ± 0.0006. The relationship is very insensitive to temperature (0.00002 per °C). The fact that δ18O values of gypsum hydration water are greater than those of free water (α18Ogypsum-water > 1) whereas δD values of gypsum hydration water are less than those of free water (αDgypsum-water < 1) is explained by phonon theory. We conclude that calculations from first-principles using density functional theory within the harmonic approximation can accurately predict fractionation factors between the structurally-bound water of minerals and free water.
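For readers less familiar with the notation, the quantities in the abstract follow the standard isotope definitions below (stated here for convenience rather than quoted from the paper):

```latex
\alpha^{18}O_{\text{gypsum-water}} = \frac{(^{18}O/^{16}O)_{\text{hydration water}}}{(^{18}O/^{16}O)_{\text{free water}}}, \qquad
\alpha D_{\text{gypsum-water}} = \frac{(D/H)_{\text{hydration water}}}{(D/H)_{\text{free water}}}, \qquad
\theta_{\text{gypsum-water}} = \frac{\ln \alpha^{17}O_{\text{gypsum-water}}}{\ln \alpha^{18}O_{\text{gypsum-water}}}
```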
194
Altruistic punishment is connected to trait anger, not trait altruism, if compensation is available
Altruistic behavior has been defined as a voluntary action intended to benefit another person without the expectation of receiving external rewards or avoiding externally produced aversive stimuli or punishments.However, this broad motive definition of altruism has been further narrowed down in a factor-analytical conceptualization where prosocial behavior has been categorized according to their driving motives into six different categories.Among these six categories was altruism, defined by Carlo and Randall, seeing altruism or altruistic prosocial behavior as “voluntary helping motivated primarily by concern for the needs and welfare of another, often induced by sympathy responding and internalized norms/principles consistent with helping others”.Also, as the helper is more concerned about the need of the others, costs that may occur are included in the definition as well.Accordingly, we use altruism as defined by Carlo and Randall as a trait-construct that is motivationally based on the concern for the needs and welfare of others and related to affective reactions like empathy and sympathy.In context of economic decision making games, behavioral altruism has been defined in a similar way as costly action of benefit to another person.Some of the behavioral paradigms that are used to assess altruistic behavior include third party dictator games.In this variant the participants assume the role of an observer, watching interactions or outcomes of interactions between a dictator and a responder in a dictator game.The dictator divides a given amount of money between himself and the responder.As usual in dictator games the responder cannot act and simply receives the allotted amount.The observer on the other hand is endowed with his own fixed amount of money.Following the dictators allocation, the observers may act themselves.Their scope of actions varies from study to study.In studies investigating altruistic punishment the observer has the opportunity to use his money to punish the dictator i.e. decreasing the money of the proposer.Another version of the third party dictator game gives the observers the chance to compensate i.e. increase the money of the responder or choose between compensating the responder and punishing the dictator.A question arising in this context is whether altruistic punishment is actually altruistic by its nature.While punishment is a costly behavior it has no obvious direct benefits for another person.Based on the aforementioned definition this behavior is not altruistic in a more narrow sense.However, it has been argued that altruistic punishment has indirect altruistic side effects by increasing conformity to social norms that increase cooperation.Additionally, it was found recently, that people in dictator games are prone to show behavior that is framed as morally right, which might also lead to altruistic punishment in third party economic games if altruistic punishment is the only available behavioral option.Fehr and Gächter showed that situations leading to altruistic punishment also evoke negative emotions like anger and Jordan et al. 
showed, that altruistic punishment is related to subjective ratings of state anger.Also, the offender focused emotion rating of Lotz and colleagues showed a strong relation of anger towards the offender with altruistic punishment.Additionally, anger has been found to be a mediator of altruistic punishment, being a better predictor to altruistic punishment than perceived unfairness.The concept of anger is defined as “the response to interference with our pursuit of a goal we care about.Anger can also be triggered by someone attempting to harm us or someone we care about.In addition to removing the obstacle or stopping the harm, anger often involves the wish to hurt the target”.Acting out anger should also not be seen as destructive and negative act per se, but this acting out of anger can also be used in a constructive manner, like punishing defectors in dictator games to cause a possible change in their behavior in future trials.But not only state anger might be relevant for altruistic punishment.Individual differences in the proneness to experience anger have been investigated for a long time and the construct of trait anger has been found to be related to behavior like the approach of hostile situations and cyberbullying.As trait anger is also linked to higher aggression, the link between trait anger and acting out a punishment as an reaction to the norm violation may be given.Further studies have identified and corroborated several correlates of altruistic punishment like guilt and anger as well as altruism and envy.Therefore different kinds of motives may be hidden behind third party punishment behavior, but anger plays an important role to get the punishment going.As the focus of research on this topic used to be on the state component of anger, we tried to investigate trait anger and its influence on “altruistic” punishment.Using this trait approach, one may be able to explain trait based variance in state based changes, in order to come to a more precise prediction.For example, a person with low trait anger scores could not get angry during a high state anger induction paradigm, while a high trait anger person might react quicker and more easily to this anger inducing procedure.These “capabilities” to react to a certain state manipulation based on personality traits are known in many research fields, for example for physiological reactions like frontal asymmetry.More general, one can also consider the latent state-trait model as a different variant to include state and trait variance in order to explain the behavior.Hence, we also wanted to investigate the trait effects of anger on altruistic punishment to achieve an estimation of possible systematic error variances that may be considered when analyzing only state anger.Third party compensation on the other hand is more likely to be primarily driven by the latter more narrow altruistic motivation defined by Carlo and colleagues and has been found to be related with empathic concern.To illustrate this idea of altruistic compensation in contrast to altruistic punishment, a short hypothetical example is given.If one thinks about a man being pushed to the ground, a person with high trait anger will primarily react with anger towards the aggressor, whereas a person with high trait altruism will primarily react with empathy for the victim.Altruistic compensation does not accept a possible harm either for the aggressor or the victim, and therefore it is in accord with the definition of altruistically motivated behavior because the welfare 
of the persons, even the welfare of the possible aggressor is not endangered, in contrast to altruistic punishment.Following this example, we even suggest to explicitly include this “benevolence” in a narrow definition of altruism.Altruism and altruistic acts following our view and extending the work of Carlo and colleagues, would be an action that is voluntary, intended to benefit another person, driven by this motivation to help the other person to at least 50% and is benevolent, meaning that there is no intention of harming other persons during the process of helping.This narrow definition of altruism is an extension of the definition given by Carlo and colleagues and is in contrast to the definition given by Fehr and Fischbacher, which only includes the costs of the action and the benefit for another person.In the case of “altruistic punishment”, this benefit is argued to be given by the reinforcement of a fairness norm, but we would doubt that the driving force behind this action is altruism.Instead, we would suggest that trait anger might play a more important role for “altruistic” or maybe more precisely “costly” punishment.Given the empirical evidence and our definition of altruism mentioned above, we aimed to examine the prevailing motivation in third party punishment as compared to third party compensation using individual differences in trait altruism and trait anger.Although these two concepts are different kinds of traits, for trait anger being the proneness to experience a basic emotion and altruism being a facet of prosocial behavioral tendencies, they were chosen because of their empirical relation to the behavioral options altruistic punishment and altruistic compensation.Also, they are both seen as facets of the big five personality traits, with anger being the second facet of neuroticism and altruism being the third facet of agreeableness.Therefore, these two yet different traits may be comparable concerning their effects on third party paradigms.We hypothesized that altruistic punishment would correlate positively with measures of trait anger and aggression whereas altruistic compensation would correlate positively with a measure of trait altruism.To control for the potential influences of the behavioral options provided by the paradigm used to study altruism, in this case providing only the option to punish or only the option to compensate and therefore measuring a combination of altruism and anger, we included three different blocks in the experiment, where the observer could only punish, only compensate or do both.The study was carried out in accordance with the recommendations of “Ethical guidelines, The Association of German Professional Psychologists” with written informed consent from all subjects.All subjects gave written informed consent in accordance with the Declaration of Helsinki before they participated in the experiment.The protocol was not approved by any additional ethics committee, for the used paradigms are common practice in psychological experiments.Also, following §7.3.2 of the “Ethical guidelines, The Association of German Professional Psychologists”, the approval by an ethical committee is optional.As the local ethics committee is very busy, it does not deal with paradigms that are common practice and ethically uncritical.The local ethics committee only handles potentially problematic experiments and as all ethical standards and recommendations were complied, and the study protocol was deemed uncritical concerning ethical considerations, the 
study was not submitted to the local ethics committee. Additionally, researchers have the responsibility for conducting their research according to human rights and ethical guidelines, independent of whether it is approved by an ethics committee or not. An ethics committee approval does not change the responsibility of the researcher. Accordingly, the study did not receive and does not require an ethics committee approval according to our institution's guidelines and national regulations. During the experiment, a cover story was used, but participants were told about this deception as soon as the task was over, as is common practice in psychological experiments. We estimated the required sample size a priori with G*Power software. Assuming an average effect of r = .36 of anger on altruistic punishment, α = .05 and power = .8 yielded a required sample size of N = 55. 58 participants took part in this study to account for possible data loss. Missing data occurred for one person because the identification number of the online questionnaires was lost, leading to a final sample size of 57 participants. Despite all participants having the illusion, because of the cover story, that they would get money from the experiment, most of the participants received educational credits for their participation, while the rest were paid a small amount of money. Participants were told that they were part of a cooperation-based study with other universities, investigating economic decisions under time pressure. This setting was used as a cover story and was not revealed to the participants until the end of the experiment, in order to convince the participants that they were playing with other persons in the third party economic game. Also, as the other fictive players were from other universities and not a direct "ingroup", cooperation was not directly reinforced. First, they filled in a web-based questionnaire containing several trait questionnaires and demographic data. The online questionnaire was presented with SoSci Survey. Then the participants came to the lab for the experiment. They were told that three different roles were provided in this study, which would be randomly assigned: the first role would be the dictator, who has to divide 8 Cents between him- or herself and a receiver, who holds the second position. The third position would be a spectator of the dictator's offer. The player in the third position would be able to interfere with the resulting division by investing his or her own money. Unbeknownst to the participants, the lottery assigning positions was staged so that participants always participated in the role of the spectator. Because they were the only person actually participating in the study, the other positions were played by the computer, which was not revealed to the participants until the end of the experiment. The experiment was divided into three blocks: in the first block, participants were only able to punish the fictive dictator by spending their money. The participants were told that for each cent spent, the dictator lost one cent. No additional information was given and no additional framing was intended. This is the classical third party punishment paradigm or altruistic punishment game as used by Fehr and Fischbacher. In the second block the participants could only compensate the receiver with their money. For each cent spent the receiver got an additional cent. No additional information was given and no additional framing was intended. This is the altruistic
compensation game as used by Leliveld et al.In the third block the participants could either punish the dictator as in the first block or compensate the receiver as in the second block or do both.First they were able to punish the dictator as in the first block followed by the opportunity to compensate the receiver.At the end of the trial the resulting allocation was shown as in block 1 or block 2.In this third block the participants were able to spend twice as much as in the first and second block.But the maximum and minimum of the resulting amounts of money for dictator and receiver always stayed between the same boundaries as in the previous blocks and the ratio between money spent by the dictator and the money the participant is able to spend in every part of the task stayed the same.Each of the three blocks consisted of 45 trials and the participants were informed of their type of interaction just before the block started.So they had no knowledge during a block what kind of interaction with the other fictive players would occur in the next blocks.Also, the participants were not informed at the beginning of the experiment what kind of interaction exactly would be possible during the experiment.Hence, the participants had just the information what to do in the present block.All 45 offers were randomly sampled from the three offers in the offer range from 0 to 8 cents as explained below, with each offer being presented 15 times.All trials started with the alleged offer of a fictive dictator shown for 1.5 seconds depicted as picture of an offer with either 8:0, 6:2 or 4:4 cents and therefore always leaving at least one half of the money for the fictive dictators.Then participants had the opportunity to spend their money for 5 seconds.This time constraint was imposed because of the cover story and to keep the experiment time under control, for the free choice time could lead to very long trials.The amount of money participants were able to spend was identical to the money kept by the dictator.For example if dictators kept 8 Cents for themselves a maximum of 8 Cents could be spent.So a participant could use all available money for punishment and the dictator would get 0 cent.Thus the resulting amount of money for dictator and receiver were kept between the same boundaries.We only analyzed the relative amount of money spent, meaning the amount of money that was spent by the participant, divided by the amount of money that was available to spent in the respective trial, in order to correct for the different reference frames of the meaning of e.g. 2 Cents when one has 6 Cents to spent vs. 
2 Cents to spent.After making a decision or after 5 seconds had passed the trial continued with showing the resulting allocation for the three parties for1 second.Thereafter, a fixation cross was shown for 3 seconds, keeping up the cover story of sending and receiving the data from the other participants before the next trial started.The first two blocks were followed by a break of 15 seconds each.It was not stated clearly to the participants whether the other players would be the same for the whole game or not, but the setting of the cover story suggested that they would be playing with the same persons during the whole experiment.The questionnaires used in this study were a translated version of the revised version of the Prosocial Tendencies Measure, the German version of Buss – Perry aggression questionnaire, the German version of State- trait – anger – expression – inventory and a German version of the empathic concern scale."For the PTM-R, the subscale altruism was used to determine altruism on a trait level.This scale consists of 6 items, like the negatively poled item “I think that one of the best things about helping others is that it makes me look good”."For the Buss – Perry aggression questionnaire, the subscale anger was used to determine anger on a trait level, along with the measurement of STAXI.This scale consisted of 7 items, like the positively poled item “I sometimes feel like a powder keg ready to explode”.The two different measurements of anger were not averaged, as the STAXI was used to assess the trait anger explicitly, while the Buss-Perry aggression questionnaire did not distinguish that clearly between state and trait anger.Therefore the measurements obtained with the Buss-Perry aggression questionnaire were only included in the exploratory analysis."For the exploratory analysis, we used the subscales of aggression and empathy.We computed four linear regressions with the mean of the relative amount of money spent in every condition as the criterion for each of two predictors: “Trait altruism” and “trait anger”.Following our hypothesis, we expected trait altruism to predict compensation and trait anger to predict punishment.Additionally, we made two linear regressions for the third block of the experiment were both behavioral options were available with the “trait altruism”/“trait anger” as criterion and the mean relative amount of money used for “punishment” and “compensation” as predictors.In addition we exploratory analyzed the correlations between the subscales of aggression and empathy with the amount of money spent in all conditions.Statistical analysis was carried out with IBM SPSS version 21.The data analyzed in this study is provided as supplementary data in order to be available for meta-analyses or re-analyses.The reliability of all questionnaire scales included in the analyses is shown in Table 1.The mean relative money spent on compensation and punishment in every block for every offer of the dictator can be seen in Table 2.For the regression models with the traits as predictors, only two regression models of the first 4 regression models show a significant effect of the predictor and one regression model shows a marginal effect for the predictor on the behavior.Summaries of these regression models are shown in Table 3.For “altruism” as a predictor for the criterion “compensation if both options are available” β = .297, t=2.15, p < .05, for “anger” as predictor for the criterion “punishment if both options are available” β = .249, t=1.79, p=.08 and for 
“anger” as predictor for the criterion “punishment only” β = .299, t=2.16, p < .05 significant effects were found.All in all, the regression analyses showed that if participants have the option to either punish or compensate, then people scoring high on anger are more likely to punish whereas high altruists are more likely to compensate.The two regression models with the traits as criterion for the third block revealed that persons showing more compensation in this block had higher altruism scores β = .352, t=2.68, p < .05.Also, participants that showed more punishment were marginally significantly less altruistic β = −.235, t=−1.78, p=.08 and had significantly higher anger scores β = .311, t=2.31, p < .05.The bivariate correlations of the relevant parameters in the significant and marginally significant regression models are shown in Fig. 1.Exploratory analysis revealed significant correlations between the mean of the relative amount of money spent in the different conditions as can be seen in Table 4.The subscales hostility and verbal aggression from Buss – Perry Aggression Questionnaire show a marginally significant correlation with the amount of money spent in the punishment only condition.Other personality traits than those that were already tested with regression models do not show a significant correlation with the amount spent on punishment or compensation.Possible income effects, leading to less investment if one has invested much in prior blocks can be ruled out by the positive correlation of all behaviors in all blocks.Besides using helping behavior as well as dictator games, third party dictator games were also used to account for altruistic behavior.In these third party dictator games the two major varieties that were used are altruistic punishment and altruistic compensation.Our study investigated whether altruism is the driving motivation for altruistic punishment and compensation or whether anger plays the major role in altruistic punishment.We found that given both opportunities to punish and to compensate, the relative amount of money spent in the task is predicted by trait anger in the case of altruistic punishment and by trait altruism in the case of altruistic compensation.Thus altruistic punishment seems to be driven more by trait anger than by trait altruism, if both options are available.Also, trait altruism does not predict altruistic punishment if both behavioral options are given.We could also show that this effect is true in general, not just in the case given both options.We also found an effect of the availability of just one option vs the two behavioral options.Here, just having the option to punish leads to more punishment compared to the punishment that is given if both options are available.Remarkably there was no significant interaction of the possibility to punish or to do both punishment and compensation, with trait anger in predicting the altruistic punishment.The opposite pattern can be observed for altruistic compensation, where trait altruism seems to be the driving force of the shown behavior, also with an additional effect of the behavioral options, where having both options leads to more compensation but still there is no significant interaction of these two effects.However, as there is a high correlation between the assessed behaviors in the different tasks, there still might be a conglomeration of trait anger and trait altruism driving the resulting behavior.Therefore it is not possible to get an uncontaminated measure of one or another if one 
just uses one behavioral option, either to compensate or to punish the other players in the third party economic game.But if there are two possible options, the option to punish the dictator and the possibility to compensate the receiver, the influence of altruism on the altruistic compensation and the influence of anger on the altruistic punishment are strengthened and the other trait loses influence on the behavior.Hence it is important to use a task that combines both paradigms, altruistic punishment and altruistic compensation instead of using just one behavioral option, if one is interested in the measurement and influence of altruism and anger on these economic decisions.Leliveld and colleagues as well as Lotz and colleagues did already make that notion on another behalf, showing that it is important to give participants the opportunity to choose what kind of behavior they want to execute.Furthermore they could show the influence of empathic concern and offender focused emotion on the choice of compensating or punishing behavior in third party economic games.Our work is trying to extend these findings to the motivational level, now showing that the narrow altruistic motive is not linked to the punishing behavior.Therefore, the altruistic consequences of the punishment behavior might not be the primary concern of the actor, but just the mere thought of retribution or reinforcement of social norms.As long as the punishment stays in between boundaries of adequacy, this may be a good way to strengthen the social norms in a society, but this kind of behavior might actually damage a society if one punishes to hard.Furthermore, the immediate problem of the receiver, in this case having less or no money at all, is not targeted by this kind of behavior, so the decline of the welfare of this person is accepted and a purely altruistic motive is therefore unlikely.The act of compensation on the other hand is linked to altruistic motivation, targeting the welfare of the receiver right away, but ignoring a possible perseverance of unfair behavior in the society.Thus altruistic compensation, besides being linked to altruistic motivation and closely related to the definition of altruism, just leads to a short sided welfare effect for the person supposedly in need, but does not have the intend to change the behavior or even harm or punish defectors of social norms and the society.In this study, we used trait altruism and trait anger as predictors for punishment behavior as well as for altruistic compensation.One reason to do so was to account for the problem of unexplained variance that may occur if one is only dealing with induced states in such a paradigm.Traits like anger and altruism may act as a heuristic for reactions in people.Hence traits may in some cases overshadow state manipulations that are given and therefore lead to systematic error variance, if only the states are considered as relevant.Another reason to include stable dispositions of people in this experiment was to investigate the reaction patterns related to relevant traits if different behavioral options are given.As trait anger was always related to punishment behavior, the behavioral options do not seem to have an impact on people with high trait anger.They will likely try to punish defectors, even if they have the additional chance to help the victims of the defection.This kind of behavior is not to be seen as a bad thing per se, as long as the punishment stays in appropriate boundaries.Some advantages and disadvantages of the 
punishment behavior and altruistic compensation for the individual and the society have been shortly mentioned above, and the purely altruistic act of compensation is not likely to cause a change in the behavior of the defector and his impact on society.If only the option to compensate is present, there is no specific reaction pattern for high or low trait anger.Hence they will just help as everyone else would do with no specific deviation from it.Trait altruism however only shows a clear influence if one is able to help the victim and to punish the defector.A specific negative relation to punishment was present and a positive relation on helping the victim was found.But if punishment or compensation was the only behavioral option, no specific relation was found."Therefore high trait altruism people as well as low altruism people will also go for the punishment like everyone else, if they don't have any other option to react.These findings lead to the assumption, that if one only provides the behavioral option of helping, everyone may choose this option, independent of their trait disposition, as long as a motivation to show any reaction is present.For punishing behavior on the other hand, trait anger seems to always play an important role.This leads to simple practical implications concerning behavioral options and confounds of trait motivation that could be used in our society.As long as one is only providing benevolent behavioral options, everyone may choose them in order to satisfy their urge to react according to their traits.But as soon as some other options are available, the traits will take their influence in choosing relevant behavioral options like punishment in the case of trait anger.Therefore every association, society or movement should consider whether they want to engage in purely benevolent actions like for example cleaning the shores in order to get everyone involved in these actions, or whether they want to provide also more punishment prone activities like for example blockading or even attacking an oil platform, which would automatically lead to a division of their members, likely based on traits and motivations like anger and altruism.Also, calling destructive and aggressive acts altruistic may not be the right labeling, for they are most likely driven by anger or trait anger and should therefore not be called altruistic.One limitation of the present work is the confounding of the different options of interaction with the order in the different blocks.As the participants experienced the two blocks with the options of punishment and compensation first, before they learned about their more complex task to do both, they all experiences the option where both behavioral opportunities were present at the end of the paradigm.This order was chosen to make sure that the participants are able to deal with the more complex task on one hand, on the other hand, that they do not feel bored in the blocks after the complex task with the simpler ones.Also in order to not work against the impulsive component of anger, the order of the block during the paradigm was chosen with the punishment always being the first option in block three or being the first behavior to execute in the paradigm in block one.A second limitation is the time constraint that was implemented for the decision of the participants.This may have an influence on the amount of punishment and compensation that is shown by the participants.Rand argues that more intuitive driven paradigms, as operationalized with the 
time constraint in our paradigm, lead to more cooperation.Also, Rand and colleagues found that this effect of intuition driven paradigms is true for women, but not for men.However Capraro and Cococcioni showed that a strong time constraint may also lead to decreased cooperation via ego-depletion.But these finding do not lead to a clear prediction of the bias in the present paradigm, because no third party economic game was included in both studies.One may only guess that the altruistic compensation might be higher under time pressure, for it is more similar to the cooperation behavior that was assessed in the meta-analysis by Rand than altruistic punishment.However Sutter et al. could show that a tight time constraint leads to more rejection and therefore altruistic punishment in the ultimatum game, although the effect vanishes with repetition.Therefore, we could also expect an initial higher altruistic punishment in third party games under the time constraint that is implemented here as we would expect without it.But as the bias should influence both altruistic punishment and altruistic compensation in the same manner, the time constraint should not add systematic error variance to the findings.Another limitation of the present study is the sample size.However, as the power of the study was estimated, we are confident, that this work might contribute to the field none the less.Also, the reliability of some scales involved in this study was rather low and this may influence also the reliability of the conclusions drawn from the data.Importantly recent studies using compensate only or punishment only paradigms have systematically confounded the motives of trait altruism and trait anger.Our findings are in line with previous results suggesting a strong relation between anger and altruistic punishment.If given the choice, high trait altruists seem to prefer compensation, which is perfectly well in line with the narrower view and definition of altruism being revealed by a voluntary action of benefit to another person without the intention of harming other persons.Accordingly, our results corroborate the view that different kinds of motives and traits may be hidden behind third party punishment behavior and that altruistic punishment is not related to altruism, if an option of compensation is available.Johannes Rodrigues: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.Natalie Nagowski: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data.Patrick Mussel: Analyzed and interpreted the data.Johannes Hewig: Conceived and designed the experiments; Analyzed and interpreted the data.This work was supported by the German Research Foundation and the University of Wuerzburg in the funding programme Open Access Publishing.Also, it was funded by the European Union through the project “Individualisierung Digital” in the Europäischer Fonds für regionale Entwicklung.The authors declare no conflict of interest.No additional information is available for this paper.Supplementary content related to this article has been published online at https://doi.org/10.1016/j.heliyon.2018.e00962.
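As a compact summary of the payoff rules spread over the Methods above, the following sketch encodes one trial of the three blocks as we read the description; all names are ours, not the authors' code, and the per-option caps reflect the statement that the resulting amounts always stayed within the same boundaries as in the single-option blocks.

```python
def resolve_trial(offer_kept: int, punish: int, compensate: int, block: str) -> dict:
    """One third-party trial (our reading of the Methods, not the authors' implementation).

    offer_kept: cents the fictive dictator kept out of 8 (8, 6 or 4).
    punish:     cents the observer spends to reduce the dictator's payoff (1 cent removes 1 cent).
    compensate: cents the observer spends to raise the receiver's payoff (1 cent adds 1 cent).
    block:      'punish_only', 'compensate_only' or 'both' (twice the budget in 'both').
    """
    if block == "punish_only":
        compensate = 0
    elif block == "compensate_only":
        punish = 0
    budget = offer_kept if block != "both" else 2 * offer_kept
    # keep the dictator's and receiver's payoffs within the same boundaries as in blocks 1 and 2
    if punish > offer_kept or compensate > offer_kept or punish + compensate > budget:
        raise ValueError("spending exceeds the budget allowed in this block")
    spent = punish + compensate
    return {
        "dictator": offer_kept - punish,
        "receiver": (8 - offer_kept) + compensate,
        "spent": spent,
        # relative spending (spent / available), analogous to the normalisation in the Methods
        "relative_spent": spent / budget,
    }

# Example: the dictator keeps 6 of 8 cents; the observer punishes 2 and compensates 3 in block 3.
print(resolve_trial(6, punish=2, compensate=3, block="both"))
```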
Altruistic punishment and altruistic compensation are important concepts that are used to investigate altruism. However, altruistic punishment has been found to be correlated with anger. We were interested whether altruistic punishment and altruistic compensation are both driven by trait altruism and trait anger or whether the influence of those two traits is more specific to one of the behavioral options. We found that if the participants were able to apply altruistic compensation and altruistic punishment together in one paradigm, trait anger only predicts altruistic punishment and trait altruism only predicts altruistic compensation. Interestingly, these relations are disguised in classical altruistic punishment and altruistic compensation paradigms where participants can either only punish or compensate. Hence altruistic punishment and altruistic compensation paradigms should be merged together if one is interested in trait altruism without the confounding influence of trait anger.
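To make the analysis pipeline concrete (mean relative spending per condition regressed on a trait score), here is a schematic re-analysis on a small, entirely hypothetical data frame; the column names and numbers are invented, a real analysis would use all 57 participants, and statsmodels is only one of several equivalent tools.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: one row per trial per participant.
trials = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3],
    "condition":   ["punish_only", "both_punish"] * 3,
    "spent":       [2, 1, 4, 3, 0, 0],
    "available":   [6, 6, 8, 8, 4, 4],
})
traits = pd.DataFrame({
    "participant":    [1, 2, 3],
    "trait_anger":    [14, 21, 9],    # e.g. STAXI trait-anger score (illustrative values)
    "trait_altruism": [22, 17, 27],   # e.g. PTM-R altruism subscale score (illustrative values)
})

# Relative spending corrects for the different budgets across offers (spent / available).
trials["relative_spent"] = trials["spent"] / trials["available"]

# Mean relative spending per participant and condition, merged with the trait scores.
means = (trials.groupby(["participant", "condition"], as_index=False)["relative_spent"]
               .mean()
               .merge(traits, on="participant"))

# One simple linear regression per condition, e.g. trait anger predicting punishment.
punish = means[means["condition"] == "punish_only"]
model = smf.ols("relative_spent ~ trait_anger", data=punish).fit()
print(model.params)   # raw intercept and slope; the paper reports standardised betas
```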
195
Adult attachment style and lateral preferences in physical proximity
Bowlby characterized the relationships between children and their caregivers in terms of “attachment”, finding that physical proximity plays an important role in attachment.It is thought that the physical proximity of the caregiver provides a sense of safety, especially when a child is exposed to situations causing fear or anxiety.An internal working model, constructed based on the secure base arising from the tie with a caregiver in childhood, indicates the nature of the relationship between self and others and is thought to function through one’s whole life.Crowell et al. developed adult attachment scale based on attachment scale for children created by Ainsworth et al., who proposed three attachment styles.Crowell et al.’s scale gave subjects forced-choice.Although a number of scales were developed, it was not clear which scale was the most appropriate for adult attachment study, partly due to the different number of factors used in scales.Brennan et al. conducted factor analysis of the responses of over 1000 subjects and found that two statistically orthogonal dimensions underlay pervious adult attachment scales.They thus created Experiences in Close Relationship, providing four attachment styles, the reliability and effectiveness of which have been supported by subsequent studies.There are well documented differences in the modes of communication and relationships among the four attachment styles.Secure subjects have low scores in both dimensions.These subjects typically seek help appropriately when they need it, and can successfully recover by using the support.Preoccupied subjects with high anxiety and low avoidance scores are typically worried about abandonment from their partners, and seek a high degree of intimacy with their partners.Dismissing-avoidant subjects with low anxiety and high avoidance typically do not want to establish close relationships with others mentally or physically, even in a romantic context.They do not ask for help even when they encounter problems, and try to rely on themselves.Fearful-avoidant subjects, with both scores high, have ambivalent feelings.They typically want to be familiar with their partners, but they are not close to their partners for the reason that they cannot believe in others.Some researchers investigated the relation between embodied cognition of physical proximity and attachment.Coan et al. 
reported that physical touch by a romantic partner could decrease one’s physical pain, although they did not address the question of laterality explicitly.Fraley and Shaver observed the behaviors of romantic couples when they separated at the airport, and reported different attachment behaviors among attachment styles in the situation, with some couples seeking physical proximity with high frequency.Here, again, laterality preference was not explicitly addressed.As described above, two attachment dimensions could change one’s preferred physical distance to the attachment figure.Tucker and Anders observed nonverbal behaviors of romantic couples and found that insecure subjects typically do not feel comfortable with physical closeness.Oxytocin is a peptide hormone, which increases a level of subjects’ trust to others, producing a mental closeness and affiliative behaviors.The number of oxytocin receptors is different among attachment styles.Oxytocin is released by physical touch.The frequency and quality of physical proximity from a caregiver in childhood is different among attachment styles, possibly providing one of the reasons why the sensitivity of physical touch depends on attachment styles.Thus, physical proximity plays a part in adult attachment, where the IWM might serve as a mental base for it.In addition to parameters such as physical distance, lateral preference is likely to play an important role in physical proximity: a subject may prefer to be on the left or right side when in physical proximity.Handedness is one factor which is likely to affect lateral preference significantly.Genetic factors might affect the nature of lateral preferences that an individual develops.There might be a kind of cognitive spontaneous symmetry breaking, where a subject, who originally did not have any specific laterality preference, might acquire a weak initial preference, which becomes stronger as the preference is reinforced and consolidated.In addition, there could be some cultural influences, e.g. 
on which side of the road a pedestrian is expected to walk.Furthermore, many studies reported the physical laterality of interaction with an attachment figure in childhood.A mother carrying a baby in her arm tends to put the baby on her left side with the right side of the infant’s body touching its mother in many cultures investigated.The left cradling bias was not attributed to handedness.One of the reasons behind such laterality preference might be so that mother’s heartbeat could give babies a sense of security through laying their ears against her left chest.Alternatively, mothers might cradle on her left side, bringing their infant into her left view, so as to be able to interpret the infant’s emotions and respond to his or her needs employing the right hemisphere of the cortex.It is important to build attachment so that mother could respond to infants’ signals appropriately.Indeed, right-cradling mothers are less sensitive to the signals from infants and likely to have more anxiety.Thus, different embodiment modes of attachment in one’s infancy might give rise to one’s lateral preferences in physical proximity, which, in turn, would result in different attachment styles in adulthood.The laterality manifest in child attachment might affect the laterality preferences of adults, where they may prefer specific body sides to get a sense of security.A study showed that humans recognize an affective intonation more effectively in hearing by the left ear compared to the right ear.Emotions are expressed more saliently on the left side of our faces than that on the right side, making it easier to judge the partner’s mood from the left side of the face rather than the right side.High anxiety subjects typically exhibit hyperactivity to the partner’s attitude, while high avoidance subjects do not seek intimacy with the partner.The dynamics of intimacy seeking and affectional relationship building in response to the partner’s moods are different among attachment styles, possibly affecting the laterality preferences in physical proximity.Having a developed lateral preference in physical proximity would stabilize relationships, especially when the lateral preferences of the partners are compatible.On the other hand, possible conflicts of lateral preferences might necessitate negotiation, e.g. 
by discussing lateral preferences explicitly.Based on the nature of the intimacy between the partners, including lateral preference matching and implicit or explicit references to laterality preferences in their conversations, the development and metacognition of lateral preferences would be affected.There is therefore a rationale for studying the correlations between lateral preferences and attachment styles.However, to the best of our knowledge, there has not been a study which addressed this point explicitly.Here we explore the correlations between lateral preferences in physical proximity and the level of two-dimensional scores of attachment styles, which have been reported in the literature to correlate with different modes of personal relationships.In the present study, participants were asked to answer some questionnaires about the side preferences when in contact with their romantic partners, in addition to questions for adult attachment style scale.We investigated whether there is a correlation between the side preferences when contacting a romantic partner and two orthogonal dimension scores of Japanese version of ECR created by Nakao and Kato.Our analysis suggests that high anxiety subjects felt less comfortable than low anxiety subjects, while avoidance scores did not correlate with degrees of discomfort."We discuss the correlation between the attachment styles and the side preference of physical contact with one's partner.The participants were recruited on the web.After receiving instructions regarding the general nature of the experiment, they answered some questions about lateral incidents or preferences, based on the framework of ECR.At the end of the questionnaire, the subjects signed forms giving permission for the data to be used in the analysis.Data from subjects who had a non-typical reason for laterality preference were excluded from the analysis.Finally, the data of 203 subjects were put to analysis.150 subjects had romantic partners at the time of investigation.One of the objectives of the research was to explore the correlations between the two dimensional scores of attachment scale and whether one was aware of the lateral preference when communicating with the romantic partner.Accordingly, the subjects were asked whether they had ever talked about the side preferences with their partner.Questionnaires about lateral incidents or preference consisted of 8 items.The questions were given in Japanese.The subjects were asked the frequency and side preferences of physical proximity with one’s romantic partner in various daily situations.Q4,and Q5,were supplementary questions, designed to ask the side preference when communicating indirectly with partners.Participants were given four choices.We excluded the data of participants who had answered “never experienced” for a given question, so that our final data set contained only participants who have had some experiences in the designated situations.In addition to these questionnaires, participants rated their degree of confidence for each answers on a 7-point scale.The sureness scale was introduced as a measure of how much subjects were aware of the designated physical contexts in the romantic relationship.Side preferences could be physically consistent among partners or inconsistent.The participants were asked two further questions: One was on whether they had ever talked about the lateral preference with their romantic partner.The other asked the degree of discomfort if their own lateral preference was inconsistent with 
that of their partner on a 7-point scale. We used the Japanese version of the ECR scale, constructed from 26 items, as described in Nakao and Kato. The ECR consists of two components: anxiety and avoidance. In the version used in the study, there were 9 anxiety questions and 17 avoidance questions. The mean values of these scores for each participant were calculated. The attachment style can be divided into four types according to the score levels in these two dimensions. Although there is no guarantee that the two dimensions are statistically independent, the four types are considered in the literature to offer a robust classification of subjects in describing variabilities in their modes of personal relationships. The wealth of description in the literature based on the four styles makes it possible to compare the results of the present study with previous findings. The two-dimensional scores have been used to explain the relationship between the comfort level of cuddling with a partner and attachment style. In addition, these dimensions could indicate the nature of the self and other models, where each model would have its respective function in situations where we communicate with others and build attachment relationships. These considerations motivated us to investigate the relations between the two-dimensional scores of the ECR and the lateral preference scales. We carried out the Edinburgh Handedness Inventory to check whether these lateral preferences depended on handedness. The inventory is constructed from 10 items. We calculated each participant's score according to Oldfield's procedure, i.e., as [(R − L)/(R + L)] × 100. The mean score was 86.7. We analyzed the effect, if any, of handedness on responses. There were no significant effects except for Q4. In the light of this result, we excluded Q4 responses from further analysis, as the effects of attachment styles could not be isolated from those of handedness. Two attributes were used to define pairs of subgroups, each representing different states concerning the relationship with the partner. The first pair is the "discussion" and "no discussion" groups, depending on whether one has ever discussed one's own lateral preference with his or her partner or not. There was no significant difference in the proportion of female participants between the groups. The second pair is the "partner" and "no partner" groups, depending on whether one currently has a romantic partner or not. There was no significant difference in the proportion of female participants between the groups (χ2 = 2.72, p = 0.10). We investigated the correlations between lateral preference and anxiety or avoidance scores. Cronbach's alphas were high. The mean score of anxiety was 30.1, while the mean score of avoidance was 56.8. These scores were not correlated. Analysis of variance did not reveal any significant differences in anxiety scores among the response options for any of the questionnaire items. A gender difference was not found (t = 0.32, p = 0.75). There was no dependence on discussion (t = 1.35, p = 0.18). The anxiety score for the no partner group was significantly higher than that for the partner group (t = 4.09, p < 0.001, Fig. 1a). ANOVA showed that the avoidance score differed among the response options in Q3 (F = 4.20, p = 0.04). The avoidance score of right was higher than that of left in Q3 (t = 2.05, p = 0.04, Fig. 2). The gender difference was not significant (t = 0.06, p = 0.95). The avoidance score of the no discussion group was significantly higher than that of the discussion group (t = 3.7, p < 0.001, Fig. 1b). We found that the avoidance score of the no partner group was higher than that of the partner group (t = 3.86, p < 0.001, Fig. 1a). We divided participants into four attachment styles according to the procedures described in Brennan et al. Our data contained 63 secure, 43 preoccupied, 41 dismissing-avoidant, and 56 fearful-avoidant subjects. We analyzed gender, experience, and partner differences in the four attachment styles. Ratios of attachment styles were not different between male and female (χ2 = 3.63, p = 0.30). There was a significant difference in the ratios of attachment styles between the discussion and no discussion groups (χ2 = 13.39, p < 0.01, Fig. 3). The ratio of the secure style in the discussion group was higher than that in the no discussion group (χ2 = 4.37, p = 0.04, Fig. 3). The ratio of the fearful-avoidant style in the no discussion group was higher than that in the discussion group (χ2 = 6.48, p = 0.01, Fig. 3). We found that the ratios of attachment styles were different between the partner and no partner groups (χ2 = 27.25, p < 0.001, Fig. 4). The ratio of the secure style in the partner group was higher than that in the no partner group (χ2 = 24.5, p < 0.001, Fig. 4). The ratios of the preoccupied and fearful-avoidant styles in the no partner group were higher than those in the partner group (preoccupied: χ2 = 5.16, p = 0.02, fearful-avoidant: χ2 = 4.63, p = 0.03, Fig. 4). We analyzed the lateral preferences of participants when in physical proximity with a partner by chi-square test. Participants had lateral preferences in all situations listed in the questionnaire. Participants reported their own physical side incidents and preferences in various situations, including when they communicated with partners. There were no gender, experience, or partner differences in the absence or presence of lateral preferences for each questionnaire item. We conducted an analysis of the ratio of having lateral incidents and preferences to investigate differences in responses between attachment styles. We did not uncover significant differences among styles in any of the questionnaire items by ANOVA. However, the ratio of having a laterality preference in the preoccupied style was marginally higher than that in the secure style in Q7 (χ2 = 3.66, p = 0.06, by chi-square test). In order to further analyze the effects of laterality, we calculated the average of responses with the coding of left = −1, both in equal proportion = 0, right = 1 for the laterality questions. An average close to 1 would indicate that the participant was apt to prefer to be on the right side. This analysis showed that males tend to prefer the left side (t = 2.92, p < 0.01, Fig. 5a), while females tend to prefer the right side (t = 5.44, p < 0.001, Fig. 5a). There was a significant difference in the average between male and female (t = 5.52, p < 0.001, Fig. 5). Analysis of each question indicated the same gender differences in Q1, 2, 7, and 8 by t-test (Q1: t = 3.62, p < 0.001, Q2 "When you hold hands with your partner, which hand do you often hold?": t = 5.04, p < 0.001, Q7: t = 4.78, p < 0.001, Q8 "When you walk with your partner, on which side of your partner do you feel more sense of being secure?": t = 6.35, p < 0.001). The discussion group showed no deflection (t = 0.30, p = 0.77), while the no discussion group did (t = 2.78, p < 0.01). The experience difference was not significant (t = 1.07, p = 0.29). The partner group does not have a deflection (t = 1.40, p = 0.16), while the no partner group prefers the right side (t = 2.43, p = 0.02, Fig. 5b). The partner difference was not significant (t = 1.30, p = 0.19). We hypothesized that there would be different tendencies in the participants' side preferences when they were with their romantic partners, depending on the score of anxiety or avoidance. A strong laterality preference in the subjects would result in a high degree of confidence in their responses. From this perspective, we analyzed the participants' sureness about their responses. Neither the anxiety score nor the avoidance score correlated with the average sureness. The sureness of the response to Q5 and the score of anxiety were significantly correlated, and the sureness of Q8 was marginally correlated with the anxiety score. Otherwise, there were no correlations between sureness and anxiety. No correlation with the score of avoidance was found. There were no correlations between the two-dimensional scores and mean sureness. We analyzed whether the scores of sureness were different among attachment styles. ANOVA revealed differences in the sureness level in Q1, Q5 and Q8 (Q1: F = 3.01, p = 0.03, Q5: F = 2.74, p = 0.04, Q8: F = 3.83, p = 0.01). Post hoc Tukey's tests showed that secure style and fearful-avoidant style subjects had significantly higher sureness levels compared with dismissing-avoidant style subjects in Q1. We tested Q5 and Q8 in the same way. The fearful-avoidant style subjects had significantly higher sureness levels compared with the dismissing-avoidant style subjects in Q5, while the secure style and preoccupied style subjects had higher sureness levels compared with dismissing-avoidant style subjects in Q8. The mean sureness score was different among attachment styles by ANOVA (F = 2.67, p = 0.049), while post hoc Tukey's comparisons did not indicate significant differences. However, by t-test, the mean sureness of dismissing-avoidant subjects was significantly different from that of both secure and fearful-avoidant subjects. We investigated effects of gender, experience and partner status. Gender and partner differences were not found, while the mean score of sureness in the discussion group was higher than that in the no discussion group (t = 3.74, p < 0.001, Supplementary Fig. S1). We checked the differences for each questionnaire item. The scores of sureness of males in Q3 and Q5 were higher than those of females. In Q2 and Q3, the scores of sureness in the partner group were higher than those in the no partner group. Experience differences were significant except for Q3 and Q6. We analyzed the level of discomfort that participants would feel when their own lateral preference was physically inconsistent with that of their partners. A higher score for discomfort when one's own side preference is inconsistent with that of a partner would indicate the subject's awareness of his or her own lateral preferences. The level of discomfort was not correlated with either the anxiety or avoidance scores. There was also no difference among attachment styles (F = 0.90, p = 0.44). There were no gender or partner differences (gender: t = 0.03, p = 0.98, partner: t = 0.47, p = 0.64). The score of discomfort in the discussion group was significantly higher than that in the no discussion group (t = 2.89, p < 0.01, Supplementary Fig.
S2).We checked the correlation between the mean sureness and the score of discomfort when on the wrong side of preference.The mean sureness significantly correlated with the discomfort level.In this study, all participants had lateral preference as regards which own body side preferably contacts with their partners.Although we hypothesized that there would be correlations between the lateral preference and two-dimensional orthogonal scores of attachment scale, we could not get confirming results.We can consider some possible reasons for this absence.In our study, we asked participants which side they would prefer to be when walking, holding hands, and sleeping with a partner.In these situations, the subjects would typically feel a sense of relaxation, rather than uneasiness.The sensitivity to physical contact with a romantic partner may be higher under conditions when subjects are feeling a sense of anxiety).If there were any items in the questionnaire referring to a situation in which participants would typically experience uneasiness, we may have been able to confirm some correlations with lateral preferences to the level of anxiety and avoidance scores.However, these situations do not occur frequently in life.It would be difficult to investigate a subject’s memory for body side incidents in such specific situations by self-report.In order to investigate situations when subjects become more sensitive about the laterality of physical proximity with their partners, we need to conduct a behavioral experiment.There were experience dependent differences in the avoidance score and the proportion of attachment styles.In situations involving inconsistency about their own lateral preferences, the discomfort level of discussion group was higher than that of no discussion group.It is expected that one who feels discomfort would try to resolve one’s own dissatisfaction and discuss it with one’s partner.The anxiety score was marginally correlated with the level of discomfort.Furthermore, the avoidance score of no discussion group was higher than that of discussion group, while there was no experience dependent difference in the anxiety score.In Bowlby’s view, attachment styles were split depending on whether the internal models of self and other were positive or negative.Secure or preoccupied subjects who have lower avoidance score can typically have belief in getting support from an attachment figure, and they can handle their worries to conquer it.Meanwhile, high avoidant subjects has a negative other model, so they typically think others, even if he or she is an attachment figure, cannot help them when in need.Indeed, high avoidance subjects, including dismissing-avoidant subjects, are unwilling to face one’s own negative thoughts, because they are not good at seeking help from their attachment figures.Hence they may not want to discuss their own lateral preference in order to prevent a conflict if their own side preference was inconsistent with their partner.Subjects of no discussion group may not express own discomfort in disregard of their senses.The degree of sureness was lower in the no discussion group than it was in the discussion group.In fact, avoidant people seldom seek physical proximity or contact even when they separate from their romantic partners.Even when one is reluctant to part from one’s partner generically, avoidant people do not care much for physical proximity.High avoidant people may not care about physical proximity in the first place, for they are unwilling to get intimate 
with an attachment figure.Our results showed that the higher the anxiety score was, the more participants would tend to feel discomfort.If high avoidance people tend to ignore the negative emotion, it may be expected that the analysis does not suggest a correlation between the avoidance score and the level of discomfort.One of the reasons of the difference in sureness level between experience groups might be that the no discussion group subjects had less opportunities to discuss and be aware of the laterality preferences.The sureness of high avoidance subjects might have been affected by their low interest in the physical contact because of their trait.High avoidant people show low brain response to social reward, e.g. other’s smile.In future works, neuroscience approach may be able to explain the association between avoidant traits and attachment behavior by analyzing from the aspect of reward systems, as they process physical proximity and other aspects of interpersonal communications.In our study, preoccupied subjects tended to have larger lateral preferences than secure subjects in Q7.The two attachment styles differ in the level of anxiety score."In comparison with high avoidant people, high anxiety people exhibit extreme concerns about others' attitudes and get comfortable with physical proximity.Preoccupied subjects have negative self and positive other model, so they are more sensitive to others’ responses to them.There was a marginal association between the degree of anxiety and sureness in Q8.This result would suggest that high anxiety subjects tend to assign more cognitive resources to lateral preference in physical proximity.Left amygdala of high anxious subjects including preoccupied style reacts to negative feedback from others such as angry faces.They tend to overreact to the emotion of others and are frequently sensitive to someone’s mood, so that they may care about physical proximity to make it easier to notice the change of partner’s attitude.Subjects of the partner group had lateral preference, whereas those of the no partner group did not.Taken together with the result of higher anxiety score in the no partner group, this difference in lateral preference may reflect the different requirements of the two groups on physical proximity,In our analysis, the ratio of secure subjects was higher in the partner group than in the no partner group, while there were partner differences in two dimensional scores.We can make two observations from these results.One possibility is that the existence of a romantic partner makes one secure.The other is that secure subjects find it easy to make a romantic partner and keep the relationship longer.Although attachment styles are generally taken over from childhood to adulthood, Lewis, Feiring and Rosenthal found that a concordance rate of attachment style between childhood and adulthood was not so high.The attitude to life events varies between individuals.For example, IWM of married couple having a positive evaluation for marriage is changed to “secure” after the marriage.Some studies indicate that IWM might be more variable than previously thought, although it is not still clear what elements causes change in IWM.A romantic partner will not be an attachment figure soon after the beginning of a relationship: A certain amount of time is needed to build attachment with a romantic partner.The present result does not necessarily support the hypothesis that only secure subjects can easily build a romantic relationship.Subjects who reported 
preferences for the right side when they sleep beside their partner had higher avoidance score than those who preferred the left side.As described before, participants might learn the side for secure base in the relationship with their mothers if mothers had cuddled on her left side.High avoidance subjects do not seek attachment figures as much as high anxiety subjects, and they typically do not think their romantic partners as base for security.High avoidance subjects may not take their partner into their right side while they sleep.In our study, there were gender differences in lateral preferences.Males tended to prefer the left side, while females tended to prefer the right side.There are several findings which could be related to this effect of gender on lateral references.Subjects can distinguish an emotional intonation more effectively in hearing by the left ear than by the right ear.Females are good at perceiving other’s emotion from facial expression.The left side of one’s face expresses emotional states more intensely, with the right hemisphere thought to regulate self-affection.If a male stands on the right side, he can estimate his partner’s mood by voice, while a female could judge her partner’s emotion from his left face more accurately.Understanding the partner’s emotional state and to respond in an appropriate manner is important for building attachment.Lateral preference studies have been conducted mainly in the parenting context, such as cradling babies.Not only female but also male tend to cradle babies on her or his left side.Interestingly, mother’s left breast is more sensitive than the right one.Salk suggested that the reason for this phenomenon was because a baby could hear the caregiver’s heartbeats on the left breast.Heartbeats have a role in relaxing and reassuring babies.Females may unconsciously offer her left side consistent with possible future events of having a baby with the male partner.In the questionnaire presented in this experiment, the participants were not explicitly asked about their sexual tendencies.The effect of sexuality in the lateral preferences is one of the possible research themes to be pursued in the future.In summary, we did not find explicit evidence to suggest that the two-dimensional orthogonal scores of ECR contributed to one’s lateral preferences in the situation asked in the questionnaires."The attachment styles were correlated with some aspects of the romantic relationship.Preoccupied subjects tend to have lateral preference when they were with their partner.These findings would provide evidence as to the role of laterality preferences in the physical proximity, which plays an important role in adult attachment.Our study have suggested variabilities in the effect of laterality in the embodiment of interpersonal communication among the four attachment styles.Since embodied cognition is an important element in attachment, there need to be further studies into the presence, awareness, and inter-partner communications of lateral preferences.Data and programs used for analysis are available at https://osf.io/hyxc3/,The authors have no competing interests.
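As an illustration of the analyses reported above (coding each laterality item as left = -1, both = 0, right = 1, testing for deflection from zero, and comparing attachment-style ratios between groups), the sketch below shows how such computations could look in Python. The file and column names are assumptions made for illustration; the actual data and analysis programs are those available at the OSF link above.

```python
# Minimal sketch of the laterality-index and group analyses described above.
# The CSV layout and column names (side_q1..side_q8, gender, has_partner,
# attachment_style) are illustrative assumptions, not the OSF dataset schema.
import pandas as pd
from scipy import stats

df = pd.read_csv("laterality_survey.csv")  # hypothetical flat file, one row per participant

# Code each laterality item as left = -1, both = 0, right = +1 and average
# across items; values near +1 indicate a right-side preference.
side_map = {"left": -1, "both": 0, "right": 1}
item_cols = [f"side_q{i}" for i in range(1, 9)]
df["laterality_index"] = df[item_cols].replace(side_map).mean(axis=1)

# One-sample t-test against 0 ("no deflection") within each gender, mirroring
# the male-left / female-right pattern reported above.
for gender, sub in df.groupby("gender"):
    res = stats.ttest_1samp(sub["laterality_index"], popmean=0)
    print(f"{gender}: t = {res.statistic:.2f}, p = {res.pvalue:.3f}")

# Between-group comparison of the index (partner vs. no-partner group).
with_partner = df.loc[df["has_partner"] == 1, "laterality_index"]
no_partner = df.loc[df["has_partner"] == 0, "laterality_index"]
print(stats.ttest_ind(with_partner, no_partner))

# Chi-square test of attachment-style ratios between the two partner groups.
table = pd.crosstab(df["attachment_style"], df["has_partner"])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```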
The attachment scale is constructed from two components (anxiety and avoidance) that have been treated as salient measures in previous studies. Recent studies have suggested associations between sensitivity to physical warmth and the anxiety score of the attachment scale. Some researchers also suggest that the degree of one's comfort with physical proximity depends on attachment style, attributing differences to the number of oxytocin (a neuropeptide released by physical touch) receptors. Lateral preference is an important aspect of physical proximity, coupled with the lateralization of visual, emotional, and other cognitive systems. However, few studies have investigated the relationship between attachment scale scores and one's lateral preferences in physical proximity. Here we surveyed subjects' preferences regarding positional relations with their romantic partner in everyday situations and examined the association with attachment scale scores. Our results show that the presence or absence of a partner correlates with different relations between attachment styles and subjects' awareness of lateral preferences. Lateral preferences in physical proximity may play an important role in attachment in adulthood.
196
QEEG in affective disorder: about to be a biomarker, endophenotype and predictor of treatment response
EEG reflects the electrical activity of the brain by recording through the scalp presynaptic and postsynaptic potentials arising from simultaneous firing of a group of neurons.Cortical discharges of 1.5 mV amplitude are amplified and decomposed with Fourier transformation.EEG, as opposed to fMRI and PET, reflects neuronal activity directly rather than using indirect measures such as blood deoxygenation and glucose utilization.Its temporal resolution is greater than both.QEEG can be conceptualized as the numeric interpretation of EEG waves and brain mapping as the two dimensional visualization of this interpretation.LORETA provides a three dimensional evaluation.It is generated by an algorithm for intracortical EEG computations.Spectral power density provides an estimate of radial current flow in localized brain regions.It also eliminates the confusion resulting from differences in reference electrodes.Therefore, this new approach is reported to yield more direct results.Descriptive diagnostic systems ignore neurobiological heterogeneity.The most crucial result of the STAR-D trial is the demonstration of the limited efficacy of pharmacotherapies and psychotherapies, especially in the long term.Thus situated, lack of effective treatment options makes it a requirement to try personalizing the existing treatment options so as to choose the best option for a specific individual.As a matter of fact, a single DSM diagnosis matches more than one treatment option.Personalized medicine is focused on biomarkers and endophenotypes.Biomarkers change among disease subtypes.Endophenotype is the heritable form of biomarker, which possesses genotypic and phenotypic information.Hitherto, personality and temperament questionnaires, cognitive function tests, neurophysiological tests, neurotransmitter metabolites, pharmacogenomics and pharmacometabolomics were suggested and studied as candidates.Though all proved distinctive, reliable and applicable to some extent, none gained firm ground in clinical practice.Heritability of some QEEG parameters such as alpha peak frequency and alpha spectral power density was shown in twin and family studies.The heritability quotients of these two variables were determined as 81 and 79% respectively.There rates are superior to p300 amplitude and latency values previously determined as 60 and 51% respectively.Alpha peak frequency was found to be 1.4 Hz slower in relation to COMT gene Val/Val genotype and was accepted as an endophenotype for treatment resistant depression.Also low-voltage alpha is associated with HTR3B, CRH-BP, GABA-B and BDNF Val66Met polymorphisms and heritability is reported to be 79–93% for these variables.Decrease in frontal alpha and theta activity, which has been conceptualized as impaired vigilance, is reported to predict response to antidepressants in depression.According to these studies nature and nurture act in tandem.Many EEG indicators have the capability to distinguish between neuropsychiatric conditions.This was described first for depression and ADHD, followed by bipolar affective disorder and dementia."A cautionary point is that DSM's ‘impaired daily function’ criterion is crucial in order to decide whom to treat; like the distinction between an artist and a CEO with a similar degree of impulsivity or between a street dweller and a petty criminal.Lemere remarked ‘the ability to produce a quality alpha wave is associated with the affective repertoire of the brain’.Increased or decreased alpha power density is a criterion for depression.Lieber 
and Newbury described two groups of depressive patients according to QEEG data in a 1988 study on 216 inpatients.The first group showed an increase in activity along with beta and/or slow wave whereas the second group showed an increase in slow wave activity.This data, in conjunction with the results of later research, can currently be interpreted as the first group having bipolar depression."QEEG's feasibility as a biomarker that can distinguish between unipolar and bipolar depression makes this interpretation noteworthy.The prognostic value of EEG has a long history.Slow wave EEG rhythm has been reported as a predictor and measure of clinical improvement under ECT.The induction level in delta band activity predicts the long term effect of ECT.Widespread slow wave activity is an index of antidepressant unresponsiveness.In a multicentered, large sample study that assesses treatment response for 8 weeks to escitalopram and venlafaxine with HAM-D, comparing cases with or without abnormal slowing of QEEG, treatment response rate to escitalopram was found to be 33% in cases with and 64% in cases without abnormal QEEG slowing; treatment response rate to venlafaxine was found to be 41% in cases with and 66% in cases without abnormal QEEG slowing.P3 and N1 latencies and amplitudes were differentiated in nonresponders in this study.Response solely to sertraline was observed when the decrease in alpha peak frequency was taken as a criterion.Increased theta band activity in rostral ACC activity predicts antidepressant response and yielded positive results in 19 of 23 studies.The treatments used in the studies mentioned in this metaanalysis are SSRI, TCA, TMS and sleep deprivation.Pretreatment rACC theta activity represents a nonspecific prognostic marker of treatment outcome in major depressive disorder.In their double-blind, placebo-controlled multicentered study, Pizzagalli et al. demonstrated that higher rostral anterior cingulate cortex theta activity at both baseline and week 1 predicted greater improvement in depressive symptoms in patients treated with either SSRIs or placebo, even when sociodemographic and clinical variables were controlled.According to their findings, of the 39.6% variance in symptom change, only 8.5% was uniquely attributable to the rACC theta marker.Bruder et al. 
have found pretreatment differences between SSRI responders and nonresponders regarding EEG alpha power or asymmetry.Treatment responders had greater alpha power than nonresponders and healthy subjects at occipital sites where alpha was most prominent.Responders showed less activity over right than left hemisphere, whereas nonresponders tended to show the opposite asymmetry.Neither alpha power nor asymmetry changed after treatment.According to their findings alpha power and asymmetry possessed reasonable positive predictive value but less negative predictive value.Increased or decreased parietooccipital alpha, described as hyperstable vigilance, predicts the response only to dopaminergic antidepressants and stimulants.Hyperstable vigilance can be described as increased but undistractible and targeted vigilance.On the other hand serotonin and dopamine transporter availability during long-term antidepressant therapy does not differentiate responder and nonresponder unipolar patients.There was no association between SERT and DAT availability.Alpha peak frequency is found to be slowed in depression unresponsive to TCA and also response rate to TMS is lower in these cases.In responsive cases alpha peak frequency is shown to be increased by 0.5 Hz at the end of the 4 th week.In a study using SPD mesures of median alpha value, positive predictive value was reported as 93.3% and specificity as 92.3% in 41 drug-free cases and no difference was found between SSRI ans SNRI use regarding prediction of treatment response.Response to 6-week paroxetine treatment was predicted by gamma synchonization in QEEG correlated with HAM-D in 18 drug-free cases.Discordance was defined as a severity variable in QEEG.It is the increase in relative activity while absolute activity is decreased.Cut off point is determined as 0.30.Although used frequently for theta and beta bands, it can be adapted to all other bands.Its validity is supported by PET studies and it is found to be correlated with low perfusion and metabolism.Discordance is an indicator of unresponsiveness to antidepressive treatment.Prefrontal theta cordance was reported to predict response to venlafaxine treatment in the first week in treatment-resistant depression.The same researchers reported that prefrontal theta cordance can also predict response to bupropion augmentation treatment and TMS.In the metaanalysis by Iosifescu et al. response prediction rate for SSRI, SNRI, TCA, TMU and ECT at the end of the first week was reported as 72–88%."In Spronk et al.'s regression model, better clinical outcome was characterized by a decrease in the amplitude of the Auditory Oddball N1 at baseline, impaired verbal memory performance was the best cognitive predictor, and raised frontal Theta power was the best EEG predictor of change in HAM-D scores.Arns et al. 
examined neurophysiological predictors of non-response to rTMS in depression.According to their results, non-responders were characterized by increased fronto-central theta EEG power, a slower anterior individual alpha peak frequency, a larger P300 amplitude, and decreased pre-frontal delta and beta cordance.In a relatively old study visual average evoked responses to four intensities of light were studied in hospitalized depressed patients receiving placebo, d-amphetamine, l-amphetamine, lithium and d- and l-amphetamine combined with lithium.The amount of increase in evoked potential amplitude or amplitude/intensity slope seen with amphetamine was also significantly correlated with the amount of increase in activation or euphoria ratings with amphetamine administration.These effects were most prominent in the P100 component that we have previously found to differentiate bipolar and unipolar depressed patient groups.Several studies show that the response to selective serotonin reuptake inhibitors can be successfully predicted by using the loudness dependence of auditory evoked potentials.Patients at the beginning of an antidepressant treatment who show an initially strong loudness dependence of auditory evoked potentials have a greater probability of responding to a serotonergic antidepressant, whereas patients with a weak loudness dependence will probably benefit more from a nonserotonergic agent.Recent electrophysiological studies of emotional processing have provided new evidence of altered laterality in depressive disorders.EEG alpha asymmetry at rest and during cognitive or emotional tasks are consistent with reduced left prefrontal activity, which may impair downregulation of amygdala response to negative emotional information.Dichotic listening and visual hemifield findings for non-verbal or emotional processing have revealed reduced right-lateralized responsivity in depressed patients to emotional stimuli in occipitotemporal or parietotemporal cortex.Individual differences of right-left brain function are related to diagnostic subtype of depression, comorbidity with anxiety disorders, and clinical response to antidepressants or cognitive behavioral therapy.In another study, responders to CT showed twice the mean right ear advantage in dichotic fused words performance than non-responders.Patients with a right ear advantage greater than healthy controls had an 81% response rate to CT, whereas those with performance lower than controls had a 46% response rate.Individuals with a right ear advantage, indicative of strong left hemisphere language dominance, may be better at utilizing cognitive processes and left frontotemporal cortical regions critical for success of CT for depression.The predictive power of QEEG on treatment response does not seem to be affected by gender.Were there findings on the contrary, they would be interpreted as originating from temperamental gender differences.In fact, among affective temperament types suggested as endophenotypes for affective disorders, depressive, cyclothymic and anxious temperaments are more common in females whereas hyperthymic and irritable temperaments are more commonly seen in males.While studies so far focused on the prediction of treatment response, the more important question is whether QEEG can predict recovery from depression.Tenke et al. 
asserted that the predictive capability of QEEG is lower for recovery and explained this with the failure of the previous treatment.There are case reports indicating that prefrontal theta cordance can rule out placebo response and dissimualtion and can predict manic shift.Our interpretation is that the condition called fade out of response to antidepressant is seen more frequently in bipolar depression and lower prediction rate may be due to these cases being in the bipolar spectrum.Should bipolar disporder be defined on the level of neuronal activity, incongruence between prefrontal and limbic activities must be mentioned.It is dysfunctional connectivity.It is the imparment in early P50 and N100 neural response.IFG activity decreases in mania whereas it returns to normal in depression and euthymia.Limbic activity is increased regardless of mood periods.Overactivation in medial temporal lobe distinguishes bipolar cases from cases of schizophrenia in tasks stimulating emotion and memory.It is shown to distinguish bipolar cases from unipolar cases and bipolar disorder type I cases from bipolar disorder type 2 cases in small-sample studies.Manic episode exhibits more frequency variation than depressive episode.It is frequently seen as increased beta activity and left dominant frontal alpha asymmetry.Similarly, frontal asymmetry is observed to persist in hipomania and, in a trait-based manner, in periods of remission.In a longitudinal follow up study on bipolar cases, severity of manic symptoms predicted worse insight without direct or moderating influences of global cognitive abilities.However, it must be stressed that the findings of this study are limited to six months of follow up.Lithium is an agent which corrects the frontal function in the fourteenth day, which is minimally effective in depressive episode and whose role in prophylactic treatment also encompasses cognitive function.While beta, left delta and theta activities are normalized with lithium, treatment response is most closely associated with basal dela activity.Lithium plasma concentration is correlated with theta activity.With the addition of carbamazepine to lithium, frontal delta activity increases while theta activity is decreased predominantly in right hemisphere.EEG abnormality is the predictor of unresponsiveness to lithium and anticonvulsant requirement at the end of three months.On the other hand, left dominant frontal changes may predict treatment response with lithium.After 20 weeks of lithium usage relative alpha activity is decreased in right centroparietal region.Among cases with non-epileptiform EEG abnormalities, cases non-responsive to valproic acid but responsive to lithium and the opposite were reported as 30–70% respectively.Unresponsiveness to lithium, carbamazepine and risperidone is associated with diffuse theta activity and high left frontotemporal amplitudes.Lamotrigine added in euthymia reinforces emotional stability by regulating cingulate cortex activity at the end of twelve weeks.It contributes to recovery by suppressing amygdala activity in depressive episode, correlated with HAM-D.It ensures inhibitory control by regulating emotion and cognition in prefrontal dorsolateral cortex after four weeks of antipsychotic treatment.In cases with borderline and antisocial personality disorder valproate-responsive aggression rate was 36.4% whereas in non-epileptic EEG abnormality it is 25%.No consistent QEEG changes were shown in cases with subthreshold mood irregularity after 12 weeks of valproic acid 
treatment. We think that the more severely pathologic the biological projection of impaired impulse control in borderline and antisocial personality disorders, the more effective the specific treatment of the existing biological problem will be on the symptom. Psychotherapy has been shown to contribute to the normalization of IFG hypoactivity in the twelfth week. Awareness-based CBT has been shown to decrease right frontal beta activity. In a study investigating first manic episode cases during the manic episode and the subsequent period of remission, right frontoparietal and left frontotemporal beta activity were found to be increased in the manic episode. No association was shown between beta activity and YMRS scores. While right frontal alpha activity was increased in the wellness period, interestingly, right frontal alpha activity distinguished between psychotic and non-psychotic cases. Lower alpha activity in euthymic bipolar cases compared to healthy controls is a previously reported finding. On the other hand, increased right alpha synchronization is reported in schizophrenia-like psychoses and in epileptic patients with psychotic symptoms, and this could be interpreted as a parameter pertaining to postpsychotic depression. Consequently, QEEG harbours affective and cognitive components for the assessment of the current situation in cases diagnosed with affective disorders. QEEG is more cost-effective and practical compared with other electrophysiological studies and functional imaging methods. It increases the cooperation of noncompliant patients by providing objective evidence and alleviates self-stigmatization. The capability of these directly obtained data with high temporal resolution to support the diagnosis cross-sectionally is not a contribution to be underestimated. These directly obtained data with high temporal resolution contribute even more to clinical assessment with regard to treatment response monitoring. Additionally, QEEG probably carries information about longitudinal course and residual symptoms in periods of wellness. Future studies should target depression subtypes, bipolar disorder subtypes, course features, comorbid conditions and possible differences among specific treatment algorithms. All authors listed have significantly contributed to the development and the writing of this article. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The authors declare no conflict of interest. No additional information is available for this paper.
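To make the spectral quantities discussed above concrete, the sketch below computes absolute and relative band power for a single EEG channel using Welch's method. It is an illustration only: the sampling rate, band limits and epoch length are assumptions, and the published cordance/discordance measure involves additional normalization steps beyond the simple absolute-versus-relative contrast hinted at here.

```python
# Minimal sketch: absolute and relative band power for one EEG channel.
# Sampling rate, band limits and epoch length are assumptions for illustration;
# this is not the published cordance/discordance algorithm, which contrasts
# normalized absolute and relative power (with the 0.30 cut-off noted above).
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (12, 30)}

def band_power(eeg: np.ndarray, fs: int = FS) -> dict:
    """Return {band: (absolute_power, relative_power)} from Welch's PSD."""
    freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)
    absolute = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        absolute[name] = np.trapz(psd[mask], freqs[mask])
    total = sum(absolute.values())
    return {name: (p, p / total) for name, p in absolute.items()}

# Example with synthetic data: 30 s of noise plus a dominant 10 Hz (alpha) rhythm.
t = np.arange(0, 30, 1 / FS)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
for band, (abs_p, rel_p) in band_power(signal).items():
    print(f"{band}: absolute = {abs_p:.2f}, relative = {rel_p:.2f}")
```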
QEEG is a relatively easy to apply, cost effective method among many electrophysiologic and functional brain imaging techniques used to assess individuals for diagnosis and determination of the most suitable treatment. Its temporal resolution provides an important advantage. Many specific EEG indicators play a role in the differential diagnosis of neuropsychiatric disorders. QEEG has advantages over EEG in the dimensional approach to symptomatology of psychiatric disorders. The prognostic value of EEG has a long history. Slow wave EEG rhythm has been reported as a predictor and measure of clinical improvement under ECT. The induction level in delta band activity predicts the long term effect of ECT. Current studies focus on the predictive power of EEG on response to pharmacotherapy and somatic treatments other than ECT. This paper discusses either QEEG can be a biomarker and/or an endophenotype in affective disorders, if it has diagnostic and prognostic value and if it can contribute to personalized treatment design, through a review of relevant literature.
197
The T-type Ca2+ Channel Cav3.2 Regulates Differentiation of Neural Progenitor Cells during Cortical Development via Caspase-3
At the onset of corticogenesis, radial glial cells, which are the founding cortical progenitors, increase their pool through an extend proliferation in the ventricular zone of the cortex.As corticogenesis proceeds, radial glial cells give rise to intermediate progenitor cells that invade the subventricular zone.Neural progenitor cells go through a temporally controlled migration toward the cortical plate.Here, progenitors differentiate into neuronal and glial cells and create proper connections, following a specific spatial and temporal pattern.These complex events during embryonic development are strictly regulated by multiple biological mechanisms.Spontaneous fluctuations of calcium ions, which begin to occur before the onset of chemical synaptic connections, have been linked to cell proliferation, cell differentiation, and neurotransmitter specification.Nonetheless, the regulation of spontaneous Ca2+ activity in the development of neural tissues is not fully understood, nor the biological processes that decode and transduce this activity into a physiological state.The change in the cytosolic Ca2+ concentration is orchestrated mainly by channels and pumps.Voltage-dependent T-type Ca2+ channels are characterized by three different α1 subunits: Cav3.1, Cav3.2, and Cav3.3.T-type Ca2+ channels regulate various physiological processes, such as gene expression, cell proliferation and differentiation, and development of neuronal and cardiac diseases.For example, childhood absence epilepsy, idiopathic generalized epilepsy, and autism-spectrum disorders are correlated with polymorphism or mutations of the Cav3.2 gene CACNA1H.Cacna1h−/− mice exhibit many anomalous phenotypes in the central nervous system that affects brain functionality.T-type Ca2+ channels are highly expressed during early development, even before the expression of the other voltage-dependent L-, N-, P/Q- and R-type Ca2+ channels.It has been reported that T-type Ca2+ channels modulate stem cells proliferation and neuronal differentiation, but the mechanisms of action remain largely unknown.The Cysteine-containing, Aspartate Specific ProteASES are a class of enzymes that classically function as central regulators of apoptosis and have thus a fundamental role during morphogenesis and disease.In particular, caspase-3 is the final effector of both the mitochondrial and the death receptor apoptotic pathways, ending with the cleavage of many cellular substrates and induction of DNA fragmentation.Recent observations, however, reveal new roles for caspase-3 that are independent from cell death.Mitochondria-dependent activation of caspase-3 has been shown to be necessary for long-term depression and AMPA receptor internalization.Upon excessive intracellular Ca2+ elevation, mitochondria release cytochrome C and activate the intrinsic caspase pathway.Influx through the plasma membrane due to voltage-dependent Ca2+ channels has been shown to lead to mitochondrial disruption.Cellular differentiation and apoptosis have some common physiological processes suggesting that the fate of a cell, for example differentiation versus cell death, could be determined by a fine regulation of the same effectors.It has been shown that caspase-3 regulates the programmed cell death in zones of the brain subjected to high proliferation during early neural development.Nonetheless, caspase-3 has also been suggested to have a function in neural development in the proliferative zones, independent to the induction of cell death.Additionally, placenta-derived 
multipotent cells differentiating into functional glutamatergic neurons were shown to have active caspase-3 without inducing apoptosis.Here, we sought to examine what role, if any, spontaneous Ca2+ activity and caspase-3 have during early brain development and corticogenesis.Neuronal differentiation of R1 mouse embryonic stem cells and fetal AF22 and AF24 human neuroepithelial stem cells were carried out as previously described.Cells were used only for a maximum of 20 passages to avoid chromosome aberrations.We used the two mice strains C57BL/6 and C57BL/6-129X1/SvJ for in vivo experiments.C57BL/6 Cacna1h knockout was purchased at The Jackson laboratory and as controls C57BL/6 wild-type mice were used.Caspase-3 knockout mice with C57BL/6 background has been reported to have almost no abnormalities, whereas, 129X1/SVJ show a high rate of developmental brain abnormalities.Thus, for the phenotype analysis C57BL/6 Cacna1htm1Kcam mice were crossed with 129X1/SVJ mice to generate F1 C57BL/6-129X1/SvJ Cacna1h+/- animals.The F1 C57BL/6-129X1/SvJ Cacna1h+/- mice were crossed with each other to generate F2 C57BL/6-129X1/SvJ Cacna1h-/- embryos.All animal experiments were carried out under ethical approval by the Northern Stockholm Animal Research Committee.Reagents and concentrations, unless otherwise specified, were as follows: Mibefradil, KCl, Staurosporin, z-DEVD-FMK, and Procaspase Activating Compound-1.Calcium imaging in cell cultures was performed by loading the cells with the Ca2+-sensitive fluorochrome Fluo-3/AM at 37 °C for 20 min in N2B27 medium.Measurement of intracellular Ca2+ was carried out in a Krebs-Ringer buffer containing 119.0 mM NaCl, 2.5 mM KCl, 2.5 mM CaCl2, 1.3 mM MgCl2, 1.0 mM NaH2PO4, 20.0 mM HEPES, and 11.0 mM dextrose at 37 °C using a heat-controlled chamber with a cooled EMCCD Cascade II:512 camera mounted on an upright microscope equipped with a 20× 1.0NA lens.Excitation at 480 nm was assessed with a wavelength switcher at sampling frequency 0.5 Hz.MetaFluor was used to control the whole equipment and to analyze the collected data.Calcium imaging were performed on E16.5 embryonic brain slices, extracted from three C57BL/6 wild-type and three C57BL/6 Cacna1h knockout mothers, with one embryo from each mother used in the experiments.The brains were dissected from the embryos, embedded in 3% low-temperature melting agarose and cut into 300-μm slices using a Vibratome.Tissues were kept all the time in freezing cold, bubbled cutting solution containing 62.5 mM NaCl, 2.5 mM KCl, 1.25 mM NaH2PO4·H2O, 25 mM NaHCO3, 1 mM CaCl2·2H2O, 4 mM MgCl2·7H2O, 100 mM sucrose, and 10 mM glucose.Tissues recovered for 1 h at room temperature in bubbled artificial cerebrospinal fluid solution containing 125 mM NaCl, 2.5 mM KCl, 1.25 mM NaH2PO4·H2O, 25 mM NaHCO3, 2 mM CaCl2·2H2O, 1.5 mM MgCl2·7H2O, and 0.5 M glucose.The brain slices were bulk loaded with Fluo-4/AM in a custom-made loading chamber at 37 °C for 30 min as previously described.Measurement of intracellular Ca2+ was carried out in ACSF at 37 °C using a heat-controlled chamber with a 2-photon laser scanning microscope.Excitation was assessed with a Ti: Sapphire Chameleon Ultra2 laser tuned to 810 nm at sampling frequency 0.5 Hz.The data analysis and statistic were performed using Fiji and MATLAB.The overall Ca2+ activity was determined as a percentage of cells with 10% change in basal line activity.Immunocytochemical staining was performed on mES and hNS cells using a standard protocol consisting in 20-min fixation in 4% 
paraformaldehyde.Cells were blocked with 5% normal goat serum, and incubated with primary antibodies: Pax6, Nestin, TuJ1, and cleaved Caspase-3 at 4 °C overnight and then with Alexa fluorescent secondary antibodies for 1 h, together with 0.25% Triton X-100 and 1% normal goat serum.Nuclei were stained with TO-PRO-3 or DAPI for 5 min.Images were recorded with a confocal microscope and image analysis was carried out using ImarisColoc software or Fiji.For hNS immunocytochemical analysis, images of size 1920 × 3200 μm were acquired.Each image was divided into 84 grids of the same size and the average signal intensity was measured for each grid.The fluorescent intensity for Tuj1 was normalized to the fluorescent intensity of DAPI.Immunohistochemical staining was performed on embryonic mice brains at E14.5 as previously described.Three C57BL/6-129X1/SvJ Cacna1h+/- mothers were used for the immunohistochemistry staining of cortical sections.Embryos from these three mothers were genotyped to select five Cacna1h-/- and five wild-types for the analysis.Briefly, brains were dissected and post-fixed in 4% PFA at 4 °C overnight.For cryoprotection, the brains were immersed in 10, 20 and 30% sucrose and frozen in OCT at −80 °C until they were used.Fourteen-micrometer frozen coronal sections were cut using a cryostat.The slides were blocked with a TSA blocking reagent for 1 h at RT and then incubated with primary antibodies for 2 h at RT.After washing, the slides were incubated for 1 h at RT with secondary antibodies.The following primary antibodies were used: rabbit-Pax6 and mouse-MAP2, and following secondary antibodies: Alexa Fluor 488 anti-rabbit-IgG and Alexa Fluor 555 anti-mouse-IgG.Experiments were carried out using at least one embryo from six litters for each condition.Images were recorded with a confocal microscope and image analysis was carried out Fiji.GIPZ Lentiviral shRNAmir against scramble RNA and Cacna1h was bought from Thermo Fisher Scientific Open Biosystems and co-transfected with the packaging plasmids pMD2.G and psPAX2 into HEK293T cells.Virus production was performed as previously described using Lipofectamine 2000 to transfect HEK 293T cells.The hNS cells were transduced at day 1 of differentiation and collected for further analysis on day 4.hNS cells were transfected at the same time point with Lipofectamine 2000 with control and plasmid pIRES2-EGFP containing cDNA coding for Cacna1h.The Cacna1h plasmid for overexpression analysis was a kind gift from Dr. 
Edward Perez-Reyes, University of Virginia School of Medicine, Charlottesville, Virginia, US."DEVD–AMC was applied to measure caspase-3 activity in hNS cells using a fluorometric assay, according to the manufacturer's protocol.The cleavage of the fluorogenic peptide substrate was monitored in a Polar Star Omega fluorometer using 355-nm excitation and 460-nm emission wavelengths.STS to trigger cell death and z-DEVD to inhibit caspase-3 were used as positive and negative controls, respectively.Differentiating hNS cells were gently dissociated at day 4 using TrypLE express, collected, and stained with Annexin V-FITC conjugated antibody and Propidium Iodide, following the manufacturer’s protocol.Cells were analyzed with a FACSort flow cytometer.Background fluorescence was measured using unlabeled cells and compensation was applied during analysis using single stained cells and FlowJo software.Tetramethylrhodamine, ethyl ester, perchlorate was added at 400 nM concentration to differentiating hNS cells 20 min before collection to detect their mitochondrial potential.One cell sample was treated with 100 nM Carbonyl cyanide 4- phenylhydrazone as well, which permeabilizes the inner mitochondrial membrane to protons and disrupts the membrane potential, as a negative control.Cells were then collected and resuspended in 0.2% BSA until FACS analysis.At least 10,000 cells were analyzed for each sample using a FACSort flow cytometer.Background fluorescence was measured using unlabeled cells and compensation was applied during analysis using single stained cells and FlowJo software."Total RNA was collected from differentiating mES at days 0, 2, 4, 6, 8 and 10 and from hNS cells at day 4 using the RNeasy Mini kit according to the manufacturer's instructions.RNA was quantified using NanoDrop 2000 spectrophotometer and SuperScript II Reverse Transcriptase and random hexamer primers were used for cDNA synthesis.cDNA was amplified with LightCycler 480 SYBR Green I Master Kit and a LightCycler 1536 system.The primers used for the amplifications of both mouse and human mRNA Ca2+ channel families and neuronal differentiation marker genes are listed in Table 1.The PCRs were optimized to suit our conditions.PCR fragments were analyzed on agarose gel to verify product specificity.Relative gene expression was calculated using the comparative Ct method, as previously described, normalized against the house keeping gene TATA-binding protein.Primers were used at a final concentration of 1 μM."RNAscope® in situ hybridization assay was performed according to the manufacturer's instructions.Briefly, tissues were prepared using RNAscope Chromogenic Assay Sample Preparation for Fixed frozen tissue protocol.Embryonic brain slices were cut at 14 μm and hybridized with Mm-Cacna1h probe, Mm-PPIB-probe, and negative control probe DapB at 40 °C for 2 h. 
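For the comparative Ct analysis described above, a minimal sketch of the 2^-ΔΔCt calculation, normalized to the TATA-binding protein (TBP) housekeeping gene, is given below; the Ct values in the example are invented for illustration and are not measurements from the study.

```python
# Minimal sketch of the comparative Ct (2^-ΔΔCt) calculation used for the
# qPCR data above, normalized to the TBP housekeeping gene. The Ct values
# below are invented for illustration only.
import numpy as np

def relative_expression(ct_target, ct_tbp, ct_target_ref, ct_tbp_ref):
    """Fold change of a target gene versus a reference (e.g. day 0) sample."""
    delta_ct = ct_target - ct_tbp                 # normalize to housekeeping gene
    delta_ct_ref = ct_target_ref - ct_tbp_ref
    delta_delta_ct = delta_ct - delta_ct_ref      # compare to reference sample
    return 2.0 ** (-delta_delta_ct)

# Example: a target gene at differentiation day 6 versus day 0 (hypothetical Cts).
fold = relative_expression(ct_target=26.1, ct_tbp=22.4,
                           ct_target_ref=29.8, ct_tbp_ref=22.6)
print(f"Fold change vs. day 0: {fold:.1f}")
```

With these hypothetical Cts the target would appear roughly 11-fold upregulated relative to the reference sample.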
ISH was performed using a 2.5 HD Assay-BROWN and the images were captured using standard bright field.The image analysis was performed using Fiji.Mice brain cortex was dissected from E16.5 Cacna1h knockout mice.Protein fractionation was performed using the Subcellular Fractionation Kit, following the manufacturer’s protocol.The relative protein concentration was determined using Nanodrop 2000.Samples were subjected to SDS-PAGE and proteins were transferred onto nitrocellulose membranes.Western blot was performed as previously described.The following antibodies were used: anti-Caspase-3, anti-GAPDH, and anti-HDAC2.Three C57BL/6-129X1/SvJ Cacna1h+/- mothers were used for the immunohistochemistry staining of cortical sections.Embryos from these three mothers were genotyped to select five Cacna1h-/- and five wild-types for the analysis.Neuronal and radial glial cell distributions were analyzed using Fiji.To define positive immunolabel signals, a relative threshold was set as the background intensity with thresholds for 488 nm, 555 nm and 358 nm set at 3 times, 1.5 times, and 1 times the background intensity.Images were then binarized by defining pixels as either 1, with signals above the threshold, or zero when below their respective thresholds.To adjust for minor differences in size in the cortical sections: the row sum of the binary frequencies across the section were divided by the total number of row pixels across the section and expressed as a percentage The pial surface was set as the first emergence of three consecutive percentages above zero.A stringent cutoff was set for the bottom of the ventral surface at a minimum of 50%, thus defining the normalized pial-ventral length the height index was set as*100 where N is the normalized pial-ventral length, and xi is index number of each element from 1 to N d) to adjust for increment differences the values were summed at 5% intervals.Mean areas of fluorescent distribution were estimated according to Riemann Sums approximation of the intensity distribution curve for each cortical section.Prior to performing the statistical tests, the Shapiro–Wilks Test was applied to assess the normality of the data distributions, and equal variance using Levene’s test.Data were analyzed using either Student’s unpaired t-test or a one-way analysis of variance with Tukey’s post-hoc analysis.The Bonferroni correction was applied to maintain an overall type I error rate of 0.05 against multiple comparisons.Data are presented as mean ± standard error of the mean.Sample sizes represent the number of cells or brain slices and represent independent repeats or animals.Statistical analyses were either conducted with SigmaPlot® 12.5 or R base package.Statistical significance was accepted at *P < 0.05, **P < 0.01, or ***P < 0.001.To assess the influence of Ca2+ signaling on neural differentiation, we analyzed mouse embryonic stem cells during neural differentiation.We mapped the spontaneous Ca2+ activity in these cells for a period of 10 days.After 8 days of differentiation we detected a significant increase in the number of cells that exhibited spontaneous activity = 34.9; P = 0.005).We then tested at what day the cells became responsive to membrane depolarization by challenging them with 50 mM KCl.A clear increase in the percentage of cells showing Ca2+ response to this treatment occurred at day 6 = 15.04; P = 0.02).Both KCl-induced and spontaneous Ca2+ activities were dependent on external Ca2+ influx.Removal of extracellular Ca2+ from the medium abolished the 
KCl-induced responses.The impact of VDCCs on the spontaneous Ca2+ activity in mES cells at day 8 was then tested with a pharmacological inhibitor of VDCCs.Mibefradil, a VDCC inhibitor mainly acting on T-type Ca2+ channels, used at two different concentrations, 3 µM and 30 µM, almost completely blocked the number of cells displaying spontaneous Ca2+ activity.The response to membrane depolarization was partially inhibited in a number of cells by the lower concentration of Mibefradil and entirely by the high concentration.We then sought out to identify genes that could be linked to the ability of cells to respond with Ca2+ signaling that occurred at days 6–8.We focused our attention on genes encoding essential Ca2+ channels, including VDCCs, ryanodine receptors, and inositol 1,4,5-trisphosphate receptors.Interestingly, the mRNA expression of Cacna1h that encodes the T-type Ca2+ channel Cav3.2 showed a dramatic increase on day 6.Together these results suggest that Cav3.2 is a key player for spontaneous Ca2+ activity in mES that undergo neural differentiation.We continued our quest to assess the influence of Ca2+ signaling on neural differentiation by dividing our cells into two groups.Cells were cultured on coverslips with an etched coordinate system that enabled back-tracing after the experiment.We performed Ca2+ imaging and grouped cells according to their spontaneous Ca2+ activity.Since elevated cytosolic Ca2+ has been associated with increased caspase activity we assessed the association between Ca2+ active cells and their caspase-3 activity on a cell-to-cell basis.We observed increased cleaved caspase-3 in cells exhibiting spontaneous Ca2+ activity.In total 46.7 ± 3.3% of the Ca2+ active cells showed increased caspase-3, whereas a significantly lower fraction of 20.1 ± 0.8% of the non-active cells were positive for active caspase-3.Immunostaining for Tuj1, revealed that 41.1 ± 6.0% of the Ca2+ active cells were positive for Tuj1, whereas significantly fewer, 20.5 ± 7.0%, of the non-active cells were positive for Tuj1.Together these data suggest that early Tuj1-positive NPCs exhibit spontaneous Ca2+ signaling that increases their caspase-3 activity.Next, we sought to study the interplay between Cav3.2, caspase-3, and neural differentiation in human neuroepithelial stem cells.We first investigated if stimulating caspase-3 could modulate neural differentiation in these cells.Staurosporin is known to regulate caspase-3 activity in a dose-dependent manner.Stimulating caspase-3 with a rather low dose of 100 nM STS significantly increased the fold change of βIII tubulin mRNA compared to untreated cells.When the cells were challenged with the caspase-3 inhibitor z-DEVD-FMK the mRNA level of βIII tubulin significantly decreased.Procaspase activating compound-1 is a small molecule zinc chelator that is specific for activating the effector pro-caspases 3/7.Differentiating hNS cells in the presence of 25 µM PAC-1 caused a significant increase in the expression of βIII tubulin, evaluated with Tuj1 immunocytochemistry, compared to controls = 25.07, P < 0.0001).The effect of PAC-1 on βIII tubulin expression was inhibited with 20 µM z-DEVD = 25.07, P = 0.41).We also analyzed if our treatments affected the number of apoptotic cells with an Annexin V assay.Only the high dose of 2 μM STS significantly increased the number of cells undergoing apoptosis = 9.679, P < 0.005).TMRE, a positively charged dye that accumulates in active mitochondria with negatively charged membranes, was used to study the possible 
involvement of the mitochondria.The number of cells that incorporated TMRE significantly increased when cells were pre-treated with Mibefradil = 102.8, P < 0.05) and z-DEVD = 102.8, P = 0.067), while pre-treatment with 2 μM STS significantly reduced the number of cells stained by TMRE = 102.8, P < 0.005).Next, we examined the subcellular expression pattern of caspase-3, as it was previously shown that nuclear caspase-3 is a pro-apoptotic marker.No significant nuclear expression of caspase-3 was observed when cells were treated with either Mibefradil, KCl, or 100 nM STS.The higher dose of 2 μM STS, however, significantly increased the amount of nuclear caspase-3 in hNS cells = 66.46; P < 0.00001).When inhibiting caspase-3 with z-DEVD, a significant reduction in nuclear staining was detected = 66.46; P = 0.007).To investigate the specific role of Cav3.2 later during neural differentiation we altered its mRNA expression levels by targeting the CACNA1H gene with viral infections.Knocking down CACNA1H mRNA expression by 49 ± 19% gave significantly decreased DEVDase activity.When CACNA1H mRNA was overexpressed we observed a slight but non-significant increase in DEVDase activity.We next assessed the impact of altering the mRNA levels of CACNA1H on expression markers Pax6, Nestin, βIII tubulin, and MAP2.Knockdown of CACNA1H had sparse effects on the mRNA expressions of the early neuronal markers Pax6, Nestin and MAP2, whereas βIII tubulin mRNA significantly decreased compared to controls in CACNA1H siRNA lentiviral vector-transduced cells.We thereafter overexpressed CACNA1H and detected a significant Pax6 decrease and βIII tubulin increased in lentiviral vector-transduced cells.Overexpression had little effect on the transition from the proliferative state to early immature post mitotic cells before the onset of more mature NPC states occupying the CP.Namely, taken together these data indicate that altering CACNA1H affects both caspase-signaling and neural differentiation without affecting apoptosis.Next, we sought to investigate Cacna1h gene expression and function in the mouse brain.We characterized the expression of Cacna1h mRNA in vivo using RNAscope in situ hybridization.The Cacna1h probe was detected in the cortex region including the CP and SVZ/VZ of C57BL/6 mice at E14.5.Similar expression patterns of the Cacna1h probe were observed in 129*1/SVJ mice.We then performed Ca2+ recordings in slices from C57BL/6 mice with 2-photon laser scanning microscopy.These experiments showed that differentiating NPCs in the embryonic mouse cortex were exhibiting spontaneous Ca2+ activity at E16.5.To investigate the specific role of Cav3.2 we then monitored Ca2+ signaling in Cacna1h knockout mice.We observed that the spontaneous Ca2+ activity in the cortical region of knockout mice was significantly decreased.We thereafter examined if the caspase-3 expression pattern differed between wild-type and knockout animals.We performed Western blot analyses of cytosolic and nuclear fractions from the cortex region of E16.5 mice.Cleaved caspase-3 was detected only in cytosolic fractions.Interestingly, cleaved caspase-3 expression level was significantly lower in knockout mice in comparison to wild-type controls.Finally, we tested the influence of Cav3.2 on neocortical development in the brains of Cacna1h knockout mice.We carried out experiments on C57BL/6 Cacna1h knockout crossed with 129X1/SvJ and compared for possible cortical abnormalities between wild-type and Cacna1h knockout.We stained for Pax6 and MAP2, which 
are markers of cells residing in the VZ/SVZ or CP regions, respectively.Interestingly, we observed significant decreases in the density of radial glial cells in the VZ and neurons in the CP in Cacna1h knockout SvJ/BL6 animals.We observed a modest but significant reduction in the size of the CP and SVZ/VZ in Cacna1h knockout animals.Together these data show that Cav3.2 is a critical player of embryonic brain development.It is well known that Cav3.2 channels play a significant role in a large number of physiological and pathological processes in adults.However, less is known about their roles in the developing brain.Interestingly, T-type Ca2+ channels are highly expressed very early during neuronal development, even before the onset of L-, N-, P/Q- and R-type Ca2+ channels.Here we show that Cav3.2 plays a critical role in modulating neural differentiation during brain development.Spontaneous Ca2+ waves have been reported in the developing central nervous system and in stem cells.We observed that the origin of spontaneous Ca2+ activity correlated in time with the rise in expression of Cav3.2 mRNA suggesting its involvement in driving this signaling event.This assumption was verified by the fact that the spontaneous Ca2+ activity was abolished when cells were treated with an inhibitor of Cav3.2.The impact of T-type channels on spontaneous Ca2+ activity has not only been reported previously in neural cells, but also in cardiac cells and breast cancer cells.These and other reports show that T-type channels have diverse roles both in health and disease.We speculate that regulation of CACNA1H expression levels serves as a molecular switch that critically regulates spontaneous Ca2+ activity in individual cells and subsequently provides a bifurcation point for the underlying gene regulatory networks involved in cell fate determination."A number of reports have described a crucial role for caspase-3 in regulating differentiation.The results presented herein demonstrate a novel interaction between Cav3.2 channel activity and caspase-3 during neurogenesis.The responsible downstream target of caspase-3 in regulating differentiation remain unknown.Our results with the mitochondrial membrane potential dye suggest an involvement of the mitochondria during Cacna1h and caspase-3 activity.It would be tempting to hypothesize that the intrinsic pathway, under constitutive modulation by T-type Ca2+ channels, is involved in activating sublethal concentrations of caspase-3 via quantal release of cytochrome-c and caspase-9 activation.Nonetheless, further work will be needed to thoroughly address this question.Additionally, an increase in cytosolic Ca2+ can activate both caspases and calpains, which regulate the processes of differentiation, apoptosis and necrosis.A fine regulation of caspases versus calpains may be the determining factor that decides cell fate.Experiments on Cav3.2-knockout mice showed an attenuation of spontaneous Ca2+ activity in the brain of these animals and a significant reduction in the level of cleaved caspase-3 proteins.Furthermore, small but significant differences were detected in the size of the VZ/SVZ and CP between knockout and wild-type animals.Interestingly, we only observed this difference in 129X1/SvJ animals, which are reported to have caspase sensitivity.This knockout is not lethal for the mouse and they have been reported to have abnormal blood vessel morphology, cardiac fibrosis, and deficiencies in context-associated memory, other than reduced size.The fact that this knockout 
is not lethal may suggest that Cav3.2 could also play a role in other cognitive dysfunctions, e.g., epilepsy or autism. Behavioral studies on 8–12-week-old mice have reported that Cacna1h gene deletion induces anxiety-like phenotypes, impairment of hippocampus-dependent recognition memory and reduced sensitivity to psychostimulants. Furthermore, human CACNA1H gene mutations have been associated with autism spectrum disorder. Such disorders have been suggested to have a possible neurodevelopmental etiology. In summary, we report a novel signaling mechanism that connects Ca2+ entry through Cav3.2 with caspase-3 activation, which regulates the differentiative capacity of NPCs during corticogenesis.
Here we report that the low-voltage-dependent T-type calcium (Ca2+) channel Cav3.2, encoded by the CACNA1H gene, regulates neuronal differentiation during early embryonic brain development through activating caspase-3. At the onset of neuronal differentiation, neural progenitor cells exhibited spontaneous Ca2+ activity. This activity strongly correlated with the upregulation of CACNA1H mRNA. Cells exhibiting robust spontaneous Ca2+ signaling had increased caspase-3 activity unrelated to apoptosis. Inhibition of Cav3.2 by drugs or viral CACNA1H knock down resulted in decreased caspase-3 activity followed by suppressed neurogenesis. In contrast, when CACNA1H was overexpressed, increased neurogenesis was detected. Cortical slices from Cacna1h knockout mice showed decreased spontaneous Ca2+ activity, a significantly lower protein level of cleaved caspase-3, and microanatomical abnormalities in the subventricular/ventricular and cortical plate zones when compared to their respective embryonic controls. In summary, we demonstrate a novel relationship between Cav3.2 and caspase-3 signaling that affects neurogenesis in the developing brain.
198
Viral load and antibody boosting following herpes zoster diagnosis
Primary infection with varicella zoster virus (VZV) causes chickenpox, following which the virus establishes latency. It reactivates in up to 25% of individuals to cause the painful dermatomal rash known as shingles. During chickenpox or shingles, viral DNA is detectable in skin lesions, blood and saliva. Viral replication is accompanied by boosting of VZV antibodies, consistent with antigenic, or endogenous, boosting. Few data exist, however, confirming the relationship between viral load and antibody titres during, and following, acute clinical VZV disease. The extent to which the presence of persisting viral DNA in blood or saliva indicates active viral replication likely to induce an immune response is also unclear. Immunocompetent children with chickenpox clear viral DNA rapidly, so that it is no longer detectable two weeks after the rash has healed. In contrast, VZV DNA has been detected in blood for up to 6 months following shingles, albeit with falling loads. Asymptomatic shedding of VZV in saliva occurs more frequently in individuals who are immune disadvantaged. Better understanding of the spectrum of VZV reactivation is needed to inform the use of biological markers of VZV reactivation in research. We aimed to investigate the relationship between VZV DNA levels and antibody titres by following acute shingles patients over 6 months, and to assess whether VZV antibody titre could discriminate patients with recent shingles from population controls for future research. Patients with shingles presenting to GPs in London between 2001 and 2003 were recruited consecutively for a prospective cohort study of disease burden and clinical and laboratory indices of zoster. Diagnosis was confirmed through detection of VZV DNA from vesicle fluid by PCR in patients with clinically-suspected zoster. Patients completed a baseline survey that included demographic information, history of chickenpox and previous shingles episodes, immune status and detailed information about the shingles episode. Blood samples were taken at baseline, one, three and six months to measure IgG antibody titre and viral load. Blood samples from healthy blood donors were also collected at a single time-point. Viral load was determined through detection and quantification of VZV DNA from whole blood. DNA extraction was performed using a QIAamp DNA blood mini kit, with eluted DNA stored at −20 °C. VZV DNA was quantified using a real-time PCR assay, which had a sensitivity threshold of <10 VZV copies/μl. VZV IgG antibody titres were measured using a validated in-house time-resolved fluorescence immunoassay. Serum dilutions were tested in duplicate and the Europium counts obtained were interpolated against a standard curve of British Standard VZV antibody covering the VZV IgG range 0.39–50 mIU/ml. Sera producing Europium counts outwith the curve were retested at appropriate dilutions. Duplicate results were averaged and multiplied by the dilution factor to obtain a final mean antibody level. We recoded implausible IgG values above the 95th blood donor percentile as missing and log transformed viral load and antibody titre to provide a normal distribution. We summarised the median, IQR and mean of the log-transformed viral load and antibody titre at each time point. As there was no evidence of a non-linear association between logged mean viral DNA load and logged mean antibody titre, we used Pearson's correlation coefficients to investigate associations between these variables at the same and subsequent time points for shingles patients. These relationships
were further explored using multivariable linear regression models. Potential confounding effects of age, sex, ethnicity, immunosuppression, days since rash onset, prodromal symptoms, disseminated rash and antiviral treatment were investigated using causal diagrams. Variables were retained if they were theoretically relevant confounders and/or associated with both outcome and exposure at the 10% significance level using a forward selection approach. To determine whether recent zoster could be identified from antibody levels, we undertook receiver operating characteristic (ROC) analysis, comparing antibody levels in healthy controls with those in zoster patients. Antibody cut-off values to achieve 80% and 90% sensitivity or specificity were calculated for each visit separately, after adjusting for age and sex. The study comprised 63 patients with shingles, with a median age of 56 years, of whom 34 were male, and 441 blood donor controls. Viral load among shingles patients was highest at baseline and lowest at six months. Antibody titres rose from baseline to be maximal at one month and then gradually declined, although titres remained elevated above baseline levels at six months. Viral load at baseline was positively associated with antibody titres at one, three and six months, as shown in Fig. 2, although the strength of the associations was small to moderate. There was some evidence of a small negative correlation between viral load at one month and antibody titre at six months, but there were otherwise no significant associations between viral load measurements taken after baseline and later antibody titres. In multivariable linear regression models adjusted for age, sex, ethnicity and immune status, higher baseline viral load was associated with a higher antibody titre at one, three and six months. Antibody titre was higher in shingles patients at 1, 3 and 6 months from baseline, compared to controls; median log antibody titre was 3.16 among controls. ROC analysis demonstrated that to achieve 80% sensitivity, specificity would be 23.4%, 67.7%, 64.8% and 52.6%, whilst to achieve 80% specificity, sensitivity would be 28.3%, 66.1%, 52.6% and 38.6%, at baseline, visit 2, 3 and 4 respectively. The best obtainable specificity, at 90% sensitivity, was 59%, and the best obtainable sensitivity, at 90% specificity, was 39%. We showed that baseline, rather than subsequent, viral load was the strongest predictor of antibody titre at one, three and six months after an acute shingles episode. Antibody titres remained persistently elevated in shingles patients compared to healthy blood donors for at least six months, with the greatest discrimination between groups occurring at one month post shingles. Antibody titres could discriminate patients with recent shingles from healthy controls; however, there was a significant trade-off between sensitivity and specificity. Reactivation of latent VZV is largely kept in check through cell-mediated immunity, with antibodies playing very little role in VZV control. Individuals with severe clinical VZV reactivation, including those who develop post-herpetic neuralgia, often have high antibody titres, which are believed to correlate with more widespread VZV replication. Our findings are consistent with this hypothesis. The lack of association found between viral loads at one, three and six months and antibody titres at the same and subsequent time points suggests that persistence of serum VZV DNA after shingles may be a function of decay rather than ongoing replication, although this finding needs to be tested in other larger
populations. Antibody titre cut-off values could be used to identify patients with shingles 1–6 months previously, but with a large trade-off between sensitivity and specificity. Whether researchers choose to set cut-off values to achieve a high sensitivity (e.g., when using antibody titre as an initial screening test for recent shingles) or to be highly specific (e.g., in a test aimed at diagnostic confirmation) will depend on the nature and context of their research. This study was limited by relatively small numbers of patients. Data on other potential confounding factors, such as ethnicity and immune status in blood donors, were lacking, so only age and sex were accounted for in the shingles patient-blood donor analysis. Nevertheless, as these factors were not associated with antibody titre in shingles patients, results are unlikely to have been notably affected. In conclusion, there is evidence for endogenous boosting of VZV antibody levels by clinical VZV reactivation, and the level of boosting is dependent upon baseline viral replication. Additionally, antibody titres could discriminate post-shingles patients from healthy controls, although whether to prioritise specificity or sensitivity would depend on the study question. All authors report no conflicts of interest. Ethical approval was given by East London and the City Health Authority Research Ethics Committee (2002/38).
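To make the ROC step above concrete, the following is a minimal, hypothetical sketch of how an antibody cut-off achieving a target sensitivity, and the specificity obtained at that cut-off, could be computed. It is not the authors' analysis code: the simulated control_titres and zoster_titres arrays are invented placeholders for log-transformed VZV IgG titres, and the age and sex adjustment used in the study is omitted.

# Minimal sketch of ROC-based cut-off selection for log VZV IgG titres.
# Hypothetical data stand in for blood-donor controls and zoster patients;
# this is illustrative only, not the study's analysis code.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
control_titres = rng.normal(loc=3.2, scale=0.5, size=441)   # healthy blood donors
zoster_titres = rng.normal(loc=3.8, scale=0.6, size=63)     # patients ~1 month post zoster

# Label zoster patients as the positive class and pool the samples.
y_true = np.concatenate([np.zeros_like(control_titres), np.ones_like(zoster_titres)])
scores = np.concatenate([control_titres, zoster_titres])

fpr, tpr, thresholds = roc_curve(y_true, scores)

# Smallest cut-off that still achieves >= 80% sensitivity, then report the
# specificity obtained at that operating point.
target_sensitivity = 0.80
idx = np.argmax(tpr >= target_sensitivity)   # first index reaching the target
cutoff = thresholds[idx]
sensitivity = tpr[idx]
specificity = 1.0 - fpr[idx]

print(f"cut-off = {cutoff:.2f} log mIU/ml, "
      f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")

Repeating this calculation per visit, with real titres in place of the simulated arrays, would reproduce the kind of sensitivity-specificity trade-off reported in the text.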
Background: Acute varicella zoster virus (VZV) replication in shingles is accompanied by VZV antibody boosting. It is unclear whether persisting virus shedding affects antibody levels. Objectives: To investigate the relationship between VZV viral load and antibody titres in shingles patients during six months following diagnosis and assess whether VZV antibody titre could discriminate patients with recent shingles from healthy population controls. Study design: A prospective study of 63 patients with active zoster. Blood samples were collected at baseline, one, three and six months to measure VZV DNA and IgG antibody titre. We compared VZV antibody titres of zoster patients and 441 controls. Results: In acute zoster, viral load was highest at baseline and declined gradually over the following six months. Mean antibody titres rose fourfold, peaking at one month and remaining above baseline levels throughout the study. Antibody levels at one, three and six months after zoster were moderately correlated with baseline but not subsequent viral load. Regarding use of antibody titres to identify recent shingles, to achieve 80% sensitivity, specificity would be 23.4%, 67.7%, 64.8% and 52.6%, at baseline, visit 2, 3 and 4 respectively, whilst to achieve 80% specificity, sensitivity would be 28.3%, 66.1%, 52.6%, 38.6%, at baseline, visit 2, 3 and 4 respectively. Conclusions: Clinical VZV reactivation boosted VZV antibody levels and the level of boosting was dependent upon baseline viral replication. While antibody titres could discriminate patients with shingles 1–6 months earlier from blood donor controls, there was a large trade-off between sensitivity and specificity.
199
Longitudinal serum S100β and brain aging in the Lothian Birth Cohort 1936
The calcium-binding protein S100β has clinical value as a proteomic biomarker of central nervous system (CNS) damage. It is primarily found in glial cells, but also in some neuronal populations and in melanocytes, among other cell types. At nanomolar concentrations, S100β exerts neuroprotective and neurotrophic influences, but elevated S100β may have detrimental effects, as its presence at micromolar concentrations increases expression of proinflammatory cytokines, leading to apoptosis. S100β is elevated after traumatic brain injury in both cerebrospinal fluid and serum, with greater S100β concentrations prognostic of poorer outcomes and recovery. Serum S100β levels are also influenced by blood-brain barrier (BBB) leakage, as well as by other sources such as bone fractures, exercise, muscle injury, burns and melanoma. Although S100β has been investigated as a biomarker in studies of head injury, depression, and neurodegenerative diseases such as Alzheimer's disease, the neurostructural correlates of S100β and its longitudinal trajectories in nonpathological aging are underinvestigated. S100β concentrations are positively associated with age, although some studies found no age effect in adulthood. Identifying possible biomarkers of brain aging is a key challenge, and serum S100β is one of the logical candidates, yet data on S100β and multimodal brain analyses in older participants are lacking. Two prior cross-sectional studies indicate that serum S100β is specifically associated with poorer white matter microstructure, in a small sample of healthy participants and in a small study of schizophrenia patients versus controls. Neither study found a significant association between S100β and gray matter; however, both adopted a voxel-based morphometry approach, which results in reduced power in the large areas of the cortex that show highly individualized patterns of gyrification, and in insensitivity to discrete lesions (Tisserand et al., 2004). Another study found no association between S100β and either white matter fractional anisotropy or cortical thickness in a mixed sample of patients with psychosis, relatives, and controls. Thus, well-powered, longitudinal, multimodal imaging studies, in participants at an age that confers relatively high risk of brain structural decline, are required to examine the possible differential sensitivity of S100β to cross-sectional levels of, and longitudinal declines in, various imaging parameters and brain tissues. Other candidate MRI parameters that may relate to S100β are markers of cerebral small vessel disease (SVD) burden. There is increasing evidence that BBB leakage occurs as an underlying pathology in SVD. White matter hyperintensities (WMH) and perivascular spaces (PVS) are important markers of SVD pathophysiology that increase with age and in cerebrovascular disease, are linked to increased risk of stroke, and are associated with both cognitive impairment and dementia. PVS are also associated with elevated plasma markers of inflammation in older participants. They are also more frequent in patients with lacunar stroke and WMH, and are more visible with increasing evidence of BBB leakage in patients with SVD-related stroke. PVS are also more visible with inflammation and BBB leakage in active multiple sclerosis plaques. Although plasma S100β is influenced by BBB leakage in head injury as well as in general cerebral pathology, it remains unknown whether S100β is associated with these important SVD markers on brain MRI in healthy community-dwelling older adults. In
the present study, we investigated the level of, and change in, S100β and indices of structural and diffusion MRI in a large cohort of older individuals measured at ages 73 and 76 years. Given that S100β concentration in blood may rise due to age, central nervous system damage, and BBB disruption, we hypothesized that relatively higher and increasing concentrations of plasma S100β would be coupled with lower and decreasing measures of brain structural and microstructural health. Prior evidence indicates that S100β is particularly strongly expressed in the human brain's white matter tracts, and that serum S100β is cross-sectionally associated with poorer white matter microstructure in small mixed samples with wide age ranges. Thus, we hypothesized that elevated and increasing serum S100β would be particularly pertinent to poorer and decreasing white matter structure, beyond measures of global atrophy and gray matter (GM) volume. Data are drawn from waves 2 and 3 of a longitudinal study of aging, the Lothian Birth Cohort 1936 (LBC1936) study, when participants had mean ages of about 73 and 76 years, respectively. In 1947, Scotland tested the intelligence of almost all schoolchildren born in 1936, and the LBC1936 follows up some of those individuals, now in older age, who mostly live in the Edinburgh and Lothians area. The initial wave of LBC1936 took place between 2004 and 2007. It assessed 1091 individuals on aspects of their health and physical and cognitive function, at around 70 years old. At wave 2 and wave 3, 866 and 697 participants, respectively, returned at mean ages of about 73 and 76 years; a detailed MRI brain scan was added to the protocol at both waves. During a medical interview at each wave, participants reported their medical history. The Multi-Centre Research Ethics Committee for Scotland, the Scotland A Research Ethics Committee, and the Lothian Research Ethics Committee approved the use of the human participants in this study; all participants provided written informed consent, and consent forms have been kept on file. Serum samples were obtained from participants during the main physical and cognitive testing appointment at waves 2 and 3. The mean lag between waves was 3.77 years. After collection, samples were stored at −80 °C at the Wellcome Trust Clinical Research Facility, Western General Hospital, Edinburgh, until the conclusion of the wave. They were then transferred to the Department of Clinical Biochemistry, King's College London, using cold-chain logistics, where they were stored at −20 °C until assays were conducted using a chemiluminescence immunoassay S100β kit on a LIAISON chemiluminescence analyzer. The lag between sample dispatch at the end of sampling and assay completion was an average of 44 days for 4 batches at wave 2, and 8 days at wave 3. The minimal detectable concentration of the assay was 0.02 μg/L. Intra- and inter-assay precision for both waves is reported in Supplementary Table A.1. Participants underwent whole-brain structural and diffusion MRI using the same 1.5 T GE Signa Horizon scanner at waves 2 and 3. The scanner is maintained with a careful quality control programme. Scans took place at the Brain Research Imaging Centre, Edinburgh, shortly after serum collection. Full details of acquisition and processing are available in an open access protocol article. Briefly, T1-, T2-, T2*-, and FLAIR-weighted sequences were co-registered. Total brain (TB), GM, and white matter hyperintensity volumes were quantified using a semiautomated multispectral fusion method. WMHs were explicitly defined as punctate,
focal, or diffuse lesions in subcortical regions, and were distinguished from lacunes and PVS by signal characteristics. Cortical or discrete subcortical infarcts were excluded by careful manual editing, blind to other features. PVS were defined as fluid-containing small spaces running parallel with the expected direction of perforating vessels, appearing punctate in cross section and linear in longitudinal section, with <3 mm diameter. PVS were differentiated from lacunes or WMH on morphology, signal, and size criteria as previously defined. PVS ratings were performed on the T2-weighted volumes by a trained neuroradiologist. Change in PVS between waves was scored by comparing scans at wave 2 and wave 3 side by side, blind to any other participant characteristics, and scored on a 5-point scale from −2 to +2, where 0 denotes no visible change. The diffusion tensor MRI acquisition comprised single-shot spin-echo echo-planar diffusion-weighted volumes acquired in 64 noncollinear directions, alongside 7 T2-weighted images. This yielded 72 contiguous axial slices. Repetition and echo times were 16.5 s and 95.5 ms, respectively. After preprocessing, water diffusion tensor parameters were estimated using FSL tools. A 2-fiber model with 5000 streamlines was then used to create brain connectivity data using the BEDPOSTX/ProbTrackX algorithm in 12 tracts of interest: the genu and splenium of the corpus callosum, and the bilateral anterior thalamic radiation, cingulum, uncinate, arcuate, and inferior longitudinal fasciculi. Probabilistic neighborhood tractography, as implemented in the TractoR package, identified the tracts of interest from the connectivity data. White matter tract-averaged fractional anisotropy (FA) and mean diffusivity (MD) were then derived as the average of all voxels contained within the resultant tract maps. All segmented images were visually inspected for accuracy, blind to participant characteristics, to identify and correct errors. We excluded from the analyses those with a self-reported history of dementia or a Mini-Mental State Examination score of <24. This was based on prior reports of elevated S100β in dementia, the likelihood that these individuals were undergoing pathological CNS degeneration, and their low numbers in the current cohort. Given that elevated serum S100β is associated with melanoma, those who reported melanoma at either wave were also excluded. S100β concentrations for excluded participants are shown in Supplementary Table A.2. Both WMH measures were log transformed to correct skewness. Extreme outlying points for S100β at wave 2 and wave 3 were removed, along with 7 points at wave 3 that were below the sensitivity threshold of the assay. After exclusions, a total of 776 and 619 participants provided S100β data at ages 73 and 76 years, respectively, 593 and 414 of whom also provided brain MRI data. We used the maximum available sample size in all analyses. The main questions we addressed were: (1) are there associations between serum S100β and brain imaging variables cross-sectionally at age 73 years? and (2) are the changes in S100β from age 73 years to age 76 years correlated with changes in brain imaging variables across the same ages? We used bivariate change score models in a structural equation modeling (SEM) framework to test these cross-sectional and longitudinal associations between S100β and brain MRI variables, specifying a separate model for each brain MRI outcome. In the case of the PVS analysis, the visual rating of PVS change was used in place of a latent change score, and was correlated with the latent S100β change score. The
volumetric brain indices were expressed as a proportion of intracranial volume in our main SEMs, and we also provided a supplementary analysis for uncorrected measures. Using the FA and MD measures across multiple white matter tracts, we derived a latent variable for waves 2 and 3, respectively, using the following tracts: the genu and splenium of the corpus callosum, and left-right averages of the anterior thalamic radiation, inferior longitudinal fasciculus, uncinate, arcuate, and cingulum. We imposed strong factorial invariance, constraining the intercepts of each tract measure and their loadings on the latent variable to equality across waves. We also included correlated residuals between corresponding tracts across waves, alongside 5 other significant tract-tract residual paths for the general FA factor (gFA) and 6 for the general MD factor (gMD). This builds on our and others' prior work, which found that there is substantial shared variance in white matter microstructural properties across tracts of the brain in early life, middle age, and older age. Thus, these general, latent factors reflect common microstructural properties across white matter pathways. Finally, based on evidence of local white matter variation in S100β expression and cross-sectional associations between FA and S100β, we used the same framework as above to examine associations between S100β and tract-specific microstructure in each white matter tract of interest, for FA and MD. Given that there was a short delay between serum collection and MRI scanning at both waves, we corrected MRI and S100β for their respective age in days at data collection within each model, along with sex, diabetes, and hypertension. To account for missing data bias due to attrition between waves, we took account of all available data, using full information maximum likelihood (FIML) estimation. We assessed model fit according to the χ2 minimum function test statistic, the root mean square error of approximation (RMSEA), comparative fit index (CFI), Tucker-Lewis index (TLI), and the standardized root mean square residual (SRMR). All statistical analyses were conducted in R version 3.2.2 "Fire Safety". SEM was conducted with the "lavaan" package, and the resultant p-values for the associations of interest were corrected for multiple comparisons with the false discovery rate (FDR) using the "p.adjust" function in R. Participant characteristics are shown in Table 1, and bivariate associations among study variables are reported in Supplementary Table A.3. Descriptive plots of S100β are in Fig. 2. S100β concentrations showed substantial stability of individual differences from age 73 years to age 76 years.
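As an illustration of the multiple-comparison step described in the statistical analysis above (the false discovery rate correction applied with R's p.adjust), the short sketch below shows the equivalent Benjamini-Hochberg adjustment in Python using statsmodels; the p-values are invented placeholders rather than study results.

# Benjamini-Hochberg false discovery rate correction, analogous to
# R's p.adjust(p, method = "fdr") used in the study. Illustrative only;
# the p-values below are hypothetical, not taken from the study.
from statsmodels.stats.multitest import multipletests

# One p-value per S100beta-MRI association of interest (hypothetical values).
pvals = [0.001, 0.005, 0.020, 0.060, 0.240, 0.410, 0.730]

reject, pvals_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

for p_raw, p_adj, sig in zip(pvals, pvals_fdr, reject):
    print(f"raw p = {p_raw:.3f}  FDR-adjusted p = {p_adj:.3f}  significant: {sig}")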
"S100β concentrations were significantly higher at age 76 years than at age 73 years = 3.244, p = 0.001, Cohen's d = 0.118).Males showed lower S100β than females at both waves = 3.655, p < 0.001; wave 3: t, p = 0.002), but did not exhibit differences in their rate of change with age.When comparing baseline values of those who returned to provide an S100β sample at age 76 years from those who did not, there were no significant differences for S100β = 0.115, p = 0.908), TB volume = 1.577, p = 0.117), WMH volume = 1.277, p = 0.204).However, individuals who returned at age 76 years had significantly more GM at age 73 years than nonreturners = 2.800, p = 0.006).To satisfy the assumption of missing at random, under which FIML operates, baseline GM volume was included as an auxiliary variable when modeling associations between S100β and all other imaging variables.Significant increases in WMH volume and white matter tract MD, and significant decreases in TB volume, GM volume, and white matter FA exhibited by this cohort between wave 2 and 3 have been previously reported elsewhere.In the context of the current sample, all brain measures showed statistically significant mean changes over time, considering only those who provided scans at both waves."There were significant reductions in raw TB = 2.815, p = 0.005, Cohen's d = 0.191) and GM volume = 2.861, p = 0.004, Cohen's d = 0.197), and increases in WMH volumes = −4.267, p < 0.001, Cohen's d = 0.293).The visual ratings of PVS change across waves showed that PVS load either stayed consistent or became worse over time.Those who provided S100β and did not undergo an MRI scan were not significantly different from those who provided both—at either wave 2 or wave 3— in terms of age, S100β concentrations, and Mini-Mental State Examination score.However, those who undertook both elements of the study comprised a significantly larger proportion of males at wave 2, although not at wave 3.Individuals showed substantial variation in the degree to which S100β and the continuous MRI indices changed over time, as indicated by significant slope variances; slope means and variances from age- and sex-corrected univariate change score models are reported in Supplementary Table A.4.Results of the SEM analyses are shown in Table 2, with bias-corrected 95% confidence intervals from 1000 bootstraps.Model fit statistics are shown in Supplementary Table A.5.Models examining associations between the level and change of S100β and volumetric MRI indices showed adequate fit to the data = 42.629, RMSEA = 0.023, CFI = 0.993, TLI = 0.988, SRMR = 0.026; GM: χ2 = 42.337, RMSEA = 0.027, CFI = 0.987, TLI = 0.977, SRMR = 0.022; TB volume: χ2 = 131.237, RMSEA = 0.060, CFI = 0.931, TLI = 0.887, SRMR = 0.039).None of these measures showed significant cross-sectional associations with S100β at age 73 years or longitudinally.Running these volumetric analyses without correction for intracranial volume did not substantially alter the results.The model of visually rated PVS change showed an adequate fit to the data = 39.439, RMSEA = 0.042, CFI = 0.963, TLI = 0.936, SRMR = 0.032).There was no association between S100β at age 73 years and visually rated PVS change, and the nominally significant association with longitudinal S100β concentrations did not survive FDR correction.The models examining associations of S100β with white matter diffusion parameters both fitted the data well = 284.004, RMSEA = 0.019, CFI = 0.977, TLI = 0.969, SRMR = 0.035 and gMD: χ2 = 322.817, RMSEA = 0.024, CFI = 
0.970, TLI = 0.959, SRMR = 0.048); tract loadings are reported in Supplementary Table A.7. At wave 2, higher S100β was significantly associated with "less healthy" white matter gFA, which survived correction for multiple comparisons. The 3-year association between declining gFA and increasing S100β was nonsignificant. Associations between gMD and S100β were nonsignificant for both level and change. Next, we examined the level and change associations between S100β and average white matter microstructure within each of the tracts of interest. Fit statistics indicated that all models fitted the data well. Results of the models are shown in Fig. 3 and Supplementary Tables A.10 and A.11. A higher concentration of S100β at age 73 years was significantly associated with "poorer" FA at the same age in the anterior thalamic radiation and cingulum bundle. Both survived FDR correction. There were also nominally significant associations with the level of the splenium and arcuate in the same direction, but these did not survive multiple comparison correction. The corresponding associations for tract MD were all nonsignificant, both cross-sectionally and longitudinally. These data represent the first large-scale study of longitudinal S100β concentrations and their association with longitudinal multimodal brain vascular and neurodegeneration MRI markers in community-dwelling older adults. We focused on multiple MRI indices of brain white matter because S100β is predominantly found in glial cells. We also considered measures of GM and global atrophy as comparators. Notably, our results suggest that individual differences in serum S100β concentrations may be informative for specific aspects of brain white matter aging. We found that higher S100β was, in cross-sectional analysis at age 73 years, significantly associated with generally poorer white matter microstructure, with a small effect size. Further investigation of tract-specific effects indicated that this association is predominantly driven by lower FA in the anterior thalamic, arcuate, cingulum, and callosal fibers. The significant gFA-S100β association at age 73 years reported here contradicts some, but corroborates other, previous cross-sectional associations in smaller samples. Our well-powered longitudinal design provides important new data on the coevolution of this serum biomarker with brain MRI, including several measures that had not previously been examined, such as white matter MD and markers of SVD. Given the prevalent expression of S100β in the corpus callosum, it is notable that associations between tract-specific change and S100β change for both FA and MD in the genu of the corpus callosum were not significant. This merits further investigation in longitudinal samples over a longer period with more sampling occasions. Although there has been relatively little research on the association between S100β and age-related brain and cognitive decline, our finding that higher concentrations are related to poorer white matter FA could partly reflect deleterious effects of systemic inflammation. Systemic inflammatory challenge reportedly elicits increased BBB permeability in humans and rodent models, and there are relationships between higher inflammatory markers and lower brain metrics, including white matter markers of SVD. Consequently, it will be of interest to quantify the degree to which the relationship between inflammation and cognitive decline is mediated by S100β and brain structural outcomes, as well as to identify the potential genetic and
lifestyle determinants of inflammation in well-powered longitudinal designs. Taken together, our results provide some limited support for the hypothesis that both provide meaningful and overlapping biomarkers of age-related white matter degradation. This study also provides novel information about the concentrations and stability of individual differences in serum S100β in generally healthy older adults. These findings suggest that serum S100β concentrations in the same individual may represent a relatively stable trait, although establishing this more robustly would require many more sampling occasions. We also provide information on sex differences in the context of the important confounds of age, melanoma, and dementia. Significant associations between greater S100β and older age have been reported in some studies, but associations were nominally negative or null in others. With respect to sex differences, our finding that females exhibited higher S100β corroborates the findings from some studies, whereas others report the converse pattern or no significant difference. Unlike the present study, those cited previously were all cross-sectional and represent a mix of single studies across very wide age ranges, across serum and CSF sampling, and comprising participants with various characteristics. Moreover, as far as the authors are aware, there are no large-scale data on the stability of individual differences in serum S100β concentrations across time in nonpathological older adults. The results reported here therefore address a substantial gap in our understanding of stability and longitudinal S100β trajectories in older community-dwelling adults. Although it has been hypothesized that observed increases of S100β with age could reflect age-related increases in myelin loss, it could also be that CNS cell "turnover" remains stable but that cellular S100β concentrations are simply higher, or that S100β does not change but that serum concentrations are driven by greater age-related BBB leakage. Our findings lend some support to the first or third interpretations. Nevertheless, it should be noted that white matter FA can be affected by multiple microstructural properties, including myelination, but also extending to axonal bore, cell membranes, microtubules, and other structures. As such, inferences about the weak associations of S100β with any specific microstructural property of the brain's white matter should be made with caution. There are several study limitations. We note that our measure of change is based on a relatively brief period. Although older individuals are at higher risk of brain structural changes than their younger counterparts, the brief sampling window limits the opportunity for large brain structural changes to take effect, especially because this group was broadly healthy, but fairly typical of similarly aged community-dwelling adults in Europe. Further study with a longer sampling period or a larger sample is merited to increase our ability to reliably assess these potentially subtle coupled changes, and to account for the likelihood that observed changes over time are nonlinear. On a related note, our models of latent change derived from single-indicator latent measures did not allow for the independent estimation of measurement residuals, meaning that our measures of change here should be considered as essentially difference scores. These analyses at only 2 time points also preclude tests of nonlinear change and of lead-lag relationships of change in brain and serum markers. We also reiterate that
S100β concentrations may be influenced by a number of factors, such as exercise, melanoma, dementia, sleep apnea, depression, time of year/season, bone fractures, muscle injury, and burns, only some of which were accounted for in the present analyses. Our measure of PVS and its change is likely to be relatively insensitive; the rater could not be blinded to time, and the binary and disproportionate nature of visually rated PVS change means that the estimates reported here should be interpreted accordingly. Computational methods for PVS quantification that are currently in development may improve sensitivity to detect important aging-related changes. Finally, the narrow age range, ethnic homogeneity, and relatively good health of study participants limit the degree to which our findings can be generalized to groups of different ages, ethnicities, and patients. Nevertheless, the fact that these characteristics obviate such strong potential confounds in the current analysis can be viewed as an important strength. Combined with the large sample size, longitudinal data, rich multimodal imaging parameters, same-scanner MRI acquisition, advanced and appropriate statistical modeling, and inclusion of important covariates, the present study is well-situated to test hypotheses about cross-sectional and short-term longitudinal associations between serum S100β and brain structural aging. High and increasing concentrations of serum S100β at this age are identified here as a potentially meaningful marker of poorer brain white matter health and, with further testing, of risk of future dementia. These findings require replication in other well-powered healthy and pathological aging samples, and across a longer time period. The authors have no actual or potential conflicts of interest.
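As a worked illustration of the paired comparisons and Cohen's d effect sizes quoted in the Results (for example, the wave 2 versus wave 3 difference in S100β), the sketch below computes a paired t statistic and a repeated-measures Cohen's d from simulated data. The values and the effect-size convention (mean change divided by the average of the two wave standard deviations) are assumptions for illustration, not taken from the study.

# Illustrative paired comparison of log S100beta at waves 2 and 3, with a
# Cohen's d effect size. Data are simulated placeholders; the effect-size
# convention used here is one common choice and is an assumption, not
# necessarily the one used in the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 414                                            # participants with data at both waves
wave2 = rng.normal(loc=-2.30, scale=0.40, size=n)  # log S100beta at age ~73 (hypothetical)
wave3 = wave2 + rng.normal(loc=0.05, scale=0.35, size=n)  # modest mean increase by age ~76

t_stat, p_value = stats.ttest_rel(wave3, wave2)

mean_change = np.mean(wave3 - wave2)
avg_sd = (np.std(wave2, ddof=1) + np.std(wave3, ddof=1)) / 2.0
cohens_d = mean_change / avg_sd

print(f"paired t = {t_stat:.3f}, p = {p_value:.4f}, Cohen's d = {cohens_d:.3f}")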
Elevated serum and cerebrospinal fluid concentrations of S100β, a protein predominantly found in glia, are associated with intracranial injury and neurodegeneration, although concentrations are also influenced by several other factors. The longitudinal association between serum S100β concentrations and brain health in nonpathological aging is unknown. In a large group (baseline N = 593; longitudinal N = 414) of community-dwelling older adults at ages 73 and 76 years, we examined cross-sectional associations and parallel longitudinal changes between serum S100β and brain MRI parameters: white matter hyperintensities, perivascular space visibility, white matter fractional anisotropy and mean diffusivity (MD), global atrophy, and gray matter volume. Using bivariate change score structural equation models, correcting for age, sex, diabetes, and hypertension, higher S100β was cross-sectionally associated with poorer general fractional anisotropy (r = −0.150, p = 0.001), which was strongest in the anterior thalamic (r = −0.155, p < 0.001) and cingulum bundles (r = −0.111, p = 0.005), and survived false discovery rate correction. Longitudinally, there were no significant associations between changes in brain imaging parameters and S100β after false discovery rate correction. These data provide some weak evidence that S100β may be an informative biomarker of brain white matter aging.