Insertion-and-deletion-derived tumour-specific neoantigens and the immunogenic phenotype: a pan-cancer analysis
Tumour mutations are a key substrate for the generation of anticancer immunity.1,Large-scale sequencing studies2 have led to the systematic annotation of mutational processes and somatic alterations across a broad range of human cancer types.The cumulative insight from these studies has advanced our understanding of oncogenesis at both a basic and translational level.The data have also been scrutinised for mutations that might play a role in the recognition of cancer cells by the immune system.The focus of these analyses to a large extent is on the single nucleotide variants, on account of the relative simplicity and reliability of calling sequence changes of one base pair fixed length.As a consequence, the effect of small scale insertion and deletion mutations on antitumour immunity has been poorly characterised despite the clear link of such mutations to oncogenesis3 and their potential to generate highly immunogenic peptides.The success of checkpoint inhibitor therapies underlines the notion that tumour-specific T-cell responses pre-exist in some patients and are kept under tight control via immune modulatory mechanisms.To date, checkpoint inhibitors have been approved for the treatment of six solid tumour types: melanoma, merkel cell carcinoma, renal clear cell carcinoma, non-small cell lung cancer, carcinoma of the bladder, and head and neck squamous cell carcinoma, as well as microsatellite instability high tumours of any tissue subtype.T cells reactive to tumour-specific mutant antigens have been detected across the common epithelial malignancies4 and neoantigens are increasingly shown to be the target of checkpoint inhibitor-induced T-cell responses5,6 and adoptively transferred T cells.7–9,Many investigators are leveraging whole-exome sequencing and RNA sequencing, focusing on non-synonymous SNVs, to predict expressed mutated peptides that bind MHC class I molecules.Neoantigen burden is closely related to the nsSNV burden, which varies significantly across cancer types, from one nsSNV in paediatric tumours to more than 1500 nsSNVs in tumours associated with microsatellite instability.10,However, less than 1% of the nsSNVs in expressed genes lead to detectable CD4-positive11 or CD8-positive T-cell7 reactivities in tumour-infiltrating lymphocytes.Accordingly, efficacy of checkpoint inhibitors is most marked in tumour types with a high nsSNV burden, including melanoma, lung adenocarcinoma, lung squamous cell carcinoma, head and neck squamous cell carcinoma, and carcinoma of the bladder,10 which reflects a higher probability of creating a neoantigen that will be presented to and recognised by T cells.Furthermore, within these tumour types, nsSNV and neoantigen burdens correlate with response to checkpoint inhibitors.12–16,A notable outlier is renal clear cell carcinoma, which has a relatively low nsSNV burden.Renal clear cell carcinoma is characterised by a high level of tumour-infiltrating immune cells17 and has been shown to respond to interferon-α, high-dose interleukin 2,18,19 and, more recently, checkpoint inhibitors,20,21 but the mutational and antigenic determinants of these responses are unknown.Evidence before this study,We searched for available evidence in PubMed, which revealed multiple publications documenting overall mutation rates and signatures by cancer type.The predominant focus of existing literature was on single nucleotide variation mutations, with no previous study done of insertion and deletion mutations on a pan-cancer basis.Regarding the association between 
somatic mutations and upregulation of antitumour immunity via checkpoint inhibition, several previous studies reported a link between high SNV load and improved response to checkpoint inhibition.Prevailing evidence suggests the mechanism of this association is linked to tumour-specific neoantigen reactive T cells.No previous pan-cancer study has investigated the difference between SNV and indel-derived neoantigens, despite the propensity of indels to generate highly mutagenic peptides via creation of a shifted novel open reading frame.Added value of this study,We did a pan-cancer assessment of indel load across 5777 tumour samples spanning 19 cancer types.Kidney tumours were observed to have the highest proportion and absolute count of indel mutations on a pan-cancer basis, a result which was replicated in two further independent datasets.Compared with SNV mutations, indel mutations were observed to generate three times more high-binding-affinity neoantigens, and nine times more mutant-specific binders.Finally, we assessed the association between indel load and checkpoint inhibitor response in three melanoma cohorts, which showed indel load to be more strongly associated with response than non-synonymous SNV load.Implications of all the available evidence,Our data highlight the importance of frameshift neoantigens alongside nsSNV neoantigens as determinants of immunotherapy efficacy and potentially crucial targets for vaccine and cell therapy interventions.Our observations in kidney cancer might reconcile the observed immunogenicity of this tumour type despite its low overall mutational burden.Indel mutations that cause a frameshift create a novel open reading frame and could produce a large quantity of neoantigenic peptides highly distinct from self.It has been hypothesised22 that novel open reading frames might be an ideal source of tumour-derived neoantigens and so induce multiple neoantigen reactive T cells, because of both an increased number of mutant peptides and reduced susceptibility to self-tolerance mechanisms.On this basis, we aimed to characterise the pattern of indel mutations with pan-cancer analysis and investigate their association with antitumour immune response and outcome following checkpoint blockade.Pan-cancer somatic mutational data were obtained from The Cancer Genome Atlas for whole-exome sequencing data of 5777 solid tumours, across 19 cancer types: bladder urothelial carcinoma, invasive breast carcinoma, cervical and endocervical cancers, colorectal adenocarcinoma, glioma, head and neck squamous cell carcinoma, chromophobe renal cell carcinoma, renal clear cell carcinoma, renal papillary cell carcinoma, liver hepatocellular carcinoma, lung adenocarcinoma, lung squamous cell carcinoma, ovarian serous cystadenocarcinoma, pancreatic adenocarcinoma, prostate adenocarcinoma, skin cutaneous melanoma, stomach adenocarcinoma, thyroid carcinoma, and uterine carcinosarcoma.We extracted patient-level mutation annotation files from the Broad Institute TCGA GDAC Firehose repository, which had been previously curated by TCGA analysis working group experts to ensure strict quality control.We performed replication analyses in two additional cohorts of patients with renal clear cell carcinoma: a whole-exome sequencing study of 106 patients with renal clear cell carcinomas reported by Sato and colleagues23 and a whole-exome sequencing study of ten patients with renal clear cell carcinomas reported by Gerlinger and colleagues.24,We obtained final post-quality control patient-level 
mutation annotation files for each study.To further test for an association between nsSNVs or indel loads and patient response to checkpoint inhibitor therapy we used four patient cohorts.The first dataset consisted of 38 patients with melanoma treated with anti-PD-1 therapy, as reported by Hugo and colleagues.25,We obtained final post-quality control mutation annotation files and clinical outcome data, and 34 patients were retained for analysis after exclusion of cases in which DNA had been extracted from patient-derived cell lines and patients in whom tissue tumor purity was below 20%.Four samples from Hugo and colleagues25 were taken after a short period on treatment, which raises the possibility that checkpoint inhibitor therapy itself might have affected mutational frequencies through possible elimination of immunogenic tumour clones.To be consistent with the original study, these samples were not excluded; however, we note the frameshift indel association presented becomes more significant with these cases removed.The second checkpoint inhibitor cohort comprised of 62 patients with melanoma treated with anti-CTLA-4 therapy, as reported by Snyder and colleagues.13, "All patients' samples were taken from fresh snap frozen tumour tissue with tumour purity of more than 20% so, accordingly, all 62 cases were retained for analysis. "The Snyder and colleagues' cohort also contained a number of samples taken on treatment; these samples have been retained for consistency; however, we note again the significance of results strengthens if they are removed.The third checkpoint inhibitor cohort comprised of 100 patients with melanoma treated with anti-CTLA-4 therapy, as reported by Van Allen and colleagues;12 one patient was excluded because of a tumour purity of less than 20%.The final checkpoint inhibitor cohort comprised of 31 patients with non-small-cell lung cancer treated with anti-PD-1 therapy, as reported by Rizvi and colleagues;14 all patients were eligible for inclusion.For these four cohorts,12–14 final mutation annotation files, including indel mutations, were not available, so we obtained raw BAM files and undertook variant calling using a standardised bioinformatics pipeline.To assess for a general association between nsSNVs or indel loads and patient overall survival we used a final cohort of 100 patients with non-small-cell lung cancer, as reported by Jamal-Hanjani and colleagues.26,We obtained final post-quality control mutation annotation files and clinical outcome data, and 88 patients were retained for analysis after exclusion of non-smokers.Non-smokers were excluded on account of differing cause of disease and dramatic differences in mutation counts, which were likely to confound analyses.We additionally considered clonal versus subclonal analysis of indel counts in this cohort; however, because of small indel numbers it was not possible to reliably subset the data in this manner.For whole-exome sequencing variant calling, we obtained BAM files representing both the germline and tumour samples from the cohorts of Snyder and colleagues,13 Van Allen and colleagues,12 and Rizvi and colleagues14 and converted these to FASTQ format using Picard tools SamToFastq.Raw paired-end reads in FastQ format were aligned to the full hg19 genomic assembly obtained from GATK bundle,27 using bwa mem.28,We used Picard tools to clean, sort, and merge files from the same patient sample and to remove duplicate reads.We used Picard tools, GATK, and FastQC to produce quality control metrics.SAMtools 
mpileup29 was used to locate non-reference positions in tumour and germline samples. Bases with a Phred score of less than 20 or reads with a mapping quality of less than 20 were omitted. Base-alignment quality computation was disabled and the coefficient for downgrading mapping quality was set to 50. VarScan2 somatic30 used output from SAMtools mpileup to identify somatic variants between tumour and matched germline samples. Default parameters were used, with the exception of minimum coverage for the germline sample, which was set to 10, and minimum variant frequency, which was changed to 0·01. VarScan2 processSomatic was used to extract the somatic variants. The resulting SNV calls were filtered for false positives with the associated fpfilter.pl script in VarScan2, initially with default settings and then repeated with min-var-frac=0·02, having first run the data through bam-readcount. Only indel calls classed as high confidence by VarScan2 processSomatic, with somatic_p_value scores of less than 5 × 10−4, were kept for further analysis. MuTect31 was also used to detect SNVs, with annotation files contained in the GATK bundle. Following completion, variants called by MuTect were filtered according to the filter parameter PASS. In the pan-cancer cohort, SNV and indel mutation counts were computed per case, considering all variant types. Across all 5777 samples, we observed a total of 1 227 075 SNVs and 54 207 indels. Dinucleotide and trinucleotide substitutions were not considered. The metric indel burden was defined simply as the absolute indel count per case, and indel proportion was defined as the number of indel mutations divided by the total number of somatic mutations (indels plus SNVs) per case. We repeated the same analysis in the two renal clear cell carcinoma replication cohorts. We estimated nonsense-mediated decay (NMD) efficiency with RNAseq expression data obtained from the TCGA GDAC Firehose repository. We estimated the extent of NMD for all indel and SNV mutations by comparing mRNA expression in samples with a mutation to the median mRNA expression of the same transcript across all other tumour samples in which the mutation was absent. Specifically, mRNA expression of every mutation-bearing transcript was divided by the median mRNA expression of that transcript in non-mutated samples, to give an NMD index. The overall NMD index values observed were 0·93 for indel-mutated transcripts and 1·00 for SNV-mutated transcripts, suggesting an overall 7% reduction in expression in indel-mutated transcripts. Tumour purity in the renal clear cell carcinoma cohort was 0·54,32 quantified by histological assessment, and assuming constant expression in the remaining 0·46 normal cellular content, that would yield an adjusted 14% drop in expression in indel-mutation-bearing cancer cells. If we assume that tumour mutations are clonal, of heterozygous genotype, in a diploid genomic region, and that wild-type allele expression in mutated cancer cells remains constant, a purity-adjusted reduction of 0·5 would be expected under a model of fully effective NMD. Hence these data suggest that NMD operates with reduced efficiency in the renal clear cell carcinoma cohort; however, we acknowledge that these assumptions will have some effect. These data are presented as a global approximation of NMD efficiency, using methods in line with previous publications.33 NMD index values were −log2 transformed, with 0 indicating no mRNA degradation, and plotted for indel or SNV mutations. We used PyClone34 and ASCAT35 to determine the clonal status of variants in the cohorts of Snyder and colleagues13 and Van Allen and colleagues.12 For each case, variant calls were integrated with local allele-specific copy number, tumour
purity, and variant allele frequency.All mutations were then clustered using the PyClone Dirichlet process clustering.We ran PyClone with 10 000 iterations and a burn-in of 1000, and default parameters.For a number of tumours the reliable copy number, mutation, and purity estimations could not be extracted, rendering clonal architecture analysis intractable and these tumours were omitted from the analysis."The following sample was excluded because of an absence of accurate copy number or clonality estimation in Snyder and colleagues' cohort13: V_MSK052. "For Van Allen and colleagues' cohort,12 reliable analysis of indel mutation clonality was not possible because of a lack of accurate copy number or clonality estimation in a number of cases: Pat02, Pat06, Pat100, Pat101, Pat103, Pat106, Pat110, Pat113, Pat131, Pat132, Pat135, Pat138, Pat139, Pat140, Pat148, Pat159, Pat160, Pat163, Pat165, Pat166, Pat170, Pat171, Pat174, Pat175, Pat24, Pat36, Pat38, Pat73, Pat77, Pat78, Pat79, and Pat92.For a subset of patients from the TCGA cohort, tumour-specific neoantigen binding affinity prediction data were also available and obtained from Rooney and colleagues.36,Briefly, the four digit HLA type for each sample, along with mutations in class I HLA genes, were determined using POLYSOLVER.37,We determined somatic mutations using Mutect31 and Strelka38 tools.All possible 9-mer and 10-mer mutant peptides were computed, on the basis of the detected somatic SNV and indel mutation across the cohort.Binding affinities of mutant and corresponding wild-type peptides, relevant to the corresponding POLYSOLVER-inferred HLA alleles, were predicted using NetMHCpan.39,High-affinity binders were defined as IC50 less than 50 nM.Wild-type allele non-strong binding was defined as IC50 greater than 50 nM.Accordingly a mutant-specific binder was used to refer to a neoantigen with mutant IC50 less than 50 nM and wild-type IC50 more than 50 nM.A strong binding threshold was used for wild-type alleles to ensure fair comparison between SNV-derived and indel-derived neoantigens, in view of the high incidence of wild-type non-binders for indels.We excluded cancers that were associated with a high level of viral genome integration, including cervical and hepatocellular carcinoma, but not head and neck squamous cell carcinoma.No TCGA dataset was available for Merkel cell carcinoma.Immune gene signature data were obtained from Rooney and colleagues,40 with gene sets defined as stated in the appendix.We did analysis for TCGA patients with renal clear cell carcinoma, for whom both RNAseq and neoantigen data were available.A high burden of frameshift indel high-affinity neoantigens was defined as more than 10 per case, and the percentage difference in expression was compared between the high indel neoantigen group and all other patients across each immune signature.We excluded immune signatures with minimal ssGSEA enrichment scores in all groups.The same analysis was repeated for a high burden of SNV-derived high-affinity neoantigens, with a threshold of more than 17 SNV neoantigens selected to size match the high burden groups across mutational types.We plotted the percentage differences in expression in heatmap format.We did correlation analysis within the high-frameshift indel neoantigen group.Across the four cohorts of patients treated with checkpoint inhibitors, we tested nsSNV, all-coding indel, and frameshift indel variant counts for an association with patient response to therapy.For each of these measures, high groups were 
defined as the top quartile and low groups were defined as the bottom-three quartiles.We used the same criteria across all four datasets and compared the proportion of patients responding to therapy in high and low groups.Measures of patient response were based on definitions consistent with how they were evaluated in the said trials, as follows."For Snyder and colleagues' cohort,13 long-term clinical benefit was defined as radiographic evidence of freedom from disease, evidence of a stable disease, or decreased volume of disease for more than 6 months.No long-term clinical benefit was defined as tumour growth on every CT scan after the initiation of treatment or a clinical benefit lasting 6 months or less."For Hugo and colleagues' cohort,25 responding tumours were complete response, partial response, and stable disease, and non-responding tumours were defined as disease progression. "For Van Allen and colleagues' cohort,12 clinical benefit was defined as complete response, partial response, or stable disease, and no clinical benefit was progressive disease or stable disease with overall survival less than 1 year. "For Rizvi and colleagues' cohort,14 durable clinical benefit was defined as partial response or stable disease lasting longer than 6 months, and no durable benefit was progressive disease less than 6 months from beginning of therapy.Survival analysis was done using the Kaplan-Meier method, with p value determined by a log-rank test.Relapse-free survival was defined as the time to recurrence or relapse, or if a patient had died without recurrence, the time to death.Hazard ratio was determined through a Cox proportional hazards model.Multivariate Cox regression was done with relapse-free survival versus indel load with stage, adjuvant therapy, age, and histology included in the model.We compared indel burden and proportion measures between renal cell carcinomas and all other non-kidney cancers with a two-sided Mann-Whitney U test.In the checkpoint inhibitor response analysis, nsSNV, exonic indel, and frameshift indel counts were each compared to patient response outcome using a two-sided Mann-Whitney U test."We did a meta-analysis of results across the four checkpoint inhibitor datasets using the Fisher's method of combining p values from independent tests. 
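The response-association testing described above (top-quartile grouping, two-sided Mann-Whitney U tests per cohort, and Fisher's method to combine cohort-level p values) can be sketched as follows. This is a minimal illustration of the statistical approach, not the authors' code; the cohort names, patient counts, and response labels are made up for demonstration.

```python
# Sketch of the response-association statistics described above, using made-up counts.
# Assumes per-patient frameshift indel counts and binary response labels per cohort.
import numpy as np
from scipy import stats

def cohort_pvalue(indel_counts, responded):
    """Two-sided Mann-Whitney U comparing frameshift indel counts
    between responders and non-responders within one cohort."""
    indel_counts = np.asarray(indel_counts)
    responded = np.asarray(responded, dtype=bool)
    _, p = stats.mannwhitneyu(indel_counts[responded],
                              indel_counts[~responded],
                              alternative="two-sided")
    return p

def high_low_groups(indel_counts):
    """Top quartile = 'high' group; bottom three quartiles = 'low' group."""
    cutoff = np.percentile(indel_counts, 75)
    return np.asarray(indel_counts) >= cutoff

# Hypothetical cohorts (counts and responses are illustrative only).
cohorts = {
    "cohort_A": ([12, 3, 0, 7, 1, 9, 2, 15], [1, 0, 0, 1, 0, 1, 0, 1]),
    "cohort_B": ([5, 8, 1, 0, 11, 2, 6, 3],  [1, 1, 0, 0, 1, 0, 1, 0]),
    "cohort_C": ([2, 14, 0, 4, 9, 1, 7, 3],  [0, 1, 0, 0, 1, 0, 1, 0]),
}

p_values = []
for name, (counts, resp) in cohorts.items():
    p = cohort_pvalue(counts, resp)
    high = high_low_groups(counts)
    resp = np.asarray(resp, dtype=bool)
    print(f"{name}: p={p:.3f}, response high={resp[high].mean():.0%}, "
          f"low={resp[~high].mean():.0%}")
    p_values.append(p)

# Fisher's method to combine the independent cohort-level p values.
chi2, p_meta = stats.combine_pvalues(p_values, method="fisher")
print(f"meta-analysis (Fisher): chi2={chi2:.2f}, p={p_meta:.3f}")
```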
"We undertook immune signature correlation analysis using a Spearman's rank correlation coefficient.We carried out statistical analyses using R and considered a p value of 0·05 or less as being statistically significant.The funders of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report.The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.We observed a median indel proportion value of 0·05 and a median indel count of 4, cohort-wide.Across all tumour types, renal clear cell carcinoma was found to have the highest proportion of coding indels, 0·12, a 2·4 times increase when compared with the pan-cancer average."This result was replicated in two further independent cohorts, with median observed indel proportions of 0·10 in Sato and colleagues' study23 and 0·12 in Gerlinger and colleagues' study24.Renal papillary cell carcinoma and chromophobe renal cell carcinoma had the second and third highest indel proportion, suggesting a possible tissue-specific mutational process contributing to the acquisition of indels in renal cancers.Renal papillary cell carcinoma and chromophobe renal cell carcinoma had the highest absolute indel count across all tumour types, closely followed by renal clear cell carcinoma.Renal clear cell carcinoma is characterised by loss-of-function mutations in one or more tumour-suppressor genes: VHL, PBRM1, SETD2, BAP1, and KDM5C,32 which can be inactivated by nsSNV or indel mutations.To exclude the possibility that these hallmark mutations were distorting the results, we recalculated renal clear cell carcinoma indel proportion excluding these genes; the revised indel proportion remained at 0·12.When we used previously published multiregion whole-exome sequencing data24 from ten cases of renal clear cell carcinoma to assess the clonal nature of indel mutations, 53 of 110 frameshifting indels were clonal in nature.The overall effect of NMD on the expression of indel-mutated genes was estimated to be 14%, suggesting it operates on a subset of transcripts.Next we sought to investigate the potential immunogenicity of nsSNV and indel mutations through analysis of MHC class I-associated tumour-specific neoantigen binding predictions in the pan-cancer TCGA cohort.Across all samples, HLA-specific neoantigen predictions were done on 335 594 nsSNV mutations, resulting in a total of 214 882 high-affinity binders, equating to a rate of 0·64 neoantigens per nsSNV mutation.In a similar manner, predictions were made on 19 849 frameshift indel mutations, resulting in 39 768 high-affinity binders with a rate of 2·00 neoantigens per frameshift mutation.Thus on a per mutation basis, frameshift indels could generate around three times more high-affinity neoantigen binders than nsSNVs, consistent with the prediction in a recent analysis of a colorectal cancer cohort.41,When both wild-type and mutant peptides are predicted to bind, central immune tolerance mechanisms might delete cells with the reactive T-cell receptor.42,Therefore, we repeated a pan-cancer analysis restricting the neoantigens to mutant-specific binders, and showed that frameshift indels were nine times enriched for mutant-allele-only binders.Of particular interest were genes that are frequently altered via frameshift mutations and with high propensity for MHC binding.In a pan-cancer analysis, these genes were enriched for classic tumour-suppressor genes, including TP53, ARID1A, 
PTEN, KMT2D, KMT2C, APC, and VHL.Collectively, the top 15 genes with the highest number of frameshift mutations were mutated in more than 500 samples with more than 2400 high-affinity neoantigens predicted.Tumour-suppressor genes have been a previously intractable mutational target, but they might be targetable as potent neoantigens.Furthermore, by virtue of being founder events, many alterations in tumour-suppressor genes are clonal, present in all cancer cells, rendering them compelling targets for immunotherapy.43,We next considered the clinical effect of indel mutations by assessing the association between neoantigen enrichment and therapeutic benefit.Consistent with a potential role of frameshifts in the generation of neoantigens, those tumour types approved for the use of checkpoint inhibitors were all found to harbour an above average number of frameshift neoantigens, despite substantial differences in the total SNV or indel mutational burden—eg, renal cell carcinoma.Overall, the number of frameshift neoantigens were significantly higher in the checkpoint inhibitor-approved tumour types versus those that have not been approved to date.However, the potential presence of frameshift neoantigens alone does not imply that they induce T-cell responses, and hence we tested their effect on checkpoint inhibitor efficacy.We used the exome sequencing results from an anti-PD-1 study25 in melanoma.We tested three classes of mutation, nsSNVs, in-frame indels, and frameshift indels, for an association with response to treatment.Although nsSNVs and in-frame indels had no association with response to treatment, frameshift indel mutations were significantly associated with anti-PD-1 response.The upper quartile of patients with the highest burden of frameshift indels had an 88% response to anti-PD-1 therapy, compared with 43% for the lower three quartiles."To confirm the reproducibility of this association, further checkpoint inhibitor response data were obtained from two additional melanoma cohorts: Snyder and colleagues' cohort13 and Van Allen and colleagues' cohort12. "We did the same analysis in each cohort and frameshift indel burden was significantly associated with checkpoint inhibitor response in both datasets.An overall meta-analysis across the three cohorts confirmed frameshift indel count to be associated with checkpoint inhibitor response, and with a more significant association than nsSNV count.The effect of clonality was additionally assessed, and clonal frameshift indels were found to have a further significantly predictive advantage beyond all frameshift indels, supporting previous work reported by our group.43, "Overall survival analysis was not different between high and low frameshift indel groups, possibly because of the effect of subsequent therapies on the overall survival. "We assessed the association between frameshift indel load and checkpoint inhibitor response in another tumour type by using data obtained from Rizvi and colleagues' small cohort14 of 31 patients with non-small-cell lung cancer treated with anti-PD-1 therapy; no difference was observed. "To further investigate the importance of frameshift indels in non-small-cell lung cancer, we did additional analysis using data from Jamal-Hanjani and colleagues' cohort26 of 100 cases, none of whom received treatment with checkpoint inhibitors. 
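Before the survival results that follow, here is a minimal sketch of the Kaplan-Meier, log-rank, and Cox proportional hazards framework described in the statistical methods, applied to a high versus low frameshift indel grouping. It assumes the third-party lifelines package is available and uses simulated durations and events rather than the cohort's data; the column names are illustrative only.

```python
# Sketch of the survival comparison between high and low frameshift indel groups.
# Requires the 'lifelines' package; durations, events, and counts below are simulated.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 88  # illustrative cohort size

frameshift_count = rng.poisson(4, n)
high = frameshift_count >= np.percentile(frameshift_count, 75)  # top quartile
# Simulated relapse-free survival times (months) and event indicators.
rfs = np.where(high, rng.exponential(40, n), rng.exponential(25, n))
event = rng.random(n) < 0.6

# Kaplan-Meier estimates per group (could be plotted with .plot_survival_function()).
km_high = KaplanMeierFitter().fit(rfs[high], event_observed=event[high],
                                  label="high frameshift indel")
km_low = KaplanMeierFitter().fit(rfs[~high], event_observed=event[~high],
                                 label="low frameshift indel")

# Log-rank test for the difference between the two groups.
res = logrank_test(rfs[high], rfs[~high],
                   event_observed_A=event[high], event_observed_B=event[~high])
print(f"log-rank p = {res.p_value:.3f}")

# Cox proportional hazards model for the hazard ratio; in a multivariate model,
# covariates such as stage, adjuvant therapy, age, and histology would be extra columns.
df = pd.DataFrame({"rfs": rfs, "event": event, "high_indel": high.astype(int)})
cph = CoxPHFitter().fit(df, duration_col="rfs", event_col="event")
print(cph.summary[["exp(coef)", "p"]])  # exp(coef) is the hazard ratio
```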
"Consistent with our previous findings,43 we observed that patients with lung adenocarcinoma whose tumour's harboured a high clonal neoantigen burden exhibited improved relapse-free survival compared with the bottom three quartiles.However, across all histological subtypes of non-small-cell lung cancer, survival was found to be significantly improved for patients with a high load of frameshift indels; by contrast, nsSNV load was not formally associated.Of note, the strongest prognostic predictor was for patients in the patients with a high load of both nsSNVs and frameshift indels, with elevated levels of both frameshift indels and nsSNVs, with no events in this group.Multivariate analysis showed some evidence of correlation between variables, so further investigation of nsSNVs and frameshift indels as predictors in larger patient cohort will be required to draw definitive conclusions.Analyses of the indel load and proportion of response achieved from phase 2 studies for the tumour types not approved for checkpoint inhibition were limited by the small sample size and variable patient inclusion criteria such as PDL-1 immunohistochemistry.Nevertheless, the proportion of patients achieving a response was higher in triple-negative breast cancer44 compared with other invasive breast carcinoma molecular subtypes, and triple-negative invasive breast carcinoma has a higher burden of frameshift and mutant-specific neoantigens.Furthermore, mutational burden has been reported as higher in BRCA1-mutated triple-negative breast cancer compared with BRCA-wild-type triple-negative breast cancer,45 and we specifically observed a higher indel load in these cases.However, this outcome did not correlate with tumour-infiltrating lymphocyte density, possibly because of the small sample size, absence of indel immunogenicity in this tissue type, or additional factors that modulate tumour-infiltrating lymphocyte density.Finally, although genomic data are not available to correlate with checkpoint inhibitor response in renal clear cell carcinoma, we analysed the association between frameshift neoantigen load and immune responses within the tumour using RNAseq gene expression data.Patients were split into groups on the basis of the burden of frameshift neoantigens versus SNV-neoantigens.A high load of frameshift neoantigens was associated with upregulation of immune signatures classically linked to immune activation, including MHC class I antigen presentation, CD8-positive T-cell activation, and increased cytolytic activity, a pattern not observed in the high SNV-neoantigen group.Furthermore, correlation analysis within the high frameshift neoantigen group showed that CD8-positive T-cell signature was correlated with both MHC class I antigen presentation genes and cytolytic activity.In this study, we analysed the pattern of indel mutations across 19 solid tumour types and found that renal clear cell carcinoma, renal papillary cell carcinoma, and chromophobe renal cell carcinoma have the highest indel rate as a proportion of their total mutational burden and the highest overall indel count and are enriched for mutant-specific neoantigens.We also observed that indel number is significantly associated with checkpoint inhibitor response in melanoma.Indels are thought to occur as a result of DNA strand slippage during DNA synthesis46 and their frequency is higher in repetitive sequences, especially those that are AT-rich.Indels are also generated through mutagen exposure, with a higher number observed in smoking than in 
non-smoking non-small-cell lung cancer40 and higher in UV-exposed versus UV-protected melanomas.47,Less is known about the repair of indels than SNVs; however, the role of the mismatch repair mechanism is illustrated by the microsatellite instability-high phenotype, characterised by excess indels in repetitive sequences as seen in patients with Lynch syndrome.Although renal clear cell carcinoma has been reported in patients with Lynch syndrome,48 this cannot account for the overall pattern of indel rates across renal clear cell carcinoma nor the comparatively low SNV burden.Most renal clear cell carcinomas have loss of chromosome 3p, which encodes the mismatch repair gene MLH1, but the remaining allele is rarely mutated in sporadic renal clear cell carcinoma.Another relevant gene encoded on 3p is FHIT, and its deficiency has been linked with indel accumulation in knockout mouse models, but the consequences of the heterozygous knockout are unknown.49,However, as loss of 3p is an infrequent event in renal papillary cell carcinoma and chromophobe renal cell carcinoma and indels are also elevated in both these tumour types, other tissue-specific phenomena are likely to contribute to the increased indel burden across all renal carcinoma subtypes.50,Renal clear cell carcinoma and renal papillary cell carcinoma arise in the proximal tubule and chromophobe renal cell carcinoma in the distal tubule of the nephron, and this shared tissue context might be important, even if the three subtypes are molecularly distinct.32,50,51,The nephron, and the proximal tubule in particular, play a crucial role in the reabsorption of vast volumes of renal filtrate and elimination of waste products of metabolism and toxins, with the effects of toxin elimination evident in the increased incidence of renal clear cell carcinoma in those individuals exposed to aristolochic acid.52,Ochratoxin A, a mycotoxin, induces renal tumours in rodents by causing double-strand breaks.53,Polymorphisms in genes involved in the repair of double-strand breaks are associated with an increased risk of renal clear cell carcinoma.54,Double-strand breaks are mostly repaired by non-homologous end-joining, which is error-prone and can increase the rate of small indels."Therefore, it is possible that an environmental toxin causes an excess of double-strand breaks in the nephron and that non-homologous end-joining's mutagenic potential is exacerbated by functional polymorphisms.In support of this notion we observed a higher rate of indels in triple-negative breast cancer, which is enriched for BRCA deficiency.BRCA1 has been shown to inhibit error prone non-homologous end-joining.55,However, we did not observe a correlation between indel load and tumour-infiltrating leucocytes density in BRCA1 triple-negative invasive breast carcinoma.We observed that indels, which alter the reading frame, generate three times as many predicted neoantigens as nsSNVs and nine times as many strong mutant-binding neoantigens where the wild-type sequence is not predicted to strongly bind the HLA molecule.Thus, frameshift mutations potentially result in a neoantigen landscape, which is both quantitatively and qualitatively more potent than that provided by an equivalent number of nsSNVs.In keeping with this notion, microsatellite instability-high colorectal cancer CD8-positive tumour-infiltrating leucocytes density correlates positively with the total number of frameshift mutations.56, "With the exception of polyomavirus-positive Merkel cell carcinoma and Hodgkin's 
lymphoma, renal clear cell carcinoma is the only tumour type with a relatively low nsSNV burden among the tumour types for which checkpoint inhibitors have been approved for clinical use.However, owing to a comparable frameshift burden its level of mutant-specific high-affinity neoantigens is similar to that observed in non-small-cell lung cancer and melanoma, and the same is true of renal papillary cell carcinoma and chromophobe renal cell carcinoma.Although the evidence for the immunogenicity of renal papillary cell carcinoma is sparse, complete responses have been noted with the use of both high-dose interleukin 257 and anti-PD-1 therapy.58,59,Therapeutic data in chromophobe renal cell carcinoma are limited.Given the differential benefit across patients, the spectrum of immune-related adverse events, and the cost of checkpoint inhibitor drugs, efforts to identify biomarkers of response are ongoing.PDL-1 expression and MSI-H status are the only biomarkers that have been linked to drug approval.Mutational and neoantigen burdens have been shown to correlate with clinical outcomes from checkpoint inhibitor therapy in patients with advanced melanoma, colorectal cancer, and non-small-cell lung cancer.13,14,16,However, some patients with cutaneous melanoma with a low nsSNV burden still derive benefit from checkpoint inhibitors, as do some patients with UV-protected mucosal melanomas,60 which have a characteristically low nsSNV burden.61,We analysed three melanoma datasets for which both response and mutational data were available.In two of the three studies,13,14 comprising a total of 96 patients treated with either anti-PD-1 or anti-CTLA-4 therapy, frameshift indel burden was a better predictor of response than nsSNV burden.In the third study12 of 100 patients treated with anti-CTLA-4 therapy, the nsSNV burden and frameshift burden were both significantly associated with checkpoint inhibitor response."We note that most of the patients in Van Allen and colleagues' cohort12 were pretreated and therefore any mutational biomarker assessment in this group might be less reliable.Although nsSNVs contribute greatly towards tumour immunogenicity in heavily mutated tumours, our analyses suggest that frameshift mutations also make a significant contribution relative to their overall low number.The contribution of frameshift indels in low nsSNV burden tumours might be of greater importance still, as illustrated by the fact that frameshift mutations contribute over a third of the neoantigen load in renal clear cell carcinoma.Mutational and checkpoint inhibitor response data were not available for renal clear cell carcinoma, hence we could not establish a direct association between frameshift indels and positive checkpoint inhibitor response.In terms of indirect evidence in renal clear cell carcinoma, we observed an association between the frameshift neoantigens and upregulation of machinery necessary for antigen presentation by the MHC complex and T-cell activation.Furthermore, the CD8-positive T-cell signature in the frameshift neoantigen-high group was closely related to cytolytic activity, suggesting the presence of antitumour effectors that could confer sensitivity to immunotherapy.However, no definitive conclusions can be drawn until checkpoint inhibitor response and indel load is directly investigated in a sufficiently powered series of renal clear cell carcinoma cases.For frameshift neoantigens to contribute to antitumour immunity the mutant peptides must be expressed.Frameshifts cause premature 
termination codons and the resultant mRNAs can be targeted for NMD. Published analyses62 of germline samples show that premature termination codons frequently lead to the loss of expression of the variant allele, but that some mutant transcripts escape NMD depending on the exact location of the frameshift within a gene. Combined analyses of mutational and expression data from more than 10 000 cancer samples showed that NMD is triggered with variable efficacy, and even when effective might not alter expression because of factors such as short mRNA half-life.33 RNAseq analysis in renal clear cell carcinoma cases showed a minimal change in mRNA transcript levels for frameshift indel-mutated tumours, suggesting NMD is operating on a subset of transcripts, as expected. In this context, the strongly hypoxic microenvironment that characterises renal clear cell carcinomas might be a contributing factor, with evidence showing NMD inhibition in cells subject to hypoxia and other perturbed microenvironmental conditions.63 Clonal frameshift mutations could be an important source of tumour-specific antigens for personalised immunotherapy strategies, including peptide vaccines and adoptive cell therapy. Tumour-reactive T cells recognising a frameshifted product of the CDKN2A tumour-suppressor gene were reported to mediate a potent in-vivo response in melanoma.64 In microsatellite instability-high colorectal cancer, frameshift neopeptide-specific cytotoxic T-cell responses were observed in patients harbouring those mutations.65 Cytotoxic T-lymphocyte responses to frameshifted proteins have been detected in healthy hereditary non-polyposis colorectal cancer-mutation carriers, raising the possibility of protective immunosurveillance in this population.66 Frameshift neoantigens are particularly pertinent in the context of mismatch-repair deficiency, which is a pan-cancer event, and crucially, frameshifts commonly occurring in microsatellite instability-high colorectal carcinomas have been shown to generate NMD-resistant transcripts.67 In support of this, in a study68 of PD-1 blockade in patients with microsatellite instability-high tumours from various cancer subtypes, functional analyses in a responding patient showed in-vivo expansion of frameshift neoantigen-specific T-cell clones. Frameshift neoantigens provide a unique opportunity to target common tumour-suppressor genes, such as TP53 and BAP1,69 and their founder status also enriches for clonal neoantigens. Acknowledging the qualitative difference in the neoantigen burden of renal clear cell carcinoma might be integral to optimising responses to checkpoint inhibitors. Neoantigens derived from driver mutations elicit profound T-cell exhaustion via chronic antigen stimulation, generating T-cell pools refractory to immune therapy.70 Thus, early administration of checkpoint blockade might further improve clinical benefit in cancers with particularly antigenic mutations, such as renal clear cell carcinoma. It is also noteworthy that a high differential affinity between wild-type and mutant peptides is indicative of enhanced tumour protection in vivo.71 The nine-times enrichment of mutant-only binders among neoantigens derived from frameshift mutations relative to nsSNVs might therefore partly explain the predictive power of frameshift neoantigens in checkpoint inhibitor responses. Indel variant calling is a widely recognised challenge in bioinformatics, owing to the inherent nature of short-read sequencing technology; however, accurate indel calling can be
achieved within both a research and a clinical context with strict quality control procedures.72 While strict quality control procedures can ensure a low false-positive rate, as a consequence the true rate of indel mutations might be underestimated. In conclusion, we report that kidney cancers carry the highest pan-cancer burden of indel mutations. Furthermore, our data suggest that frameshift indels are a highly immunogenic mutational class, triggering an increased quantity of neoantigens and greater mutant-binding specificity. Collectively, these data might reconcile the outlier nature of immunotherapy responses in renal clear cell carcinoma, highlighting frameshift indels as a potential biomarker of checkpoint inhibitor response and supporting the targeting of clonal frameshift indels by both vaccine and cell therapy approaches.
Background The focus of tumour-specific antigen analyses has been on single nucleotide variants (SNVs), with the contribution of small insertions and deletions (indels) less well characterised. We investigated whether the frameshift nature of indel mutations, which create novel open reading frames and a large quantity of mutagenic peptides highly distinct from self, might contribute to the immunogenic phenotype. Methods We analysed whole-exome sequencing data from 5777 solid tumours, spanning 19 cancer types from The Cancer Genome Atlas. We compared the proportion and number of indels across the cohort, with a subset of results replicated in two independent datasets. We assessed in-silico tumour-specific neoantigen predictions by mutation type with pan-cancer analysis, together with RNAseq profiling in renal clear cell carcinoma cases (n=392), to compare immune gene expression across patient subgroups. Associations between indel burden and treatment response were assessed across four checkpoint inhibitor datasets. Findings We observed renal cell carcinomas to have the highest proportion (0.12) and number of indel mutations across the pan-cancer cohort (p<2.2 × 10−16), more than double the median proportion of indel mutations in all other cancer types examined. Analysis of tumour-specific neoantigens showed that enrichment of indel mutations for high-affinity binders was three times that of non-synonymous SNV mutations. Furthermore, neoantigens derived from indel mutations were nine times enriched for mutant specific binding, as compared with non-synonymous SNV derived neoantigens. Immune gene expression analysis in the renal clear cell carcinoma cohort showed that the presence of mutant-specific neoantigens was associated with upregulation of antigen presentation genes, which correlated (r=0.78) with T-cell activation as measured by CD8-positive expression. Finally, analysis of checkpoint inhibitor response data revealed frameshift indel count to be significantly associated with checkpoint inhibitor response across three separate melanoma cohorts (p=4.7 × 10−4). Interpretation Renal cell carcinomas have the highest pan-cancer proportion and number of indel mutations. Evidence suggests indels are a highly immunogenic mutational class, which can trigger an increased abundance of neoantigens and greater mutant-binding specificity. Funding Cancer Research UK, UK National Institute for Health Research (NIHR) at the Royal Marsden Hospital National Health Service Foundation Trust, Institute of Cancer Research and University College London Hospitals Biomedical Research Centres, the UK Medical Research Council, the Rosetrees Trust, Novo Nordisk Foundation, the Prostate Cancer Foundation, the Breast Cancer Research Foundation, the European Research Council.
VPOT: A Customizable Variant Prioritization Ordering Tool for Annotated Variants
With the increasing use of next-generation sequencing methods, researchers are now faced with many genetic variants, from hundreds of thousands to millions, to evaluate. Software such as ANNOVAR and VEP use databases that provide functional consequences, pathogenicity predictions, and population frequencies to annotate genetic variants. There are many pathogenicity-prediction algorithms available, such as CADD, PolyPhen-2, SIFT, and MutationTaster2, but no single algorithm has been universally accepted as the best. Genetic variants predicted to be deleterious by multiple methods are likely to be of greater interest in disease studies. In practice, multiple pathogenicity prediction scores are utilized to increase the likelihood of identifying a disease-causing variant. Thus, to determine if a variant is likely to be disease-causal, all prediction scores are often considered together in addition to variant filtering based on other annotation metrics. This makes the prioritization of genetic variants a labor-intensive and cumbersome task. To facilitate this process, several variant prioritization tools have been developed. However, they are either web-based, making the analysis of whole-genome data difficult, or they do not provide an aggregated score across all annotation values. We have developed the Variant Prioritization Ordering Tool (VPOT), a Python-based command line program that creates a single aggregated pathogenicity ranking score from any number of annotation values via customizable weighting. Using this score, VPOT ranks variants, allowing researchers to prioritize variants based on all annotation data and pathogenicity-prediction outcomes. The VPOT workflow consists of two main steps: variant prioritization and post-processing of the variant priority ordered list. Using ANNOVAR-annotated VCFs or tab-separated-values files as input, the VPOT priority function creates a prioritization parameter file (PPF) based on all the annotation elements found. The PPF records whether each annotation field is character or numeric and lists the range of values found within that field to aid customization by the user. By modifying the PPF, the user can select which annotation fields to use in the prioritization process and the weighting to apply to a specific range of values for each annotation field. Additionally, the PPF allows users to filter variants on field attributes; for example, a population frequency threshold can be defined for the Exome Aggregation Consortium/Genome Aggregation Database. A PPF only needs to be set up once, as it can be applied repeatedly to prioritize variants in different samples provided the annotation fields used within the PPF are available. While utilization of new prediction annotations would require modification of the PPF, VPOT will still run successfully without PPF modification, but it will utilize only the annotations indicated in the PPF. VPOT is designed to allow users to customize their prioritization process based on annotations relevant to the disease of study. However, we also provide a default PPF with a list of recommended annotations based on our experience with one complex disease, congenital heart disease (CHD). The default PPF eliminates variants with a minor allele frequency higher than 0.1% with respect to control databases, low-quality variants, and synonymous variants. The weighting criteria for each in silico predictor used in the default PPF are set to identify pathogenic variants based on pathogenicity thresholds recommended by the individual algorithms or
informed by the literature. The default PPF also weights the most disruptive variants, such as stop-gain, frameshift indel, and splicing variants, highly. Annotated VCF/TSV files and a PPF are passed as input to VPOT to perform the prioritization function on all variants. Using the PPF-customized weights, each variant is scored by aggregating all the user-defined values; this is done by calculating the sum of all encoded weights for each variant. A normalized score is also calculated by dividing by the maximum score found across all variants. By default, all variants are returned and ordered in the output, which we call the variant priority ordered list (VPOL). Variants with a low score can be removed at this stage by providing a cutoff within the PPF, so that only variants with scores greater than or equal to the cutoff are included in the VPOL. For each variant, VPOT performs quality control checks on each sample's genotype based on coverage and allele balance. The user can customize the quality control thresholds via the PPF. If the sample genotype call fails these quality control checks, then it is marked in the VPOL. For each variant line in the VPOL, each sample's genotype is denoted as "0" for reference, "1" for heterozygous, "2" for homozygous alternate, or "." for quality control failure. This prioritization step can easily be performed in parallel across many samples or repeated for new samples by using the same PPF as part of the input. VPOT provides several post-prioritization options to explore the VPOL. A summary statistics option generates a quick and simple variant report for the supplied VPOL, highlighting the number of scored variants and a list of genes that score in the top 75th percentile of variants found for each sample in the VPOL. VPOT allows researchers to apply a user-defined candidate gene list to filter any VPOL using the gene filtering option. VPOT can also filter variants in the VPOL based on inheritance or absence from controls via the sample filtering option, which utilizes a ped format file. The sample filtering option can filter variants based on their case-control status by extracting variants that exist in case samples and not in control samples of a large cohort. The VPOT samplef option can also filter variants based on different Mendelian inheritance models. A complete family trio, defined by the presence of parents and proband, is required for this option. The de novo model identifies variants that exist only in the proband and not in either parent. The autosomal dominant model identifies variants that exist in both the proband and the affected parent but not in the unaffected parent. The autosomal recessive model identifies variants that are homozygous for the alternative allele in the proband and heterozygous in both parents. The compound heterozygous model provides a filter that returns heterozygous variants in genes that have both proband-paternal and proband-maternal specific variants. For large cohort studies, it is recommended to run multiple VPOT processes for small subsets of samples in parallel to reduce computational time. To facilitate viewing all the samples in a single file, VPOT has a merge option to consolidate multiple VPOL files into one VPOL. We used VPOT to identify potentially pathogenic gene variants in a family with a proband that had multiple congenital malformations. The family was subjected to whole-genome sequencing and over 7.7 million variants were identified. Following filtering and prioritization by VPOT using the
default PPF the number of candidate variants decreased to 587.Based on the family pedigree which shows that the parents were consanguineous, we used VPOT’s inheritance model filtering to refine the number of candidate variants based on an autosomal recessive inheritance model.After application of inheritance model filtering, 14 variants remained with a HAAO homozygous variant ranked first, consistent with the reported genetic cause in this family .The identification of the HAAO variant demonstrates the ability of VPOT to facilitate monogenic disease variant discovery in a systematic way.VPOT has been successfully used to prioritize variants in a congenital heart disease cohort of 30 families that were whole-exome sequenced , with the disease-causing variants in the three solved families ranked within the top 2% of all variants found.In another cohort of 97 CHD families that underwent WGS , clinically actionable variants were identified in 28 families, and VPOT ranked the majority of these variants within the top 1% of all variants found.Only two variants were not ranked within the top 1% of variants due to large disagreement in pathogenicity prediction between different methods.We have provided the PPF file used for the prioritization of variants in these studies as a default PPF for the study of complex diseases like CHD.VPOT’s approach to variant prioritization is to aggregate pathogenicity predictor scores since no single pathogenicity predictor score has been shown to predict pathogenic mutations reliably.Other packages have utilized this same approach, and we identified two for evaluation comparison that are most similar to VPOT, Variant Ranker – a web-based tool, and VaRank – a command line program.Both Variant Ranker and VaRank create a ranking value for variants based on a set of user-defined scores for pathogenicity predictors like VPOT.We compared the overall features and functionality between the tools.Both VPOT and VaRank have no restriction on the input file size, which is important for the analysis of variants resulting from whole genome sequencing.Annotation is controlled by the user for both VPOT and VaRank, although it is a separate process for VPOT and part of the tool for VaRank.This provides greater flexibility for the user to adopt newer releases of the human reference genome, and novel pathogenicity predictors, such as for splicing and non-coding genetic variants.For Variant Ranker, the variant annotation process is embedded within its workflow and cannot be modified by the user.All three tools rank variants based on the scores of multiple pathogenicity prediction methods.However the number of predictors vary, with the lowest seen in VaRank that uses only three fixed pathogenicity prediction tools, then Variant Ranker that uses seven fixed tools, and finally VPOT where the number is limited only by the predictors included in the annotation.Accounting for differences in the genetic architectures of different diseases, VPOT allows expert users to apply their specialized knowledge of disease to stratify results from in silico predictors.The user can select higher weighting for specific predictors to enhance the accuracy for the disease or study design in question.VPOT also allows fine-tuning of variant ranking as the user can define any number of scoring intervals for an annotation category.This allows the user to define different pathogenicity thresholds instead of a binary non-damaging/damaging scenario.Finally, both VPOT and VaRank are local machine tools, so there is 
no security concern with sensitive study data being stored in the cloud.We evaluated VPOT, VaRank, and Variant Ranker by prioritizing variants from an exome sequencing dataset on idiopathic hemolytic anemia used previously by Variant Ranker to demonstrate its effectiveness .Following as close as possible the default variant scoring criteria of Variant Ranker, VPOT also ranked the most likely causative gene PKLR in the fourth position like Variant Ranker.We were not able to replicate the same scoring parameters as Variant Ranker using VaRank due to the limited number of pathogenicity predictors scoring options.With VaRank, using its default scoring parameters the PKLR variant was ranked in 199th position with an annotation impact value of “Moderate”.Both VaRank and Variant Ranker provide CADD Phred score annotation but do not include it in their final ranking.CADD score is a commonly used pathogenicity predictor, and a minimum score of 20 has been used as a lower threshold for variants considered to be possibly pathogenic .Utilizing the flexibility of VPOT we added CADD into our annotation and PPF with a weighting for CADD Phred score above 20.Under these new ranking criteria, the PKLR variant was ranked first by VPOT.This demonstrates the benefit of VPOT’s customizability to allow the users to refine and tune the variant prioritization process.Finally, we compared the computational performance of the three tools when ranking files with different number of variants.The processing time for VPOT and VaRank includes the annotation of the input VCF.VPOT was consistently faster than both VaRank and Variant Ranker, and as the number of variants increased the time difference between VPOT and the others were magnified.Additionally, VPOT was the only tool able to complete variant prioritization task for samples containing up to four million variants.In comparing the amount of central processing unit memory usage for the local machine tools, VPOT required a significantly smaller amount of memory to perform the prioritization tasks compared to VaRank.VPOT provides a convenient way to prioritize genetic variants in disease sequencing studies.It is fully customizable, allowing researchers to filter on any annotation metrics and set weights for pathogenicity predictions that reflect their specific disease-variant hypothesis in question.The use of VPOT can be especially informative when analyzing sequencing cohorts containing many families, as the prioritization of variants can allow researchers to identify most likely disease-causal candidate variants quickly across all families.VPOT is highly scalable for large genome analysis.Whole-genome sequencing generates very large variant files, and there are now increasing requirements for prioritization of non-coding variants that make up ∼98% of the genome.As larger sequencing studies are performed, VPOT will further prove to be an extremely valuable tool.VPOT is freely available for public use at GitHub.Documentation for installation along with a user tutorial, default parameter file, and test data are provided.Additional datasets analyzed in the current study are available upon request from the corresponding author.EI developed the application tool, performed the analyses and drafted the paper.GC, DW and SLD participated in the design of the tool.EG participated in the design and helped to draft the manuscript.All authors read and approved the final manuscript.The authors have declared no competing interests.
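To make the prioritization logic described earlier concrete, the following is a minimal sketch of the weighted-sum scoring, normalization, and hard filtering that a PPF encodes. The dictionary structure, annotation field names, weights, and thresholds here are hypothetical illustrations of the approach; they are not VPOT's actual file format, options, or code.

```python
# Minimal sketch of weighted-sum variant prioritization as described above.
# Field names, weights, and thresholds are hypothetical, for illustration only.

# "PPF"-style scoring rules: for each annotation field, a list of
# (predicate, weight) pairs; the first matching predicate contributes its weight.
ppf = {
    "CADD_phred":    [(lambda v: v is not None and v >= 20, 2)],
    "SIFT_pred":     [(lambda v: v == "D", 1)],
    "PolyPhen_pred": [(lambda v: v in ("probably_damaging", "possibly_damaging"), 1)],
    "Func":          [(lambda v: v in ("stopgain", "frameshift", "splicing"), 3)],
}

def passes_filters(variant):
    """Hard filter applied before scoring, eg population allele frequency <= 0.1%."""
    af = variant.get("gnomAD_AF")
    return af is None or af <= 0.001

def score_variant(variant):
    """Sum the weights of all matching annotation rules for one variant."""
    total = 0
    for field, rules in ppf.items():
        value = variant.get(field)
        for predicate, weight in rules:
            if predicate(value):
                total += weight
                break
    return total

def prioritize(variants):
    """Score, normalize by the maximum score, and rank variants in descending order."""
    scored = [(score_variant(v), v) for v in variants if passes_filters(v)]
    max_score = max((s for s, _ in scored), default=1) or 1
    return sorted(((s / max_score, s, v) for s, v in scored),
                  key=lambda t: t[0], reverse=True)

variants = [
    {"id": "var1", "Func": "frameshift", "CADD_phred": 34.0, "gnomAD_AF": 0.0},
    {"id": "var2", "Func": "missense", "CADD_phred": 12.1, "SIFT_pred": "T",
     "gnomAD_AF": 0.002},
    {"id": "var3", "Func": "missense", "CADD_phred": 25.3, "SIFT_pred": "D",
     "PolyPhen_pred": "probably_damaging", "gnomAD_AF": 0.0001},
]
for norm, raw, v in prioritize(variants):
    print(f"{v['id']}: raw={raw}, normalized={norm:.2f}")
```

Inheritance-model filtering follows the same spirit: each model can be expressed as a predicate over the proband and parental genotype columns of the ranked output.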
Next-generation sequencing (NGS) technologies generate thousands to millions of genetic variants per sample. Identification of potential disease-causal variants is labor intensive as it relies on filtering using various annotation metrics and consideration of multiple pathogenicity prediction scores. We have developed VPOT (variant prioritization ordering tool), a python-based command line tool that allows researchers to create a single fully customizable pathogenicity ranking score from any number of annotation values, each with a user-defined weighting. The use of VPOT can be informative when analyzing entire cohorts, as variants in a cohort can be prioritized. VPOT also provides additional functions to allow variant filtering based on a candidate gene list or by affected status in a family pedigree. VPOT outperforms similar tools in terms of efficacy, flexibility, scalability, and computational performance. VPOT is freely available for public use at GitHub (https://github.com/VCCRI/VPOT/). Documentation for installation along with a user tutorial, a default parameter file, and test data are provided.
31,602
Heart rate variability in mental stress: The data reveal regression to the mean
The data set presented was obtained from students studying at Chuvash State Pedagogical University. Fig. 1 illustrates the HRV changes that occurred in the different protocol conditions. The effects of mental stress on heart rate variability measures vary significantly according to quartiles of baseline. The lnSDNN changes were significant in the first and second groups, but not in the group with a low baseline level. The passage from the rest session to the mental stress evoked a significant decrease of lnLF in the first and second groups, whereas lnLF in the third group decreased insignificantly. Statistical analysis of lnHF revealed a significant decrease in this parameter in the first and second groups, and insignificant changes of lnHF were observed in the third group. We hypothesized that these differences may be explained by regression to the mean. To test our hypothesis, we examined scatter plots of change against rest measurement. Fig. 2 shows a significant association between baseline levels and effects of mental stress, which confirms our hypothesis. Table 1 presents the descriptive statistics of change between rest and mental stress sessions and the results of ANOVA and ANCOVA. When ANCOVA was performed to evaluate the effects of group on changes in HR and HRV, no significant effects were found regarding the changes of HR, lnSDNN, lnLF, and lnHF scores from rest to mental stress after adjusting for the baseline level. The data obtained can be used to correct HRV measures for RTM. A total of 1206 students (mean age 20.53 ± 0.11 years) attending Chuvash State Pedagogical University participated in the study. We randomly selected 162 subjects to perform an arithmetic mental task. To assess the influence of baseline heart rate variability on heart rate variability during the arithmetic mental task, subjects were divided into quartiles according to baseline HRV. To study the effects of arithmetic mental stress on autonomic regulation of heart rate and HRV, ECG recordings were acquired during both baseline and mental stress. HRV parameters were determined using the Kubios HRV analysis software. Ordinary least squares linear regression models were used to assess the RTM. The model was defined as: change (mental stress value minus rest value) = constant + b × baseline. The RTM effect manifests as a negative correlation between baseline values and stress-induced changes in heart rate and HRV. Analysis of covariance (ANCOVA) tests were used to assess the influence of the RTM effect on baseline and mental stress HRV measurements. RR interval duration was not Gaussian distributed. Therefore, HRV measurements were log transformed to provide a normal distribution. Analysis of variance testing with the post hoc Bonferroni correction was used to assess the effects of stress on HR and HRV measurements. A two-tailed P-value < 0.05 was considered statistically significant.
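As a minimal illustration of the regression-to-the-mean check described above, the sketch below regresses the stress-induced change (stress minus rest) on the baseline value with ordinary least squares and then adds baseline as a covariate when comparing quartile groups. The simulated data and variable names are hypothetical; the actual analysis used log-transformed, Kubios-derived HRV measures.

```python
# Minimal sketch (simulated data) of the regression-to-the-mean (RTM) check:
# regress the stress-induced change on the baseline value; a negative slope b
# is consistent with RTM.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 162
baseline = rng.normal(loc=4.0, scale=0.5, size=n)        # e.g. lnSDNN at rest
stress = 0.7 * baseline + rng.normal(0.9, 0.4, size=n)    # e.g. lnSDNN during the task

change = stress - baseline                                # mental stress reaction
X = sm.add_constant(baseline)                             # change = constant + b * baseline
model = sm.OLS(change, X).fit()
print(model.params)                                       # b < 0 indicates regression to the mean
print(model.pvalues)

# Baseline-adjusted group comparison (ANCOVA-style): include baseline as a covariate
# when testing for differences in change between baseline-HRV quartile groups.
quartile = np.digitize(baseline, np.quantile(baseline, [0.25, 0.5, 0.75]))
design = sm.add_constant(np.column_stack([baseline, np.eye(4)[quartile][:, 1:]]))
print(sm.OLS(change, design).fit().summary())
```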
This data article aimed to assess whether there is a relationship between baseline heart rate variability (HRV) and mental stress-induced autonomic reactivity. Out of 1206 healthy subjects, 162 students were randomly selected to participate in this study. Participants were presented with a mental arithmetic task of 10 min duration. The task required serial subtraction of 7 from a randomly selected 3-digit number. During performance of this task as well as at baseline, ECG was recorded to acquire heart rate and HRV (high frequency, low frequency, the standard deviation of NN) data. Participants were divided into quartiles according to baseline HRV. Mental stress responses were compared across groups. We observed significant differences for autonomic reactivity scores between groups with high versus low baseline HRV. Linear regression results were consistent with the regression to the mean model and mental stress reaction (defined as mental stress value minus baseline value) negatively correlated with baseline values. Baseline-adjusted analyses did not demonstrate significant intergroup differences for changes in heart rate and HRV from rest to mental stress. These data suggest regression to the mean is a major source of variability of stress-related changes in heart rate variability.
31,603
Credibility of subgroup analyses by socioeconomic status in public health intervention evaluations: An underappreciated problem?
There is a clear social gradient in the vast majority of health outcomes, whereby morbidity and premature mortality are concentrated amongst the most socioeconomically deprived groups in society. Health inequalities by socioeconomic status (SES) are caused by a combination of bio-psycho-social exposures acting over the life course, and these exposures are themselves patterned by unequal distributions of power, wealth and income across society. Reducing health inequalities between the most and least socioeconomically deprived groups in society has been identified as a priority for policymakers in the UK for nearly four decades, although little progress has been made in reducing these inequalities to date. Within this context, there is increasing interest in identifying promising public health interventions, or social policies, which may be effective in reducing health inequalities by achieving differentially large health gains in the most socioeconomically deprived groups in society. Similarly, there is a growing recognition, and concern, that some interventions or policies may increase health inequalities if they disproportionately benefit the most affluent groups in society, an effect termed "intervention generated inequalities". There now exists a large body of literature on the potential differential effects of a wide variety of public health interventions and policies, across a number of different target outcomes and levels of action, many of which have been summarised in reviews and "umbrella" reviews of reviews. We became aware of the issue of subgroup analysis credibility whilst reviewing the main approaches that have been taken for the classification of public health interventions: the "sector" approach of Bambra et al.; the "six Ps" approach of McGill et al.; and the "degree of individual agency" approach of Adams, Mytton, White, and Monsivais. In the course of this work, we became aware of recurring methodological issues with the subgroup analyses reported. We therefore decided to examine more closely the methodological quality of widely cited and high-quality-rated public health intervention studies claiming to demonstrate differential effects by social class. We report those findings here. Clearly, an important issue for consumers of evaluation research to consider is the "credibility" of such analyses: the extent to which a putative subgroup effect can confidently be asserted to be believable or real. Clinical epidemiological methodologists have proposed guidelines for conducting credible subgroup analyses within randomised controlled trials and for assessing the credibility of reported subgroup effects, although this guidance may not always be applied by researchers in practice. For example, recent systematic reviews of clinical trials in the medical literature, and of back pain trials specifically, have shown that the majority of apparent subgroup effects that are reported do not meet many of the established criteria for credible subgroup analyses. Both of these reviews examined differential effects according to a range of population subgroups beyond those defined by SES, such as those defined by age and gender. There has to date been relatively little discussion of subgroup analysis credibility in evaluations of public health interventions generally, or with respect to health inequalities by SES specifically. This is an important issue given the role of such analyses in guiding decision making regarding interventions that may reduce health inequalities: high quality, credible subgroup analyses can shed light on
how interventions may either reduce or increase health inequalities and are therefore invaluable to aid effective decision making. Non-credible subgroup analyses, on the other hand, may produce spurious differential intervention effects by SES and lead to decision makers drawing erroneous conclusions about the effects of an intervention on health inequalities. In this paper, we reflect on our experience of assessing the credibility of subgroup analyses in a purposive sample of public health primary intervention evaluation studies that report differential impacts by SES. We aimed to purposively sample a diverse set of evaluations of public health interventions that reported differential health impacts by any marker of SES. Specifically, we aimed to identify a pool of intervention studies that had already been quality-appraised in at least one recent structured review and which claimed to show an impact on health inequalities by SES. We wanted a sample of studies that were sufficiently diverse, in terms of the sorts of interventions evaluated and settings studied, so as to provide good coverage across all three of the published categorisation systems for such interventions, in case those might be correlated with the generalisability of study findings. Our inclusion criteria were therefore that studies had to: i) be critically appraised as being of "moderate" to "high" quality in a structured review published in the last decade; ii) report on the evaluation of a public health intervention - meaning programmes or policies delivered at a higher level of aggregation than individual patients; iii) describe a public health intervention that was applicable to high-income countries; iv) evaluate the impact of a public health intervention with a credible study design and analysis; v) report a differential effect of the intervention by SES. We excluded studies that looked for a differential intervention effect by any marker of SES, such as income/family budget, education, or local-area average levels of deprivation, but did not find one. This decision was based on the fact that all but a handful of the 21 studies we reviewed, which reported a differential effect by SES, utilised regression-based analyses with interaction terms for each interaction tested between the observed intervention main effect and the SES variable in question. We were well aware that such interaction analyses are notoriously low-powered, but the public health intervention literature rarely, if ever, reports on the power of such analyses, even when none of the interactions examined are statistically significant and the sample size of the study is unlikely to have been adequate for such interaction analyses. For example, the evaluation of altered food pricing by Nederkoorn et al. had only 306 subjects, half of whom were randomized to an online simulated food taxation intervention, but only 27% of whom had "low" daily food budgets – the SES marker examined. We refer the reader to more sophisticated guidance from academic disciplines, such as political science, which have long tended to have a more statistically sophisticated understanding of interaction analyses than the public health intervention literature. Intervention studies meeting these criteria were located by reviewing the primary studies included in: i) McGill et al.'s review of socioeconomic inequalities in impacts of healthy eating interventions, where we selected those intervention studies that the authors had identified as being likely to reduce or increase health inequalities
by preferentially improving healthy eating outcomes among lower and higher SES participants respectively, and to which the authors had also assigned a quality score of 3 or greater; and ii) Bambra et al.'s umbrella review of interventions designed to address the social determinants of health. We reasoned that these reviews would provide a suitably diverse sample of primary studies, as these were the sources where we had originally identified the Six Ps and Sectoral approaches to categorising interventions. An additional two recent studies that were previously known to us were also included, in order to cover evaluations of societal-level policies through natural experiments. The final number of primary studies was 21. The credibility of the subgroup analyses reported within each of the studies was assessed against the ten criteria outlined by Sun et al. The criteria refer to various aspects of study design, analysis and context, and were derived largely from the guidance originally produced by Oxman and Guyatt, which was subsequently updated by Sun et al. Each study was assessed on these ten criteria using the scoring tool developed by Saragiotto et al., which allocates one scoring point for each of the criteria met, for a maximum score of ten. The ten criteria for credible subgroup analysis are outlined in Table 1, alongside Saragiotto et al.'s description of each. Each study was also scored on the Effective Public Health Practice Project (EPHPP) Quality Assessment Tool, in order to assess the overall methodological quality of the studies. The EPHPP is a time-honoured critical appraisal tool for public health intervention evaluations that can be applied to both randomised and non-randomised intervention evaluation studies, and is comprised of six domains: selection bias, design, confounders, blinding, data collection methods, and withdrawals and dropouts. Each component is rated as either strong, moderate or weak according to a standardised scoring guide, and these scores are subsequently combined to provide an overall quality rating. Studies are rated as being strong overall if no components receive a weak score, moderate if one component receives a weak rating, and weak if two or more components receive a weak score. Each of the 21 studies in our sample was rated according to the subgroup credibility criteria and also the EPHPP by one of three pairs of reviewers. Each reviewer read and scored the studies independently, before meeting to discuss their scores and resolve any discrepancies in how each study had been rated. A summary of the studies that we examined is provided in Table 2, alongside the EPHPP rating and the number of subgroup analysis credibility criteria fulfilled for each. As shown in Table 2, 17 of the 21 studies that we scored were rated as being of either moderate or strong quality according to the EPHPP criteria. However, only 8 studies met at least 6 of the 10 criteria for credible subgroup analyses. Table 3 displays the number of studies that met each of the credibility criteria for subgroup analysis. The only criterion that was met by all of the studies was whether the subgroup variable was measured at baseline. SES was a stratification factor at randomisation in only 4 studies – although this criterion did not apply to 5 of the studies, which were not randomised trials, and so we adjusted the denominator of this criterion accordingly. Similarly, we note that the criterion "was the significant interaction effect independent, if there were multiple significant interactions?" did not apply to
studies where a statistically significant interaction was not reported. The purpose of this study was to examine the credibility of subgroup analyses reported within a purposively sampled set of positively reviewed evaluations of diverse public health interventions reporting differential effects by SES. Whilst the overall methodological quality of these studies was generally high - as evidenced by the positive ratings that the majority received on the EPHPP quality assessment tool - only 8 of the 21 studies that we examined met over half of the standard ten criteria for credible subgroup analyses. It is also important to note here that there is no particular recommended number of criteria that should be met before a subgroup analysis should be considered to be "credible". Sun et al. argue against such dichotomous thinking, and instead suggest that the credibility of subgroup analyses should be assessed along a continuum running from "highly plausible" to "highly unlikely", where researchers can be more confident that a reported subgroup effect is genuine as more of the credibility criteria are fulfilled. Previous systematic reviews have found that the credibility of subgroup analyses reported in clinical trials is generally low, although there has been very little review research that has considered the credibility of such analyses in primary studies of public health intervention evaluations. Welch and colleagues have previously reported a systematic "review of reviews" of subgroup analyses across "PROGRESS-Plus" factors in systematic reviews of intervention evaluations. The PROGRESS-Plus acronym denotes sociodemographic characteristics where differential intervention effectiveness may be observed, and refers to: Place of residence; Race/ethnicity/culture; Occupation; Gender/sex; Religion; Education; and Social capital. The "Plus" further captures additional variables where inequalities may occur, such as sexual orientation. The scale and scope of Welch et al.'s research were different to ours - in part because the authors examined systematic reviews rather than primary studies, and because they considered a wider range of potential subgroup effects beyond SES, such as those defined by gender and ethnicity. The authors nevertheless noted that only a minority of reviews even considered equity effects, and that, similar to our findings in the present study, the credibility of the analyses conducted within those reviews was rated by the authors as being relatively low. Specifically, only 7 of the 244 systematic reviews identified conducted subgroup analyses of pooled estimates across studies, and these analyses only met a median of 3 out of 7 criteria used by Welch et al.
for credible subgroup analyses. Recent guidelines now emphasise the importance of following best practice guidance for planning, conducting and reporting subgroup analyses in equity-focused systematic reviews – but, as our findings here demonstrate, this literature "has a long way to go" to comply with those guidelines. We note, in this regard, that a similar verdict has just been rendered by the authors of a new review of 29 systematic reviews of all types of public health interventions' effects on health inequalities. Within our purposive sample of twenty-one primary evaluation studies of interventions, there was considerable variation in how many studies met each credibility criterion for subgroup analysis. One criterion that was met by relatively few of the studies that we examined was whether the subgroup effect was specified a priori, in terms of the subgroups examined. This is a crucial issue, as post-hoc analyses are more likely to yield spurious, false-positive subgroup effects, and the results of such exploratory analyses are best understood as being hypothesis generating, rather than confirmatory. A related criterion that few studies met was whether the direction of the effect was correctly pre-specified by the researchers. This is an important point because the plausibility of any observed effect is lowered when researchers predict only that there will be an effect, without specifying its direction, or when the observed effect is in the opposite direction to that which was predicted. Notwithstanding the fact that previous studies on any question can clearly be wrong, it is important not to over-interpret effects when the direction was not correctly pre-specified. Several conceptual frameworks have been developed that researchers can refer to when considering how an intervention might have differential effects according to SES. With regard to interventions for diet and obesity, for example, Adams et al. argue that the degree of agency required of individuals to benefit from an intervention is a major determinant of its equity impacts: interventions that require a high degree of individual agency are likely to increase health inequalities, whilst interventions that require a low level of agency are likely to decrease inequalities. Drawing on such theoretical frameworks to consider the differential impacts of interventions, at the planning stages of intervention evaluations, would help to improve the credibility of subgroup analyses considerably. There is also a need for researchers working on equity-focused systematic reviews to consider the credibility of subgroup analyses reported within primary intervention studies, and to weigh the conclusions that can be drawn from those studies accordingly. It is important to note here that the credibility of subgroup analyses is not currently included in some of the quality appraisal tools commonly applied in systematic reviews, such as the EPHPP. This explains the relatively high EPHPP scores of the 21 studies we reviewed, compared to their relatively low scores on the Saragiotto et al.
scoring tool for subgroup analyses. We conclude that the fourteen-year-old EPHPP tool for quality-scoring in such reviews needs updating to reflect more recent methodological developments, especially in subgroup analysis based on interaction effects. The more recent "PRISMA" extension represents a significant improvement in this regard. The primary strengths of this research are the diversity of intervention evaluations considered and the use of the most up-to-date and comprehensive set of criteria for credible subgroup analyses. The main limitation of this research is that the studies we examined were not identified via a systematic review of the literature, and this sample therefore cannot be considered to be representative of the field. In particular, the majority of the studies included were selected from a systematic review of interventions designed to promote healthy eating, although the range of policy and programme interventions evaluated in those studies was remarkably wide, spanning the full "degree of individual agency" typology laid out by Adams et al. It is therefore unclear whether these findings would generalise to the wider public health intervention literature purporting to inform policy makers on what works to reduce health inequalities by SES. There is now a need to apply these credibility criteria to a fully representative set of evaluation studies of public health interventions. In addition, the credibility criteria for subgroup analyses that we applied were originally designed to be applied to RCTs, as is most clearly reflected by the criterion "was the subgroup variable a stratification factor at randomisation?". More recent writings in the field of public health evaluation, however, emphasise the role of sophisticated non-RCT quasi-experimental designs, such as difference-in-differences with fixed-effect variables for unidentified, non-time-varying confounders. The existing criteria for assessing the credibility of subgroup analyses may therefore need to be further adapted before being applied more widely to the public health intervention literature, where non-RCT designs are widely utilised. The scope of this study was also limited to evaluation studies that reported differential intervention effects by SES. In this context, our interest was primarily in the likelihood that a Type I error is made, where false-positive subgroup effects are identified and reported. Equally important, however, is the possibility of Type II errors, where researchers erroneously do not find any evidence of differential effects by SES. Such errors may be relatively common, as evaluation studies that are designed to test the main effects of interventions will likely be under-powered to detect interaction effects between the treatment and potential effect modifiers. Finally, in addition to evidence on the effectiveness of public health interventions, both researchers and policy makers have highlighted the need to identify the theoretical underpinnings of interventions, and to better understand the causal pathways and mechanisms through which interventions generate differential health outcomes by SES. However, we found that the primary studies we selected did not contain sufficient contextual and qualitative information to provide significant insights into those mechanisms. In this sense, the public health intervention literature we sampled presents another sort of evidence gap. That gap makes the assessment of the external validity of any demonstrated effect on health inequalities particularly hard to judge,
because inadequate theory and contextual detail are included in published evaluations to enable the reader to make an informed judgement about the external validity of the results. As pointed out by Pawson, the widespread adoption of newer forms of more qualitatively oriented, "realist" review would make an excellent counterpoint to purely quantitative assessments of effect size per se. Realist review methods would allow more informed contextual interpretation and better identification of potential mechanisms of action of any given intervention, and their implications for a study's external validity. We doubt that it would be helpful to merely issue more guidelines on such aspects of structured reviews of the equity aspects of public health interventions. We prefer the longer-term strategy of changing standard practice in this field so that future primary studies are simply expected by reviewers to provide richer contextual information. Such information would help to illuminate the mechanisms of interventions' effects, especially when they are differential across SES subgroups. There is increasing interest amongst researchers and policy makers in identifying interventions that could potentially reduce health inequalities by SES. The evidence regarding which interventions may be effective in doing so is often derived through subgroup analyses conducted in evaluation studies, which test whether the effect of the intervention differs according to participants' SES. The methodological credibility of such analyses is only infrequently considered routinely, and our experience of applying established credibility criteria to a purposively selected set of evaluation studies suggests that this is an underappreciated problem. Researchers and consumers of the health inequalities literature should therefore make routine use of such criteria when weighing the evidence on which interventions may increase or reduce health inequalities.
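As an illustration of the two scoring conventions described in this paper, the short sketch below counts how many credibility criteria a study meets, removing non-applicable items from the denominator, and derives an overall EPHPP-style rating from component scores. The criterion labels and example inputs are hypothetical and are not data from the reviewed studies.

```python
# Illustrative sketch of the two scoring steps described above; criterion labels
# and the example inputs are hypothetical, not data from the reviewed studies.

def credibility_score(criteria: dict) -> tuple[int, int]:
    """Count credibility criteria met; items marked None (not applicable)
    are removed from the denominator, as done for non-randomised studies."""
    applicable = {name: met for name, met in criteria.items() if met is not None}
    return sum(applicable.values()), len(applicable)

def ephpp_rating(components: dict) -> str:
    """Overall EPHPP-style rating: 'strong' with no weak components,
    'moderate' with exactly one, 'weak' with two or more."""
    weak = sum(1 for rating in components.values() if rating == "weak")
    if weak == 0:
        return "strong"
    return "moderate" if weak == 1 else "weak"

study_criteria = {
    "subgroup variable measured at baseline": True,
    "subgroup effect specified a priori": False,
    "direction of effect correctly pre-specified": False,
    "stratification factor at randomisation": None,   # not applicable: not an RCT
    # remaining criteria omitted for brevity
}
study_components = {
    "selection bias": "moderate",
    "design": "strong",
    "confounders": "strong",
    "blinding": "weak",
    "data collection": "moderate",
    "withdrawals and dropouts": "moderate",
}

met, out_of = credibility_score(study_criteria)
print(f"credibility: {met}/{out_of} criteria met; EPHPP rating: {ephpp_rating(study_components)}")
```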
There is increasing interest amongst researchers and policy makers in identifying the effect of public health interventions on health inequalities by socioeconomic status (SES). This issue is typically addressed in evaluation studies through subgroup analyses, where researchers test whether the effect of an intervention differs according to the socioeconomic status of participants. The credibility of such analyses is therefore crucial when making judgements about how an intervention is likely to affect health inequalities, although this issue appears to be rarely considered within public health. The aim of this study was therefore to assess the credibility of subgroup analyses in published evaluations of public health interventions. An established set of 10 credibility criteria for subgroup analyses was applied to a purposively sampled set of 21 evaluation studies, the majority of which focussed on healthy eating interventions, which reported differential intervention effects by SES. While the majority of these studies were found to be otherwise of relatively high quality methodologically, only 8 of the 21 studies met at least 6 of the 10 credibility criteria for subgroup analysis. These findings suggest that the credibility of subgroup analyses conducted within evaluations of public health interventions’ impact on health inequalities may be an underappreciated problem.
31,604
Development of the Adverse Outcome Pathway (AOP): Chronic binding of antagonist to N-methyl-D-aspartate receptors (NMDARs) during brain development induces impairment of learning and memory abilities of children
To support a paradigm shift in regulatory toxicology testing and risk assessment by moving away from apical animal testing towards mechanistic knowledge-based evaluation, the Adverse Outcome Pathway (AOP) concept has been proposed. The AOP has been developed as a framework to facilitate a knowledge-based safety assessment that relies on understanding mechanisms of toxicity, rather than simply observing its adverse outcome (AO). This framework helps to organize the existing information and data across multiple levels of biological organisation and to identify correlative and causal linkages between the molecular initiating event (MIE) and the key events (KEs) at the molecular, cellular, tissue, organism or population level that, when sufficiently perturbed by chemical exposures, result in AOs. Therefore, the AOP framework provides a means to adapt mechanistic understanding for regulatory decision making by consolidating, managing and exchanging knowledge between the research and regulatory communities. A variety of molecular and cellular processes is known to be crucial to proper development and function of the central nervous system (CNS). However, there are relatively few examples of well-documented pathways with comprehensive understanding of causally linked MIEs and KEs that result in AOs in the developing brain. The functional and structural complexity of the CNS, coupled with the dynamics of brain development, suggests that a broad array of MIEs may trigger the same adverse neurological outcome and, conversely, that various AOs can be caused by the same MIE. This complexity of brain development, including different susceptibility to toxicity induced by the same chemical at different developmental windows, creates a real challenge for AOP development relevant to developmental neurotoxicity (DNT) evaluation. Currently, in the DNT field, only a few DNT AOPs have been developed. Here, we describe the AOP developed for impairment of learning and memory processes in children caused by inhibition of glutamate receptor N-methyl-d-aspartate (NMDA) function when it takes place during synaptogenesis. It is well documented in the existing literature that cognitive processes rely on physiological functioning of the NMDAR. In this AOP, binding of an antagonist to the NMDAR was defined as the MIE that causes inhibition of NMDAR function, leading to a reduced intracellular level of calcium, followed by reduced levels of brain-derived neurotrophic factor (BDNF), reduced presynaptic glutamate release, aberrant dendritic morphology, increased cell death, and decreased synaptogenesis and neuronal network formation and function, resulting in impairment of learning and memory in children, which was defined as the adverse outcome. Damage or destruction of neurons by chemical compounds during development, when they are in the process of synapse formation, integration and formation of neuronal networks, will derange the organisation and function of these networks, thereby setting the stage for subsequent impairment of learning and memory. Indeed, learning-related processes require neuronal networks to detect correlations between events in the environment and store these as changes in synaptic strength. Long-term potentiation (LTP) and long-term depression (LTD) are two fundamental processes involved in cognitive functions, which, respectively, strengthen synaptic inputs that are effective at depolarizing the postsynaptic neuron and weaken those that are not, thus reinforcing activation of useful pathways in the brain. A series of important findings suggests that the biochemical changes that happen after induction of LTP also occur during memory acquisition,
showing temporality between the two KEs. Empirical support for the Key Event Relationships (KERs) of this AOP is based mainly on data published on exposure to lead (Pb2+), referring to in vitro, in vivo and epidemiological studies. It is well known and documented that Pb2+ is a potent inhibitor of the NMDA receptor, which plays an important role in brain development and cognition. Chronic Pb2+ exposure inhibits NMDA receptor function, followed by the described cascade of KEs at the cellular and tissue level, causing impairments of nerve communication in the brain responsible for deficits in synaptic plasticity, and finally resulting in learning and memory impairment. There is evidence supporting a link between exposure to Pb2+ and learning and memory impairment coming not only from experimental data but also from cohort and epidemiological studies discussed in the last KER of the present AOP. This AOP is the first one relevant to DNT evaluation developed according to the guidance document on developing and assessing AOPs, including its supplement, the Users' Handbook, and submitted to the AOP-Wiki, an AOP repository within the OECD programme on AOP Development and Assessment. The present concise version presents data in tabular format and includes recently published bibliography. Furthermore, the sections dealing with the weight-of-evidence evaluation of KEs, KERs and the overall AOP have been extensively revised, as the way to assemble and assess the degree of confidence that supports AOPs has evolved significantly as more experience has been gained in this field. As mentioned above, the AOP concept describes a sequence of measurable KEs that are correlatively or causally linked, originating from an MIE in which a chemical interacts with a biological target, triggering the first KE downstream. The sequential series of KEs initiated by the MIE includes molecular, cellular, anatomical and/or functional changes in biological systems and ultimately results in a KE at the tissue level leading to an AO of regulatory relevance for human health or eco-toxicological risk assessment. Understanding physiological pathways is the basis for describing the perturbations that occur following chemical exposure. The observed changes of biological state, which should be measurable at different levels of biological organisation, as well as their general role in biology, should be covered in the description of KEs. KERs should assemble and organise evidence that would facilitate establishment of the scientific basis permitting extrapolation of the state of the KE downstream from the measured KE upstream. By definition, AOPs are not chemical specific and the described KEs should be independent from any specific chemical initiator. However, empirical support for a KER description refers to experimental data derived from exposure to chemicals, illustrating the understanding of the patterns of biological responses between identified KEs based on the reviewed literature. It is important that there are studies available where the compounds have been clinically proven to trigger the identified AO, supporting the developed AOP. For example, in the present AOP, the empirical support for KERs is mainly based on experimental data obtained after exposure to Pb2+ as the reference chemical, since there are abundant in vivo, in vitro and epidemiological studies suggesting that cognitive deficit in children is linked to chronic exposure to this heavy metal. However, any chemical that will bind and block NMDA receptor function, triggering the described cascade of KEs
during synaptogenesis will be relevant to this AOP, according to the rule that AOPs should be chemically agnostic. In principle, AOPs are defined as linear, non-branching sequences of KEs linking the MIE to the AO, despite the complex feedback loops, defence mechanisms and modifying factors that contribute to the interactions between KEs. These can be described in the text of the KERs. However, for this AOP it was justified to present three KEs, causally linked to the MIE and AO, happening simultaneously, since they act together to lead to the downstream KE. This representation is more practical, both for the development and the use of the AOP, than breaking those multiple highly related pathways into separate AOPs. The whole AOP is evaluated by weight of evidence, applying modified Bradford-Hill considerations referring to the biological plausibility of KERs, the essentiality of KEs and the empirical support for KERs. Biological plausibility of KERs describes how well the mechanistic relationship between the upstream KE and the downstream KE is understood with respect to current knowledge. Essentiality should refer to experimental support providing evidence that blockage of any upstream KE will prevent downstream KEs or the AO. Empirical support for a KER should assess the concordance of the doses and times at which the upstream and downstream KEs are observed, as well as incidence concordance. The overall assessment of the level of confidence in the developed AOP is based on the essentiality experimental data for each KE, the biological plausibility and the empirical support for KERs. Due to its physiological and pharmacological properties, glutamate (Glu) activates three classes of ionotropic receptors: α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA), 2-carboxy-3-carboxymethyl-4-isopropenylpyrrolidine (kainate, KA) and N-methyl-d-aspartate (NMDA) receptors. The NMDA receptor is composed of two NR1 subunits that are combined with either two NR2 subunits or, less commonly, a combination of NR2 and NR3 subunits. To be activated, NMDA receptors require the simultaneous binding of Glu to NR2 subunits and of glycine to either NR1 or NR3 subunits, which provide the specific binding sites named extracellular ligand-binding domains. NMDARs can also be activated indirectly through initial activation of KA/AMPARs. Binding of an agonist to KA/AMPARs results in ion influx and glutamate release from excitatory synaptic vesicles, causing depolarization of the postsynaptic neuron. Upon this depolarization, the Mg2+ block is removed, allowing sodium, potassium and, importantly, calcium ions to enter the cell. At positive potentials, NMDARs then show maximal permeability. Due to the time needed for the Mg2+ removal, NMDARs activate more slowly, having a peak conductance long after the KA/AMPAR peak conductance takes place. It is important to note that NMDARs conduct currents only when the Mg2+ block is relieved, glutamate and glycine are bound, and the postsynaptic neuron is depolarized. In the hippocampus, NR2A and NR2B are the most abundant NR2 family subunits. NR2A-containing NMDARs are mostly expressed synaptically, while NR2B-containing NMDARs are found both synaptically and extrasynaptically. NMDA receptors, when compared to the other Glu receptors, are characterised by a higher affinity for Glu, slower activation and desensitisation kinetics, higher permeability for calcium and susceptibility to potential-dependent blockage by magnesium ions. They are involved in fast excitatory synaptic transmission and neuronal plasticity in the CNS. Ca2+ flux through the NMDA receptor is considered
to play a critical role in pre- and post-synaptic plasticity, a cellular mechanism important for learning and memory. The NMDA receptors have also been shown to play an essential role in the strengthening of synapses and neuronal differentiation, through long-term potentiation, and in the weakening of synapses, through LTD. All these processes are important components of memory and learning. It is well understood and documented that Pb2+ has potent concentration-dependent inhibitory effects on NMDA receptor function. These inhibitory effects of Pb2+ on NMDA receptor activation appear to be age- and brain region-specific. For Pb2+, the half-maximal inhibitory concentration (IC50) is significantly lower in cortical membranes prepared from neonatal than from adult rats. As regards brain regions, the hippocampus is more sensitive than the cerebral cortex, since the IC50 for Pb2+ is significantly lower in the hippocampus. During synaptogenesis, the hippocampus appears to be particularly vulnerable to Pb2+ exposure, as in this brain structure NMDA receptors undergo subunit-specific changes during development. Pb2+ decreases the expression of the hippocampal NR2A subunit of NMDARs at synapses and increases targeting of NR2B-NMDARs to dendritic spines, resulting in decreased protein synthesis in dendrites that are important for learning and memory processes. To predict how potent an antagonist can be, the IC50 and the half-maximal effective concentration (EC50) of glutamate/glycine-induced currents in NMDA receptors were measured in brain slices and cells or in recombinantly expressed receptors. Traynelis et al. summarised the IC50 values for competitive, non-competitive and uncompetitive antagonists at different subunits of NMDA receptors. A quantitative evaluation of Zn2+ binding to the NR2 subunits has also been reported. It is worth noting that, in contrast to chronic exposure and persistent antagonism of the NMDA receptor for a period of days, acute inhibition of NMDAR function may trigger different downstream KEs, such as up-regulation of the NMDARs, resulting in an increased influx of calcium and neuronal cell death. Thus, it should be described as a different KER that could lead to the development of a different AOP. Under physiological conditions, the free intracellular Ca2+ concentration is lower than the extracellular Ca2+ concentration. Extracellular Ca2+ may, under certain conditions, enter the cell and accumulate in the cytoplasm, cellular organelles and nucleus. Ca2+ acts as an important intracellular messenger and consequently regulates many different cellular functions. Therefore, Ca2+ homeostasis is tightly regulated by intracellular and extracellular mechanisms. In neurons, Ca2+ ions regulate many critical functions, such as postsynaptic depolarisation and activation of Ca2+-sensitive proteins that trigger signalling pathways critical for cell physiology. Modification of gene transcription by Ca2+ ions impacts long-term neurotransmitter release, neuronal differentiation, synapse function and cell viability. Thus, the Ca2+ that enters and accumulates in the cytoplasm and nucleus is a central signalling molecule that regulates processes fundamental for learning and memory. There are a few studies examining the effect of Pb2+ exposure on changes in intracellular Ca2+. Incubation of rat synaptosomes with Pb2+ stimulates the activity of calmodulin, reaching the highest effect at 30 μM, whereas higher concentrations of Pb2+ cause inhibition. The Pb2+ IC50 values for inhibition of Ca2+ ATPase have been found to be 13.34 and
16.69 μM in calmodulin-rich and calmodulin-depleted synaptic plasma membranes, respectively. Exposure of rats to Pb2+ also has an inhibitory effect on Ca2+ ATPase activity, causing an increase in intra-synaptosomal Ca2+. Furthermore, there is evidence that Pb2+ exposure affects Ca2+ homeostasis, causing alterations in the phosphorylation state of different kinases. For example, Pb2+ has been shown to interfere with MAPK signalling, increasing the phosphorylation of both ERK1/2 and p38. However, the findings regarding calcium/calmodulin kinase II (CaMKII) activity after exposure to Pb2+ are not clear. On the one hand, Pb2+ has been found to cause a reduction of CREB phosphorylation in the hippocampus of rats exposed during brain development. On the other, the phosphorylation levels of CaMKII have not been explored; only the mRNA expression levels have been studied, in rat pups on PND 25 that received Pb2+ and reached blood Pb2+ levels of 5.8 to 10.3 μg/dl on PND 55. In this study, CaMKIIα gene expression was found to be very sensitive to Pb2+ exposure in the frontal cortex. Pb2+ exposure impairs LTP in the CA1 region of the hippocampus derived from Sprague-Dawley rats, as recorded by the whole-cell patch-clamp technique. Pb2+, chronically or acutely applied, significantly reduces LTP in the CA1 region of the hippocampus from Wistar or Sprague-Dawley rats. In CA3, there was a dramatic difference in response as the age of the animals increased. At 30 days, LTP was significantly reduced, but at 60 days LTP was increased by about 30%. In the same brain structure and area, the effects of Pb2+ on LTP differed between rats at PND 30 and PND 60, after either perfusion of Pb2+ or in slices derived from rats after chronic developmental exposure to Pb2+. Inhibition of LTP has been recorded in the CA3 area from animals sacrificed on PND 30, whereas potentiation has been measured in the same brain area derived from older animals with either exposure paradigm. However, when interpreting results related to this KER, the following parameters, which could explain some inconsistencies, need to be taken into consideration: the structural diversity of NMDA subunits at different windows of brain development can influence the functionality of the receptors and their permeability to Ca2+; the membrane potential, due to pore blockade by extracellular Mg2+ and receptor phosphorylation; the entrance of Ca2+ into the neuronal cell can also happen through KA and AMPA receptors, although to a much lower extent in comparison to NMDA receptors. However, AMPA receptors may also contribute to Ca2+ signalling during CNS development; Ca2+ entry also occurs through L-type voltage-dependent Ca2+ channels, suggesting that there are more possible entrance sites for Ca2+ into the cytosol than NMDA receptors alone; and Pb2+ has the ability to mimic or even compete with Ca2+ in the CNS, accumulating in the same mitochondrial compartment as Ca2+. So, it is possible that the reduced levels of Ca2+ after Pb2+ exposure may not be attributed only to NMDA receptor inhibition but also to the ability of this heavy metal to compete with Ca2+. BDNF is initially synthesised as a precursor protein (proBDNF), which is processed to its mature form (mBDNF) after being proteolytically cleaved in the synaptic cleft by plasmin, a protease activated by tissue plasminogen activator (tPA). proBDNF is constantly secreted, while tPA release and mBDNF production depend on neuronal excitation. Storage and activity-dependent release of BDNF have been demonstrated in both
dendrites and axon terminals. The biological functions of mBDNF are mediated by binding to tyrosine kinase B (TrkB) receptors, which leads to the activation of three major intracellular signalling pathways, including MAPK, PI3K and PLCγ1. TrkB-mediated signalling regulates gene transcription in the nucleus through the activation of several transcription factors involved in neurite outgrowth, synaptogenesis, synapse maturation and stabilization. On the other hand, proBDNF binds to the p75 neurotrophin receptor and activates RhoA, a small GTPase that regulates actin cytoskeleton polymerization, leading to inhibition of axonal elongation, growth cone collapse and apoptosis. There is no direct evidence linking reduced levels of intracellular Ca2+ to decreased BDNF levels, as the two have never been measured in the same study after exposure to stressors. However, there are findings that strongly link the Ca2+-dependent signalling cascade to transcription of BDNF. Pb2+ decreases the ratio of phosphorylated versus total MeCP2; consequently, MeCP2 maintains its repressor function and prevents BDNF exon IV transcription. MeCP2 gene expression in the frontal cortex is very sensitive to Pb2+ exposure, while in the hippocampus the same gene is affected only at the higher exposure, in rat pups with blood Pb2+ levels of 5.8 to 10.3 μg/dl. The doses of Pb2+ that result in learning and LTP deficits in rats also cause a decrease in the phosphorylation of CREB in the cerebral cortex at PND 14 and a reduction in the phosphorylation state of CREB in both cortex and hippocampus at PND 50. Interestingly, under similar experimental conditions no alteration in the phosphorylation state of CaMKII has been recorded. In primary hippocampal neurons exposed to 1 μM Pb2+ for 5 days during the period of synaptogenesis, both proBDNF protein levels and mBDNF were decreased, the latter to a smaller extent. In the same in vitro model, Pb2+ also decreases dendritic proBDNF protein levels along the length of the dendrites and causes impairment of BDNF vesicle transport to sites of release in dendritic spines. Rat pups exposed to Pb2+ on PND 25 demonstrated blood Pb2+ levels of 5.8 to 10.3 μg/dl on PND 55 and showed no change in BDNF gene expression. In mouse embryonic stem cells, Bdnf exon IV has been found to be down-regulated in cells treated with 0.1 μM Pb2+, whereas Bdnf exon IX has been found to be up-regulated. After becoming post-mitotic and during the differentiation process, neuronal cells undergo lengthening, branching, and dendrite and dendritic spine formation. In humans, dendrites appear in neurons as early as 13.5 weeks of gestation, while arborization begins only after 26 weeks. In rodents, during the first postnatal week, both pyramidal and nonpyramidal neurons go through extensive and fast dendrite growth, branching, and elaboration. The dendrite arbor's capacity and complexity continue to increase in the second and third postnatal weeks, although much more slowly. During the same developmental window, dendritic spines begin to appear and subsequently mature. At this final stage of dendrite growth, a neuron possesses a dynamic dendrite tree, which has a greater potential for connectivity and synapse creation because of dendritic spine formation and maturation. Postsynaptic density-95 (PSD-95), a protein involved in dendritic spine maturation and clustering of synaptic signalling proteins, plays a critical role in regulating dendrite outgrowth and branching, independent of its synaptic functions. Functionally, dendrites serve as the post-synaptic part of a synapse, playing a
critical role in the processing of information transmitted through synapses. They receive the majority of synaptic inputs compared to the soma or the axon. Postsynaptic activity is closely related to the properties of the dendritic arbor itself, implying that the dendrites strongly influence and control synaptic transmission and plasticity. There is no direct evidence linking decreased BDNF levels to aberrant dendritic morphology, as the two have never been measured in the same study after exposure to stressors. However, several studies provide empirical support for this KER. A reduction in the length of dendritic processes and the number of dendritic branches in hippocampal dentate granule cells was demonstrated after developmental Pb2+ exposure of Long-Evans hooded rat pups. More recently, it has been shown that chronic exposure of rats to environmentally relevant levels during early life alters cell morphology in the dentate gyrus, as immature granule cells immunolabelled with doublecortin display aberrant dendritic morphology. Exposure of rats to Pb2+ that was initiated in the embryonic phase and terminated at PND 21 revealed that, at PND 14 and PND 21, the number of dendritic spines in the hippocampal CA1 area decreased by 32.83% and 24.11%, respectively. In a separate in vivo study, low blood levels of Pb2+ in rats of a similar age led to a significant decrease in BDNF concentration, by 39% in the forebrain cortex and by 29% in the hippocampus. In cultured rat hippocampal neurons, low levels of Pb2+ cause a reduction of dendritic spine density in a concentration-dependent manner. In the same in vitro model, exposure to 1 μM Pb2+ for 5 days during the period of synaptogenesis significantly reduces proBDNF protein and extracellular levels of mBDNF. When mouse embryonic stem cells are differentiated into neurons, exposure to lead acetate causes a reduction in the percentage of microtubule-associated protein 2 (MAP-2)-positive cells and in the mRNA levels of MAP-2, which is a dendrite marker, in a concentration-dependent manner. Glu is an amino acid and the main excitatory neurotransmitter; it is stored in presynaptic vesicles by the action of vesicular glutamate transporters. Glu is mainly released from the presynaptic vesicles by a Ca2+-dependent mechanism that involves N- and P/Q-type voltage-dependent Ca2+ channels closely linked to vesicle docking sites. The pre-synaptic release of Glu is controlled by a wide range of presynaptic receptors that are not only glutamatergic, like the Group II and Group III metabotropic glutamate receptors, but also cholinergic (nicotinic and muscarinic), adenosine, kappa opioid, γ-aminobutyric acid B (GABAB), cholecystokinin and neuropeptide Y receptors. Following its release, Glu exerts its effects via ionotropic and metabotropic receptors. Although Glu is available for binding to receptors for only a short time, NMDA receptors show a high affinity for this specific neurotransmitter, which causes their activation compared to other receptors. Astrocytes play an important role in removing glutamate from the synaptic cleft. During development, glutamate is known to play an important role, as it regulates neurogenesis, neurite outgrowth, synaptogenesis and apoptosis. In cultured cortical neurons obtained from rat pups at PND 2-3, BDNF fails to induce Glu release at DIV 3 and 4. However, after 5 days in vitro or later, BDNF induces significant Glu release within 1 min of exogenous BDNF application. No studies have been found in the literature measuring both KEs after exposure to the same stressors. Interestingly,
proton magnetic resonance spectroscopy in adults with childhood lead exposure shows a decrease in Glu and glutamine in the vermis and in the parietal white matter of the brain. Cell death can be manifested either as apoptosis, which involves shrinkage, nuclear disassembly and fragmentation of the cell into discrete bodies with intact plasma membranes, or as necrosis, which involves the loss of plasma membrane integrity. An important feature of apoptosis is the requirement for adenosine triphosphate (ATP) to initiate the execution phase. In contrast, necrotic cell death is characterised by cell swelling and lysis. This is usually a consequence of a profound loss of mitochondrial function, resulting in ATP depletion and leading to loss of ion homeostasis and increased Ca2+ levels. The latter activates a number of nonspecific hydrolases as well as calcium-dependent kinases. Activation of calpain I, the Ca2+-dependent cysteine protease, cleaves the death-promoting Bcl-2 family members Bid and Bax, which translocate to mitochondrial membranes, resulting in the release of truncated apoptosis-inducing factor, cytochrome c and endonuclease in the case of Bid, and cytochrome c in the case of Bax. Two alternative pathways - either extrinsic or intrinsic - lead to apoptotic cell death. The initiation of apoptosis begins either at the plasma membrane, with the binding of TNF or FasL to their cognate receptors, or within the cell, through mitochondria-mediated pathways of apoptosis. Several in vitro and in vivo studies on cortical neurons have demonstrated that the survival of developing neurons is closely related to the activation of the NMDA receptors and subsequent BDNF synthesis/release, fully supporting the role of BDNF as a critical neurotrophic factor. However, there are no studies in the scientific literature reporting changes in both KEs, measured in the same experiment, following exposure to Pb2+. Neonatal mice exposed to Pb2+ and sacrificed after 8–24 h have shown increased apoptotic neurodegeneration in comparison to controls. This effect has been recorded only in animals treated with Pb2+ at PND 7, but not at PND 14, confirming the importance of the time of exposure during development in order for Pb2+ to induce apoptosis. Two- to four-week-old rats treated for 7 days with a daily dose of 15 mg/kg lead acetate show increased apoptosis in the hippocampus. In older rats, it has also been shown that Pb2+ can induce apoptosis. However, in contrast to the first two in vivo studies, the animals in these experiments were old enough to evaluate the most sensitive window of vulnerability of developing neurons to Pb2+ exposure, confirming that Pb2+ treatment during synaptogenesis leads to significant neuronal cell apoptosis. In vitro evidence of lead-induced apoptosis has also been obtained in cultured rat cerebellar neurons and hippocampal neurons. However, a number of studies demonstrate that deletion of BDNF does not lead to significant apoptotic cell death of neurons in the developing CNS. In an in vivo Pb2+ exposure study, in which female rats received 1500 ppm prior to and during breeding and lactation, no changes in the mRNA levels of BDNF were found in different hippocampal sections derived from their pups. Regarding Pb2+, the pre- and neonatal exposure of rats to Pb2+ shows a decreased number of hippocampal neurons, but no morphological or molecular features of severe apoptosis or necrosis have been detected in the tested brains, possibly due to effective microglial phagocytosis. After exposure to lead, reduced concentrations of BDNF in brain homogenates have been recorded in
forebrain cortex and hippocampus.In other studies, pregnant rats have been exposed to lead acetate after giving birth until PND 20 reaching blood Pb2+ levels in pups of 80 μg/dl.In these animals, hippocampus was the most sensitive to Pb2+ exposure, showing an increase of caspase-3 mRNA as early as PND12.Synaptogenesis follows axonal migration, during which presynaptic and postsynaptic differentiation occurs.“Synaptic assembly” refers to the mechanisms involved in recruitment of molecules required for differentiation, stabilization and maturation of synapse.In human, synaptogenesis does not happen at the same time in all brain regions, as the prefrontal cortex lags behind in terms of synapse formation compared to the auditory and visual cortices.In contrast, synaptogenesis appears to proceed concurrently in different brain areas for rhesus monkey.The period of rapid synaptogenesis or the so-called brain growth spurt is considered one of the most important processes for neuronal networking that take place during brain development.This process plays a vital role in synaptic plasticity, learning and memory and adaptation throughout life.Many studies have indicated that synaptogenesis and dendritic spine formation happen in any order.Newborn rats exposed to 10 mg/ml of lead acetate from PND 2 up to PND 20 and 56 demonstrated significant decrease in spine density as shown in Golgi staining of hippocampal pyramidal neurons in CA1 region.Reduced presynaptic release of Glu is linked to LTP, which is considered the functional measurement of synaptogenesis.Indeed, measures of presynaptic function at glutamatergic synapses in chronically exposed animals have produced results that can be related to the effects of Pb2+ on Glu and LTP.Animals exposed to 0.2% Pb2+ show decrease of hippocampal Glu release, diminishing the magnitude of hippocampal LTP.Microdialysis experiment in animals exhibiting blood Pb2+ levels of 30–40 μg/100 ml show diminished depolarization-induced hippocampal Glu release.In another study, experiments in rats continuously exposed to 0.1–0.5% Pb2+ in the drinking water beginning at gestational day 15–16 resulted in decreased hippocampal Glu release, confirmed also in hippocampal cultures and brain slices exposed to Pb2+.The chronic in vivo exposure to Pb2+ during development results in a marked inhibition of Schaffer-collateral-CA1 synaptic transmission by inhibiting vesicular release of Glu, an effect that is not associated with a persistent change in presynaptic calcium entry.Synaptogenesis and refinement of the cortical network precedes the programmed cells death of neurons during development.Elevated blood Pb2+ concentrations in new-born rats prenatally exposed to 30 or 200 mg/l Pb2+ causes postnatally delay in synaptogenesis.In this study, Pb2+ treatment depresses synaptic number in pups at PND 11 to 15 but not in older pups.In rat hippocampal primary cultures, Pb2+ treatment has no effect on PSD95 puncta density nor has any effect on Synapsin Ia/b total gray value, puncta density, and integrated intensity but reduces the phosphorylation of Synapsin Ia/b.Pb2+ exposure also represses the expression of presynaptic vesicular proteins implicated in neurotransmitter release, such as synaptobrevin and synaptophysin.In mouse embryonic stem cells cultured in 3D aggregates, the treatment with Pb2+ causes around 25% of cell loss.In in vivo model, Pb2+ causes downregulation of Syn1 gene expression in the hippocampus of male offspring derived from female mice exposed to lead acetate in 
drinking water from 8 weeks prior to mating, through gestation and until postnatal day PND 10.At low Pb2+ levels, slow cortical potentials have been observed to be positive in children under five years old but negative in children over five years.However, age-related polarity reversal has been observed in children with higher Pb2+ levels.In experiments carried out in Wistar rats that have been fed with lead acetate from PND 2 until PND 60, the electroencephalogram findings show statistically significant reduction in the delta, theta, alpha and beta band of EEG spectral power in motor cortex and hippocampus with the exception of the delta and beta bands power of motor cortex in wakeful state."Male Sprague-Dawley rats have been exposed to Pb2+ from parturition to weaning though their dams' milk that received drinking water containing 1.0, 2.5, or 5.0 mg/ml lead acetate.Beginning from 15 weeks of age, the characteristics of the electrically elicited hippocampal after discharge and its alteration by phenytoin showed significant increase in primary AD duration only in the animals exposed to the higher dose of Pb2+, whereas all groups responded to PHT with increases in primary AD duration.In primary rat cortical neurons, Pb2+ slightly increases mean firing rate as measured by micro-electrode-array technology.Learning can be defined as the process by which new information is acquired to establish knowledge by systematic study or by trial and error.Two types of learning are considered in neurobehavioral studies: a) associative learning and b) non-associative learning.The memory to be formed requires acquisition, retention and retrieval of information in the brain, which is characterised by the non-conscious recall of information.Memory is considered very important as it allows the subjects to access the past, to form experience and consequently to acquire skills for surviving purposes.There are three main categories of memory, including sensory memory, short-term or working memory and long-term memory.At the cellular level the storage of long-term memory is associated with increased gene expression and protein synthesis as well as formation of novel synaptic connections.Learning-related processes require neuronal networks to detect correlations between events in the environment and store these as changes in synaptic strength.Long-term potentiation and long-term depression are two fundamental processes involved in cognitive functions, which respectively, strengthen synaptic inputs that are effective at depolarizing the postsynaptic neuron and weaken inputs that are not, thus reinforcing useful pathways in the brain.The best characterisation form of LTP occurs in the CA1 region of the hippocampus, in which LTP is initiated by transient activation of receptors and is expressed as a persistent increase in synaptic transmission through AMPA receptors followed by activation of NMDARs.This increase is due, at least in part, to a postsynaptic modification of AMPA-receptor function; this modification could be caused by an increase in the number of receptors, their open probability, their kinetics or their single-channel conductance.Summing up activity-dependent alteration in synaptic strength is a fundamental property of the vertebrate central nervous system that underlies learning and memory processes.It is appropriate to state that while much emphasis has been given on the key role of the hippocampus in memory, it would probably be simplistic to attribute memory deficits solely to hippocampal 
damage.There is substantial evidence that fundamental memory functions are not mediated by hippocampus alone but require a network that includes, in addition to the hippocampus, anterior thalamic nuclei, mammillary bodies cortex, cerebellum and basal ganglia.Each of these brain structures can be potentially damaged leading to more or less severe impairment of learning and memory.A series of important findings suggest that the biochemical changes that happen after induction of LTP also occur during memory acquisition, showing temporality between the two KEs."Furthermore, a review on Morris water maze as a tool to investigate spatial learning and memory in laboratory rats also pointed out that the disconnection between neuronal networks rather than the brain damage of certain regions is responsible for the impairment of MWM performance.Functional integrated neuronal networks that involve the coordination action of different brain regions are consequently important for spatial learning and MWM performance.Exposure to low levels of Pb2+, during early development, has been implicated in long-lasting behavioral abnormalities and cognitive deficits in children and experimental animals.Multiple lines of evidence suggest that Pb2+ can impair hippocampus-mediated learning in animal models."Rat pups have been exposed to Pb2+ via their dams' drinking water from PND 1 to PND 21 and directly via drinking water from weaning until PND 30.At PND 60 and 80, the neurobehavioral assessment has revealed that developmental Pb2+ exposure induces persistent increase in the level of anxiety and inhibition of contextual fear conditioning, being in agreement with observations on humans.Indeed, children exposed to low levels of Pb2+ display attention deficit, increased emotional reactivity and impaired memory and learning.In experiments carried out in Wistar rats, fed with lead acetate from PND 2 until PND 60, EEG findings show statistically significant reduction in the delta, theta, alpha and beta band EEG spectral power in motor cortex and hippocampus.After 40 days of recovery, animals have been assessed for their neurobehaviour and revealed that Pb2+ treated animals show more time and sessions in attaining criterion of learning than controls.Further data obtained using animal behavioral techniques demonstrate that NMDA mediated synaptic transmission is decreased by Pb2+ exposure.Selective impairment of learning and memory was also observed after blockade of long-term potentiation by AP5.The aim of the present AOP was to construct a pathway that captures the KEs and KERs that occur after binding of an antagonist to NMDA receptor in neurons during brain development referring mainly to hippocampus and cortex, two fundamental brain structures involved in learning and memory formation.Recent study reported that functional connectivity exists in cortical-hippocampal network and that the associative memory improves due to their cooperative function."Based on the supporting data for all KEs and KERs that are summarised in Table 1 and the modified Bradford-Hill considerations confidence in the supporting data is considered as high.The Biological plausibility for majority of the identified KERs is well documented as there is extensive mechanistic understanding supporting linkage between relevant KEs upstream and the KEs downstream, except for the KER between decreased neuronal network function that leads to learning and memory impairment.It is still unclear what modifications of neuronal circuits need to happen in order to 
trigger cognitive deficits, measurable in a learning and memory test. This KER is only partially understood and further research is required to better explain the relationship between these two KEs. Essentiality is also rated high because there is direct experimental evidence for most of the KEs showing that blocking KEs upstream prevents or attenuates the relevant KEs downstream and/or the AO. Studies on transgenic animals and specifically designed inhibitors have provided direct evidence indicating the essentiality of the KEs in the mechanisms that underpin LTP and underlie learning and memory processes in developing organisms. Memory enhancement studies also supported the essentiality of certain KEs by providing indirect evidence, as for example in the case of the KE “Decreased neuronal network function”, for which it is not experimentally possible to obtain direct evidence. However, the empirical support for the majority of identified KERs cannot be rated high, as on most occasions the KEup and KEdown of a KER have not been investigated simultaneously under the same experimental protocol. Furthermore, quantitative dose-response data on KERs are not available; therefore, this AOP is mainly qualitative. A definition of thresholds at which KEs upstream are able to trigger KEs downstream is missing. For this reason, the WoE was rated as “strong” only for the first KER and as “moderate” for the second KER, whereas all other KERs were rated lower. It is well understood and documented that learning and memory processes rely on the physiology of the NMDA receptor. If the function of this receptor is blocked during brain development, it can result in learning and memory impairment in children through the cascade of KEs described in this AOP. Synaptogenesis is a fundamental process of neuronal network formation and, if disturbed, can result in neurodevelopmental disorders. Therefore, early life exposure to environmental pollutants is critical in determining whether a child's brain development will provide a strong or weak foundation for future learning abilities. Many factors impact children's brain development, such as poor nutrition and foetal exposure to infectious agents, but it is well established that exposure to toxic environmental chemicals such as lead can directly impair brain and neurological development in children. Published experimental data, including epidemiological studies, strongly suggest that environmental chemicals contribute to lowered IQ, learning disabilities, attention deficit hyperactivity disorder and, in particular, autism in children. Lowered intelligence from early childhood exposure to lead alone was estimated to result in about $675 million per year in income lost to those affected in Washington State. Clearly, learning and memory deficits contribute to the learning disability of children who have difficulty in reading, writing and learning new things, significantly interfering with school achievement. The burden of these conditions for families and society includes financial costs related to special education, medical treatment, law enforcement, and the social and emotional toll on the children and caregivers. Therefore, much effort is undertaken, including the AOP concept, to scientifically establish which environmental chemicals could trigger a cascade of events leading to cognitive deficit in children. Learning and memory is an important endpoint of regulatory relevance, and a wide variety of tests to assess chemical effects on cognitive functions is available and used for the study of neurotoxicity in adult and young laboratory
animals. Currently, neurotoxicity testing guidelines require testing of learning and memory when DNT or neurotoxicity studies are triggered in order to comply with relevant US and EU regulations. However, for learning and memory assessment the guideline methodology is flexible and its sensitivity varies, which may lead to some difficulties in test interpretation. Additionally, the OECD DNT TG 426 and US EPA OCSPP 870.6300 are rarely performed since they are costly and time consuming, require a high number of laboratory animals and might provide scientifically unreliable information. Furthermore, these in vivo tests rely mainly on read-outs of the final adverse effects, by observing clinical signs, neurobehavioral performance and neuropathological changes recorded after animal exposure to chemicals, without providing any mechanistic information on the underlying biological processes leading to the AO. This kind of information is provided within the AOP concept, as illustrated in the described AOP. It is one of the first AOPs developed according to the OECD guidelines; it underwent a review process by scientists in the field and was finally endorsed by the OECD Working Group of National Coordinators of the Test Guidelines Programme and the Working Party on Hazard Assessment. AOPs can be used for different regulatory purposes, aiming to use mechanistic toxicological information in order to develop novel testing strategies such as integrated approaches to testing and assessment (IATA). Indeed, this AOP can provide a rationale for in vitro assay selection that should be anchored to the KEs defined in this AOP, since the causative links described between the identified KEs would increase scientific confidence in such a battery of tests. These assays should be based on a mixed population of human neuronal and glial cells derived from induced pluripotent stem cells, permitting quantitative evaluation of the KEs, particularly those close to the AO, including reduced levels of BDNF, neuronal differentiation, synaptogenesis (with PSD-95 as a post-synaptic protein marker) and neuronal network function by measurements of neuronal electrical activity. These in vitro assays are already well standardized and ready to be used. Some of the KEs presented in this AOP have already been identified as important endpoints for mapping of available in vitro DNT assays by EFSA. Quantitative measurements of the identified KERs are urgently needed to determine the thresholds at which upstream KEs are able to trigger downstream KEs, moving this current qualitative AOP towards a more quantitative AOP. Such quantitative data, combined with computational modeling to predict human toxicity, could potentially permit risk assessment of compounds working primarily via this AOP. IATA should be based on various sources of information, including not only in vitro methods but also other non-testing methods such as quantitative structure-activity relationship (QSAR) modeling, read-across and in silico modeling. Indeed, computational chemistry methods were already applied to develop a QSAR model which allows prediction of the activity of potent competitive NMDA antagonists. First, various molecular parameters were calculated for a series of competitive NMDA antagonists with known activity, to link the computationally calculated parameters to the experimentally determined molecule activities. The developed QSAR model allows the activity of a potent competitive NMDA antagonist to be predicted before its synthesis, since only theoretically determined molecular parameters are used for the prediction.
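As a hedged illustration of the QSAR workflow outlined above, the minimal sketch below fits a simple multiple linear regression linking computed molecular descriptors to measured activities and then predicts the activity of a new, not-yet-synthesised compound. The descriptor names, values and activities are hypothetical placeholders and do not correspond to the parameters of the cited models.

import numpy as np

# Hypothetical training set: each row holds computed molecular descriptors
# (e.g., logP, molar refractivity, charge on a key atom) for a competitive
# NMDA antagonist with an experimentally determined activity (pIC50).
X_train = np.array([
    [1.2, 35.4, -0.41],
    [0.8, 30.1, -0.38],
    [1.9, 42.7, -0.52],
    [1.5, 38.0, -0.47],
    [0.6, 28.3, -0.35],
])
y_train = np.array([6.8, 6.1, 7.6, 7.1, 5.9])   # measured activities (hypothetical)

# Fit a simple multiple linear regression as a minimal stand-in for a QSAR model:
# activity ≈ b0 + b1*logP + b2*MR + b3*charge
A = np.hstack([np.ones((X_train.shape[0], 1)), X_train])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Predict the activity of a newly designed, not-yet-synthesised compound
# from its theoretically calculated descriptors alone.
x_new = np.array([1.4, 36.9, -0.45])
predicted_activity = coef[0] + x_new @ coef[1:]
print(f"Predicted pIC50 of the new compound: {predicted_activity:.2f}")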
Another approach was applied to develop a QSAR model for non-competitive antagonists of the NMDA receptor by studying a series of 48 substituted MK-801 derivatives, permitting prediction of the inhibitory activity of a set of newly designed compounds. 2D- and 3D-QSAR models have also been developed to establish the structural requirements of pyrazine and related derivatives acting as antagonists selective for the NR2B subunit of the NMDA receptor. Moreover, AOP-based QSAR models can also facilitate grouping of chemicals according to their biological activities and the subsequent development of a read-across approach. These QSAR models, together with in vitro assays anchored to KEs of the described AOP, should be included in an IATA that could serve as a tool for initial chemical screening and prioritization to identify those chemicals with the potential to cause learning and memory impairment in children. However, this AOP represents one of many possible cascades of events leading to learning and memory impairment in children. Further development of AOPs, interconnected into a network, is required to gain a more comprehensive understanding of the different toxicity pathways involved. Such an AOP network will facilitate the identification of common KEs for multiple AOPs that should be considered as anchors for in vitro assay development, increasing the probability of identifying potential DNT compounds even if they cause toxicity through different pathways and are triggered by various MIEs. The present AOP can encourage the development of a new in vitro test battery and the use of these alternatives to assess NMDAR inhibitors as chemicals with the potential to induce impairment of children's cognitive function, and at the same time reduce the use of in vivo studies. In addition, the majority of KEs in this AOP have strong essentiality for inducing the AO and established adjacent relationships between them, which would allow not only the development of testing methods that address these specific KEs but also the understanding of the relationship between the measured KEs and the AO. In addition, this AOP is expected to make a significant contribution to a recent international effort that aims to develop an OECD guidance document on DNT evaluation and to accelerate the development and use of in vitro assays and other alternative tools capable of cost- and time-efficient testing of chemicals for their potential to disrupt the development of the nervous system. The weight of evidence for this KER is rated as strong. There is structural and mechanistic understanding supporting the relationship between the MIE and the downstream KE. Crystal structure studies were applied to study the binding of antagonists/agonists to NMDA receptors. Binding of antagonists to NMDARs causes LBD conformational changes, which promote channel closure, leading to reduced Ca2+ influx. The decrease in Ca2+ influx can be measured and is considered a readout of decreased NMDAR function. The biological plausibility of the relationship between the KEupstream Inhibition of NMDAR function and the KEdownstream Decreased Calcium influx is strong. The functional evaluation of NMDA receptors is commonly carried out by measurement of the intracellular influx of Ca2+ upon NMDA receptor stimulation or inhibition. Calcium imaging techniques have been extensively utilised to investigate the relationship between these two KEs.
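As a minimal sketch of how such a calcium-imaging read-out might be quantified, assuming a hypothetical fluorescence trace from a calcium indicator with a stimulus applied at 10 s (this is not the analysis pipeline of any specific cited study), the peak ΔF/F0 response after NMDA application can be computed as follows:

import numpy as np

# Hypothetical calcium-indicator fluorescence trace sampled at 10 Hz:
# a flat baseline followed by a transient after NMDA application at t = 10 s.
fs = 10.0                                  # sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)               # 30 s recording
baseline_f = 100.0                         # arbitrary fluorescence units
response = 40.0 * np.exp(-(t - 10.0) / 5.0) * (t >= 10.0)
trace = baseline_f + response + np.random.normal(0, 1.0, t.size)

# Baseline fluorescence F0: mean over the pre-stimulus window (0-10 s).
f0 = trace[t < 10.0].mean()

# Delta F / F0 trace and peak response amplitude after the stimulus.
dff = (trace - f0) / f0
peak_dff = dff[t >= 10.0].max()
print(f"Peak dF/F0 after NMDA application: {peak_dff:.2f}")
# A reduced peak dF/F0 in the presence of an antagonist (or Pb2+) would be
# read out as decreased NMDAR-mediated calcium influx.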
Almost 15% of the current through NMDA receptors is mediated by Ca2+ under physiological conditions. However, the majority of the Ca2+ ions are rapidly eliminated by binding of calcium to proteins, rapidly reaching ~1 μM of free intracellular calcium concentration. In rat primary forebrain cultures, the intracellular Ca2+ increases after activation of the NMDA receptor, and this increase is blocked when the experiments are performed under Ca2+-free conditions, demonstrating that the NMDA-evoked increase in intracellular Ca2+ derives from extracellular and not intracellular sources. There is extensive scientific understanding of the functional and structural mechanistic relationship between the KEup Decreased Calcium influx and the following KEdown Reduced release of BDNF. BDNF transcription is induced by Ca2+ entering the neurons through either L-type voltage-gated calcium channels or NMDA receptors. BDNF exon IV, the most studied among its different exons, has been shown to contain three Ca2+-responsive elements within its regulatory region. Ca2+ binds to CREB, facilitating BDNF transcription. The activation of the relevant transcription factor is triggered by the initial activation of the CaM kinase, cAMP/PKA and Ras/ERK1/2 pathways mediated by the elevated intracellular Ca2+. Inhibitory studies targeting different elements of these pathways show that Ca2+ can also decrease BDNF mRNA levels. An increase in intracellular Ca2+ levels leads to phosphorylation of MeCP2, which inactivates its repressor function and permits the transcription of BDNF exon IV. Indeed, NMDA receptor activation has been shown to upregulate BDNF transcripts containing exon IV not only via Ca2+-dependent CREB but also through Ca2+ activation of MeCP2 transcription. There is extensive scientific understanding of the functional mechanistic relationship between the KEup Reduced release of BDNF and the following KEdown Aberrant Dendritic Morphology. After activation of tyrosine kinase B (TrkB) receptors by BDNF, proteins such as Arc, Homer2 and LIMK1 are released. These proteins, which are known to promote actin polymerization, lead to enlargement of dendritic spine heads. It has also been shown that BDNF promotes dendritic spine formation by interacting with Wnt signalling. Indeed, Wnt signalling inhibition in cultured cortical neurons causes disruption of dendritic spine development, reduction in dendritic arbor size and complexity, and blockage of BDNF-induced dendritic spine formation and maturation. In addition, it has been shown that the inhibition of BDNF synthesis reduces the size of spine heads and impairs LTP. BDNF has been characterised as a critical factor in promoting dendritic morphogenesis in various types of neurons. BDNF synthesised in dendrites is known to regulate the morphology of spines. For example, spines in the absence of spontaneous electrical activity are significantly smaller than normal. On the other hand, simultaneous electrical activity and Glu release increase the size of the spine head, which has been shown to be dependent on the presence of BDNF. Mice carrying the Val66Met mutation of the Bdnf gene show dendritic arborization defects in the hippocampus. Interestingly, human subjects with the Val66Met SNP demonstrate similar anatomical features. The functional mechanistic relationship between the KEup Reduced release of BDNF and the following KEdown Reduced presynaptic release of glutamate is not completely established. It has been shown that presynaptic TrkB receptors activated by BDNF enhance Glu release and increase the frequency of miniature excitatory postsynaptic currents in rat hippocampal neurons. It has been reported that BDNF rapidly induces Glu transporter-mediated glutamate release via phospholipase C-γ/Ca2+ signalling. In contrast, it has been shown that antidepressants
enhance PLC-γ/Ca2+ signalling, leading to reduced levels of BDNF that cause decreased Glu release. However, in heterozygous BDNF-knockout mice it has been demonstrated that the reduced BDNF levels did not affect presynaptic Glu release. There is extensive scientific understanding of the functional mechanistic relationship between the KEup Reduced release of BDNF and the following KEdown Neuronal cell death. BDNF is involved in the apoptosis occurring in developing neurons through two distinct mechanisms. mBDNF can trigger prosurvival signalling after binding to the TrkB receptor, through inactivation of components of the cell death machinery and also through activation of the transcription factor cAMP-response element binding protein (CREB), which drives expression of the pro-survival Bcl-2 gene. On the other hand, proBDNF binds to the p75 neurotrophin receptor and activates RhoA, which regulates actin cytoskeleton polymerization, resulting in apoptosis. It has been shown that reduced levels of BDNF can severely interfere with the survival pathway of neurons in different brain regions, leading to cell death. BDNF mRNA levels dramatically increase between embryonic days 11 and 13 during rat brain development, playing an important role in survival and neuronal differentiation. The latter has been supported by transgenic experiments in which BDNF−/− mice demonstrated a dramatic increase in cell death among developing granule cells, leading to impaired development of the cerebellar cortex layers. The neuroprotective role of BDNF has been further supported by the observed correlation between elevated BDNF protein levels and resistance to ischemic damage in the hippocampus in vivo and to K+-rich medium-induced apoptosis in vitro. Several studies addressing apoptosis, mainly in the developing cerebral cortex, have shown that other mechanisms besides neurotrophic factors may be involved. Cytokines, as well as neurotransmitters, can potentially activate a number of intracellular proteins that execute cell death, meaning that further branches might be added to this AOP in the future. It is well established that loss of dendritic spine density and aberrant dendrite branch complexity lead to loss of synapse formation. Indeed, a huge amount of research has been performed on the dendritic arbour, dendritic spines and the molecular components of these structures, which has led to the elucidation of their role in higher-order brain functions, including learning and memory. The appearance of an extensive dendritic arbor and new spines coincides with synapse formation. Zhang and Benson investigated the role of actin during the early stages of neuronal development by introducing the actin-depolymerising agent latrunculin A and conducting fluorescent imaging of synapse formation. At the early stages of neuronal development, it has been reported that the depolymerisation of filamentous actin significantly reduces the number of stable synapses and the presence of postsynaptic proteins. Most importantly, pre- and postsynaptic vesicles needed for synaptogenesis were not found at contact sites as a result of depolymerisation of F-actin, demonstrating the important role of the dendritic arbor in synapse formation. Based on the existing data, biological plausibility for this KER was evaluated as strong and empirical support as moderate. It is well documented that the presynaptic release of Glu causes activation of NMDA receptors and initiates synaptogenesis through activation of the downstream signalling pathways required for synapse formation. Lack of or reduced presynaptic Glu release
affects the transcription and translation of molecules required in synaptogenesis.The NMDA receptor activation by Glu during development increases calcium influx, which acts as a secondary signal for synaptogenesis.Eventually, immediate early genes activation is triggered by transcription factors and the proteins required for synapse formation are produced.Glu released from entorhinal cortex neurons has been shown to promote synaptogenesis in developing targeted hippocampal neurons.Similarly, Glu has been found to regulate synaptogenesis in the developing visual system of frogs.The ratio of synaptic NR2B over NR2A NMDAR subunits controls dendritic spine motility and synaptogenesis.The intracellular C terminus of NR2 recruits the signalling and scaffolding molecules necessary for proper synaptogenesis.Based on the current mechanistic knowledge biological plausibility and empirical support for this KER was evaluated as moderate.Under physiological conditions, in the developing nervous system, apoptosis occurs during the process of synaptogenesis, where competition leads to the loss of excess neurons and to the connection of the appropriate neurons.However, increased apoptosis leads to defective synaptogenesis as the reduced number of neurons decreases dendritic fields for receiving synaptic inputs from incoming axons.At the same time the loss of neurons also means that there are less axons to establish synaptic contacts, leading to reduced synaptogenesis and decreased neuronal networking.Recently, Dekkers et al. have reviewed how the apoptotic machinery in developing brain regulate synapse formation and neuronal connectivity.For example, caspase activation is known to be required for axon pruning during brain development to generate neuronal network.In Drosophila melanogaster and in mammalian neurons components of apoptotic machinery are involved in axonal degeneration that can consequently interfere with synapse formation.The neuronal network in developing brain shows a slow maturation and a transient passage from spontaneous, long-duration action potentials to synaptically-triggered, short-duration action potentials.At this stage, the neuronal network is characterised by “hyperexcitability”, which is related to the increased number of local circuit recurrent excitatory synapses and the lack of γ-amino-butyric acid A-mediated inhibitory function that appears much later.This “hyperexcitability” disappears with maturation when pairing of the pre- and postsynaptic partners occurs and synapses are formed generating population of postsynaptic potentials and population of spikes followed by developmental GABA switch.Glutamatergic neurotransmission is dominant at early stages of development and NMDA receptor-mediated synaptic currents are far more times longer than those in maturation, allowing more calcium to enter the neurons.The processes that are involved in increased calcium influx and the subsequent intracellular events seem to play a critical role in establishment of wiring of neuronal circuits and strengthening of synaptic connections during development.Neurons that do not receive glutaminergic stimulation are undergoing developmental apoptosis.The development of neuronal networks can be distinguished into two phases: an early “establishment” phase of neuronal connections, where activity-dependent and independent mechanisms could operate, and a later “maintenance” phase, which appears to be controlled by neuronal activity.These neuronal networks facilitate information flow that is 
necessary to produce complex behaviours, including learning and memory. Based on current mechanistic knowledge, the biological plausibility and empirical support for this KER were evaluated as moderate. The ability of a neuron to communicate is based on neuronal network formation, which relies on functional synapse establishment. The main roles of synapses are the regulation of intercellular communication in the nervous system and of the information flow within neuronal networks. The connectivity and functionality of neuronal networks depend on where and when synapses are formed. Therefore, decreased synapse formation during the process of synaptogenesis is critical and leads to decreased neuronal network formation and function in the adult brain. The developmental period of synaptogenesis is critical for the formation of the basic circuitry of the nervous system, although neurons are able to form new synapses throughout life. The dependence of brain electrical activity on synapse formation is critical for proper neuronal communication. Alterations in synaptic connectivity lead to refinement of neuronal networks during development. Indeed, knockdown of PSD-95 arrests the functional and morphological development of glutamatergic synapses. Based on the current mechanistic understanding and the available empirical data, biological plausibility and empirical support for this KER were evaluated as moderate. Learning and memory is one of the outcomes of the functional expression of neurons and neuronal networks. If neurons are damaged or destroyed by a chemical during synaptogenesis, when the process of synapse formation takes place, the integration and formation of neuronal networks could be impaired, deranging synaptic organisation and function. Such changes in neuronal network formation could lead to subsequent impairment of learning and memory processes. Exposure to potential developmental toxicants during neuronal differentiation and synaptogenesis will increase the risk of functional neuronal network damage leading to learning and memory impairment. Long-term potentiation is a long-lasting increase in synaptic efficacy, and its discovery suggested that changes in synaptic strength could provide the substrate for learning and memory. Moreover, LTP is intimately related to the theta rhythm, an oscillation long associated with learning. Learning-induced enhancement in neuronal excitability, a measurement of neuronal network function, has also been shown in hippocampal neurons following classical conditioning in several experimental approaches. On the other hand, memory requires the increase in magnitude of an excitatory postsynaptic current to develop quickly and to persist for at least a few weeks without disturbing already potentiated contacts. Once again, a substantial body of evidence has demonstrated that a tight connection between LTP and diverse instances of memory exists.
The Adverse Outcome Pathways (AOPs) are designed to provide mechanistic understanding of complex biological systems and pathways of toxicity that result in adverse outcomes (AOs) relevant to regulatory endpoints. AOP concept captures in a structured way the causal relationships resulting from initial chemical interaction with biological target(s) (molecular initiating event) to an AO manifested in individual organisms and/or populations through a sequential series of key events (KEs), which are cellular, anatomical and/or functional changes in biological processes. An AOP provides the mechanistic detail required to support chemical safety assessment, the development of alternative methods and the implementation of an integrated testing strategy. An example of the AOP relevant to developmental neurotoxicity (DNT) is described here following the requirements of information defined by the OECD Users' Handbook Supplement to the Guidance Document for developing and assessing AOPs. In this AOP, the binding of an antagonist to glutamate receptor N-methyl-D-aspartate (NMDAR) receptor is defined as MIE. This MIE triggers a cascade of cellular KEs including reduction of intracellular calcium levels, reduction of brain derived neurotrophic factor release, neuronal cell death, decreased glutamate presynaptic release and aberrant dendritic morphology. At organ level, the above mentioned KEs lead to decreased synaptogenesis and decreased neuronal network formation and function causing learning and memory deficit at organism level, which is defined as the AO. There are in vitro, in vivo and epidemiological data that support the described KEs and their causative relationships rendering this AOP relevant to DNT evaluation in the context of regulatory purposes.
31,605
Measurement of Flow Volume in the Presence of Reverse Flow with Ultrasound Speckle Decorrelation
The volume of blood flowing into a specific organ or tissue is the most relevant factor determining whether the organ can get sufficient oxygen and nutrients for its metabolic demand.The supply of blood through blood vessels can be impaired by cardiovascular diseases.Accurately measuring the volumetric flow rate in blood vessels could be potentially useful in a variety of clinical applications, such as assessing the cardiac output, determining the degree of vessel stenosis in coronary arteries, monitoring the blood supply to the brain, evaluating kidney or liver failure and measuring the effects of the corresponding pharmacologic therapies.Direct measurement of volumetric flow rate in blood vessels remains a challenge both in clinical practice and research.Currently, magnetic resonance imaging is regarded as the gold standard, but its application in clinics is limited by low temporal resolution and poor accessibility.Ultrasound imaging is the most commonly used modality for estimating blood flow due to its affordability, real-time imaging, high temporal resolution, good and scalable spatial resolution and good accessibility.In clinical practice, although spectral Doppler and color Doppler are angle dependent, they have been used as the main ultrasound modalities for decades in investigating flow volume by multiplying the mean flow velocity with the vessel area.Conventional Doppler methods have repeatedly been shown to be prone to errors from many sources, and reviews on the issues are available in the literature."Instead of only detecting the flow velocity along the ultrasound beam and assuming the flow moving parallel to the vessel's long axis in the conventional Doppler methods, some advanced ultrasound techniques were proposed to have the vector flow in the scanning plane, such as vector Doppler, 2-D particle-tracking and transverse oscillation.However, all these techniques can only measure the blood flow velocities within the 2-D scanning plane which means that to estimate the volumetric flow it needs to be assumed that there is axis symmetry in the velocity profile of the vessel."Picot et al. came up with an idea to estimate volumetric flow using the 2-D through-plane velocity profile obtained from the vessel's oblique transverse view with conventional color Doppler imaging. 
"From the through-plane velocity profile, Picot's method provided a way to estimate the volumetric flow.The disadvantage of this method is that the oblique angle must be estimated when calculating the flow volume, which is not easy to obtain in practice.A full-field view of the 3-D blood flow was reconstructed using divergence-free interpolation, but it required the vessel to be scanned at multiple locations.3-D blood flow imaging with a 2-D matrix ultrasound transducer could also be an option to solve this problem since it provides a complete estimation of flow velocities in each dimension.Currently, the huge amount of data and the demanding hardware and computational requirements make methods based on 2-D matrix probes difficult and costly to implement.Ultrasound speckle decorrelation has shown the capability in estimating the through-plane flow velocity using a 1-D array transducer.With the through-plane velocity, the flow volume can be calculated by integrating the velocity over the luminal area when the vessel is scanned in the transverse view.The principle of ultrasound SDC is that the SDC over time follows a specific Gaussian curve.This Gaussian-based relationship has been derived by and was used in a series of studies, including 3-D ultrasound imaging, blood flow estimations and investigations of elastic tissue properties.However, the application of ultrasound SDC on estimating blood flow in arteries was limited by the low frame rate of conventional ultrasound and the weak signals coming from blood cells.The imaging frame rate must be high enough to capture the fast signal decorrelation due to rapid flow passing through the rather small elevational dimension of the transmitted acoustic beam.In the last decade, the emergence of plane-wave ultrasound techniques, which can increase the imaging frame rate by two orders of magnitude, has greatly expanded the capability of medical ultrasound imaging.Furthermore, the advent of microbubble contrast agents can also significantly enhance the ultrasound signal from blood.Using high frame rate imaging techniques and microbubbles contrast agents, we have recently demonstrated the feasibility of the SDC method both in vitro and in vivo, showing that the maximum measurable through-plane flow velocity can be well over 1 m/s, which is physiologically equivalent to most flow in the cardiovascular system.However, the current SDC method has an intrinsic limitation on differentiating the flow direction in the blood vessel.In other words, it can only estimate the through-plane flow speed but not the direction.In many parts of the cardiovascular system, there is bidirectional flow during certain periods of the cardiac cycle, especially when diseases exist in the vessel.Therefore, knowing the direction of the through-plane flow is crucial to accurately estimating the flow volume.In this study, our aim is to detect the through-plane flow direction in the SDC method so that accurate estimation of the flow volume can be achieved even when the flow is bidirectional.The idea is to rotate the probe to have a tilted angle between the scanning plane and the vessel radius direction while implementing the conventional SDC method.In this way, the through-plane flow direction can be differentiated based on the in-plane flow direction by assuming that the blood flow primarily moves along the longitudinal direction.Feasibility of this method was investigated using computer simulations and experimental flow phantoms.In our previous SDC method, the blood vessel was 
In our previous SDC method, the blood vessel was viewed in the transverse view, as shown in Figure 2a. The upper part of the graph in Figure 2a illustrates the longitudinal view of the blood vessel being scanned by an ultrasound transducer, which is represented by the rectangle positioned on top of the blood vessel at a 90° angle. The image of the blood vessel is then formed from the transverse view, which is a circle, as shown in the lower part of Figure 2a. Our previous SDC method can estimate the speed of flow going through the scanning plane. However, it could not distinguish its direction, resulting in potential errors in the estimation of volumetric flow. In this study, the transducer was rotated from the previous 90° position to have a tilted view. The ultrasound image of the vessel's cross section is not a circle any more but an ellipse, as shown in the lower graphs in Figure 2b–f. By assuming that the flow is primarily moving along the long axis of the vessel, the through-plane flow direction at each spatial point of the ellipse can be distinguished from its corresponding in-plane flow direction, which was tracked by UIV. For example, when the probe is tilted anticlockwise as in Figure 2b and the blood flow is moving from the left to the right as shown by the upper graph in Figure 2b, the in-plane flow will move from left to right as shown by the lower graph. With the same tilted position, when the flow is moving in the opposite direction, the in-plane flow also changes its direction. The transducer can also be rotated clockwise to establish the through-plane flow direction, as shown in Figure 2. To improve the accuracy of the estimation of the flow volume, results from both anticlockwise and clockwise tilting were averaged to give one estimation. In Figure 2f, the blood flow in the vessel is bidirectional. This could happen during the transitions when flow changes its direction in the vessel within the cardiac cycle. In this case, the flow could move in both directions within the scanning plane. In the implementation of this method, no matter which direction the transducer was tilted, the through-plane flow at a specific position was initially defined as positive if the corresponding in-plane flow moves to the right and negative if the in-plane flow moves to the left. Based on this initial definition, the net flow volume, which is the blood flow going downstream from the heart, was calculated within one complete cardiac cycle. If the calculated net flow volume turned out to be negative, its absolute value still represents the volumetric flow amplitude and the initial assumption of flow direction would be inverted. Computational fluid dynamics (CFD) and Field II were used to simulate the bidirectional pulsatile flow and the ultrasound imaging procedure, respectively, to validate the feasibility of the proposed method against the ground truth. In this study, the inlet boundary condition in the CFD simulation was based on a typical flow pattern in the human common femoral artery, where reverse flow exists. The Fourier series coefficients of the mean flow velocity waveform Vmean estimated by the Doppler method were adapted from published data. The mean velocity waveform in a straight tube can be represented by the Womersley equations. The peak Reynolds number was about 833, from which the maximum Dean number in the simulated flow was calculated from eqn as being about 550. This is larger than the Dean number of 260 found in the abdominal aorta, because the secondary flow could be larger in other parts of the cardiovascular system, such as the heart or the ascending aorta.
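The sketch below illustrates, with assumed placeholder values, how an inlet mean-velocity waveform can be reconstructed from truncated Fourier series coefficients and how peak Reynolds and Dean numbers of the kind quoted above follow from the fluid properties and geometry. The coefficients, diameter and radius of curvature are illustrative assumptions and do not reproduce the published femoral-artery data or the exact numbers used in this study.

import numpy as np

# Hypothetical inlet waveform: mean velocity over one cardiac cycle reconstructed
# from a truncated Fourier series (the actual coefficients were taken from
# published femoral-artery data and are not reproduced here).
T = 1.0                                   # cardiac period (s)
t = np.linspace(0.0, T, 100, endpoint=False)
a0 = 0.10                                 # time-averaged velocity (m/s), placeholder
an = np.array([0.12, 0.08, 0.04])         # cosine coefficients (placeholders)
bn = np.array([0.10, -0.05, 0.02])        # sine coefficients (placeholders)
v_mean = a0 + sum(an[k] * np.cos(2 * np.pi * (k + 1) * t / T) +
                  bn[k] * np.sin(2 * np.pi * (k + 1) * t / T)
                  for k in range(an.size))
# Negative values of v_mean represent the reverse-flow phase of the cycle.

# Peak Reynolds number and Dean number for a curved segment.
rho_f = 1060.0        # blood density (kg/m^3)
mu = 3.5e-3           # dynamic viscosity (Pa.s)
d = 6.0e-3            # vessel diameter (m), placeholder
r_curv = 25.0e-3      # radius of curvature of the bend (m), placeholder
re_peak = rho_f * np.abs(v_mean).max() * d / mu
dean_max = re_peak * np.sqrt(d / (2.0 * r_curv))
print(f"Peak Reynolds number: {re_peak:.0f}, maximum Dean number: {dean_max:.0f}")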
The outlet was chosen to have constant atmospheric pressure, and a no-slip rigid wall condition was applied in the CFD model. Blood was assumed to be an incompressible Newtonian fluid with the typical properties of normal healthy human blood, a density of 1060 kg·m−3 and a dynamic viscosity of 3.5 mPa·s. STAR-CCM+ was used as the solver for the Navier-Stokes equations to obtain the full flow velocity field in the 3-D domain. The time step for results output was set to 10 ms; thus, 100 values were available for the velocity variable within one cardiac cycle. The grid size was determined by a grid convergence study. The CFD simulation took 1 h to complete with a 64-bit, 3.40-GHz Intel Core i7-4770 processor. The ultrasound imaging procedure was simulated with Field II. Based on the flow velocity field from CFD, simulated scatterers updated their positions within the 3-D vessel domain spatially and temporally, and Field II was used to generate the simulated images. About 10 randomly located scatterers were defined in each resolution cell. High-frame-rate plane-wave ultrasound was simulated to collect the B-mode images of the moving scatterers. The vessel was located at an imaging depth of about 20 mm. The parameters used in the Field II simulation are given in Table 2. Simulated ultrasound data were collected from three different locations: the straight part of the tube near the inlet, the middle of the curved part and the straight part of the tube near the outlet, as shown in Figure 3. Secondary flow was expected at locations 2 and 3. Random Gaussian noise was added to the simulated ultrasound data to give a signal-to-noise ratio of 20 dB. Both straight and curved vessels were investigated in the experiments. The straight vessel phantom was an in-house designed polyvinyl alcohol cryogel (PVA-c)-based wall-less phantom fabricated in a box, with a luminal diameter of 5 ± 0.1 mm. The PVA-c phantom was made from three freeze-thaw cycles of the PVA solution, reported to give tissue-like acoustic properties. The curved vessel phantom was made of a natural rubber tube which was submerged in water and bent to have a similar curvature as in the CFD simulation, with a maximum Dean number of about 800. A piston pump was connected to the vessel phantom, which generated a pulsatile flow at a flow rate of about 75 mL/min. The working fluid was water mixed with decafluorobutane microbubbles as contrast agents. A diagram of the flow phantom setup is given in Figure 4, where the vessel being scanned by ultrasound could be straight or curved. A transverse view of the vessel at the scanning location was obtained at first with an L12-3v probe connected to a Verasonics Vantage 128 system. Then, for collecting data, the probe was tilted anticlockwise manually by about 10° along the depth direction. Single-angle plane-wave imaging with a frame rate of 8000 Hz was used. Two to three pulsatile cycles of data were obtained each time. The same measurement was made to collect the data by tilting the probe clockwise. Each measurement was repeated three times. The transmitted sound wave had a central frequency of 8 MHz and a mechanical index (MI) of 0.19. A relatively low MI is necessary in this study to avoid significant microbubble destruction, which could cause decorrelation and lead to overestimation of the flow velocity. All the experiments were conducted at room temperature. Separate measurements were made to scan the vessel at the same location but using the longitudinal view and applying the UIV method to obtain the velocity reference. Data collection with the UIV technique was also repeated three times.
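Before the data processing is described, a minimal sketch (with hypothetical arrays and pixel size) shows how the sign convention introduced earlier, based on the UIV-tracked in-plane flow direction in the tilted view, can be combined with integration of the through-plane velocity over the segmented lumen to give a signed instantaneous volumetric flow rate:

import numpy as np

# Hypothetical per-pixel estimates inside the segmented lumen for one frame:
# v_through: unsigned through-plane speed from the SDC method (m/s)
# v_lateral: in-plane lateral velocity from UIV (m/s); its sign encodes direction
# lumen_mask: boolean segmentation of the vessel lumen
ny, nx = 64, 64
v_through = np.random.rand(ny, nx) * 0.3
v_lateral = np.random.randn(ny, nx) * 0.05
lumen_mask = np.zeros((ny, nx), dtype=bool)
lumen_mask[20:44, 20:44] = True

pixel_area = (0.1e-3) ** 2                 # 0.1 mm x 0.1 mm pixels (m^2), placeholder

# Sign convention: through-plane flow is taken as positive where the in-plane
# flow moves to the right (positive lateral velocity) in the tilted view.
signed_v = np.where(v_lateral >= 0.0, v_through, -v_through)

# Instantaneous volumetric flow rate: integrate the signed velocity over the lumen.
q_m3_per_s = np.sum(signed_v[lumen_mask]) * pixel_area
q_ml_per_min = q_m3_per_s * 1e6 * 60.0
print(f"Instantaneous flow rate: {q_ml_per_min:.1f} mL/min")
# Averaging such frame-by-frame estimates over a full cardiac cycle gives the
# net flow volume; if the net value is negative, the initial sign convention
# is simply inverted, as described earlier.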
The reference flow rate was obtained by directly measuring the fluid leaving the phantom with a measuring cylinder and a stopwatch. Radio frequency data generated from the simulation or acquired from the in vitro experiments were beamformed into B-mode images. Segmentation was then performed by manually selecting the lumen in the first frame and dynamically tracking the vessel wall in the subsequent frames using a localized region-based active contour segmentation. The decorrelation algorithm was applied to the segmented luminal area to estimate the 2-D through-plane velocity vy in eqn as it was done in a previous study. Specifically, in eqn, the BCWs σx, σy and σz were calibrated in the Field II simulation by scanning a simulated speckle phantom, and in the experiments by scanning a speckle phantom with the transducer fitted on a computer-controlled translation stage; the in-plane flow velocities were tracked by the UIV method based on the B-mode images. With all these variables available, the through-plane velocity vy was derived for each spatial position in the lumen from eqn. Finally, the volumetric flow rate was calculated by integrating the velocity vy over the dynamically segmented luminal area. The peak systolic in-plane flow velocities at location 2 from the CFD simulation are given in Figure 5. It showed that the in-plane flow direction can be used to detect the through-plane flow direction with a tilted scanning view. A vortex can be seen in the non-tilted transverse view due to the curved geometry, and the through-plane flow direction cannot be derived from the in-plane flow direction if the vessel is scanned under this view. When the scanning plane was tilted by 10 degrees, the in-plane velocities throughout the vessel lumen turned out to have the same direction when the flow is going forward at peak systole. A movie showing the in-plane flow patterns under these two views throughout the cardiac cycle is given in Video S1. The through-plane flow velocities at three different scanning locations can be accurately estimated using the proposed SDC method. The results from location 3, where the largest secondary flow is expected, are given in Figure 6, together with the reference from the CFD flow field. The estimated and reference flow velocities within a 1-mm square area in the center of the vessel were compared throughout the cardiac cycle. A comparison of the velocity profile across the vessel's short axis was also made at four different cardiac phases, indicated by the dashed line in Figure 6a. It demonstrated that the through-plane flow direction can be obtained by the proposed method. A movie illustrating the 3-D velocity profile in the scanning plane at location 2 over one cardiac cycle is provided in Video S2. The temporal flow rates were also evaluated against the CFD ground truth, and the results are shown in Figure 7. The true mean flow rate within one cardiac cycle was 121.2 mL/min, and the estimated mean flow rates at the three scanning locations were 133.2, 134.0 and 141.6 mL/min, respectively. However, the estimated mean flow rates were 238.4, 256.5 and 265.3 mL/min if the proposed direction detection method was not applied. In these simulations, both the flow velocity and the flow rate tended to be slightly overestimated by the proposed SDC method compared with their references; an explanation for this is given in the Discussion. Comparisons of the estimated flow velocity between the SDC method and the UIV method in a straight vessel phantom are illustrated in Figure 8. It shows that flow velocity estimated
from both methods matched well.A movie showing that in-plane flow in the lateral direction changed its directions when the through-plane flow changed direction is given in Video S3.The volumetric flow rate calculated from the proposed SDC method was 71.2 ± 3.0 mL/min while the reference flow rate from the timed collection was 75.6 mL/min, indicating only a 5.8% underestimation.The estimated flow rate was 155.8 ± 5.8 mL/min when no direction detection was applied, which is a 106.0% overestimation.In the curved vessel, the SDC method can also distinguish the through-plane flow direction compared with the UIV reference."Only the flow velocity from the central area of the vessel was investigated in this case because it was difficult to get a reliable velocity reference across the vessel's short axis from the longitudinal view in the UIV method when the vessel was curved.The 3-D velocity profile obtained from the proposed method in the curved vessel is illustrated at two temporal points in the cardiac cycle.A movie showing the corresponding 4-D velocity profile is given in Video S4.The reference flow rate from the timed-collection method was 75.0 mL/min and the estimated flow rate by the proposed SDC method was 69.6 ± 4.9 mL/min, with a 7.2% underestimation.The flow rate was significantly overestimated when no direction detection method was applied.Ultrasound SDC provided a unique way to estimate volumetric flow in the blood vessel by imaging the vessel in the transverse view and estimating the spatially and temporally resolved through-plane flow velocity.In this study, a key limitation of the current SDC method, the lack of differentiation of flow direction which is important for volumetric estimation of some physiologic flow, was addressed by a newly proposed method through tilting the transducer from the transverse view and processing the in-plane flow information.To our knowledge, it is the first time for a study to directly measure the volumetric flow from its 2-D velocity profile in the presence of reverse flow, by a 1-D array probe without the need to assume the flow profile in the vessel or to obtain the knowledge of the beam angle relative to the vessel.Most other ultrasound techniques using a 1-D array transducer must assume an analytical or symmetric velocity profile, based on which the flow volume is estimated either from the velocity measurements at a point by spectral Doppler or from 2-D velocities by vector flow techniques in the longitudinal scanning plane.The proposed ultrasound method in this study has overcome those limitations with a conventional 1-D array ultrasound transducer, which makes it easy to implement.The improvement in accurately estimating the flow volume even in the presence of reverse flow makes the proposed method a promising technique in a range of clinical applications as mentioned in the introduction.In addition, this method might be more informative than the spectral Doppler in detecting a total occlusion by providing a 2-D cross-sectional velocity profile, but this requires further in vitro and in vivo validations.The feasibility of this method was demonstrated on straight and curved vessels by simulations and flow phantoms.It can estimate through-plane flow velocity with good accuracies in terms of magnitude and direction, which is not possible in previous decorrelation studies.It should be noted that all the negative parts in those SDC-estimated curves in Figures 6, 8 and 9 were mistaken as positive if the proposed method were not applied, leading to 
more than 100% overestimation in volumetric flow rate in the demonstrated scenarios. Therefore, a significant improvement was achieved by the proposed method, with its ability to detect the through-plane flow direction. Currently, there is no other existing solution to this problem in terms of differentiating the blood flow direction in the ultrasound SDC method. Some overestimation of the through-plane flow velocity can be seen in each case. These overestimations have been shown to be caused by the SDC method, which tends to overestimate the through-plane flow velocity if the in-plane flow velocity is large. This error could be suppressed in the traditional SDC method by having a true transverse view where the in-plane flow is small. In this study, more in-plane flow is introduced due to the tilted view, which is required to detect the through-plane flow direction. To minimize this overestimation, the tilt angle should be kept small. In this work, a 10-degree angle was used. Overestimation of the flow velocity leads to overestimation of the volumetric flow rate, as expected, in the simulations. In the flow phantoms, the volumetric flow rate was slightly underestimated even though the flow velocity was overestimated. This was because the detection of the through-plane flow direction depends on the UIV-estimated in-plane flow direction. The UIV technique is not sensitive enough to detect the change of in-plane flow direction at some places where the flow is slow. This could lead to an overall underestimation of the flow rate, since the positive flow always outweighs the negative flow from a physiologic point of view, which means more positive flow could be mistaken as negative flow within a cardiac cycle. Further studies to correct the overestimation should be explored. In addition, a natural rubber tube was used in the flow phantom for collecting data in the case of a curved vessel. Its acoustic properties were not calibrated in this study. The deviation of its acoustic properties from human tissue might also affect the flow estimation, but its effect should not be significant. In this study, it was assumed that the blood flow in the vessel primarily moves along the vessel, so that the through-plane flow direction can be worked out by tilting the scanning plane. Obviously, this method would work best in a condition where the blood vessel is straight. In the human body, most blood vessels are not straight but curved to some degree. Secondary flow occurs in curved vessels, meaning that flow will not move perfectly along the vessel. The Dean number was adopted to measure the scale of secondary flow, and a Dean number of 260 was reported in the abdominal aorta. The Dean number would be larger when it comes to the flow in the heart or the ascending aorta, so we chose to test this method on flows with larger Dean numbers, which were maximally about 500 in the simulation and 800 in the flow phantom. The proposed method was demonstrated to be able to detect the through-plane flow direction despite the significant secondary flow. In the human body, blood vessel geometries, such as bifurcations and branches, could be more complicated, leading to more complex blood flows or even turbulent flows. In these cases, tilting the transducer might not be enough to effectively detect the flow direction. However, the transducer can be moved to scan an area where the flow is simpler or not turbulent, because the flow volume is conserved along a vessel. Microbubbles, which have been widely used in clinical practice and research, were used in the flow phantom as a contrast agent to obtain
enhanced acoustic signals from the flow. The advantages of using microbubbles to enhance the signal-to-noise ratio (SNR) in the decorrelation analysis have already been shown in previous studies. In principle, the SDC method could work with native blood, but extra clutter filtering might be required to improve the SNR in that case; this requires further study. Further in vivo validation of this method is also necessary. A new method was proposed and evaluated for detecting the through-plane flow direction in the SDC method, which enables the decorrelation method to estimate the volumetric flow rate more accurately in the presence of flow reversal. This method is capable of accurately measuring physiologic flow using a conventional 1-D ultrasound array probe, which could benefit a wide range of clinical applications.
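The volumetric flow rate in this approach follows from integrating the signed through-plane velocity over the luminal cross-section, with the sign of each pixel taken from the flow direction inferred from the tilted-view in-plane motion. The following is a minimal sketch of that integration step only; the array names, pixel size, masking and the synthetic parabolic profile are illustrative assumptions, not the implementation used in the study.

```python
import numpy as np

def volumetric_flow_rate(speed_map, direction_map, lumen_mask, pixel_area_mm2):
    """Integrate a signed through-plane velocity map over the lumen.

    speed_map      : 2-D array of through-plane speeds from SDC [mm/s] (unsigned)
    direction_map  : 2-D array of +1/-1 flow directions inferred from the
                     tilted-view in-plane flow (assumed already estimated)
    lumen_mask     : boolean 2-D array selecting pixels inside the vessel lumen
    pixel_area_mm2 : area represented by one pixel of the transverse image [mm^2]

    Returns the net flow rate in mL/min (1 mL = 1000 mm^3).
    """
    signed_velocity = speed_map * direction_map          # mm/s, with flow sign
    q_mm3_per_s = np.sum(signed_velocity[lumen_mask]) * pixel_area_mm2
    return q_mm3_per_s * 60.0 / 1000.0                   # mm^3/s -> mL/min

# Example with synthetic data: 40 x 40 image, 0.1 mm pixels, parabolic profile
ny, nx = 40, 40
y, x = np.mgrid[0:ny, 0:nx]
r = np.hypot(x - nx / 2, y - ny / 2)
lumen = r < 15
speed = np.where(lumen, 50.0 * (1 - (r / 15) ** 2), 0.0)  # mm/s
direction = np.ones_like(speed)                            # all forward here
print(volumetric_flow_rate(speed, direction, lumen, pixel_area_mm2=0.01))
```

Without the direction map (i.e., treating every pixel as positive), the reverse-flow phases would add to, rather than subtract from, the integral, which reproduces the roughly 100% overestimation reported above when no direction detection is applied.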
Direct measurement of volumetric flow rate in the cardiovascular system with ultrasound is valuable but has been a challenge because most current 2-D flow imaging techniques are only able to estimate the flow velocity in the scanning plane (in-plane). Our recent study demonstrated that high frame rate contrast ultrasound and speckle decorrelation (SDC) can be used to accurately measure the speed of flow going through the scanning plane (through-plane). The volumetric flow could then be calculated by integrating over the luminal area, when the blood vessel was scanned from the transverse view. However, a key disadvantage of this SDC method is that it cannot distinguish the direction of the through-plane flow, which limited its applications to blood vessels with unidirectional flow. Physiologic flow in the cardiovascular system could be bidirectional due to its pulsatility, geometric features, or under pathologic situations. In this study, we proposed a method to distinguish the through-plane flow direction by inspecting the flow within the scanning plane from a tilted transverse view. This method was tested on computer simulations and experimental flow phantoms. It was found that the proposed method could detect flow direction and improved the estimation of the flow volume, reducing the overestimation from over 100% to less than 15% when there was flow reversal. This method showed significant improvement over the current SDC method in volume flow estimation and can be applied to a wider range of clinical applications where bidirectional flow exists.
31,606
Reconstruction of the water content at an interface between compacted bentonite blocks and fractured crystalline bedrock
Sodium bentonite is a clay with a high content of montmorillonite which grants it a swelling behavior in presence of water .This property and its low permeability make it a natural choice to engineer groundwater barriers in applications such as geological disposal of radioactive waste.For example, the planned design of a repository for spent nuclear fuel in Sweden, denoted as the KBS-3 V method, comprises excavations of deposition tunnels approximately 500 m below the ground surface in crystalline bedrock.Deposition holes would then be drilled in the tunnel floors, and the canisters containing spent nuclear fuel in each hole would be embedded using compacted bentonite blocks and pellets.Bentonite is also considered as backfilling material for the deposition tunnels.The general idea is to insert partially saturated bentonite which then seals the underground openings as it draws water from the rock around the deposition tunnels and holes, and swells.One uncertainty is the global wetting pattern and water uptake rate of the buffer blocks under different in situ conditions and how these are influenced by the local rock properties.Understanding flow interactions between the rock matrix, rock fractures and bentonite is an important component of accurate predictive modeling of water and air flows in the subsurface repository and fractured rock system, with implications for inert and reactive transport beyond the local deposition holes.Recognition of this need to develop the understanding of the dynamics of in situ wetting of bentonite in natural rock cavities led to the Bentonite Rock Interaction Experiment.BRIE was set up to observe and document the early evolution of compacted bentonite blocks in situ under isothermal conditions.It was conducted in an underground tunnel approximately 415 m below ground at the Äspö Hard Rock Laboratory in southeastern Sweden.The characterization phase aimed at quantifying inflows into the BRIE tunnel, exploratory boreholes and deposition holes as well as describing the water-bearing fractures or zones responsible for those inflows.The wetting phase of the experiment took place in two deposition holes with radius R=15 cm and depths of 3.5 and 3.0 m from the tunnel floor.Instrumented bentonite blocks were put in place and left to saturate for 419 and 518 days.After that, the rock surrounding the deposition holes and the bentonite parcels were extracted and transported to the laboratory for sampling and analysis.Dessirier et al. used the gathered characterization data to build alternative models of the BRIE site and experiment.The array of models in that study served as a basis for scenario analyses of factors that govern patterns and rates of bentonite wetting with objective to relate different factors measured prior to deposition, to the subsequent bentonite wetting.Those results suggested that in most cases, the wetting rate of the buffer as a whole was not as strongly related to the total open-hole inflow rates as to the distribution of inflow along the holes, which emphasized the importance of local scale heterogeneity in permeability, including the absence or presence of water bearing rock fractures, in the deposition hole vicinity.Furthermore, the results presented in Dessirier et al. 
indicate that models using a homogeneous rock matrix and representing the local fractures as homogeneous plates are biased towards a consistent overestimation of the bentonite wetting rate. It has indeed been shown that flow in rough fractures takes place in a few preferential pathways. The absence of this flow channeling effect could explain the overestimation produced by homogenized models. However, the available characterization data, i.e. transmissivity of borehole intervals and fracture mapping of rock cores, do not provide direct information on the channels intersecting the deposition holes or on how to parametrize them within the models. This study focuses on the interpretation of BRIE data on bentonite wetting and high-resolution water content patterns at, and close to, the surface of the bentonite parcel, which may reflect the expected critical heterogeneities in water transfer from the rock to the bentonite. More specifically, we considered the bottom meter of the bentonite parcel retrieved from one of the boreholes: BRIE Hole 18. After the surrounding rock had been prepared for extraction by stitch-drilling all around the deposition hole, the bentonite parcel could be extracted from the tunnel floor by pulling it out in one piece, without significant damage to the bentonite surface. The retrieved bentonite parcel was then carefully wrapped in plastic for transport to the laboratory. Photographs of the surface of the bentonite were taken in the laboratory almost immediately after excavation, before sampling. In the photographs, the more water saturated regions appeared relatively dark. This paper will first describe the image processing performed to combine the different photographs into a single gray-scale map of the bentonite surface. It will then explore the correlation between this map and the laboratory measurements of water content at the sample locations, before leveraging the observed correlation to arrive at a finer reconstruction of the water content profile. This map of water content is of great interest as it should provide information on the original number, location and transmissivity of intersecting flow channels in the rock. In addition, this final state of water content in the bentonite parcel, together with time series from embedded humidity sensors, could also indicate how flow channels might have been dynamically redistributed in time over the bentonite/rock interface while the buffer wetting was under way. This kind of dataset should help improve the flow models for the natural rock barrier and give better estimates of the operating conditions for the bentonite buffer, which could in turn help buffer design. The question we would like to answer in the present paper is whether it is possible and informative to combine the sampling data and the photographs of the bentonite parcel to obtain increased resolution of the water content profile on the bentonite surface at the end of the experiment. A detailed distribution of the water content at the bentonite surface in contact with the rock would greatly help to understand the impact of flow channeling and two-phase flow behavior in fractured crystalline rock under high suction, and subsequently to assess the hydraulic conditions imposed on the engineered barrier system. Combining the rapid execution and wide coverage of photographs with accurate local sampling appears to be a cost-effective technique to acquire such a fine-scale distribution. To the best of our knowledge, investigations related to the direct use of photographs
in order to quantify the in situ water uptake of bentonite parcels in natural bedrock have not yet been published in the scientific literature. This section first provides details on the two sources of data available on the cylindrical bentonite parcel: samples and dismantling photographs. We focus on the gravimetric water content w profile that developed in the bentonite parcel after exposure to groundwater through the host rock. The different methods used to process the photographs and to reconstruct the water content field on the outer perimeter of the bentonite parcel are then introduced. A total of 24 pictures were taken at 3 different elevations to cover the height of the bentonite parcel and under 8 different angles to cover the whole perimeter of the parcel. For this paper, we made use of the 8 pictures covering the lowest meter or so of the bentonite parcel. For comparison, known intersecting rock fractures are also represented in Fig. 2. These fractures were first identified during core mapping of the exploratory boreholes. The traces shown in Fig. 2 correspond to the adjusted estimates of the fracture positions obtained from optical imaging of the deposition hole wall by a method known as the Borehole Image Processing System, after the hole was enlarged to a radius of 15 cm. The barrel distortion caused by the camera lens was deemed negligible after inspection of known straight lines in the photographs. To bring the photographs together, it was necessary to correct for perspective and to unfold the cylindrical surface from the 2D picture. To this end, we used the visible horizontal discontinuities between bentonite blocks, knowing that each block had a height of 10 cm. If we then consider the rectangular unfolded surface of a half-cylinder, cropped by a small angle 2α due to the finite distance between the camera and the bentonite parcel, and a point with normalized coordinates (X, Y) ∈ [0, 1]² to map back to the photograph, then the Y coordinate gives the half-ellipse on which the point was located in the picture, through the previously defined linear functions, and the X coordinate gives the parameter θ that locates the exact point on this ellipse via the parametric equation of the ellipse. By using this mapping on gridded coordinates in the unfolded surface, for each photograph we filled in the value at each pixel to create a new image. Once each photograph had been transformed into a fragment of the unfolded surface, it was positioned in a common frame of reference by using distinctive points and the overlap between neighboring fragments to obtain the complete unfolded surface. After each fragment had been positioned relative to the others, a panorama tool was used to correct the gray levels in the transition areas between fragments to obtain a continuous gray map and avoid sharp transitions between neighbors. Precedence was given to the central area of each fragment, as these were subject to the same light exposure. The final product is shown in Fig.
2 along with the samples. One can visually observe a certain correlation between the darkest gray traces and the wettest samples, which is investigated in more detail in Section 2.4. The direct interpolation of the samples gave a very smooth water content profile that contrasted with the visual inspection of the map assembled from the photographs of the bentonite parcel, where sharp dark traces were visible. The dipping traces coincided in many instances with mapped fractures around the hole. Although no mechanistic relation was known to link the degree of darkness of the bentonite to its water content, the photographs seemed to contain fine-scale information on the wetting process that even a tight sampling scheme could not capture for feasibility reasons. Fig. 5 shows a scatter plot of the water content calculated from measurements and the averaged gray value of the surrounding pixels at the sample locations. One can observe that a linear model fitted the data with a coefficient of determination R² = 0.39, and the null hypothesis of the regression slope being zero is rejected at a significance level well below 1%. We therefore proposed to use the linear regression to obtain a deterministic trend of water content and to model the deviation from the regression line, or residual, as a spatially correlated random function, in a process known as regression-kriging, also called universal kriging or sometimes kriging with an external drift. The results in Fig. 7a show that the regression term accounts for short-range variations of water content in the range 0.15 to 0.25, with clear elongated patterns where fracture traces had been charted. The kriging map of the residuals in Fig. 7b shows longer-range variations of water content in the range −0.05 to 0.12. Regression-kriging gives an exact prediction at the sample locations but also renders the fine-scale variation and features observed in the bentonite photographs. As such, it is considered a successful method to join the contributions of both data sources into a detailed reconstruction of the final wetting state of the bentonite surface in the field-scale experiment. This distributed map of the water content at the surface of the bentonite would be a great asset for deriving the distribution of inflows from the rock to the bentonite during the experiment, which would give a realistic picture of the hydraulic behavior of sparsely fractured crystalline rock in contact with compacted bentonite barriers. Such a derivation of the inflow distribution could likely be achieved by inverse modeling, using the present surface water content map as a calibration target along with measurements on samples from the interior of the bentonite blocks, the weight of the retrieved individual blocks and time series from the sensors installed in some bentonite blocks. It should be noted that the incorporation of photographs in the interpolation process gave a very successful outcome despite the opportunistic nature of the photo-documentation procedure, which was not originally intended to provide quantitative data. The hypothesized relationship between bentonite water content and gray scale should first be assessed under controlled conditions in the laboratory. Optimizing the photo acquisition and processing could bring significant improvements to the technique. The spread away from the regression line, here modeled as spatially correlated residuals, can indeed be explained by multiple factors: (1) differences in the scale of sample measurement and photographic resolution, (2) biases introduced by the lighting and the photographing process, (3) errors in the mapping of pictures to fragments, (4) errors in the positioning of the fragments, (5) errors introduced by the panorama leveling to ensure a continuous transition between fragments and (6) error introduced by the chosen regression model, here linear. It is interesting to note that the coefficient of determination R² was higher for gray values and liquid saturation than for gray values and water content. Gray values show no linear correlation with dry density. It thus seems that the photographs were more sensitive to the degree of saturation than to density variations in the bentonite. This phenomenon could be investigated further in a controlled laboratory setting, as previously mentioned, to assess the strength of the correlations under optimal conditions and to ascertain the regression model. Regarding the choice of the regression model, the linear model was chosen for simplicity. One could as easily conjecture an alternative regression model of the form w = a · p^b + c with b ≠ 1, for example; however, the increase in variance for the lower range of pixel gray values made it difficult to fit an optimal exponent value. Mitigating the noise in the data by improving points 1–5 would take precedence over debating that modeling choice at this point. Another explanation, related to the disparity between volumetric samples and superficial optical measurements, is that some cooling water used for core-drilling during dismantling might have found its way to the bentonite surface, potentially increasing the water content on and near the surface of the bentonite. If this phenomenon happened, owing to the very short exposure time and shallow penetration depth of the increased moisture content, it could have affected the gray values at the bentonite surface but would have been less likely to affect the sample values. This explanation is supported by the fact that the positive residuals seem to correspond to areas of high water content where the radial gradient of water content is also high, signs of a significant wetting process throughout the BRIE experiment. In parallel, the areas of negative residuals are characterized by low water content and a low water content gradient towards the central part of the bentonite parcel, signs of negligible wetting during BRIE and a potential superficial disturbance at dismantling. Such rapid wetting would preferentially influence the surface areas that were the driest and would explain why the regression to gray scale generally overestimates the water content of dry surface samples and underestimates the water content of relatively wet samples. Such a correlation of residuals with the final water contents and gradients could also result from the choice of a linear regression model. In conclusion, the relationship between water content and grayness in bentonite should be investigated under laboratory conditions. Most of the listed sources of error could be substantially reduced by considering alternative dismantling and sampling techniques or by developing a consistent scanning technique, which could indirectly help improve the regression model. Another improvement could consist in using the three RGB values of the photographs as three different predictors instead of the conversion of the photographs to gray scale as a single predictor. Beyond the use of visible light, other spectral bands more sensitive to moisture content could be used to excite and image the bentonite, as is done for remote sensing of soil moisture. Another source of
information could come from optical, acoustic and electrical borehole imaging to detect rock anomalies and use criteria such as the distance to known anomalies in the rock as predictors.Inverse distance weighting and regression-kriging gave predictions with very different textures.To compare them, we performed a standard cross-validation test.Each sample value was compared to the predicted value obtained if that sample had been left out, and a root mean square prediction error was computed.The resulting RMSE for the IDW2 method and RK method showed almost identical values with 0.016 and 0.015 respectively.However note that this cross-validation only tested the modeled error on the heavily sampled lines followed by the sampling scheme.Another, more meaningful, point of comparison was the univariate statistics of the two predictions, e.g. their respective cumulative distribution functions.One can observe that the two predictions agree as to the median water content but that the IDW2 method predicts less extreme values than the RK method.By comparing the textures of the two predictions one can oppose a very smooth field with a rather long isotropic correlation range to a set of rather thin elongated features.Some of these features corresponded directly to mapped fractures, e.g the sine-shape trace between z = 0.4 and 0.8 m.The photo-documentation however showed more dark features than identified during deposition hole characterization.There was for example a source of uncertainty as to the nature of the vertical features observed on the photographs that stemmed from the technique used to install and dismantle the bentonite parcel.At installation, the cylindrical parcel was set up with an initial 1 mm gap which was later rapidly filled with water from the top to provoke an initial swelling and close the gap.The vertical features could then either correspond to induced fractures, permeability redistribution due to stress effects, preferential wetting during installation or dismantling.This paper studied the water content profile at the surface of a bentonite parcel retrieved after in situ wetting in fractured crystalline bedrock.It showed that by using regression-kriging it is possible to quantitatively merge and combine information from high-resolution photographs of the cylindrical bentonite parcel where wet areas appear as relatively dark, along with bentonite samples that could not be taken at a sufficiently high resolution to necessarily preserve the full heterogeneity in the pattern of wetting, but in which the absolute water content could be accurately determined.The resulting reconstruction is both exact regarding local sample measurements and successful to reproduce features such as intersecting rock fracture traces, visible in the photographs.It is shown to be more realistic than adopting inverse distance weighted interpolation of measured samples alone.A quantitative correlation between “postmortem” sample analyses of water content and the image gives confidence that much of the character seen in the images derives from heterogeneous wetting, although other processes related to the opportunistic nature of the photo-documentation and possible alterations of the wetting on the surface during extraction of the bentonite from the deposition hole are also relevant.A set of controlled laboratory experiments should be undertaken to assert the hypothesized relationship between the gray-scale and water content of compacted bentonite and to investigate the best function shapes to relate the 
two quantities.An improved scanning procedure could reduce the errors introduced by the geometrical transformations needed to unfold and stitch the different photographs into a single gray scale map of the bentonite surface.Similarly controlled lighting and exposure could improve the accuracy of the method.The images demonstrate important features of the wetting processes occurring during the experiment that would not be discernible from the postmortem sampling alone.Natural fractures generally seem to correspond to fine-scale wetting features on the surface of the bentonite.In addition axial and other features undetected during deposition hole characterization are clearly seen in the photographs.Many of the features within the image are diagnostic of flow processes at the rock/bentonite interface and detailed models of heterogeneous wetting can use features extracted from the images to condition local-scale rock models aimed at better understanding of the early evolution of water content within the bentonite.
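The regression-kriging reconstruction described above reduces to two steps: a deterministic trend of water content regressed on the gray value of the photographs, and a spatial interpolation of the residuals with a covariance (variogram) model. The snippet below is a minimal sketch of that workflow under assumed covariance parameters and with simple kriging of the residuals; it is not the authors' code, and a full implementation would fit the variogram to the residuals and use ordinary or universal kriging as described in the paper.

```python
import numpy as np

def regression_kriging(xy_samples, w_samples, gray_samples,
                       xy_grid, gray_grid, sill=1.0, rng=0.3, nugget=1e-6):
    """Minimal regression-kriging sketch: linear trend on gray value plus
    simple kriging of the residuals with an exponential covariance model.

    xy_samples : (n, 2) sample positions on the unfolded surface [m]
    w_samples  : (n,)   measured gravimetric water contents
    gray_samples, gray_grid : gray values at the samples / prediction points
    sill, rng, nugget : assumed covariance parameters (would normally be
                        fitted to an empirical variogram of the residuals)
    """
    # 1. Deterministic trend: ordinary least squares of w on gray value
    a, b = np.polyfit(gray_samples, w_samples, 1)
    residuals = w_samples - (a * gray_samples + b)

    # 2. Exponential covariance between all sample pairs
    def cov(d):
        return sill * np.exp(-d / rng)

    d_ss = np.linalg.norm(xy_samples[:, None, :] - xy_samples[None, :, :], axis=2)
    C = cov(d_ss) + nugget * np.eye(len(w_samples))

    # 3. Kriging weights for every prediction point
    d_ps = np.linalg.norm(xy_grid[:, None, :] - xy_samples[None, :, :], axis=2)
    weights = np.linalg.solve(C, cov(d_ps).T)          # (n_samples, n_grid)
    kriged_residual = weights.T @ residuals

    # 4. Prediction = trend from the photograph + kriged residual
    return a * gray_grid + b + kriged_residual
```

Because the trend term is evaluated from the gray-scale map everywhere, the fine elongated features visible in the photographs propagate into the prediction, while the kriged residuals pull the surface back towards the measured sample values (exactly so in the limit of a vanishing nugget).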
High-density sodium bentonite combines a low permeability with a swelling behavior, which constitute two important qualities for engineered barriers in geological disposal of spent nuclear fuel. For example, the KBS-3V method developed in Sweden and Finland is planned to include compacted bentonite as the buffer material to embed canisters containing the spent nuclear fuel packages in deposition holes in deep crystalline bedrock. The partially saturated bentonite buffer will then swell as it takes up groundwater from the surrounding rock. It is important to quantify the water content evolution of the installed buffer to correctly predict the development of the swelling pressure and the prevailing conditions (thermal, mechanical, chemical and biological). This study aimed at quantifying the water content profile at the surface of a cylindrical bentonite parcel retrieved after in situ wetting in fractured crystalline bedrock. We demonstrate the possibility of using regression-kriging to quantitatively include spatial information from high-resolution photographs of the retrieved bentonite parcel, where more water saturated areas appear as relatively dark shades, along with bentonite samples, where detailed measurements of water content were performed. The resulting reconstruction is both exact regarding local sample measurements and successful to reproduce features such as intersecting rock fracture traces, visible in the photographs. This level of detail is a key step to gain a deeper understanding of the hydraulic behavior of compacted bentonite barriers in sparsely fractured rock. An improved scanning procedure could further increase the accuracy by reducing errors introduced by the geometrical transformations needed to unfold and stitch the different photographs into a single gray scale map of the bentonite surface. The application of this technique could provide more insights to ongoing and planned experiments with unsaturated bentonite buffers.
31,607
Hydro-plastic response of beams and stiffened panels subjected to extreme water slamming at small impact angles, part II: Numerical verification and analysis
Water impacts are known to occur for ships and offshore structures at sea due to relative motions between the liquid and the structure.Example scenarios leading to slamming are water entry and exit of ship bow and stern, offshore platforms subjected to steep breaking waves, high speed vessels travelling in waves and free-falling life boats.Structures subjected to impulsive loads from water slamming, may respond in the elastic or elastoplastic regimes depending on the load intensity, and there can be significant coupling effect between water pressure and the structural response, termed as hydroelasticity and hydro-elastoplasticity, respectively.Hydroelastic slamming has been studied extensively, for instance by Faltinsen , Kvalsvold and Faltinsen , Bishop and Price and Qin and Batra ; but similar attention has not been given to the hydro-elastoplastic or hydro-plastic slamming.In practice, offshore structures may be impacted by steep and energetic waves in extreme sea states, causing significant structural damage.For example, the accident of the offshore drilling rig COSL Innovator in the North Sea in 2015 led to one death and extensive damage to the cabins after being struck by an energetic horizontal wave.In order to maintain structural safety and to prevent such accidents to occur, rules and standards should be established for designing against extreme slamming loads.For structural design in the Ultimate Limit State conditions subjected to slamming, simple guidelines were introduced in DNVGL-OTG13 for the air gap calculation and in DNVGL-OTG14 for providing the temporal and spatial distributions of the design slamming loads.The rules focus on the peak pressure, the shape of the pressure impulse, the impulse duration and the pressure spatial distribution.Similarly, a few researchers studied plastic response of structures subjected to extreme slamming by assuming a certain temporal and spatial pressure distribution, such as Jiang and Olson , Jones and Henke .These methods, however, neglect the hydro-elastoplastic coupling between the structural response and water pressure, and do not reflect the real physics behind the phenomenon.Literature review has shown that limited knowledge exists for scenarios where the plastic response of a structure becomes dominant in the Accidental Limit States conditions.In order to bridge the knowledge gap and to obtain a deeper understanding of the hydro-plastic slamming phenomenon, Part I of the two-part companion paper firstly formulated an analytical solution for the hydro-plastic response of beams and stiffened panels subjected to extreme water slamming.Based on the analytical model, governing non-dimensional parameters were identified and discussed.The objective of this Part II paper is to assess the analytical model and to discuss its potential applications and limitations.The assessment requires comparisons against reliable reference solutions using experiments or numerical simulations.Quite a few experiments on slamming impacts are reported in the literature.However, they were mainly designed to study the slamming pressure on rigid bodies or the hydroelastic coupling between fluid and the structure.Very few experiments were carried out with extreme slamming loads that were capable of producing large inelastic structural damage.Shin et al. 
carried out repeated drop tests of unstiffened plates into a rectangular tank and recorded the cumulative plate damage.However, because of the small tank size, one can expect that the hydrodynamic pressures can be significantly affected by the confined water.In addition, because of the limited drop height, the deformations after the first drop were generally in the elastic range.On the numerical side, a few numerical simulations with the Arbitrary Lagrangian Eulerian method were carried out to study the elastoplastic responses of the structures to slamming, such as Cheon et al. , Luo et al. , Yamada et al. and Skjeggedal .In the ALE method, the structures are modelled with Lagrangian meshes while the fluid domain including water and air is discretized with Eulerian meshes.Upon iterations, the hydrodynamic pressure and boundary conditions are transferred between the structural and fluid domains.Based on this, this Part II paper verifies the proposed analytical model in the Part I paper by means of multi-material ALE simulations using LS-DYNA.Numerical settings of the ALE simulations are validated with the rigid-wedge drop tests by Zhao et al. and drop tests of elastic flat plates by Faltinsen et al. .Water entry simulations are then carried out for the flat plate strips and stiffened panels with different cross sectional dimensions and impact velocities.The analytical model is discussed with respect to the fluid flow, structural deflections, the pressure history, and the impulse.Potential application and limitations of the analytical method are discussed.During the deformations, significant coupling exists between the beam plastic deflection and the water pressure, denoted as hydro-plasticity.In stages 2 and 3, water pressure acts as an added mass effect and pushes the decelerating structure to deform.For stage 1, apart from an added-mass term, we have a second pressure term related to an added-mass time change effect due to the moving hinges leading to a change in the structural mode.By equating the rate of internal and external work, the governing motion equations are found, and are solved numerically with the fourth order Runge-Kutta method.The explicit NLFEM code LS-DYNA version 971 with the multi-material ALE algorithm was employed to verify the analytical formulas.Prior to the simulation of hydro-plastic slamming, validation of the numerical setup and the accuracy of simulation results were assessed by comparison with a 2D rigid-wedge drop test by Zhao et al. and the drop test of a horizontal flat elastic plate by Faltinsen et al. 
.Water and air are modelled with multi-material Eulerian meshes while the structure is modelled with Lagrangian meshes.Coupling is enabled in a way that the Lagrangian structure domain imposes displacement and velocity boundary conditions on the Eulerian fluid, which in return imposes hydrodynamic pressure on the structure.The water and air domains are modelled using the 1 point ALE multi-material solid elements.Material properties of the fluids are defined with the NULL materials and the linear polynomial equation of state.The properties adopted for water and air are listed in Table 1.The values have been validated by Bae and Zakki through comparison with experiments.The penalty-based coupling method is applied to model contact between the fluid and the structure.During contact, the fluid nodes are allowed to have a small penetration into the structure.Resisting forces are then imposed between the contact points on the structural elements and the fluid nodes.The penalty factor corresponding to the contact stiffness of interacting bodies is set to the default value of 0.1.The contact damping is selected to be 0.9 times the critical damping according to Stenius et al. .The fluid-structure coupling takes place in the normal direction to the body surface when the fluid tends to enter the structure, i.e. in compression only.Zhao et al. carried out a drop test of a 2D rigid wedge in MARINTEK.The deadrise angle of the rigid wedge was 30° and the drop height was 2 m.The main dimensions of the tested section are shown in Table 2.A 2D model is established in LS-DYNA as shown in Fig. 2.Because the problem is symmetric with respect to the body central axis, only half of the domain is modelled.The water domain is 0.75 m in width and 0.5 m in depth while the dimension of air domain is 0.75 m × 0.4 m. Both the fluid and structure domains are discretized with a uniform mesh size of 2.5 mm.In the thickness direction, one element was modelled for the fluid domains.Nodal velocities in the fluid domains are fixed in the y direction to enable a 2D fluid flow.The nodes along the left wall of the fluid domain are constrained in the x direction to enforce the symmetry condition.Elements of mass points are added to the top of the rigid wedge such that the mass of the experimental wedge including ballast weights, is reproduced exactly.The time step size is automatically calculated by the LS-DYNA solver.The value is very small and is typically in the order of 10−6 s.A snapshot of the flow field of water and air simulated during water entry is given in Fig. 3.At this stage, the water rise-up along the wedge produced a jet detached from the structure.The water jet, water-air mixture and flow separation are reasonably captured.The local details of the jet cannot be considered accurate when the related dimensions are comparable to the local cell size.This has consequences on the further jet evolution and on the local mass conservation, but the effects for the fluid-body interaction during the slamming are expected to be limited.This is confirmed by Fig. 4 that compares water-entry forces acting on the wedge from the ALE simulation and from the experiment.Except for the initial oscillations, the simulation agrees well with the measurement both in terms of behaviors and maximum values.Faltinsen et al. 
conducted a drop test with a horizontal flat elastic plate.The main parameters for the plate are shown in Table 3.The drop height is 0.5 m, yielding a measured velocity of about 3.03 m/s when water entry starts.The 2D model is established in LS-DYNA as shown in Fig. 5.Half of the domain is modelled due to symmetry conditions.The water domain is 0.5 m in width and 0.5 m in depth while the dimension of air domain is 0.5 m × 0.2 m.The water and air domains, as well as the plate, are discretized with a mesh size of 2.5 mm.Nodal velocities in the fluid domains are fixed in the z direction to enable a 2D fluid flow.The nodes along the right wall of the fluid domain are constrained in the x direction to enforce the symmetry.In order to model the rotational stiffness at the plate boundary consistently with the model tests, an elastic beam connects the support to a rigid plate as shown in Fig. 5.The length and the elastic modulus of the beam are calibrated to reproduce the rotational stiffness in the experiment.Mass points are distributed along the boundaries to reproduce the same mass of the experimental plate, including ballast weights.Fig. 6 compares the pressures obtained in the simulation and measured in the experiment.The peak pressure and the pressure during plate vibration are in good agreement with the experimental ones.Negative pressure is not captured in the ALE simulation because the initial atmospheric pressure is not modelled.This is consistent with the observation of Wang et al. ; who also simulated this experiment with the ALE formulation.According to the experiment, the negative pressure, i.e. relative to the atmospheric pressure, leads to the cavitation and ventilation phenomena, and is not captured numerically.This effect is considered secondary for the maximum deflections and stresses induced by slamming on the plate.Fig. 
7 compares the plate nodal velocities and deflections from the experiment and from the ALE simulation.The deflection velocity at plate midpoint is in good agreement with the experiment.The rigid-body velocity is well captured in magnitude, but there is a substantial phase difference.It seems that the rigid-body velocity is in phase with the mid-plate deflection velocity, but this is not observed from the experiment.The resulting plate deflection agrees reasonably well with the experimental curve.The above results show that the ALE simulation reproduces the water-entry experiments of rigid and deformable bodies reasonably well.It is therefore concluded that the present slamming modelling and numerical set-ups are reasonably sound and can be applied for the hydro-plastic slamming analysis of beams and stiffened plates.Numerical set-ups and convergence tests for the ALE simulation of hydro-plastic slamming are described in this section.The steel material with a yield stress of 355 MPa is used for the plates and stiffened panels.A linear hardening model with a small hardening stiffness is used to reduce the influence of hardening as the analytical model assumes an elastic-perfectly plastic material.The parameters for the material are shown in Table 4.For the 2D water entry simulation of flat plates, a water domain with dimensions of 3 m × 2 m and an air domain of 3 m × 1 m were established.The flat plate is 1 m in length.The plate boundary nodes are fixed against all degrees of freedom except for the vertical z direction.One shell element is modelled in the thickness direction.The fluid nodes are fixed in y direction to enable a 2D condition.A convergence test is carried out to determine the mesh size for the fluid and structure domains in Section 4.3.The plate thickness is set as 3 mm, 6 mm, 10 mm or 20 mm with an initial impact velocity of 5 m/s, 10 m/s or 15 m/s.For 2D water entry simulation of stiffened panels, the water and air domains are modelled with the dimensions of 4 m × 2 m and 4 m × 1.5 m, respectively,.In the thickness direction, the domain extension equals the spacing between stiffeners.In order to verify the analytical model comprehensively, 6 stiffener cross sections are modelled, covering different area ratios, panel lengths and panel thicknesses.The dimensions are given in Table 5.The panel stiffness varies from weak to strong, yielding large to small permanent deflections for a given initial impact velocity.Different cases for water entry of flat stiffened panels are simulated with the initial impact velocity being 7 m/s, 10 m/s or 15 m/s.The fluid nodes are fixed in y direction to enable a 2D flow condition.The plate boundary nodes are fixed against all degrees of freedom except for the vertical z direction.The mesh sizes for the fluid domain and structures are determined by a convergence test in Section 4.3.A convergence test is carried out for a flat plate strip with the dimensions 1 m × 0.02 m × 6 mm impacting the water with an initial velocity of 15 m/s.The fluid and structure mesh sizes are the same.Five different mesh sizes of 100 mm, 50 mm, 25 mm, 10 mm and 5 mm are tested.The resulting plate central deflections are plotted in Fig. 10.The convergence curves of the impulse in the acoustic phase, the total impulse and the maximum plate deflection with decreasing mesh sizes are plotted in Fig. 
11.It is found that the maximum deflections and permanent deflections reduce with decreasing mesh sizes.The trend of the deflection curves becomes similar when the mesh size is equal to or smaller than 50 mm.The magnitude of the deflection curve converges with a mesh size of 10 mm and 5 mm.The impulses in the acoustic phase becomes stable for a mesh size of no larger than 50 mm while the total impulse tends to converge at a mesh size of 10 mm.Considering both efficiency and accuracy, the mesh sizes of the fluid and structure domains are kept the same, and set equal to 10 mm for the water entry of flat plate strips and 25 mm for the water entry of stiffened panels.Plates and stiffened panels subjected to local slamming loads are often part of a large structural system with significant mass, e.g. a ship or an offshore platform, such that the global structure remains virtually unmoved during and after slamming.To account for this, large ballast masses should be attached to the slammed structures such that it keeps the initial speed virtually unchanged during and after slamming.To enable this behavior, mass points are distributed uniformly along the boundaries of the slammed structure.A sensitivity analysis was carried out for plates and stiffened panels by varying their ballast masses.The resulting response is plotted in Figs. 12 and 13.The base case for the 2D flat plate water entry is a plate strip with dimensions of 1 m × 0.02 m × 6 mm impacting the water with an initial velocity of 10 m/s.The mass of the bare plate strip is 0.942 kg.The base case for water entry of stiffened plates is T2 cross section stiffened panel with an initial impact velocity of 15 m/s.The mass of the bare stiffened plate is 133 kg.Figs. 12 and 13 show that the nodal velocities at the boundaries, representing rigid-body velocities of plate strips and stiffened panels, decrease under slamming loads.The velocity reduction depends on the total mass.For the flat plate strip, a total mass of 10 kg, 100 kg, 1000 kg and 2500 kg yields a velocity reduction of around 65%, 12%, 1.7% and 0.7%, respectively.The rigid motion velocity of a stiffened panel with a total mass of 3 tons, 20 tons and 100 tons decreases by about 33.3%, 6.6% and 2%, respectively.For water entry of flat plates, the velocity of the plate middle node increases quickly from −10 m/s to a mean value of 0 m/s with some oscillations.Regardless of the total mass, the velocity history at the middle node is very similar before it converges to the rigid body motion velocity at the end of slamming.A similar phenomenon is also found for stiffened panels.The differences in velocities and displacements between the nodes at the boundaries and in the plate middle describe the structural local deformations.The structural deformations tend to converge to a constant value when the total mass is large enough.In subsequent simulations, a total mass of 2.5 tons and 100 tons is used for water entry simulations of plate strips and stiffened panels, respectively.This yields less than 5% reduction of the rigid-body velocities after slamming.For cases with smaller masses, the input velocity for the analytical model should be based on the mean of the rigid-body velocity.Numerical predictions of the fluid flow and plate deformations are shown in Fig. 14.The plate strips are 1 m in length and 0.02 m in width.The plate thickness is 6 mm and the initial impact velocity is 10 m/s.The corresponding displacement profiles for half a plate with a time interval of 0.4 ms are shown in Fig. 
15.They demonstrate that the plate gets a significant change of curvature over a relatively short distance and this may be interpreted as a plastic hinge.As time increases, the instantaneous hinge position, marked as a red point in Fig. 15 at each time instant, moves towards the plate center.It is interesting to notice that the positions of the travelling hinge at different time instants lie virtually on a straight, horizontal line for a time period.This implies that, the deformation velocity at this stage on average counteracts the drop velocity.This is clear evidence that the initial deformation velocity is on average equal to the drop velocity.In addition, the wider view confirms negligible plate deformations within less than 1 ms from the initial impact.To the left of the travelling hinge, the imposed plastic curvature seems to be fairly constant and the ‘arm’ behind the hinge rotates only as a rigid body.The parallel arms represent major characteristics of the travelling hinge stage.It is found that the rotating arms become no longer parallel to each other before the hinge reaches the beam middle.This is because, the deformation of the thin plate follows Path 2, where the pure tension stage 3 is reached but the moving hinges have not met in the middle.From Fig. 15, it seems that it takes more time to reach the pure tension stage in numerical simulations than predicted using the proposed theory.This is due to the large elastic deflections before entering the plastic regime, which is not accounted for in the theory.The plot confirms that the travelling hinge concept is useful in describing the actual displacement field.With the imposed velocity from the acoustic phase, the plate builds up deformations over time in the free-deflection stage until all the energy is dissipated and the permanent deflection is reached.During this process, water is accelerated upwards, forming jets that leave from the structure sides.A small portion of elastic energy may be released through plate vibrations about the permanent deformations.Fig. 17 shows that the plate stiffness is a crucial factor to determine the peak pressure in the acoustic stage and the slamming duration.Given the same impact velocity of 10 m/s, the peak pressure and impulse of the acoustic stage increase with increasing plate thickness while the slamming duration reduces.It is interesting to find that the total impulse remains virtually the same regardless of the plate thickness.Fig. 18 compares the central deflections of flat plates with a thickness of 3 mm, 6 mm, 10 mm and 20 mm during 2D water-entry as predicted by ALE simulations and by the proposed analytical model.The initial impact velocity is 10 m/s.The simulations show that the plates deform to their permanent deflections with small elastic oscillations about the mean deformations, and the plastic energy is dominant.Fig. 19 shows the deflection history of the 6 mm plate with an initial impact velocity of 5 m/s, 10 m/s and 15 m/s.From Figs. 18 and 19, the permanent deflections predicted with the analytical model agree well with those from the ALE simulations for the selected plate thickness and impact velocity ranges.It is observed that the permanent deflection is somewhat overestimated especially for small impact velocities.This is mainly because the analytical model assumes that all energy is dissipated by the plastic deformation and the elastic energy is neglected.It is interesting to find from Figs. 
16 and 19 that the durations of the acoustic and the free-deflection stage remain virtually insensitive to the initial impact velocity.Permanent deflections are reached virtually at the same time for all velocities.The non-dimensional permanent deflections of plate strips are plotted versus the non-dimensional velocity in Fig. 20 for different mass ratios.Reasonable agreement with ALE simulations is demonstrated.The numerical results confirm that the non-dimensional velocity is dominant.One of the ALE data point deviates slightly from the curve because the elastic energy becomes important in this case.The general features of the fluid flow during water entry of stiffened panels are quite similar to those shown in Fig. 14 for flat plates.The T3 stiffened panel deformations with an initial impact velocity of 10 m/s are shown in Fig. 21.The panel is subjected to significant plastic flow at the supports and the beam middle span, and undergoes large plastic deformations.The proposed analytical model has shown good accuracy in predicting the permanent deflections of flat plates and stiffened panels during water impact when the elastic energy is small relative to the total energy.However, when the elastic energy is comparable to the total energy, the accuracy decreases.It is interesting to assess quantitatively how much the elastic energy may occupy for different cases.The analytical model assumes that the locally slammed structures are part of a global structure with a significant mass such that the rigid-body velocity of the local structure remains virtually unchanged during water entry, but in practice the total mass may not be large enough to keep the rigid-body velocity constant.For such cases, the mean rigid-body velocity before and after water entry should be used as input for the model.Fig. 29 compares the deflections of plates and stiffened panels with finite ballast weights.ALE simulation results for both cases have been presented in Figs. 
12 and 13 respectively in conjunction with the sensitivity study of ballast weights.The considered flat plate strip is 1 m × 0.2 m × 6 mm in dimensions, and it impacts water with an initial velocity of 10 m/s.The total mass including the ballast weight is 10 kg.During water entry, the rigid body motion velocity drops from 10 m/s to 3.5 m/s, and therefore a mean velocity of 6.75 m/s is used as input in the analytical model.The considered stiffened plate is with T2 cross section and impacts the water with an initial velocity of 15 m/s.The total mass including the ballast weight is 3 tons.During the water entry, the rigid-body motion velocity drops from 15 m/s to about 10 m/s, and therefore a mean velocity of 12.5 m/s is used as input.The results show that the permanent deflections are well predicted with the mean rigid-body motion velocity.However, the deflection curves are no longer in phase with the ALE simulation curves.The difference is due to the gradually changing rigid-body velocity, but permanent deflections are still well captured.To be consistent with the analytical model, the ALE simulations assumed a material with little hardening.In practice, the strain hardening can be significant for marine steels.In addition, the slamming phenomenon is highly impulsive, and the strain rate effect can be important, but has not been considered here.Both the strain hardening and the strain rate effect increase the material strength, yielding a lower permanent deflection.The proposed model is thus conservative in this respect.Local buckling may occur for panels with slender stiffener webs, and cross-sections with large Aw/At ratios are more susceptible to torsional buckling.Both effects are not included in the developed model.However, Yu et al. found that as long as local tripping or buckling do not occur in the early stages of the deformation, the model is reasonably accurate for stiffened panels.This is because membrane forces mainly govern the resistance at late stages of deformation, and local buckling will then have limited effect.In addition, the effect of local buckling and the neglect of strain hardening counteract to some extent.More validation work on different stiffened panel dimensions should be carried out.This Part II of the two-part companion paper verifies the analytical model proposed in Part I by comparing model predictions with results from the multi-material ALE simulations.The modelling and numerical settings of the ALE simulations were validated by comparison with water-entry experiments of a rigid wedge and a flat elastic plate.Hydro-elastoplastic simulations were carried out for beams and stiffened panels, and the results were discussed.The following conclusions are drawn:1.The proposed hydro-plastic model is capable of predicting large inelastic permanent deflections of plates and stiffened panels during flat or nearly flat water impacts with good accuracy both in magnitude and in phase.The coupling between hydrodynamic loads and structural deformations is well captured.The model works well when the ratio of the elastic energy relative to the total kinetic energy is less than 15%.A key element of the theoretical model is the travelling hinge concept used to describe the structural deformation.The validity of the concept is confirmed from the snapshots of displacement profiles of plate strips from the hydro-elastoplastic slamming ALE simulations.In the acoustic stage, the maximum pressure increases with the impact velocity and the structural stiffness, and the impulse 
imparted to the structures is close to the structural momentum with a deformation velocity equal to the initial impact velocity.In the free deflection phase, the interaction with hydrodynamic actions is important.The pressure in this phase is lower but the duration is significantly longer.The total impulse including the acoustic phase and the free-deflection phase is proportional to the impact velocity regardless of the structural stiffness.The rising time, however, is determined by structural stiffness and not sensitive to the initial impact velocity.The non-dimensional diagrams for the permanent deflection of plate strips and stiffened panels as a function of the impact velocity, have been proved useful by comparison with ALE simulations.The simplicity of the diagrams makes them good candidates to be utilized in rules and standards concerned with design against extreme water slamming in ULS and ALS conditions.
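For reference, the analytical model of Part I arrives at stage-wise equations of motion by equating internal and external work rates and integrates them with the classical fourth-order Runge-Kutta scheme. The sketch below illustrates only that time-integration step; the right-hand side used in the demo line is an invented constant-added-mass placeholder, not the actual stage-dependent terms (added mass, travelling hinge, membrane tension) defined in Part I.

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def solve_deflection(rhs, w0, v0, t_end, dt=1e-5):
    """Integrate a second-order deflection equation written in first-order
    form y = [w, dw/dt], where rhs(t, y) returns [dw/dt, d2w/dt2].

    rhs stands in for the stage-dependent governing equations of the
    analytical model; only the integrator itself is shown here.
    """
    t, y = 0.0, np.array([w0, v0], dtype=float)
    history = [(t, *y)]
    while t < t_end:
        y = rk4_step(rhs, t, y, dt)
        t += dt
        history.append((t, *y))
    return np.array(history)

# Illustrative placeholder only: a constant effective mass decelerated by a
# membrane-like restoring term, m_eff * w'' = -N * w / L (assumed numbers).
m_eff, N, L = 500.0, 2.0e6, 1.0
demo = solve_deflection(lambda t, y: np.array([y[1], -N * y[0] / (L * m_eff)]),
                        w0=0.0, v0=10.0, t_end=0.01)
```

A stage-switching driver would simply swap the rhs function when the travelling hinges meet or when the pure tension stage is reached, mirroring the deformation paths discussed above.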
An analytical model has been proposed for the response of beams and stiffened panels subjected to extreme flat or nearly flat water impacts in Part I of the two-part companion paper. The model aims to capture the significant hydro-plastic coupling between large plastic structural deformations and the hydrodynamic pressure. Governing non-dimensional parameters for the hydro-plastic slamming phenomenon were identified and discussed. This Part II paper verifies the analytical model proposed in Part I by comparison with the hydro-plastic slamming response of beams and stiffened panels using multi-material Arbitrary Lagrangian Eulerian (ALE) methods in LS-DYNA. Numerical modelling and settings with the ALE simulations are firstly validated by comparison against drop-test experiments of a rigid wedge and of an elastic plate. Then, water entry simulations of flat plates and stiffened panels are carried out, where structural deformations go into the plastic regime. The simulated scenarios cover different plate thicknesses/cross sectional dimensions of stiffened panels, and various initial water-entry velocities. The analytical model is discussed with respect to the fluid flow, structural deflections, the pressure history and the impulse. Validity of assumptions of the analytical model is also discussed. Potential applications and limitations are indicated. The proposed design curves are well suited to be utilized in rules and standards for designing against extreme water slamming.
31,608
Sustainable material selection for building enclosure through ANP method
The dramatic rise in urban population has resulted in the rapid development of infrastructure all over the world, and the construction industry has become one of the most progressive sectors in today's world. Consequently, the construction process and its related stages, such as fit-out, operation and demolition of buildings, are important factors that affect the environment both directly and indirectly. It is noteworthy that the construction industry is responsible for the consumption of non-renewable resources, waste generation, and air, soil and water pollution. According to UNEP and OECD, 25–40% of energy use, 30–40% of the solid waste load and 30–40% of generated greenhouse gas emissions are global consequences of the built environment. The main reasons for this high rate of energy consumption are rapid population growth, greater reliance on building services equipment, rising indoor comfort expectations and the increasing time people spend inside buildings. Current annual consumption of materials is about 60 billion tons, and researchers expect this figure to double by 2050. About 40 percent of all materials are consumed by the construction industry, and this share is expected to grow rapidly in the future. Given limited resources and the importance of environmental issues, a sustainable lifestyle is projected to become a prominent trend. A meaningful move towards sustainability depends strongly on the decisions made by construction stakeholders, including owners, managers, designers and corporations. The careful selection of building materials is recognized as the most basic way in which designers can apply sustainable principles to construction projects. Abeysundara et al. developed a quantitative model for selecting sustainable construction materials based on environmental aspects such as embodied energy, economic issues such as market price and cost, and social factors such as thermal comfort, aesthetics, speed of construction and resistance. Their results revealed that environmental factors carry the greatest weight in the decision-making process. Lam et al. provided a model for property developers to select the best material suppliers using Fuzzy Principal Component Analysis. The model consists of five steps: determining the selection criteria, collecting data, employing Triangular Fuzzy Numbers, applying linear transformations, and finally using Principal Component Analysis to score suppliers and choose the one with the highest score. Ogunkah and Yang proposed a framework for evaluating sustainable construction materials in six categories: general/site, environmental/health, economic, socio-cultural, emotional, and technical factors. Reza et al. studied three types of block-joisted flooring systems in the city of Tehran through life cycle analysis and the AHP method; in accordance with triple-bottom-line sustainability, the main criteria were divided into environmental, economic and socio-political factors. Akadiri and Olomolaiye defined a set of criteria for selecting sustainable materials by reviewing the literature; having refined the criteria by means of a pilot questionnaire, they specified technical, socio-economic and environmental criteria as the main indicators. In further studies carried out by Akadiri et al., a model was presented to rank sustainability assessment criteria using a fuzzy extended analytic hierarchy process. They asserted that there was no specific, widely accepted framework for choosing sustainable construction materials and consequently offered a set of guidelines for evaluating appropriate criteria, suggesting that criteria should be comprehensive, applicable, transparent and practicable. Florez and Castro-Lacouture proposed objective and subjective factors for designers to apply when choosing the best sustainable material: objective factors refer to technical and quantifiable data, including environmental requirements, budget constraints and LEED requirements, while subjective factors are employed to quantify visual perceptions of sustainability. Govindan et al. presented a model for evaluating sustainable materials through a hybrid multi-criteria decision-making methodology; their findings showed that environmental aspects, followed by social sustainability, had the highest priority, while economics, which is often in conflict with environmental concerns, ranked lowest. Life cycle assessment (LCA) is a standard international method for analyzing the environmental impacts of any product, process or system. It is a 'cradle to grave' approach that considers all stages of a system's life: raw material acquisition, manufacture, transportation, installation, utilization, and finally recycling and waste management. As shown in Fig. 1, LCA can be defined in four steps: goal and scope definition, inventory analysis, impact assessment, and interpretation. Goal and scope definition phase: the first step of an LCA is to define the scope and the key goals to be achieved. This phase determines the orientation and the extent to which the LCA will be directed; the system boundaries, the functional unit, the impact categories and the relevant scenarios are specified here. Inventory analysis phase: in life cycle inventory (LCI) analysis, data are collected and computed to quantify the inputs and outputs of the product system over its whole life cycle. The inputs comprise energy, water and raw materials, while the outputs include emissions released to air, water and land. The extent of this process is determined by the goal and scope defined in the prior phase. Impact assessment phase: in the third phase, the inputs and outputs of the investigation are translated into environmental impact categories; numerical indicators for particular categories quantify the environmental burden of the system or product. Interpretation phase: life cycle interpretation is the last step of the LCA. It incorporates the findings of the impact assessment (LCIA) and inventory (LCI) phases to identify the significant inputs, outputs and environmental impacts of a system; it evaluates and analyzes the results, draws conclusions, describes limitations and provides recommendations in order to support the best decision. Decision-making is an imprecise human activity that arises whenever two or more alternatives are available; multiple, incommensurate and contradictory criteria make such decisions complicated. Multiple-criteria decision-making is a set of concepts, methods and techniques developed to help decision makers reach complicated decisions more systematically. The Analytic Hierarchy Process (AHP) was originally proposed by Saaty. AHP arranges all criteria, factors and corresponding elements in a hierarchy tree and then uses a series of pairwise judgments to prioritize them.
The Analytic Network Process (ANP), also due to Saaty, extends AHP by allowing dependence and feedback among criteria and clusters, so that the structure becomes a network rather than a strict hierarchy. There are three supermatrices associated with ANP. The unweighted supermatrix contains the local priorities extracted from the pairwise comparisons. The weighted supermatrix is obtained by multiplying all elements in a component of the unweighted supermatrix by the corresponding cluster weight, so that the sum of the numbers in each column is 1; the column vectors of the cluster weight matrix can be specified by the eigenvectors of the pairwise comparisons of clusters. The limit supermatrix is obtained by repeatedly multiplying the weighted supermatrix by itself; when all columns become identical, the limit matrix has been reached and the multiplication stops (an illustrative numerical sketch of this calculation is given at the end of this text). Final priorities: the priorities of the elements can be read from the corresponding columns of the limit supermatrix, and further calculations can be made to obtain a desirability index. For further reading on ANP, Saaty, Erdem and Ozorhon, and Ozcan Deniz are recommended. In this section, a computational model combining life cycle assessment and the ANP method was designed to choose sustainable materials. Three alternatives were evaluated for the exterior enclosure of a residential building in Tehran as a case study; the detailed steps are presented below. The inventory flows of environmental impacts were obtained from the BEES 4.0 database. Economic data were collected from a construction price list published annually by the Management and Planning Organization in Iran. Socio-cultural data, being qualitative, were obtained through the pairwise comparisons of the ANP method: for each socio-cultural criterion and sub-criterion, pairwise comparisons between the three alternatives were made based on expert opinion. The experts first completed the pairwise comparison questionnaire, and their combined opinion was then obtained as the geometric mean of their answers. The technical data were obtained from material standards. Impact assessment is more significant than inventory analysis for choosing materials. After determining the relationship between the inventory and the environmental impacts, the importance and priority of the criteria were specified. The impacts fell into four groups: technical, socio-cultural, economic, and environmental. The priorities of the criteria and sub-criteria and the preferences of the alternatives were then computed from the experts' consensus using the ANP decision-making method. To establish the importance of the criteria and sub-criteria, eight pairwise comparison questionnaires were designed based on the ANP method. Thirty architects and designers in Tehran responded to the questionnaire. The experts had 5–20 years of experience in designing buildings in Tehran and, owing to their research or academic positions, were thoroughly familiar with sustainable building principles; 30% held a PhD in architecture, 53% an MSc, and the rest a BSc. The reliability of the test was checked with Cronbach's alpha, which measures the internal consistency of a test, that is, the extent to which all elements of the test measure the same concept or construct. Reported acceptable values of alpha range from 0.70 to 0.95. In this study, Cronbach's alpha was calculated in SPSS and was at an acceptable level (a brief computational illustration of this statistic is also given at the end of this text). At the final stage, the results were analyzed, the significant subjects and criteria were identified, conclusions were drawn, limitations were noted, and recommendations were made based on the results of the prior phases. In this study, the main criteria were divided into four groups, namely economic, technical, environmental and socio-cultural criteria, which were further divided into sub-criteria and subsidiary criteria. The list was compiled by the authors from the literature and modified according to the opinion of five experts, through adding, removing, or reclassifying items. Super Decisions software, version 2.2.6 Beta, was used to build the model and perform the ANP calculations. As shown in Fig. 4, the first level comprises the main criteria for selecting the most sustainable material for the exterior enclosure; this level is divided into four categories: technical, socio-cultural, economic and environmental criteria. The next two levels present the sub-criteria and subsidiary criteria. The last level holds the three alternatives for the exterior enclosure: brick and mortar wall, aluminum siding, and cedar siding. The dependence relationships among the criteria and clusters were identified from the literature; in addition to these external dependencies, there were internal dependencies between the sub-criteria and subsidiary criteria, based on expert opinion. Consequently, the supermatrix can be specified as follows: in the third step, after entering all the pairwise comparisons into the unweighted supermatrix, the cluster matrix was multiplied with the unweighted supermatrix to reach the weighted supermatrix. Afterwards, the limit supermatrix was obtained by multiplying the weighted supermatrix by itself. Notably, the number in each row of the limit matrix expresses the preference or priority of that element. In the last step, the alternatives were prioritized according to their row values in the limit matrix. In Fig. 2, the raw column contains the numbers from the limit supermatrix and indicates the priorities of the alternatives, and the normalized values are given in the normal column. By dividing the normal values by their maximum, the ideal values were computed, so that the alternatives are represented between 0 and 1 according to their desirability, as shown in Fig. 6. According to the Super Decisions calculation, the most sustainable material is aluminum siding; the second is the brick and mortar wall, while cedar siding is the least sustainable of the three alternatives. However, it is also possible to assess the alternatives against individual sustainability indicators. Next, the priorities of the criteria for selecting sustainable material were reviewed based on the experts' opinion. The numbers in the weighted supermatrix and the limit supermatrix indicate the importance of the criteria, sub-criteria and subsidiary criteria: values from the weighted supermatrix represent the importance of criteria within their own group (they are normalized within their cluster), whereas values in the limit supermatrix show the importance of criteria for achieving the main goal, i.e. selecting the most sustainable material. Table 5 presents the importance of the sub-criteria within their groups and their significance for the goals of the model. The most important criteria were fire resistance, human comfort and health, life expectancy, water resistance and environmental impacts, respectively. There were two sub-criteria in the technical criteria group and three in the environmental criteria group. Although the most important sub-criterion, fire resistance, belongs to the technical group, it also affects human safety during the construction process, which falls under human comfort and health. Finally, the importance of the environmental subsidiary criteria can be compared with each other: the most important subsidiary criteria were human health, energy saving and thermal insulation, safety during the construction process, human health, and global warming, respectively. As indicated, human health and comfort, together with energy consumption and climate change, were of high priority. This study aimed to present a model for choosing sustainable construction materials. The model relies mainly on local experts' opinion in order to reflect internal factors and the existing situation. The model was tested on the choice of exterior enclosure material for a residential building, the most common building type in Tehran; however, it can be extended to any building element, at different sites and for diverse building types, so different priorities and results can reasonably be expected under other conditions. There were some notable limitations in conducting this study. It was challenging to sort the environmental criteria into three groups (energy and resource consumption, human comfort and health, and environmental impacts): for instance, air pollutants, ecological toxicity and ozone depletion, which were placed in the environmental impacts group, certainly affect human health. Owing to quantitative deficiencies, a lack of precise information and the complexity of the relations and interactions involved, these effects were not considered in this study. To evaluate the energy efficiency of a building it is necessary to use simulation software such as EnergyPlus or DOE-2; moreover, annual energy consumption, energy unit cost, interest rate and economic methods such as NPV are needed to calculate the energy cost over a building's operation period. To simplify the computation in this study, however, the thermal resistance of the exterior enclosure was used as the energy cost indicator. In this study the experts were designers and architects. Although designers and architects choose the materials in the design and engineering phases, their choices can be influenced by other stakeholders across the whole life cycle of a project. It is recommended that in future studies the weights and priorities of the criteria be computed based on the viewpoints of other stakeholders, such as civil and structural engineers, owners and clients, and occupants, as well as governmental organizations and legislators. Finally, it would be interesting to examine the possibility of presenting a model for choosing sustainable construction materials that takes all stakeholders' opinions into account.
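To make the supermatrix steps above concrete, the following is a minimal computational sketch, not taken from the study: the pairwise comparison matrix, the weighted supermatrix and the numerical tolerance are invented toy values, and the helper functions (priority_vector, limit_supermatrix) are hypothetical names introduced purely for illustration. Real ANP tools such as Super Decisions also handle cluster weighting and cyclic limit behaviour that this sketch omits.

import numpy as np

def priority_vector(pairwise):
    # Principal eigenvector of a Saaty-style pairwise comparison matrix,
    # normalized to sum to 1; these are the local priorities.
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()

def limit_supermatrix(weighted, tol=1e-9, max_iter=1000):
    # Repeatedly multiply the column-stochastic weighted supermatrix by itself
    # until the entries stop changing; every column of the result then holds
    # the same global priorities.
    m = weighted.copy()
    for _ in range(max_iter):
        nxt = m @ m
        if np.max(np.abs(nxt - m)) < tol:
            return nxt
        m = nxt
    return m

# Toy pairwise comparison of three alternatives on a single criterion.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(priority_vector(A))        # local priorities derived from the comparisons

# Toy 3x3 weighted supermatrix whose columns already sum to 1.
W = np.array([[0.2, 0.5, 0.3],
              [0.5, 0.2, 0.4],
              [0.3, 0.3, 0.3]])
L = limit_supermatrix(W)
print(L[:, 0])                   # converged column = global priorities of the elements

Dividing these limit priorities by their maximum gives the 0-to-1 'ideal' values referred to above.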
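The Cronbach's alpha check mentioned above can be illustrated in the same spirit. The study computed alpha in SPSS on its own questionnaire data; the sketch below uses an invented table of scores (rows = respondents, columns = items) only to show the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

import numpy as np

def cronbach_alpha(scores):
    # scores: 2-D array with one row per respondent and one column per item.
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

toy_scores = np.array([[4, 5, 4, 3],   # invented example data, not the study's survey
                       [3, 4, 3, 3],
                       [5, 5, 4, 4],
                       [2, 3, 2, 2],
                       [4, 4, 5, 4]])
print(cronbach_alpha(toy_scores))  # values of roughly 0.70-0.95 are usually read as acceptable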
One of the most effective strategies for achieving sustainable construction is to select materials that reduce the environmental footprint. In this regard, designers and architects are encouraged to take these considerations into account at the earliest stages of the design process. This study presents a model for choosing the most sustainable material for buildings. Life cycle assessment is used to capture the holistic impacts of materials on the environment, considering all phases of a product's life. Following sustainability principles, the selection criteria were divided into four groups, namely economic, technical, socio-cultural and environmental factors, and each group was assigned a number of sub-criteria. The Analytic Network Process was applied as a multi-criteria decision-making method, allowing for relationships among the criteria and sub-criteria. Priorities of the criteria were computed from experts' consensus, extracted from pairwise comparison questionnaires. To establish the selection model, three alternatives were proposed for the exterior enclosure: brick and mortar wall, aluminum siding, and cedar siding. The results revealed that aluminum siding is the most sustainable alternative, while cedar siding is the least sustainable option. The importance of the criteria and sub-criteria in choosing sustainable materials was also determined through this model.
31,609
Neurobiological Basis of Language Learning Difficulties
Children with developmental language disorders struggle to learn new words and syntactic constructions. Is this a linguistic problem, or do they exhibit difficulties with learning new information more generally? Learning is not a unitary phenomenon, and neuropsychological studies have suggested functional and neurological distinctions between different types of learning. Ullman and Pierpont were the first to suggest that the procedural learning system, which is involved in implicit learning, is impaired in individuals with specific language impairment (SLI). They proposed that procedural impairments could account for poor learning of grammatical rules, such as the past tense inflection of regular verbs. The postulated impairments in procedural learning were not, however, specific to language; they would have broader effects, with deficits predicted in the acquisition of any skill involving sequences, irrespective of whether the sequences were sensorimotor or abstract. By contrast, declarative learning systems, which support the sort of idiosyncratic mapping required to learn new vocabulary or the inflection of irregular verb forms, were argued to be relatively intact. The procedural deficit hypothesis inspired a series of studies examining the non-linguistic procedural learning abilities of children with SLI, typically using a serial reaction time (SRT) paradigm. A meta-analysis of eight studies using an SRT paradigm with children with SLI and age-matched controls revealed a small but significant effect of language impairment on this task, of the order of 0.33 of a standard deviation. Larger effect sizes were found in studies with younger participants, and in SLI learning was impaired more when sequences were long and complex. Similar problems with learning implicit sequences in the SRT task are also seen in younger typically developing children matched on grammatical ability, suggesting that implicit sequence learning in SLI may be immature rather than following an atypical developmental trajectory. The learning abilities of individuals with dyslexia have also been examined using SRT measures, motivated by broader theories suggesting that the automatisation of learning is impaired in this disorder. A meta-analysis of nine studies that used SRT paradigms with individuals with dyslexia revealed a moderate effect of having dyslexia, and indicated that age and sequence type influenced the likelihood of finding a difference between dyslexic and control groups. SRT paradigms emphasise the motor aspects of procedural learning; however, these groups also show learning impairments in non-motor paradigms. Both children with dyslexia and adults with SLI appear to have difficulty extracting structure from novel sequences in artificial grammar learning (AGL) paradigms. These difficulties in making judgements about grammaticality are not related to problems holding information in mind, because deficits are present even when children with dyslexia can accurately recall a training sequence from memory. Other studies have found that children with SLI perform worse than typically developing peers at extracting regularities from speech streams in statistical learning paradigms, although they can extract the relevant information when exposure is doubled. Adults with dyslexia also find it difficult to extract regularities in these statistical learning tests, and their performance correlates with their reading ability. Although these tasks stretch the definition of procedural learning provided by Ullman, there is some indication that the same implicit learning processes are involved. The implicit learning studies described above show that those with language disorders are less able to learn regularities in sequences, even when these are non-linguistic. Difficulties with sequential learning are not confined to the encoding stage: there is emerging evidence that individuals with SLI and dyslexia do not consolidate and retain sequence knowledge as effectively as other children. There is some evidence that these learning deficits pattern with individual differences in grammatical skill, but not vocabulary, in children with SLI. Although the literature reviewed above suggests that children with SLI and dyslexia are impaired in sequential procedural learning tasks, these deficits could simply indicate a generalised learning deficit. In the following we argue that this is not the case, on the basis of studies that have probed declarative learning as well as non-sequential procedural learning. Declarative learning is thought to be an area of relative strength in children with SLI and dyslexia; despite this, relatively few studies have empirically examined declarative learning in these groups. In tasks that involve encoding and retrieving word lists, children with SLI perform poorly relative to age-matched controls. However, individual differences in working memory seem to account for these differences, suggesting that the ability to hold information in mind for short periods of time may be the limiting factor for declarative learning. Another study demonstrated that children with SLI and controls show equivalent non-linguistic paired-associate learning; in addition, it showed that their rate of learning verbal–visual mappings over four sessions is comparable to that of their typically developing peers, although their initial learning of these mappings was more severely affected. Children with dyslexia likewise show learning equivalent to that of age-matched peers on visual–visual paired-associate learning, but perform less well when verbal–visual or verbal–verbal mappings must be made. These results suggest that paired-associate learning is impaired when it requires learning a novel sequence of speech sounds, which taxes the procedural system, but that learning of arbitrary associations, which employs declarative learning, is intact. Further evidence that declarative memory is unimpaired comes from a study reporting that, when an implicit AGL task is made explicit, learning differences are no longer seen in a group of adults with dyslexia. In addition to relative strengths in declarative memory, not all forms of implicit or procedural learning are impaired in individuals with developmental language disorders. Both children and adults with dyslexia showed implicit learning similar to controls in non-sequential contextual cueing tasks. Children with SLI also show learning similar to that of age-matched controls in other non-sequential procedural learning tasks such as the pursuit rotor task, and they do not differ from controls in eyeblink conditioning, which engages corticocerebellar circuits. However, a sequential learning deficit cannot explain all the evidence. Probabilistic category learning tasks, such as the 'weather prediction' task, have also been used to probe procedural learning in these groups. Adults with dyslexia or SLI (although not in all studies) do not acquire implicit categorical knowledge at the same rate as age-matched controls. One possibility is that individuals with language disorders struggle in conditions where the learning dimensions are not explicitly defined. Another is that these learning deficits occur concurrently with core sequence learning difficulties, perhaps owing to impairment in overlapping neural circuits. In summary, individuals with language and literacy disorders have difficulties with procedural learning in sequence-based tasks, but appear relatively unimpaired on declarative and non-sequential procedural learning measures. This may explain why their difficulties are more prominent in language tasks, which load heavily on extracting and producing sequential information. However, no type of learning is purely declarative or procedural in nature, and the ways in which these distinctions apply to language learning in particular need clarification. Language learning involves many different processes, such as extracting implicit knowledge about how sequences of sounds and words combine, learning novel mappings between words and referents, and consolidating learned knowledge to make it readily accessible. We review here how neurobiological learning systems are involved in some of these different aspects of language learning, and how this maps onto conventional knowledge about the roles of these systems. Although we describe differences in the structure and function of the striatum and hippocampus, these structures are connected to each other as well as to the cortex and other subcortical structures, and functional interactions between these regions have been described during learning. Consequently, a change in functional neural activity in one of these regions during language learning does not imply that this region is solely responsible for that type of learning, but rather that this might reflect a local change within a hub of a broader learning network. The extraction and encoding of verbal sequential regularities is particularly relevant to learning the phonology and grammar of a language. These are learned implicitly, and can be considered examples of procedural learning. The frontal cortex and the basal ganglia appear to be relevant to such learning: for example, the left inferior frontal gyrus and the bilateral striatum are recruited for statistical learning of word boundaries in an artificial language, and people with striatal degeneration are impaired at using sequential regularities in artificial speech streams to derive 'morpho-syntactic' rules and 'words'. However, extracting sequential regularities is not purely dependent on the striatum, and declarative memory systems also show some involvement in this process. A study using a Hebb repetition learning task replicated findings of a correlation between striatal activation and learning; nevertheless, multivariate analyses revealed that the hippocampus and medial temporal lobe were coding the identity of repeating sequences. More recently, overlapping spatiotemporal networks that include the auditory cortex and regions considered to be part of the dorsal speech-processing stream, including the striatum and the hippocampus, have been shown to be differentially engaged as people learn to identify 'words' in an artificial language. Similar cortical results have been shown using a natural language task and artificial grammar learning paradigms. These findings indicate that interactions between corticostriatal and corticohippocampal regions occur over the course of learning. Word learning involves mapping a novel sequence of sounds to a referent. Learning arbitrary mappings is a classic 'declarative' task, and there is ample evidence suggesting that the hippocampus is an important region for encoding such
mappings. For example, in an fMRI study examining how adults learn new vocabulary, activity over the left hippocampus and fusiform gyrus declined as associations between pseudowords and pictures were repeated. Other word learning studies have shown that hippocampal activity at the encoding stage relates to whether words are subsequently recalled or recognised. Davis and Gaskell have suggested a two-stage account of word learning, in which rapid initial learning dependent on the hippocampus is followed by a slower consolidation process during which learnt information is transferred to the cortex, particularly superior temporal, inferior frontal and premotor regions. It would be rash to conclude that the hippocampus is necessary and sufficient for word learning. Studies of patient H.M. indicate that residual semantic learning is present despite his bilateral and complete hippocampal lesions. In addition, cases with bilateral hippocampal damage sustained in childhood perform at average to low-average levels on standardised verbal measures, suggesting that semantic learning can rely upon areas adjacent to the hippocampus within the medial temporal lobe (MTL). Furthermore, regions involved in word learning extend beyond the MTL. Recent work shows that creating sound–meaning links also recruits the striatum. The ventral striatum is activated as these links are learned, suggesting a role for reward-based circuitry in learning novel words. The dorsal striatum also responds to feedback in verbal paired-associate tasks, especially when participants believe that the feedback is indicative of achievement. The striatum is also recruited when learning to produce novel words: activity in the striatum decreases as people covertly repeat words in their native language, as well as when they learn words in a non-native language, reflecting articulatory learning of the sequence of sounds, from an initial phase of sequencing novelty to habitual performance of an utterance. This is evidence that both corticostriatal and corticohippocampal networks are involved in word learning, although they seem to be tied to different aspects of this process: corticostriatal networks are responsive to the motor and sequential demands of word learning, with some indication that reward-related circuitry might play a role in sound–meaning mapping. When learning a language, listeners must also learn to group the sounds they hear into the categories relevant in that language. Given that speech sounds are multi-featured and variable, single acoustic features cannot be used to learn these distinctions; learning occurs in a probabilistic fashion and theoretically should involve procedural learning systems. A few studies have explored the brain systems involved in speech category learning. A recent study examining the dynamics of non-native speech category learning in adults showed that this learning is initially associated with activation in both hippocampal and corticostriatal circuits. Across learning trials, participants' behavioural responses indicated a shift from a rule-based strategy to one that is more procedural, and in line with this crossover the corticostriatal system showed increased activation during learning and was associated with better categorisation performance. In summary, domain-general learning mechanisms involving striatal and MTL circuits are also recruited for speech and language learning. Corticostriatal systems are involved when adults learn speech sequences for articulation and when complex regularities in auditory sequences must be extracted. MTL circuits are relevant for learning arbitrary and explicit associations; however, no single speech or language behaviour is associated with corticostriatal or MTL circuits alone, and instead there are interactions between these learning systems as language is learned. Given the difficulties in language learning experienced by children with SLI, we might expect them to exhibit structural or functional differences in neurobiological learning circuits. A simple prediction based on their behavioural profile is that they should show abnormalities in the basal ganglia, while their hippocampi and medial temporal cortices should resemble those of age-matched controls. However, given that SLI and dyslexia are neurodevelopmental language disorders, we might expect the profiles of impairment to change during development. The majority of studies on the brain bases of SLI and dyslexia focus on cortical anatomy, with particular reference to hemispheric asymmetries. The neurobiological literature needs to be interpreted cautiously, however, given the inconsistencies in the direction of results, the small numbers in each group, the heterogeneity in defining the disorder, and the different age ranges used across studies. Bearing these caveats in mind, there is evidence of subcortical abnormalities or atypicalities in individuals with SLI, particularly in the striatum. Studies converge to indicate that the volume of the caudate nucleus is altered in children with SLI relative to typically developing peers. Some studies report a reduction in volume, which would pattern with the bilateral reductions in caudate nucleus volume observed in affected members of the KE family, whereas others have reported increases in caudate nucleus volume. These changes in the directionality of differences might be accounted for by differences in the analysis pipelines across studies. The available literature also indicates that striatal differences are affected by age: early differences in striatal volume between children with SLI and typically developing children appear to normalise by late adolescence, although longitudinal studies are necessary to confirm this point. In contrast to the findings from individuals with SLI, structural differences in the striatum are only inconsistently observed in those with dyslexia. A recent well-powered cross-linguistic study found only one regional difference: reduced grey matter in the left thalamus. With respect to language processing, stimulation studies of the thalamus indicate a 'specific alerting response', which could gate the entry of language information to the perisylvian cortex and is implemented via thalamic connections to the striatum and cortex. The alerting response is thought to accelerate language and memory processes because gating of different cortical networks could allow enhanced encoding and retrieval of specific memories. However, even this structural difference is not observed across all studies. A possible explanation for the lack of consistent structural differences is the behavioural heterogeneity displayed by this group. While phonological skills are thought to be impaired in those with dyslexia, dyslexic readers do not struggle with identical aspects of phonology, and there are children with reading disorders who have unimpaired phonology. In addition, dyslexia results from a combination of multiple risk factors, including phonological problems as well as motor, oral language, and executive functioning deficits. Studies comparing dyslexics to controls may therefore be grouping together individuals with varying aetiologies. Functional studies, however, have indicated that adults with dyslexia show hyperactivation of the striatum. This striatal overactivity was not seen in children with dyslexia, leading the authors to suggest that it may be a compensatory mechanism in adulthood. In line with this, a recent study suggests that children with dyslexia show striatal overactivity when phonological tasks are simple but not when they are complex. Functional studies of children with SLI also report increased activity in the head of the right caudate nucleus during phonological and executive tasks. Striatal changes may not suffice to cause language disorders: in the study by Badcock and colleagues, the unaffected siblings of children with SLI also had significant reductions in the volume of the caudate nucleus relative to typically developing children. It is possible that striatal abnormalities act as a heritable risk factor for language disorders, but that other risk factors are necessary before the disorder manifests. If this is the case, some neurological differences may be protective. A recent structural network analysis showed that the hippocampus, temporal pole, and putamen were less strongly connected in individuals at higher risk for dyslexia relative to those at low risk. Intervention studies in dyslexia suggest that hippocampal volumes are enlarged after training in which behavioural gains are made, suggesting successful compensatory change. Finally, structural and functional differences between children with SLI and controls have been reported in inferior frontal, temporal, and inferior parietal cortex, and in the white matter tracts connecting these regions. These differences suggest that it is important to consider the entire learning system, including regions of the brain that might be involved in the consolidation and storage of linguistic and sequential knowledge. A cautionary note from this area of research is that findings from brain imaging do not consistently replicate. There is a need for well-powered neuroimaging studies to address brain–behaviour relationships in language disorders, allowing us to take into account the heterogeneity of language disorders and their diagnosis. To ensure that fMRI findings are not simply descriptive of a specific sample, we need to test whether they generalise beyond the tested sample, scanner, and stimuli used. Individuals with SLI and dyslexia have difficulties in performing sequential procedural tasks and in learning from feedback, but not in simple mapping tasks or non-sequential implicit learning. In language learning tasks, corticostriatal systems have been shown to be involved in acquiring complex motor routines that are relevant to speech and in learning speech categories from feedback. Given the evidence of abnormalities in the structure and function of corticostriatal systems in developmental language disorders, a plausible bridging hypothesis is that dysfunctions of corticostriatal systems can explain difficulties in learning language. These difficulties are likely to have the greatest impact on aspects of language that involve learning complex rules that are probabilistic and sequential, such as phonotactics and morpho-syntax, but they would also affect the ease with which learned motor skills become habitual. A facet that is currently missing from the literature is that both neurobiological and behavioural studies in these groups suggest that the
influence of corticostriatal learning systems, and their impact on behaviour, changes substantially with age. There is a need for longitudinal studies in this area, to explore the trajectory of corticostriatal dysfunctions during development as well as how these pattern with learning behaviour. Such studies would also help establish whether these learning differences cause language disorders, or whether they are a consequence of them. Corticostriatal dysfunctions have also been noted in psychiatric and other neurodevelopmental disorders, such as schizophrenia, obsessive-compulsive disorder, Tourette's disorder, and attention-deficit/hyperactivity disorder (ADHD). However, different computational models explain the behavioural learning profile in each of these disorders: for instance, dysfunctions of the ventral striatum and orbitofrontal/prefrontal cortices are linked to ADHD, whereas Tourette's disorder is better explained by an imbalance of the direct/indirect pathways. It is not yet clear what distinct corticostriatal circuit dysfunction might distinguish language disorders from these other disorders, which exhibit very different symptomatology. One way to probe the specificity of learning impairments in developmental language disorders is to use learning tasks that are known to pattern with specific brain regions or pathways. Our working hypothesis is that developmental language disorders are more likely to be associated with corticostriatal loops involving the dorsal striatum, and that learning impairments in this group will be more evident when stimulus–response associations rather than state values must be learned. Probing learning in these groups is likely to be helpful for designing better interventions, because different strategies are likely to benefit typically developing children and those with language disorder. For example, studies with typically developing children suggest that greater variability in sentence structure is beneficial for learning syntax, but this variability did not aid children with language disorders. Understanding the nature of learning difficulties in children with language disorders will allow us to design interventions that circumvent these issues, and understanding the neurobiological interactions between learning systems might also offer insight into optimal strategies. Studies of patients with acquired striatal or MTL damage suggest that altering the way information is presented in a task, for instance by changing the timing or valence of feedback, affects learning performance as the relative involvement of striatal and MTL systems changes. We need fMRI studies of children with SLI and dyslexia that use tasks that tap into language learning and that are known to activate striatal or MTL systems; these will be key to understanding whether and how such learning strategies might alter learning outcomes for those with language disorders. Are procedural learning difficulties a cause of language learning difficulties? The alternative explanations are that they co-occur with developmental language disorders, or are a consequence of language disorders. Are procedural learning difficulties specific to language disorders? Procedural learning impairments have been reported in a range of different neurodevelopmental disorders such as autism, ADHD, and Williams syndrome. Is procedural learning particularly vulnerable during development? If this is the case, is there a set of procedural learning difficulties that distinguishes language disorders from other neurodevelopmental disorders? On a related note, what corticostriatal dysfunctions are specific to developmental language disorders? Corticostriatal dysfunctions are observed in psychiatric and other neurological disorders, for example Tourette's disorder, addiction, and Parkinson's disease. What is the best network model to explain the behavioural difficulties faced by children with developmental language disorders? Are abnormalities in the structure and function of corticostriatal systems linked to individual differences in learning? What behavioural measures and brain activities are reliable indices of procedural and declarative learning systems? Are there interactions between learning systems that can be exploited for learning? If so, why do relatively typical hippocampal learning systems not compensate adequately in developmental language disorders? Can the conditions that promote learning in neurotypical individuals be applied to aid those with developmental language disorders, or do the conditions that benefit learning differ? When learning sequential and non-sequential information, do children with language disorders engage different neurobiological learning systems? What makes one system resilient and not the other?
In this paper we highlight why there is a need to examine subcortical learning systems in children with language impairment and dyslexia, rather than focusing solely on cortical areas relevant for language. First, behavioural studies find that children with these neurodevelopmental disorders perform less well than peers on procedural learning tasks that depend on corticostriatal learning circuits. Second, fMRI studies in neurotypical adults implicate corticostriatal and hippocampal systems in language learning. Finally, structural and functional abnormalities are seen in the striatum in children with language disorders. Studying corticostriatal networks in developmental language disorders could offer us insights into their neurobiological basis and elucidate possible modes of compensation for intervention.
31,610
Molecular Investigation of the Ciliate Spirostomum semivirescens, with First Transcriptome and New Geographical Records
The genus Spirostomum Ehrenberg, 1834, currently comprises eight species of ciliates found globally in fresh and brackish water habitats. These single-celled eukaryotes can be found in high abundances, and some species attain body sizes visible to the naked eye, e.g. S. ambiguum. The ciliate S. semivirescens is a large protist with densely packed endosymbiotic green algae that resemble Chlorella. Despite its large size and conspicuous bright green color, it is still largely absent from published global ciliate species lists, with only a few sparse records of the species, which makes it an ideal candidate for investigating biogeography. Neither the algal endosymbiont nor the ciliate host has previously been examined with molecular methods, even though this is an active area of research for other species of ciliates, especially Paramecium bursaria. Different ways of adapting to anoxic environments have been described among ciliate species, and ciliates are also known for their wide diversity of genetic codes, in which stop codons are recoded to be translated into amino acids. To gain insight into how such traits have evolved, large-scale data sets that cover the whole genome content of the species of interest are needed. In this study we generate such data by RNA sequencing at the single-cell level. S. semivirescens was specifically targeted, as it has been missing from earlier examinations of this well-studied genus. In the research presented here, S. semivirescens was isolated from freshwater habitats in the UK and Sweden. Transcriptome data were also generated from another Spirostomum species to complement our investigation of S. semivirescens. The data generated in this study are a necessary piece for an improved understanding of the Spirostomum genus and the whole suborder Heterotricha. Molecular data for S. semivirescens are provided for the first time, along with the first molecular identification of the symbiotic algae associated with this species. The Spirostomum semivirescens found thriving in the UK's anoxic ditch sediments matched exactly the previously described records of occurrence and morphology from a fen pond ∼100 meters away. Densities of up to 15 cells per mL were observed, with cells being maintained in natural samples for one week after collection. When left undisturbed for about one hour the ciliate builds an external case or coating; the ciliate is contractile and retracts into the casing if disturbed, which could provide protection during a dispersal event. S. semivirescens was not observed to form cysts; however, there are records of other Spirostomum species being able to form cyst precursors. Cells were always found to be densely packed with bright green endosymbiotic algae. S. semivirescens from the Swedish study sites was immediately identified in the freshly collected samples from both locations as being morphologically identical to the UK strain and to the additional diagnostic literature. S. semivirescens was found to be 800–1,500 μm in length and 25–45 μm in width, with more than 50 cells measured. Densities of up to 30 cells per mL were observed, but more typically 5 per mL at both locations; each showed productive ciliate concentrations, with green Frontonia reaching up to ∼1,000 per mL, especially in an algal mat sampled in Stadsskogen. The S. semivirescens cells were observed to build a loose casing, to be contractile, and to be always densely packed with endosymbiotic green algae. The casing observed in the Swedish specimens of S. semivirescens was larger and less densely packed than that observed in the UK, perhaps owing to a different composition of available sediments and/or to the length of time the ciliate samples were left undisturbed, allowing them to build a larger protective coat. The samples were collected during a warm period in August 2015, but S. semivirescens was later found to thrive during much colder periods in winter, even being regularly recovered from the habitat under a ∼15 cm thick layer of ice. For all seven transcriptomes a total of 9.3 Gb of sequencing data was generated. Low levels of contamination were indicated by MEGAN, which assigned less than 5% of the contigs as prokaryotic in each assembly. Less than 4% of the contigs were classified as Viridiplantae, despite the high number of algal endosymbionts in S. semivirescens. For 17% of the 23,933 transcripts in the co-assembly, more than 10 reads from each of the six S. semivirescens replicates mapped, and for 49% of the transcripts 10 or more reads from at least three different replicates mapped. Based on this level of consistency between the transcriptomes and the similar relative expression levels of transcripts between replicates, we decided to use the co-assembly in downstream analyses. The phylogenetic analysis of the Spirostomum genus was based on a concatenation of the 18S rRNA gene, the 28S rRNA gene and the internal transcribed spacer (ITS) region between the two rRNA sequences. The tree topology showed that members of the major Spirostomum clades grouped together in the same way as observed in earlier studies. However, the relationships between these clades changed, and S. teres together with S. yahiui, S. dharwarensis and S. caudatum was placed as a sister clade next to S. minus. The regions used to infer the phylogeny differed between the six replicates by only one mismatch in the 18S rRNA region, none in the ITS region and three in the 28S rRNA region. Based on the number of PCR cycles used prior to sequencing, this is in line with what could be expected from polymerase errors; therefore, only one S. semivirescens taxon is placed in the tree. The phylogenetic analysis indicates that S. semivirescens is most closely related to the members of the clade earlier referred to as "minus clade 2". This clade consists of Spirostomum minus and an unnamed species first discovered by Shazib et al. The unnamed species in "minus clade 2" was placed with high support as a sister taxon to the colorless Spirostomum species found during this study, which was not identified prior to sequencing. No algal 18S rRNA gene could be found in any of the transcriptome assemblies, despite the high number of algal endosymbionts in S. semivirescens. It is possible that lysis of the algae was inefficient, leading to poor transcriptome coverage of the endosymbiont. However, in five of the six S. semivirescens transcriptome assemblies a 28S rRNA gene could be found with high identity to Chlorella vulgaris. Transcriptome data for another ciliate that harbors similar endosymbiotic algae, Stentor polymorphus, have been generated for specimens from the same pond in Stadsskogen sampled in this study. If the algae observed in the S. semivirescens transcriptomes were contamination, the same contamination could potentially be observed in the S. polymorphus transcriptome. No 28S rRNA gene identical to the assumed S. semivirescens endosymbiont could be found in the S. polymorphus data.
Instead, another 28S rRNA gene with high identity to Chlorella vulgaris was found. Except for this 28S rRNA gene, no other algae-related rRNA sequence was detected more than once in each transcriptome. Both the Spirostomum and Stentor algal endosymbiont sequences branched together with Chlorella vulgaris in the tree. Members of the Spirostomum genus are often encountered in the oxygen-depleted sediment layers of waterbodies; thriving in these habitats requires the ciliate to be able to respire under anoxic conditions. Therefore, proteins involved in previously described anaerobic respiration pathways were searched for in the S. semivirescens transcriptome to gain better insight into its anaerobic lifestyle. A tblastn search identified a match for the rhodoquinone biosynthesis protein RquA of the bacterium Rhodospirillum rubrum; the match had 86% query coverage and 47% sequence identity. The putative RquA sequence from S. semivirescens carried a 21 amino acid mitochondrial targeting sequence at the N terminus, with a predicted probability of targeting to the mitochondrion of 86%. This is consistent with previous reports showing that eukaryotic RquA has a predicted mitochondrial localization. The assumption that the potential RquA sequence identified in the S. semivirescens transcriptome is indeed a true RquA is further supported by the presence of a 9 amino acid motif. This motif contains glutamine and valine in RquA instead of the aspartate and glycine used by UbiE and UbiG at the corresponding positions; UbiE and UbiG are methyltransferases involved in ubiquinone biosynthesis, which has a high sequence similarity to RquA. Based on the tblastn search, we could not find any evidence for the presence of hydrogenosomes, pyruvate formate lyase activity or dissimilatory nitrate reduction, which have been found in other microbial eukaryotes. The investigation of codon usage showed that TAA, TAG and TGA are all used by S. semivirescens as stop codons. A relationship between gene expression level and TGA frequency could be observed, with TGA more common among genes with low expression: only 3% of the 500 genes with the highest expression had TGA as a stop codon, while 19% of the 500 genes with the lowest expression were terminated with TGA. A similar relationship between TGA stop codon frequency and gene expression level was observed when analyzing the Spirostomum sp. transcriptome generated in this study and the previously generated Stentor polymorphus transcriptome. Specimens of S. semivirescens have been recorded from oxygen-depleted freshwater habitats in the UK before; the isolates used in this study represent a further habitat in the UK and new records for Sweden from two sites separated by ∼30 km. All strains were observed to be morphologically identical. The molecular analysis revealed identical sequences between strains at the highly variable 18S rRNA level, confirming the match between the two sample groups of this large (>1 mm) ciliate. By investigating this species at a wider global resolution, the known geographical distribution of these micro-organisms has been expanded; with a distance of over 1,600 km between the sampling sites investigated in the UK and Sweden, the discovery of strains with matching molecular sequences supports previous findings that microbial species thrive wherever the right conditions for their growth are found globally. This has wider implications for global microbial dispersal, particularly ciliate biogeography and biodiversity, and makes this species a good target for comparative investigations in other world regions. S. semivirescens has thus far been recorded from Germany, the UK, and now Sweden. The Spirostomum minus viride investigated by Foissner and Gschwind in Germany fits the morphological features of S. semivirescens, and the two are probably conspecific. Records from Russia and Japan have also been reported, which demonstrates that a species' known biogeography expands as sampling efforts increase. The phylogenetic relationships found in this study show that the S. ambiguum clade was placed differently compared with previously published phylogenies. When RAxML was used instead of IQ-TREE to calculate the phylogeny, the same topology as in Shazib et al. was obtained, with a bootstrap support of 66 for S. ambiguum together with the S. subtilis branch as sister clade to both groups of S. minus; the bootstrap support for S. subtilis as sister clade to S. ambiguum was 56. Since the IQ-TREE package contains a wider selection of evolutionary models to choose from and is reported to often find topologies with higher likelihoods than RAxML, the bootstrap values from IQ-TREE were mapped onto the Bayesian tree. S. subtilis was placed as the deepest-branching taxon in the Spirostomum genus, as seen before in Boscaro et al. but not in Shazib et al. S. semivirescens could be placed with high support in the Bayesian tree as a close relative of S. minus, which is consistent with the similar morphology of the nuclear apparatus: S. semivirescens and S. minus share the moniliform macronucleus shape. The closest relatives found for the endosymbiotic algae were C. vulgaris and C. variabilis, both reported as endosymbionts in other ciliate species. The TGA frequency was estimated at 11% of the stop codons in S. semivirescens, based on the genes used to investigate the relationship between expression level and stop codon frequency. In another heterotrich, Stentor coeruleus, the TGA frequency is 9%, based on the CDS file available from the online database StentorDB. Swart et al. report 5% and 1% TGA stop codon frequencies for Climacostomum virens and Fabrea salina, respectively; however, these estimates for C. virens and F. salina were based on only 285 and 96 proteins, respectively. Only 38 species out of 283 had a TGA stop codon frequency below 12%, and several of these species could already have had their TGA reassigned, since Swart et al. predicted fewer than 10 TGA stop codons for 11 of these cases. The relatively low TGA frequency among these heterotrichs indicates that TGA termination could carry a higher fitness cost than the other stop codons. There could therefore be a higher fitness gain in replacing the TGA codon in genes with high expression levels than in genes with lower expression levels; such selection pressure could cause the observed bias towards fewer TGA stop codons in highly expressed genes, as seen in S. semivirescens. Since it has been suggested that codon frequency is reduced prior to the reassignment of codons, this raises the question of whether S. semivirescens could be in an early stage of codon reassignment. Close relatives such as Blepharisma have already reassigned the TGA stop codon, Condylostoma magnum can use all three stop codons, including TGA, as both stop and sense codons, and Climacostomum virens has been suggested to be in a transitory state of stop codon reassignment. Given these observations in other heterotrichs, the connection between stop codon reassignment and the expression-level bias of stop codons could be worth further investigation. The relationship between gene expression level and stop codon identity has been investigated before in model organisms, but no relationship was found. The dependence of codon frequency on expression level has mainly been investigated for sense codons and has been observed in eukaryotes, e.g. Schizosaccharomyces pombe, whose stop codon frequencies also correlate with expression level. However, in S. semivirescens the frequency of most sense codons changes only slightly between the 1,000 most highly and the 1,000 most lowly expressed genes, and for some sense codons the frequency is not affected by expression level at all. Interestingly, the TAA frequency, which appears to be shaped by mutational biases, is rather constant in S. semivirescens across expression levels. In S. semivirescens, the decrease in TGA frequency at higher expression levels is accompanied by an increase in TAG frequency, a change that requires two nucleotide substitutions instead of one. We suggest that S. semivirescens uses rhodoquinol-dependent fumarate reduction to respire under anaerobic conditions. This is based on the high sequence identity to RquA from Rhodospirillum rubrum and the presence of the expected motif and mitochondrial targeting sequence. A potential RquA sequence could also be found in the Spirostomum sp. data generated in this study; in both cases the sequence identity to R. rubrum RquA was above 40%, the query coverage above 85%, the RquA motif was present, and the probability of export to the mitochondria was over 70%. Since a putative rquA gene was found in the two Spirostomum species, and this gene has also been reported in several other ciliates from the class Heterotrichea, the whole Spirostomum genus might use this pathway for anaerobic respiration. The heterotrichs formed a monophyletic group in a phylogenetic analysis of the RquA protein from both prokaryotic and eukaryotic species, and the relationships between the heterotrichs in the RquA phylogeny mirrored the topology of a phylogenetic analysis of their respective 18S rRNA genes. Additionally, Stairs et al.
located a potential rquA sequence in the Stentor coeruleus genome generated by Slabodnick et al., which further supports the notion that heterotrichs encode rquA within their genomes. Therefore, we suggest that the rquA genes identified in this study are highly unlikely to be contamination. As more data are generated at the genomic level for different species in the Spirostomum genus, the relationships between the major clades can be resolved. With the rRNA data that are currently available, S. semivirescens can be assigned as the closest relative of S. minus, and the endosymbiotic alga was identified as a member of the Chlorella genus. Insights from the transcriptome suggest that S. semivirescens uses rhodoquinol-dependent fumarate reduction for respiration under anoxic conditions; this pathway is likely also used by the other members of the genus, since it has been observed in other species from the class Heterotrichea that also thrive in anoxic habitats similar to those where S. semivirescens is found. Our observations indicate that S. semivirescens could be in an early stage of codon reassignment. Therefore, S. semivirescens could be a relevant species to study for a better understanding of the evolution of the genetic code. Our results also indicate that it is possible for ciliates with identical morphologies, but from distant geographical areas, to also have identical molecular signatures. Study sites: UK study site. Ciliates were sampled during June 2015 in Dorset, southern England, from a fen pond and from a freshwater ditch, both located on the flood plain of the River Frome. Spirostomum semivirescens had previously been shown to thrive within this area, and the site is known to be a hotspot of ciliate biodiversity, with sampling efforts often revealing S. semivirescens. The fen habitat is densely wooded and dimly lit, with temporary ponds rich in organic sediment. The ditch had similar parameters and was about 100 meters away from the fen. Oxygen levels were very low. The sediment-water interface was sampled using a corked 500 mL caged sample bottle on a line. The cork was pulled by the line once the apparatus had sunk, to allow water and sediment within the desired oxygen-depleted depths to be collected. The area sampled in the fen pond and the ditch had a depth of less than 30 cm. 1 mL subsamples were observed in a Sedgewick Rafter chamber. Many cells were encountered and examined, with densities of up to 15 cells per mL of sediment subsample. S.
semivirescens cells collected from this location were hand-picked under a dissecting microscope using a micropipette and were stored in RNAlater for transport to Uppsala University, Sweden, for transcriptome analyses. cDNA synthesis was performed within three days of removal from the UK sampling site and storage. Sweden study sites. Samples were collected from two freshwater locations during August 2015. Air temperature was 25 °C in full sun, and a water temperature of 18 °C was recorded. The first location investigated was Stadsskogen ("city forest"), an ancient, densely wooded and dimly lit forest area. Within this habitat, a small pond was chosen; a pH of 6.0 was recorded, with a conductivity of 47 μS/cm. Samples collected ranged from ∼30 cm depth to shallow ∼4 cm samples obtained by hand along the shoreline and on submerged algal mats. The second location sampled was a shallow eutrophic farmland pond with dense organic sediment, at "Oxhagen", in full sunlight with some aquatic plant coverage. At this location, a pH of 6.6 was recorded, with a conductivity of 292 μS/cm. Samples were taken from 30 cm deep zones at various areas along the middle and edge of the pond. Sampling methods were identical to the technique used in the UK. Samples were taken back to the laboratory at Uppsala University, with subsamples being analyzed in a 1 mL Sedgewick Rafter chamber. 1 liter of water was taken from the sampling locations for laboratory analysis of the pH using a handheld PW9420 pH meter. To determine conductivity, a Crison conductimeter 522 was also used on the collected samples within 2 hours of collection. Samples were examined within 3 hours of removal, and the ciliates were found to thrive naturally for at least one week in the 500 mL bottles. Both sites were extremely productive for ciliates, with many harboring endosymbiotic algae, such as Stentor polymorphus, Frontonia sp. and Loxodes rostrum. Anaerobic ciliates of the genera Plagiopyla, Metopus and Caenomorpha were present, as the sediment layer was largely oxygen depleted. cDNA generation and sequencing: Both preserved and fresh ciliates were washed twice in double-distilled water before single cells were picked in a 0.4 μL volume into a 0.2 mL PCR tube. cDNA synthesis was done according to the Smart-seq2 protocol. Aliquots were diluted to 0.2 ng/μL based on the dsDNA concentration measured with a Quant-iT PicoGreen dsDNA Assay Kit. The diluted aliquots were prepared for sequencing using the Nextera XT DNA Library Preparation Kit. Two S. semivirescens replicates were sampled from each sampling site, i.e. the fen in Dorset, the pond in Stadsskogen and the pond in Oxhagen. The Spirostomum species lacking algae were sampled in Stadsskogen. For all six S.
semivirescens replicates, sequencing was done on an Illumina MiSeq with 300 base pair, paired-end reads using v3 chemistry. The unidentified Spirostomum species were sequenced on an Illumina HiSeq with 250 base pair, paired-end reads. Transcriptome assembly: Raw reads were trimmed with Trimmomatic v0.35, first removing primer sequences and DNA library preparation related sequences with the setting ILLUMINACLIP:2:30:10; then, in the following order, LEADING:5, TRAILING:5, SLIDINGWINDOW:5:16 and MINLEN:80 were applied. Artificial reads were identified and removed using BLAST v2.2.30+ by a blastn search against the NCBI UniVec database. Transcriptome assembly was carried out with both Trinity v2.2.0 and SPAdes v3.9.0. The SPAdes assembly was done with a k-mer size of 99 and was only used for the phylogenetic analysis, since the rRNA contigs assembled by SPAdes were larger than those in the Trinity assembly. In all other analyses, the Trinity assemblies were used. Full transcriptome alignment to the NCBI nr database was done with DIAMOND v0.8.37 in sensitive blastx mode. The alignment results were analyzed with MEGAN v5.8.3, whose contig assignments were used to estimate the fraction of the data originating from the host, algae or prokaryotes. Identification of anaerobic respiration pathway: Anaerobic respiration proteins previously found in other eukaryotes were searched for in the transcriptomes via tblastn searches. To search for the presence of hydrogenosomes, hydrogenase, pyruvate:ferredoxin oxidoreductase and the maturase proteins HydE, HydF and HydG were used as queries. Both pyruvate formate lyase and the enzyme that activates this protein were searched for to detect pyruvate formate lyase activity. Nitrate reductase, fumarase and RquA were also used as queries to detect other anaerobic pathways. Phylogenetic analysis: The rRNA sequences used in the phylogenies were identified with Barrnap. The ciliate sequences used to infer the phylogeny were gathered by downloading all Spirostomum sequences available in the SILVA database and all sequences generated by Shazib et al. The algal sequences were gathered by using the identified 28S rRNA gene from S.
semivirescens as a seed in a blastn search against the NCBI nt database. CD-HIT v4.6.6 was used to remove identical sequences. Multiple sequence alignments were produced with MAFFT X-INS-i, where the CONTRAfold algorithm was used for pairwise structural alignment. The multiple sequence alignments were manually curated. BMGE was used to trim the curated alignments. Bayesian inference tree topology was calculated with PhyloBayes v1.5a using the CAT + GTR model. Four chains were used, and both trees were run until the maxdiff calculated by the PhyloBayes bpcomp command was below 0.1. Burn-in was selected by monitoring the log likelihood plotted against the generation of trees. For the ciliate tree, 13,000 generations were generated and the burn-in was set to 1,000. For the algal tree, 37,000 generations were generated and the burn-in was set to 1,000. Maximum likelihood trees were calculated with IQ-TREE using the TIM + R2 model for the ciliate and the TN + R3 model for the algae. The model tester in the IQ-TREE package selected the models for the maximum likelihood trees according to the Bayesian Information Criterion. Two long branches that could potentially produce artifacts in the tree topology were removed from both the ciliate and the algal phylogenies. To rule out that the rquA sequences identified in the tblastn search were contamination, we repeated the phylogenetic analysis by Stairs et al. Additional sequences added in this phylogeny were the potential rquA sequences identified in this study and a potential rquA sequence from the transcriptome of Stentor polymorphus. The multiple sequence alignment, done with MAFFT L-INS-i, was trimmed with trimAl, and the tree topology was calculated with IQ-TREE using ultrafast bootstrap approximation with the LG + C50 model, which was selected by the Bayesian Information Criterion. Stop codon usage analysis: To analyze codon usage, all six S.
semivirescens replicates were assembled with Trinity v2.2.0 into a single assembly. To this assembly, raw reads were mapped using Bowtie 2 with the settings "--end-to-end -k 20 -D 20 -R 3 -N 1 -L 20 -i S,1,0.50 -X 1000". Because of the redundancy often created when assembling transcriptomes de novo, the contigs were clustered into transcripts using Corset v1.06. The longest open reading frame from the longest contig in each cluster was then extracted. A blastp search against the NCBI nr database using DIAMOND v0.8.37 was then used to select all contigs with hits to Stentor coeruleus, Paramecium tetraurelia, Oxytricha trifallax, Stylonychia lemnae, Tetrahymena thermophila, Pseudocohnilembus persalinus and Ichthyophthirius multifiliis, in order to discard contamination from downstream analysis. The species used to select contigs for further analysis represented the seven ciliates with the most blast hits. Selecting contigs based on more species would not have changed the outcome of the analysis, since potential additional species had few hits and would in most cases also have a hit to one of the seven mentioned species. The count matrix calculated with Corset during the clustering step was then used to rank the extracted open reading frames based on their expression level. To take the different sequencing depth of each library into consideration, the total number of mapped reads for each library was used to normalize the number of reads mapped to each transcript. These values were then added together for each transcript to rank all the transcripts based on their normalized sum of mapped reads. The statistics for stop codon usage and its relationship to expression level were finally collected based on the transcripts selected by the blast search and the ranking of the normalized sum of mapped reads. Additionally, the transcripts were ranked based on expression level for each of the individual replicates, for comparison with the average, to assess the feasibility of averaging out the noise and the consistency between replicates. The redundancy-reduced co-assembly used in the codon analysis has been deposited in GenBank under the accession GGNT00000000. An assembly for the unidentified Spirostomum species was generated in the same way and deposited in GenBank under the accession GGNU00000000. The first versions of both transcriptome assemblies are described in this paper. The accession number for the raw reads reported in this paper is SRA: SRP145156.
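To illustrate the ranking and codon-counting procedure just described, the following is a minimal Python sketch; it is not the authors' code, and the input file names (orfs.fasta, counts.tsv) and their layout are assumptions made for the example. It takes the longest open reading frame per Corset cluster (assumed to end with its stop codon) and a Corset-style count matrix with one column of mapped-read counts per replicate, normalizes each replicate by its total mapped reads, ranks transcripts by the normalized sum across replicates, and compares stop codon frequencies between the 1,000 lowest- and 1,000 highest-expressed transcripts.

# Minimal sketch (not the published pipeline) of the stop-codon-usage versus
# expression-level analysis. Assumed inputs: orfs.fasta (longest ORF per Corset
# cluster, including the stop codon) and counts.tsv (cluster ID followed by one
# mapped-read count per replicate, with a header line).
from collections import Counter

STOPS = ("TGA", "TAA", "TAG")

def read_fasta(path):
    """Return {sequence_id: sequence} from a FASTA file."""
    seqs, name, parts = {}, None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if name is not None:
                    seqs[name] = "".join(parts)
                name, parts = line[1:].split()[0], []
            elif line:
                parts.append(line.upper())
    if name is not None:
        seqs[name] = "".join(parts)
    return seqs

def read_counts(path):
    """Return {cluster_id: [mapped reads per replicate]} from a tab-separated matrix."""
    counts = {}
    with open(path) as fh:
        fh.readline()  # skip header line
        for line in fh:
            fields = line.split()
            if fields:
                counts[fields[0]] = [float(x) for x in fields[1:]]
    return counts

def normalized_sum(counts):
    """Normalize each replicate by its library size (total mapped reads), then sum."""
    n_reps = len(next(iter(counts.values())))
    lib_sizes = [sum(row[i] for row in counts.values()) or 1.0 for i in range(n_reps)]
    return {cid: sum(row[i] / lib_sizes[i] for i in range(n_reps))
            for cid, row in counts.items()}

def stop_frequencies(orfs, ids):
    """Fraction of each stop codon among the ORFs listed in ids."""
    tally = Counter(orfs[i][-3:] for i in ids if orfs.get(i, "")[-3:] in STOPS)
    total = sum(tally.values()) or 1
    return {codon: round(tally[codon] / total, 3) for codon in STOPS}

if __name__ == "__main__":
    orfs = read_fasta("orfs.fasta")      # hypothetical file name
    counts = read_counts("counts.tsv")   # hypothetical file name
    expression = normalized_sum(counts)
    ranked = sorted(expression, key=expression.get)  # lowest to highest expression
    print("1,000 lowest expressed :", stop_frequencies(orfs, ranked[:1000]))
    print("1,000 highest expressed:", stop_frequencies(orfs, ranked[-1000:]))

Under the pattern reported for S. semivirescens, such a sketch would show a lower TGA fraction in the highest-expressed bin than in the lowest-expressed bin, while the TAA fraction stays comparatively constant.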
The ciliate Spirostomum semivirescens is a large freshwater protist densely packed with endosymbiotic algae and capable of building a protective coating from surrounding particles. The species has rarely been recorded and lacks any molecular investigation. We obtained such data from S. semivirescens isolated in the UK and Sweden. Using single-cell RNA sequencing of isolates from both countries, the transcriptome of S. semivirescens was generated. A phylogenetic analysis identified S. semivirescens as a close relative of S. minus. Additionally, rRNA sequence analysis of the green algal endosymbiont revealed that it is closely related to Chlorella vulgaris. Along with the molecular species identification, an analysis of the ciliates' stop codons was carried out, which revealed a relationship where TGA stop codon frequency decreased with increasing gene expression levels. The observed codon bias suggests that S. semivirescens could be in an early stage of reassigning the TGA stop codon. Analysis of the transcriptome indicates that S. semivirescens potentially uses rhodoquinol-dependent fumarate reduction to respire in the oxygen-depleted habitats where it lives. The data also show that despite large geographical distances (over 1,600 km) between the sampling sites investigated, a morphologically identical species can share an exact molecular signature, suggesting that some ciliate species, even those over 1 mm in size, could have a global biogeographical distribution.
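As a companion to the RquA screening thresholds stated in the discussion above (sequence identity to R. rubrum RquA above 40%, query coverage above 85%, presence of the RquA motif, and a probability of export to mitochondria above 70%), the following is a minimal Python sketch of that filtering step. It is not the published pipeline; the input file candidates.tsv and its column layout are assumptions, with identity and coverage taken from a homology search such as tblastn and the targeting probability from a separately run mitochondrial-targeting predictor.

# Minimal sketch (not the published pipeline) of the putative rquA screening step.
# Assumed input: candidates.tsv with one row per contig and the tab-separated columns
# contig_id, percent_identity, query_coverage, has_motif (0/1), mito_probability.

MIN_IDENTITY = 40.0   # % identity to R. rubrum RquA
MIN_COVERAGE = 85.0   # % query coverage
MIN_MITO_PROB = 0.70  # predicted probability of export to mitochondria

def passes(identity, coverage, has_motif, mito_prob):
    """Apply the screening thresholds used to call a contig a putative rquA."""
    return (identity > MIN_IDENTITY and coverage > MIN_COVERAGE
            and bool(has_motif) and mito_prob > MIN_MITO_PROB)

def screen(path="candidates.tsv"):
    hits = []
    with open(path) as fh:
        fh.readline()  # skip header line
        for line in fh:
            fields = line.split("\t")
            if len(fields) < 5:
                continue
            contig = fields[0]
            identity, coverage = float(fields[1]), float(fields[2])
            has_motif, mito_prob = int(fields[3]), float(fields[4])
            if passes(identity, coverage, has_motif, mito_prob):
                hits.append(contig)
    return hits

if __name__ == "__main__":
    print("putative rquA contigs:", screen())

With the values reported above for the S. semivirescens and Spirostomum sp. candidates (identity above 40%, coverage above 85%, motif present, targeting probability above 70%), both would pass this filter.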
31,611
Establishing life is a calcium-dependent TRiP: Transient receptor potential channels in reproduction
For a few decades, it has been known that fertilization-induced increases in intracellular calcium levels are a 'conditio sine qua non' in the dialogue between spermatozoa and the oocyte in order to orchestrate the development of new life. However, the function of calcium and its regulation throughout subsequent reproductive events like implantation and placental development remains rather enigmatic. After fertilization and extensive cell division, the blastocyst travels through the fallopian tubes and arrives in the uterus, where it will implant in the endometrium. To allow for implantation, the latter will be appropriately prepared by the combined actions of estrogen and progesterone, culminating during the window of implantation. The transition of the endometrium into a receptive state is accompanied by changes in cell morphology, gene expression and upregulation of adhesion molecules. Successful nidation of the embryo in the endometrium is followed by invasion of trophoblast cells through the epithelium and into the stroma, propagating stromal decidualization in humans or inducing it in rodents. Decidualization is the progesterone-dependent differentiation of fibroblast-like endometrial stromal cells into large, secreting decidual cells. The decidua will provide nutrition for the developing embryo prior to placentation. During placental development, trophoblast cells acquire specialized abilities within the range of migration, secretion of hormones and cytokines, and vascular remodeling. The resulting placenta functions as a surrogate for different organs, as it combines a multitude of functions that are separated in the adult. Moreover, it establishes an interface between the maternal and fetal circulation in order for gases and nutrients to be exchanged without evoking an immune response. In addition, the placenta will transport electrolytes like calcium and magnesium to the growing fetus. The transport of these minerals across the placenta was extensively studied and was previously reviewed. Concisely, the transport of magnesium and calcium across the placenta increases exponentially towards term to cope with increasing demands and requires the presence of specialized transporters. However, as in all other cells, the proper functioning of placental trophoblast cells as such depends on the intracellular levels of ions, and more specifically calcium. Calcium is the most pervasive signaling molecule, as it acts as a secondary messenger and regulates many cellular processes like gene transcription. In normal resting conditions, the free intracellular calcium concentration is below 100 nM; however, it can transiently increase to locally reach the micromolar range. In addition, many signaling events occur below the micromolar range, and spatiotemporal concentrations must therefore be tightly regulated. These subcellular concentrations are not regulated by the creation or destruction of calcium ions, but rather by influx and efflux through calcium transport systems, like calcium-permeable ion channels, calcium pumps, calcium-binding proteins and calcium exchange molecules. Ion channels can detect a multitude of signals, transducing them into cellular responses, and are therefore of paramount importance in many physiological processes. Transient Receptor Potential (TRP) channels are a diverse superfamily of ion channels that act as cellular sensors in order to regulate intracellular calcium and magnesium concentrations. Over the last years, knowledge regarding the expression and functionality of TRP channels in
the reproduction process has been growing. However, fundamental research on the role of TRP channels in implantation and placentation is still in its infancy, and their importance is often disregarded. Not only are ions such as calcium and magnesium important for the growing fetus, they could also play a role as secondary messengers to confer the many functions of trophoblast cells, including cell migration, invasion, hormone secretion, glycogenesis, and vascular remodeling. Placentopathies of unknown etiology are thought to be major contributors to common pregnancy complications such as unexplained miscarriage, recurrent pregnancy loss, gestational diabetes, intra-uterine growth restriction, and preeclampsia. Therefore, a thorough understanding of placentation is necessary. This review focuses on the role of calcium during reproduction and summarizes the current knowledge on the involvement of TRP channels during different steps of the reproduction process. During the reproductive cycle, variations in the hormones estrogen and progesterone engender molecular and morphological changes in the ovaries and in the uterine wall. The endometrium, the inner lining of the uterus, is a highly regenerative tissue that consists of two epithelial cell populations, i.e. the luminal epithelium lining the lumen of the uterus and the glandular epithelium that covers the uterine glands, and the endometrial stroma. Estrogen-induced endometrial proliferation during the first half of the reproductive cycle is followed by post-ovulatory progesterone-dependent differentiation, a process called decidualization, which is a prerequisite for successful embryo implantation. Embryo implantation is a multi-step process initiated by apposition of the hatched blastocyst and followed by adhesion, attachment and subsequent invasion of trophoblast cells through the epithelium and into the stroma, propagating stromal decidualization in humans or inducing it in rodents. This maternal response to the implanting embryo is required to overcome the peri-implantation period, during which there is no direct contact with the maternal circulation. Decidualization is the progesterone-dependent process of endometrial remodeling and includes the differentiation of stromal cells into enlarged, round, pseudo-epithelial decidual cells. Decidual cells are characterized by the cytoplasmic accumulation of glycogen and lipid droplets as a source of nutrition for the developing embryo. Hence, before formation of the definitive placenta and the commencement of hemotrophic nutrient supply, i.e.
the stage of development when fetal nutrition involves the direct uptake of nutrients by fetal placental cells from circulating maternal blood, histotrophic nutrient supply through the decidua is the main source of nutrition. The state of the art concerning placental development in humans is limited, with major gaps in knowledge extending from early implantation at day 6–7 to 5–6 weeks of gestation. However, during the last decade, great efforts have been made to improve the understanding of the early implantation process. After initial attachment to the epithelial layer, trophoblast cells differentiate into an outer primitive syncytium, which penetrates deep into the decidualizing stroma by the secretion of lytic enzymes, and an inner cytotrophoblast layer. Maternal blood-filled lacunae appear within the syncytium and will eventually become the intervillous space. In a second phase of placental development, the underlying cytotrophoblast layer will penetrate the outer syncytium, giving rise to the primary villi. By the 4th week of gestation, tertiary chorionic villi are fully developed and are surrounded by two trophoblast layers, filled by mesenchyme and fetal blood vessels. Cytotrophoblast cells continue to proliferate to form the column extravillous trophoblast cells and the cytotrophoblastic shell. At the top of the anchoring villi, the shift from a low- to a high-oxygen environment will induce the differentiation of the column cytotrophoblasts of the villi into invasive extravillous trophoblasts, which cross the decidual border. Via processes known as interstitial and endovascular invasion, the EVTs will invade maternal spiral arterioles or replace the resident endothelial and smooth muscle cells, respectively, resulting in low-resistance conduits. By 10–12 weeks of gestation, the feto-maternal exchange is fully established. Hitherto, fundamental research on placentation in humans has been limited because of ethical and practical considerations. This has prompted the use of rodent models, as they share the placental characteristics of being chorio-allantoic with a discoidal shape and a hemochorial interface, where maternal blood is in direct contact with fetal-derived trophoblasts. Murine placentation starts when mural trophectoderm cells develop into primary trophoblast giant cells and invade the mesometrial side of the endometrium. Meanwhile, continuous proliferation of polar trophectoderm cells at the antimesometrial side gives rise to the extraembryonic ectoderm and the ectoplacental cone. The former will expand to form the chorionic epithelium. At E8.5, the mesoderm-derived allantois will cover the chorion in a process called chorioallantoic fusion, which is followed by branching morphogenesis until the dense structure of the labyrinth is formed. Simultaneously, a layer containing spongiotrophoblast cells that originates from the ectoplacental cone resides between the labyrinth and the maternal decidua, and is demarcated by the outer secondary TGCs. From E13.5 onwards, a new population of glycogen trophoblast cells appears in the junctional zone and increases in number 250-fold by E15.5, while progressively invading the decidua. Eventually, the mature placenta consists of three layers, i.e.
the maternal decidua, the junctional zone, and the labyrinth. During fetal development, >7 million oogonia have developed into primary oocytes. However, from birth, the oocytes are held in meiotic arrest until puberty, after which a small population is triggered to mature each cycle. Growth factors released by the granulosa cells surrounding the oocyte upon high levels of luteinizing hormone will promote oocyte maturation. Although the exact role of calcium in oocyte maturation has been subject to controversy, calcium-deprived oocytes will not complete meiosis I properly. In addition, injection of calcium is a necessary and sufficient condition for meiosis resumption in vitro. L-type calcium channels were shown to be involved in the calcium influx that precedes nuclear maturation. Moreover, the achievement of critical hallmarks that endow the oocyte with the competence to become activated in response to sperm depends on calcium oscillations within the oocyte. Indeed, it is generally accepted that calcium oscillations are of utmost importance in mouse oocyte activation upon fertilization and further development. Interestingly, inhibition or stimulation of calcium signaling did not affect the development of the blastocyst as such, but rather affected implantation, and fetal development and survival, respectively. These findings illustrate the powerful implications of calcium signaling, influencing implantation and even post-implantation development. Although successful fertilization of each oocyte requires proper intracellular calcium signaling, it should be noted that species-specific differences exist. Whereas fertilization in some species, e.g. frogs, fish, sea urchins and others, is followed by a single calcium transient, fertilization in most mammals generates multiple wavelike calcium oscillations. Although calcium is critical for the initiation of cellular motion in spermatozoa, normal swimming behavior does not require an increase in intracellular calcium. However, the ability to successfully fertilize the oocyte does depend on the elevation of intracellular calcium. Indeed, calcium is crucial during spermatogenesis and has proven indispensable for capacitation and subsequent fertilization. Capacitation is the penultimate step in the maturation of spermatozoa that renders them competent for fertilization and involves hyperactivation and subsequent destabilization of the acrosome membrane. Hyperactivated motility enables the sperm to penetrate the egg's protective vestments. The increased motility is caused by an influx of calcium, which induces increased cAMP levels that further enhance sperm motility. In 2001, whole-cell voltage clamp recordings identified a family of calcium-permeable ion channels, the CatSpers, in the sperm tail, which were shown to be important for sperm motility and capacitation. The inhibition of these channels is relieved by high concentrations of progesterone produced by the cumulus cells surrounding the oocyte. Moreover, the penetration of the zona pellucida and the subsequent fusion of the sperm membrane with the egg membrane require an increase in intracellular calcium in the sperm head, governed by the opening of intracellular calcium stores and calcium-permeable channels. Indeed, the IP3 receptor, TRP channels, store-operated channels and voltage-gated calcium channels like Cav3.1, Cav3.2 and Cav3.3 were shown to be present in the sperm head. Once the sperm has penetrated the cumulus cells of the corona radiata, the membrane of the acrosome fuses with the
overlying sperm plasma membrane, an exocytotic process called the acrosome reaction. Consequently, the content of the acrosome is released, including enzymes such as hyaluronidase, as well as calcium, which is necessary for successful sperm-egg coat penetration and fusion. Interestingly, specific calcium channel blockers, like diltiazem, nifedipine and verapamil, can inhibit the acrosomal reaction, highlighting the important role of calcium in this process. Finally, fusion of the sperm and egg membranes is followed by calcium oscillations in the oocyte in order to activate fertilization. The mechanisms and channels underlying these processes are reviewed in detail elsewhere. Emerging evidence has shown the significance of calcium signaling in endometrial epithelial and stromal cells during processes such as embryo implantation and decidualization. Preceding implantation, the blastocyst is stabilized in the uterine lumen by luminal fluid reduction in order to establish a close contact with the endometrium. Chloride and sodium channels, like the CFTR and ENaC channels, respectively, mainly regulate the balance between fluid absorption and secretion. In fact, extracellular ATP could induce Cl− secretion and inhibit Na+ absorption in a calcium-dependent manner. Recently, it was shown that epithelial cells are able to sense the embryo's quality prior to implantation. Brosens et al. reported that exposure of endometrial epithelial cells to medium in which human embryos were cultured induced oscillatory increases in intracellular calcium. Moreover, these responses were more pronounced and disorganized when the culture medium originated from incompetent embryos. These embryo-induced calcium oscillations displayed striking similarities to the oscillations observed upon application of trypsin, a serine protease released by hatched pre-implantation embryos. Short exposure of EECs to trypsin induced COX-2 expression, resulting in the prostaglandin production required for implantation. These findings were in line with a previous study stating that trypsin-induced PGE2 release from EECs was abolished in the presence of an intracellular calcium chelator, BAPTA/AM. These findings suggest that trypsin-induced activation of the epithelial Na+ channel might result in cellular depolarization, inducing a calcium influx via voltage-gated calcium channels. Although the authors provided evidence for a decreased calcium influx upon trypsin application when extracellular calcium was omitted, this control was not performed for the trypsin-induced PGE2 release. Before embryo attachment, the epithelium has to undergo adaptations in order to become receptive for the implanting embryo. This conditioning of epithelial cells for blastocyst attachment involves an extensive reorganization of the apicobasal cell polarity to gain apical adhesion competence and allow cell-to-cell interaction with trophoblast cells, like the expression of adhesion molecules such as integrins. Interestingly, it was shown that mechanical force applied to integrins at the apical pole of RL95-2 cells, a highly receptive EEC line, induced changes in intracellular calcium concentration that were abolished when actin polymerization was halted. Moreover, calcium signaling of RL95-2 cells upon trophoblast contact involves the opening of calcium channels in the membrane, inducing a transient increase in intracellular calcium. In addition, disruption of this cell-to-cell contact also led to a calcium influx that seemed to be governed by receptor-mediated calcium channels,
as evidenced by a reduction in signaling upon application of SKF-96365, an inhibitor of receptor-mediated calcium channels. These findings suggest that embryo attachment induces changes in the cytoskeleton, which coincide with an increase in intracellular calcium concentration. However, SKF-96365 not only inhibits TRPV2, TRPC3/6/7 channels, T-type calcium channels and store-operated calcium entry, it can also induce reverse operation of the Na+/Ca2+ exchanger within the same concentration range, or activate cation influx of unknown origin at higher concentrations in the range of 30 to 100 μM. Furthermore, downregulation of S100A11 or CaBP-d9k, two types of calcium-binding proteins, not only reduced embryo implantation rates in mice, it also had adverse effects on the expression of endometrial receptivity-related factors by affecting calcium uptake and release from intracellular calcium stores. Vice versa, trophoblast responses to the interaction with extracellular matrix components of the receptive uterus are also regulated by intracellular calcium signals. Stromal decidualization is essential for successful implantation and subsequent pregnancy. One of the key mediators of decidualization in humans is cyclic AMP (cAMP), which triggers intracellular signaling pathways and affects diverse downstream molecules. Concisely, decidualization can be induced in stromal cell cultures by the application of cAMP and progesterone for rapid differentiation, or by application of estrogen and progesterone for slow differentiation. Successful decidualization is measured by the secretion of decidual markers like prolactin and insulin-like growth factor binding protein 1 into the culture medium. Interestingly, a reciprocal regulation exists between calcium and cAMP, arguing for a crucial role of calcium in cAMP-dependent decidualization. Earlier reports have shown that calcium negatively regulates decidualization, since the ionophores alamethicin and ionomycin reduced cAMP-stimulated expression of decidual markers. Moreover, a continuous increase of intracellular calcium decreased cAMP concentrations, whereas the calcium channel blocker nifedipine promoted decidual gene expression. However, these in vitro findings are in contrast with the in vivo decidualization results. Unlike in humans, in whom decidualization occurs spontaneously every month in response to increasing progesterone levels during the mid-luteal phase, decidualization in rodents requires the presence of a blastocyst in the lumen. However, the injection of deciduogenic stimuli into an appropriately primed uterus can also induce the formation of a deciduoma. The most frequently used deciduogenic stimuli are concanavalin-A-coated sepharose beads, which bind to epithelial surface glycoproteins and mimic the presence of a blastocyst, or the intraluminal injection of sesame oil. Interestingly, the calcium ionophore A23187 could serve as a deciduogenic stimulus, suggesting that calcium is involved in the decidualization process. Moreover, it was reported that decidualized areas contained significantly higher amounts of intracellular calcium compared to the non-decidualized interjacent areas. Interestingly, simultaneous injection of calcium with concanavalin-A-coated sepharose beads prevented the formation of deciduomata when administered on the third day of pseudopregnancy, whereas this treatment had no effect at day four. These results indicate that premature calcium-mediated modifications are detrimental for embryo implantation and support the idea of a limited
period of receptivity. Later, it was shown that luminal calcium plays an important role in facilitating the induction of decidualization, as the intraluminal application of the calcium channel blockers nifedipine, verapamil, nicardipine and diltiazem reduced decidualization in pseudo-pregnant mice. An important note is that paracrine signaling from epithelial cells to stromal cells was not taken into account in the in vitro studies. Therefore, it might be possible that increasing calcium levels are crucial in epithelial cells to produce PGE2, as shown before, whereas stromal calcium levels are preferentially kept low. Taken together, calcium has proven to play an important role in many aspects of the early reproduction process. However, the paucity of research makes it difficult to formulate decisive answers concerning the exact role of calcium. Indeed, the importance of calcium is primarily based on the effects of nonspecific calcium channel blockers and/or ionophores. Moreover, poor experimental setups, the lack of functional assays and the absence of studies that validate or confirm these findings indicate that conclusions should be drawn carefully. Collectively, additional experimental data are required to identify the calcium-permeable ion channels and other key players in these processes. In the future, tissue-specific genetically modified knockout animals will be necessary to further improve the knowledge of calcium signaling during the decidualization process. Extensive research has been performed concerning the transfer of ions across trophoblast cells and has identified numerous players, including Ca2+-ATPase, the Na/Ca-exchanger, Calbindin D9K and D28K, L- and T-type calcium channels, and the TRPV5 and TRPV6 channels. Interestingly, mice lacking NCX die at E10 due to a smaller and avascular placental labyrinth layer. The presence of these specialized calcium transporters, especially in the basal plasma membrane of the syncytiotrophoblasts, ensures that the fetus is maintained hypercalcemic relative to its mother. Many features of calcium transport in the placenta were previously reviewed in detail. However, apart from calcium being a mineral crucial for proper fetal development, it has become apparent that many functions of the syncytiotrophoblasts are regulated by intracellular calcium, including hormone secretion, nitric oxide production, and transport protein functionality. Understandably, the role of calcium in the human placenta is studied in in vitro settings, e.g.
on placental explants, reconstituted membranes, primary trophoblast cultures derived from villous tissue, or placenta-derived cell lines such as the choriocarcinoma-derived BeWo and JEG-3 cell lines. The maternal part of the placenta, the decidua, synthesizes and secretes prolactin and human chorionic gonadotropin in a calcium-dependent manner. Later, it was suggested that calcium influx through VGCCs is important for hormone secretion; however, other studies have argued against this because of the lack of functional evidence in patch-clamp experiments, and until now no consensus has been reached concerning their existence. Furthermore, the L-type calcium channel blockers diltiazem and verapamil also modestly inhibit TRPV6-mediated calcium uptake at concentrations >10 μM. Moreover, the presence of non-selective cation channels, store-operated and receptor-operated calcium channels in the human placenta has also been described. Nevertheless, the role of calcium in specific cell functions remains mostly unknown. For example, in the liver, conversion of glycogen into glucose is mediated by glycogen phosphorylase, the activation of which is indirectly regulated by calcium release from intracellular stores. However, in glycogen trophoblast cells of the mouse placenta, the accumulation of glycogen granules occupies almost all of the cytoplasm, limiting the space for organelles. Therefore, the question remains whether influx of extracellular calcium through calcium-permeable channels residing in the plasma membrane is necessary for glycogen conversion in trophoblast cells. Transient Receptor Potential channels are a family of cation-conducting channels, almost all of which display low ion selectivity. Their role exceeds classical sensory transduction, as evidenced by their involvement in many non-sensory physiological processes. This versatility is the result of their capacity to convert diverse stimuli into the influx of cations, which in turn can be received as an electrical signal or induce signal transduction. Moreover, TRP channels are indispensable in calcium and magnesium homeostasis. Structurally, TRP channels consist of six transmembrane domains that assemble as tetramers to form a cation-conducting pore. They exhibit voltage-dependent behavior, albeit weak compared to VGCCs, which is regulated by the voltage-sensing domain formed by the S1 to S4 transmembrane segments. The central pore of TRP channels is formed by S5–S6. In mammals, the TRP superfamily contains 28 different members, which can be divided into 6 subgroups based on their sequence homology: ankyrin, vanilloid, canonical, melastatin, polycystin, and mucolipin. TRPA1 is the only member of the ankyrin subfamily and operates as an irritant receptor, since pungent compounds from mustard and garlic, and environmental irritants such as formaldehyde and acrolein, are able to activate the channel. The TRPV family is mostly known for its most renowned member TRPV1, famous for its activation by heat and capsaicin, although TRPV2-4 exhibit high temperature sensitivities as well. Besides being insensitive to heat, TRPV5 and TRPV6 differ from all other TRP channels by their high calcium selectivity. Moreover, their exclusive expression in epithelial cells further suggests an important role in calcium uptake and homeostasis. TRPC channels operate as receptor- or store-operated proteins and can form hetero-multimers within the confines of TRPC1/4/5 or TRPC3/6/7. Although TRPC2 is a pseudogene in humans, it acts as a pheromone detector in
rodents. The TRPM subfamily consists of eight different members. TRPM3 is a calcium-permeable ion channel expressed in sensory neurons that can be activated by the neurosteroid pregnenolone sulphate. TRPM4 and TRPM5 are calcium-activated channels that exclusively permeate monovalent ions. TRPM6 and TRPM7 are unique in the TRP channel family as they contain a kinase domain. While permeable to both calcium and magnesium, their sensitivity to physiological Mg2+-ATP concentrations suggests a role in magnesium homeostasis. However, unlike TRPM7, which is ubiquitously expressed, TRPM6 expression is limited to epithelial cells, where it can regulate magnesium uptake. When co-expressed, the association of TRPM6 with TRPM7 as functional heteromers reduces the tight inhibition of TRPM7 by Mg2+-ATP and renders the complex constitutively active in the presence of physiological Mg2+ concentrations. TRPP2, also referred to as PKD2 or polycystin 2, and the two polycystin-like proteins TRPP3 and TRPP5 belong to the TRPP subgroup. PKD1 is a large membrane receptor with 11 transmembrane domains, and PKD2 functions as a calcium-permeable TRP-like channel that interacts with the C-terminus of PKD1. Mutations in both PKD2 and PKD1 are the cause of autosomal dominant polycystic kidney disease. The mucolipins are most likely restricted to intracellular vesicles, such as lysosomes. Mutations in TRP channels can lead to several hereditary diseases like focal segmental glomerulosclerosis, autosomal dominant metatropic dysplasia, Charcot-Marie-Tooth disease type 2C, mucolipidosis type IV, and hypomagnesaemia with secondary hypocalcaemia. For fertilization to be successful, capacitation of the sperm is paramount. Capacitation involves an increase in intracellular cAMP, which induces tyrosine phosphorylation and eventually results in increased sperm motility in order to fertilize the oocyte. However, high levels of extracellular calcium were shown to negatively modulate the phosphorylation capacity, thereby decreasing capacitation. Therefore, absorption of calcium within the epididymis, in order to establish a luminal calcium gradient from the proximal to the caudal part, is important for proper maturation of the sperm. Mutant male TRPV6D541A mice, in which a single point mutation was inserted in the pore-forming loop to generate a calcium-impermeable channel, showed a reproductive phenotype of hypofertility. Although TRPV6D541A mice exhibited normal copulatory behavior, a negligible number of two-cell stage embryos was observed after mating, and they failed to produce offspring. Interestingly, trpv6 transcripts were not evident in spermatozoa and the autonomous calcium signaling in the spermatozoa was not affected. However, TRPV6 was observed in the apical membrane of the epididymal epithelium, where it was suggested to be important for establishing the luminal calcium gradient. Indeed, a 10-fold increase in the calcium concentration of caudal epididymal fluid was observed in the absence of functional TRPV6, resulting in impaired motility and fertilization capacity of the mutant sperm. Further evidence of TRPV6 expression in rat caudal epididymal epithelial cells was provided by whole-cell patch-clamp experiments showing a constitutively active current with electrophysiological properties similar to those of the wild-type TRPV6 current. Thereafter, it was shown that this single mutation is responsible for the phenotype, as mice in which the ion-conduction pore was completely removed presented similar impairments in sperm motility and viability.
Mature sperm cells require high sensitivity to detect appropriate cues, such as slight changes in temperature, pH, osmotic pressure and the presence of various chemical stimuli, in order to travel through the reproductive tract towards the oocyte. Indeed, these physico-chemical stimuli trigger and regulate intracellular calcium signaling, thereby affecting motility, capacitation and the acrosome reaction of the sperm cells. Since TRP channels can act as cellular sensors to detect environmental stimuli, their endogenous expression in sperm cells was no surprise. Moreover, the thermosensitivity of some members of the TRP channel family renders them ideal candidates to mediate sperm thermotaxis. The endogenous presence of the thermo-sensitive TRPV1 channel has been described in sperm of the freshwater teleost fish Labeo rohita. Activation of TRPV1 by the endogenous activator NADA increased the quality and the duration of sperm movement. Expression of the cold-sensitive TRPM8 channel has been localized in the tail and head of mouse and human sperm. Later, the expression of the cold-sensitive TRPM8 was described in the testis of rats and in sperm cells of vertebrates ranging from fish to higher mammals. In addition, patch clamp recordings in testicular mouse sperm revealed TRPM8 currents upon application of menthol and icilin that could be blocked by TRPM8 inhibitors. Functional assays using sperm from wild-type mice showed that TRPM8 activation significantly reduced the number of sperm cells undergoing the acrosome reaction following capacitation. The effect of TRPM8 during the advanced capacitation process or at the time of zona interaction has, however, not been tested. Moreover, endogenous expression of the osmo-sensitive TRPV4 channel was shown in all vertebrate spermatozoa ranging from fish to mammals. In human sperm, TRPV4 is present as an N-glycosylated protein and activation of the channel induces calcium influx. Using whole-cell voltage-clamp recordings, mRNA experiments and immunocytochemistry in human ejaculated spermatozoa, TRPV4 was identified as responsible for CatSper-independent outwardly rectifying cation currents, which were potentiated upon capacitation. Given the high levels of sodium in the reproductive tract, TRPV4 might be important to provide a sodium influx that is necessary for membrane depolarization as a prerequisite for CatSper activity. Interestingly, the TRPC2 protein was also detected in sperm, and an anti-TRPC2 antibody reduced the sustained calcium response elicited by egg coat proteins in mouse sperm, as well as the zona pellucida-induced acrosome reaction. However, in contrast to the previously mentioned CatSpers, genetic disruption of TRPV1, TRPV4, TRPM8 and TRPC2 does not affect male fertility. This could indicate that the contribution of the calcium influx through these TRP channels in sperm cells is not essential for survival of the sperm cell. Moreover, the correct assessment of a role for TRP channels in processes such as capacitation and the acrosomal reaction is limited by the lack of specific pharmacology and antibodies. TRP channels in oogenesis: Calcium influx is also required for oocyte maturation and egg activation. Currently, not much is known concerning the molecular identities of the calcium-permeable channels that underlie the initiation of embryonic development. Recent studies have suggested a possible role for TRPV3 in mouse oocytes and eggs, since the channel is progressively expressed during oocyte maturation. Interestingly, 2-APB, an agonist of TRPV3, induced
currents that reach a maximal density and activity at metaphase II, the stage of fertilization. Moreover, activation of TRPV3 provokes egg activation by inducing a robust calcium entry. Nevertheless, the promiscuous activation and inhibition profile of 2-APB demands further research to confirm a role for TRPV3 in oocytes. After ovulation, the mature oocyte is transported down the fallopian tube towards the uterine cavity. TRPV4 transcripts and protein have been identified in ciliated epithelial cells of the mouse female reproductive organs, where the channel plays a role in mechanotransduction. TRPV4 channels were found to be stimulated by changes in mucus viscosity, leading to increased intracellular calcium levels. This is required for the autoregulation of the ciliary beat frequency. However, several studies using mice lacking the trpv4 gene did not report any reduced fertility or an apparent defect in reproduction. Finally, outwardly rectifying non-selective currents were described in immature oocytes, mature oocytes and 2-cell stage embryos. The inhibition by NS8593 and extracellular magnesium, and the activation by Naltriben, suggest that TRPM7 homomers might be responsible for these currents. In addition, the presence of the TRPM7 inhibitor affected the development of 2-cell stage embryos to the blastocyst. Genetic ablation of TRPM7 resulted in early embryonic lethality before E7.5 of embryogenesis, indicating that the channel is needed during early implantation events. However, the absence of pups when TRPM7 was deleted in the embryonic cells but not in the extraembryonic visceral endoderm indicated that embryonic lethality resulted from a requirement for TRPM7 in the developing embryo rather than a compromised maternal-fetal transport. Later, it was shown that TRPM7 is essential only for early stages of embryogenesis. However, TRPM7 might be important in calcium signaling in the oocyte, since aberrant calcium signaling can influence implantation and even post-implantation development without affecting blastocyst development. In mammalian biology, the uterus must remain quiescent to accommodate a growing organism in utero, whereafter it has to contract powerfully to expel the mature fetus. It is generally accepted that extracellular calcium is essential for the generation of spontaneous and agonist-induced contractions in both rat and human myometrium. In this light, several studies have investigated the role of TRP channels in the myometrium. In human myometrium, RT-PCR analyses have shown TRPC1, TRPC3, TRPC4, TRPC5, TRPC6 and TRPC7 mRNA, whereas protein expression was shown for TRPC1, TRPC3, 4 and 6 in primary cultured human myometrial smooth muscle cells. In addition, TRPV4 has been found to be of importance in labour. TRPV4 expression is increased throughout gestation and its activation by endogenous ligands like prostaglandin increases myometrial contractility. In animal models of preterm labour, blockade of TRPV4 prolongs pregnancy. Although TRPV4 is expressed in the mouse uterus, TRPV4-null mice seem to reproduce normally. However, whether these mice are resistant to stimuli known to induce preterm labour remains entirely unknown. Thus, TRP channels could play a role in controlling myometrial calcium concentrations and may be important transducers of agonist-mediated signals that increase at the time of parturition and labour. TRPA and TRPV channels - TRPA1 and TRPV1 mRNA and protein expression were shown in the rat uterus, and their expression was upregulated upon treatment with the synthetic estrogen analogue
diethylstilbestrol, but not by 17β-estradiol. Formalin and capsaicin were used to demonstrate the functionality of TRPA1 and TRPV1, respectively, in a small population of isolated endometrial cells. Of note, stromal and epithelial cells were not investigated separately, and appropriate controls during calcium measurements were missing. Moreover, TRPV1 expression was investigated during gestation as an anandamide-binding receptor, revealing a dynamic pattern with maximal expression at midgestation, although immunohistochemistry indicated that TRPV1 expression was mainly found in uterine NK cells. Recently, we investigated the molecular, and when possible the functional, expression of the TRPA, TRPV, TRPC, and TRPM subgroups in human endometrium and compared it to their expression in the murine uterus during an induced menstrual-like cycle and the natural estrous cycle. In contrast with previous findings, the expression of TRPA1 and TRPV1 could not be confirmed at the mRNA or functional level. We did, however, find significant endometrial expression of TRPV2, TRPV4, TRPV6, TRPC1, TRPC3, TRPC4, TRPC6, TRPM4, TRPM6 and TRPM7, some of which showed variation according to the cycle phase, while others showed unaltered expression levels. These findings confirmed the regulation of TRP channels by steroid hormones in the endometrium or in other tissues, and suggested possible hormone-dependent regulation for others. For instance, TRPV2 expression was significantly upregulated during the late luteal phase and menstruation, suggesting its involvement in progesterone-dependent stromal decidualization. In this light, molecular and functional assessment of TRPV2 revealed its exclusive expression in stromal cells, evidenced by a robust calcium influx upon stimulation of primary endometrial stromal cells with the TRPV2 agonist tetrahydrocannabinol in the presence of CB1 and CB2 inhibitors. However, the expression pattern in the murine uterus during the induced menstrual-like cycle was different compared to human, suggesting a more complex regulation of TRPV2 by steroid hormones or growth factors. In contrast, both human and murine TRPV4 expression in the endometrium seems to undergo downregulation during the late luteal/decidual phase, which is consistent with the progesterone receptor-dependent downregulation of TRPV4 in mammary gland epithelial cells in the presence of progesterone. In addition, these findings are in line with previous results showing TRPV4 protein expression in the pregnant and non-pregnant uterus. On a cellular level, the expression of TRPV4 was more abundant in mouse epithelial cells compared to stromal cells, evidenced by increased mRNA levels and an increased number of responding cells upon application of GSK1016790A, a specific TRPV4 agonist. Moreover, the estrogen-dependent expression of TRPV6 was evident in the human endometrial biopsies and in the murine uterus during both the natural estrous and the induced menstrual-like cycle. Furthermore, upregulation of TRPV6 in the human endometrium during the proliferative, estrogen-dominant phase of the menstrual cycle was previously reported at the mRNA and protein level. Accordingly, TRPV6 is expressed in the murine uterus during the estrous cycle, with the highest expression during the estrus phase. In addition, uterine TRPV6 expression during gestation reaches a maximum in the middle of pregnancy and in late pregnancy. On the contrary, rat TRPV6 was upregulated at diestrus and by progesterone supplementation. These in vivo findings of TRPV6 modulation were confirmed at the
cellular level. Indeed, treatment of Ishikawa epithelial cells with estrogen further indicated the estrogen-dependency of TRPV6 expression, which was regulated by the estrogen receptor. TRPV6 was suggested to be a key mediator of calcium uptake during transcellular transport, i.e. calcium that is taken up by TRPV6 at the apical membrane binds intracellularly to CaBP-9k or -28k, whereby it is transported to the basal side and extruded via plasma membrane calcium ATPase 1. The fact that TRPV6 protein was localized at the apical surface of the luminal and glandular epithelium was confirmed in our study by qRT-PCR revealing exclusive expression in epithelial cells. Interestingly, female mice lacking functional TRPV6 are subfertile, reflected by an increased latency to conceive and a smaller litter size. These findings suggest that TRPV6 might play an important role in embryo-epithelium calcium signaling. However, more recent studies did not confirm female infertility and showed that this phenotype was caused by male hypofertility of mice lacking functional TRPV6 or carrying the D541A pore mutant of TRPV6. TRPM channels – TRPM2 expression was found in human endometrial samples with maximal expression in the late secretory phase. In situ hybridization and qPCR indicated TRPM2 expression in both epithelial and stromal cells, although regulation by estrogen was only found in stromal cells. Nevertheless, despite containing an estrogen-responsive gene element, TRPM2 expression was not increased during the estrogen-dominant phases of the menstrual cycle. Later, qPCR studies in rat uterine tissue revealed the highest expression of TRPM2 during the proestrus phase. Accordingly, the expression was significantly upregulated by estrogen in an estrogen receptor-dependent manner. Immunohistochemistry demonstrated accumulation of TRPM2 in the stromal cells but especially in the myometrium. However, we were not able to detect significant TRPM2 expression in either human or mouse endometrium compared to other TRP channels. More precisely, TRPM2 mRNA was found to be higher than TRPV1, TRPA1 or TRPM1 but much lower than TRPM7, indicating a minor role for TRPM2 in reproduction. In addition, no reproductive phenotype was reported for mice lacking TRPM2. Unlike the stable and ubiquitous expression of TRPM7, TRPM6 mRNA was upregulated during the follicular and early luteal phase, and was exclusively expressed in mouse epithelial cells. Our data are consistent with the previously shown positive regulation of TRPM6 mRNA and functionality by 17β-estradiol in the kidney. Recent studies convincingly showed no functional expression of the sensory TRPM channels TRPM3 and TRPM8 in ESC and EEC. TRPC channels - Increased expression of TRPC1 and TRPC6 in human endometrial biopsies during the late luteal phase could suggest an upregulation by progesterone or suppression by high estrogen levels. Nevertheless, these variations were not found in murine uterine tissue. Instead, a comparable expression pattern was observed for all TRPC channels, e.g.
TRPC1, TRPC3, TRPC4 and TRPC6: an increased expression in the E2-dominant phase, which decreased towards a minimum during menstruation. Given the expression of TRPC channels in the myometrium and the fact that the myometrium was not removed from the uterine tissues during the experiments, TRPC expression might decrease as a consequence of the gradual thinning of the myometrium during the decidualization process. With regard to cellular expression, TRPC1 and TRPC4 expression and functionality were more abundant in mouse stromal cells compared to epithelial cells, as shown by application of the agonist Englerin A in fluorimetric calcium imaging experiments. The expression level of TRPC6 mRNA was more pronounced in epithelial cells compared to stromal cells. However, TRPC6 functionality could be found in only a minority of epithelial cells after stimulation with OAG, the synthetic homologue of diacylglycerol. Indeed, calcium imaging experiments in mouse endometrial cells and patch clamp experiments in human stromal cells provided evidence for a robust OAG-induced calcium increase in stromal cells compared to epithelial cells. Although it was previously shown that calcium is an important mediator during decidualization, data on the expression and functionality of TRP channels during this process are very scarce. A single study determined the expression of TRP channels in human endometrial stromal cells before and after decidualization, induced by the application of estrogen and progesterone for 14 days. This study identified the presence of mRNA for TRPC1/4/6, TRPV2/3/4 and TRPM3/7 in stromal cells. Except for the presence of TRPM7, later reports confirmed this expression pattern. Unfortunately, the expression in decidualized stromal cells was only determined for TRPC1, TRPC4 and TRPC6. This revealed an increase in TRPC1 mRNA and protein levels, whereas TRPC4 levels remained stable and TRPC6 levels were upregulated solely by estrogen treatment. Finally, the decidualization-induced upregulation of the markers PRL and IGFBP1 was significantly reduced in stromal cells in which TRPC1 was silenced by siRNA. In conclusion, it was found that an increased calcium influx, mediated by increased TRPC1 expression, was important for the upregulation of PRL and IGFBP1. However, so far, TRPC1 knockout females do not display subfertility or impaired decidualization. The possibility of redundancy with other TRPC channels demands the generation of conditional and inducible knockout animals. Nevertheless, fertility in mice lacking all subtypes of the TRPC subfamily, i.e.
hepta-knockouts, was not affected. TRPV channels – Placental expression of TRPV1 was found at E13, E15 and E18, whereas no protein expression could be observed at E21. In addition, incubation of the placenta with the TRPV1 activator capsaicin increased nitric oxide synthase activity. This TRPV1 expression pattern was later confirmed by others, who reported that mRNA and protein expression in the placenta was significantly decreased at E19 compared with E14. Furthermore, it was shown that TRPV1 expression was confined to the giant trophoblasts of the junctional zone and the syncytiotrophoblasts of the labyrinth zone at E14. Additionally, TRPV1 expression could be found in all cell types at E16, suggesting that during the second half of pregnancy the placenta is more sensitive to endocannabinoid stimulation via TRPV1. Interestingly, TRPV1 was found to be expressed in cell cultures of human cytotrophoblasts and syncytiotrophoblasts isolated from villous tissue. Here, capsaicin and anandamide reduced cytotrophoblast viability and induced morphological alterations suggestive of apoptosis, as these events coincided with loss of mitochondrial membrane potential and an increase in caspase 3/7 activity. Additionally, in vitro cytotrophoblast differentiation into syncytiotrophoblasts was halted by triggering TRPV1 activation. In contrast to the previous studies, the functionality of TRPV1 was assessed by calcium fluorimetry, albeit with supra-physiological doses of capsaicin. However, to date, no reproductive phenotype has been described in mice lacking TRPV1. Although TRPV2 expression is yet to be investigated in the placenta, mice lacking functional TRPV2 have reduced embryo and birth weight and are more susceptible to perinatal lethality. In addition, placental weight at E16.5 and E18.5 was decreased, suggesting a role for TRPV2 in placental development. However, more detailed expression studies of TRPV2 in the placenta are required. The calcium-selective TRPV5 and TRPV6 channels are described to be co-expressed in the human placenta, where they co-localize with Calbindin-D. Additionally, TRPV6 expression levels in the placenta are higher than those in the kidney and small intestine, suggesting a critical role in calcium transport for normal fetal growth. More specifically, TRPV5 and TRPV6 expression is described in cytotrophoblast cells isolated from human term placentas. In addition, their expression correlated with the calcium uptake potential during differentiation into syncytiotrophoblasts, the layer that is responsible for maternal calcium transfer, indicating a role in basal calcium uptake. Further research on TRPV5 and TRPV6 in purified apical and basal syncytiotrophoblast membranes reconstituted in giant liposomes provided evidence for the functional presence of both channels. As it is unlikely that they would participate in calcium transport at the basal membrane, because of the necessity of active transport to overcome the hypercalcemic levels in the fetus, an indirect mechanism of participation in transcellular calcium transport was proposed, e.g.
modulation by parathyroid hormone (PTH). PTH was shown to contribute to placental calcium transfer and trophoblast growth. Although this effect mainly translates into enhanced calcium efflux from the basal membrane, it cannot be excluded that PTH might modulate other calcium transport systems. Indeed, PTH was shown to activate the cAMP-PKA pathway, thereby phosphorylating TRPV5, which increased the open probability of the channel in the kidney. Recently, TRPV6 was suggested to play a critical role in flow-induced calcium influx and microvilli formation in human trophoblastic cells. However, to date, these results are not supported by in vivo evidence. In mice, TRPV6 mRNA was detected in the placenta at E10, primarily in the labyrinth and spongy zone, and moderately at E14 and E17, whereas expression in the amnion increased from E11 to E13, after which it decreased towards term. In contrast, other reports described that placental expression of TRPV6 increased 14-fold during the last 4 days of gestation, correlating with the increased demand for calcium during bone mineralization. In line with these findings, placental TRPV6 expression in the rat steadily increased and peaked at E20.5, and was reported in the labyrinth, the spongy layer and giant cells. Moreover, TRPV6 expression was found to be progesterone and estrogen receptor dependent. Although differences in expression profile are reported, it is clear that the juxtaposition of uterine and placental TRPV6 expression supports its role in maternal-fetal calcium transport. Indeed, TRPV6 knockout fetuses are born with significantly lower calcium concentrations in the fetal blood and amniotic fluid, and the transport activity of radioactive calcium was reduced by 40%. Supporting this idea, the channel was mainly localized in the intraplacental yolk sac, which is important for maternal-fetal calcium transport in mice. Interestingly, the expression of TRPV6 was evaluated in knockout models in which decreased calcium transport was expected. For instance, placental-specific igf2 knockout mice exhibit hypocalcemia and adaptive changes in placental calcium transport. However, placental TRPV6 expression was not altered at E17 and E19, whereas Calbindin D-9k was significantly lower in igf2 knockout placentas at E17, suggesting that not TRPV6 but Calbindin D-9k expression is rate-limiting for fetal calcium transfer in rodents. Nevertheless, TRPV5 and TRPV6 expression was increased in placentas of Calbindin D-9k and D-28k knockout mice. In addition, placental TRPV5 and TRPV6 were upregulated in rat placentas exposed to hypoxic conditions, a model that resembles preeclampsia. To further establish the role of TRPV5 and TRPV6 in the placenta, additional research is required to assess their exact role in the different placental areas, such as the labyrinth, the spongy layer and giant cells. TRPM channels – Although the expression of TRPM2, TRPM4, TRPM6 and TRPM7 was found in mouse or human placentae, only TRPM6 has been closely examined. Mice deficient in TRPM6 show embryonic lethality and neural tube defects; however, the loss of TRPM6 appears to affect different stages of embryogenesis. In line with the most prominent TRPM6 expression at E10.5, most embryos were resorbed by E12.5; however, a few embryos developed to term, although they showed neural tube defects and died before weaning. Ion transport mainly takes place in the labyrinth, where the maternal sinusoids are separated from fetal blood capillaries by two layers of syncytiotrophoblasts. Branching morphogenesis in mice
starts at E8.5 after chorioallantoic fusion, when the SynT layers are still distinguishable. Interestingly, in situ hybridization indicated that the expression of TRPM6 is restricted to SynA-positive cells at E8.5, indicating a function in SynT-I cells. Surprisingly, TRPM6 expression was not detectable in the neural tube but rather in the visceral yolk sac endoderm and the extraembryonic chorion. This expression continues in the mature placenta, as it can only be found in the syncytiotrophoblast of the labyrinth. The fact that magnesium levels were reduced in E9.5 mutants is completely in line with its expression in the cells that are responsible for maternal-fetal exchange. Consequently, embryo turning did not occur in any of the mutant embryos at E9.5, and embryonic lethality started after E10.5, resulting in only a few viable mutants after E14.5. Simultaneously, it was found that TRPM6 mRNA significantly increased after initiation of fetal bone mineralization, as its expression was higher at the end of gestation, and that its expression was confined to the labyrinthine trophoblast. These findings were confirmed at the functional level by calcium fluorimetry and patch-clamp recordings. Altogether, it is suggested that TRPM6, together with TRPV6, is important for the transport of magnesium in syncytiotrophoblast cells of the labyrinth. High expression levels of TRPM7 RNA were described in human placenta. However, TRPM7 is a ubiquitously expressed channel, and genetic ablation of TRPM7 resulted in early embryonic lethality before E7.5 of embryogenesis. As previously mentioned, TRPM7 was essential only for the early stages of embryogenesis, indicating a minor role for TRPM7 in placental functioning. TRPC channels – RT-PCR revealed expression of TRPC1, TRPC3 and TRPC6 in placentas from the first and second trimester or term, whereas TRPC4 was most prominently expressed in second trimester or term placentas. However, TRPC1 expression could not be confirmed at the protein level, whereas TRPC3 and TRPC6 protein expression was evident in term placentas. TRPC3 and TRPC4 were mainly located in the cytotrophoblast cells underlying the syncytiotrophoblasts in first trimester placentas, but not in the microvillous membrane. In contrast, the expression of TRPC3 and TRPC4 in term placentas was confined to the syncytiotrophoblast in the microvillous and basal plasma membranes. TRPC6 was detected in syncytiotrophoblasts of both first trimester and term placentas. These findings were in line with another study reporting placental expression of TRPC1, TRPC4 and TRPC6, but not of TRPC3. TRPP channels – Polycystin 2 expression was shown in term human syncytiotrophoblasts, where it behaves as a nonselective cation channel. Moreover, PC-2 cation channel function was inhibited by reactive oxygen species, suggesting a role in preventing calcium overload in stressed cells in the placenta. Interestingly, the loss of either PKD1 or PKD2 results in embryonic lethality. The expression of PC-1 and PC-2 could be found in placentae of mice from E10.5 to E14.5. Notably, placentae of both pkd1 and pkd2 mutants had impaired branching in the labyrinth, resulting in a disorganized network. PKD1 mutants display polyhydramnios as early as E12.5, followed by systemic edema of the whole embryo at E13.5, which in turn causes embryonic death with no abnormalities in the fetal hearts. These events are secondary to vascular leaks and massive hemorrhage, although the major axial vessels were intact, suggesting that pkd1 does not play a role in vasculogenesis but rather in
angiogenesis. In addition, mutants developed spina bifida occulta at late embryonic stages, and skeletal development was retarded at the newborn stage. From E12.5 onwards, a disorganized fetal arteriole and capillary network was observed in pkd1−/− placentae, resulting in a significantly reduced number of vascular branches. At the maternal-fetal interface, hemorrhage and necrosis with fibrin deposition were observed, although no differences in labyrinthine or placental area were described. As with many placental phenotypes, the effect of pkd1 ablation was background specific, with more severe and earlier abnormalities found in the C57BL/6 background compared with the mixed background. Inactivation of Pkd1 only in the epiblast indicated that the edema and polyhydramnios originate from pkd1 deletion in the fetal part, rather than from placental insufficiency. Nevertheless, mutants derived from this approach showed a variable degree of improvement in placental architecture, and a substantial fraction survived until birth. As pkd1 was deleted in the fetal part of the placenta, comprising the endothelial lining of the labyrinth, and the placental phenotype was only partly rescued, endothelial-specific pkd1 and pkd2 knockout mice were created using the Tie2-cre transgene. The placentas from these crosses showed a phenotype similar to that described for the pkd1 and pkd2 null animals. Although the endothelial mutants developed polyhydramnios, none of the embryos were edematous. Different targeted mutations have been described in the pkd2 gene, all of which result in lethality between E12.5 and term. These findings suggest that PC-1 and PC-2 are required in both the trophoblast and the fetal vasculature compartments of the placenta. Preeclampsia (PE) is a complication occurring during pregnancy that is characterized by gestational hypertension, proteinuria, edema and fetal syndromes such as intrauterine growth restriction. With a prevalence of 7–10% of pregnancies worldwide, it represents a major contributor to maternal and perinatal morbidity and mortality. Although the pathogenesis remains largely unknown, it is clear that the placenta is the direct cause of the disease, as delivery almost completely resolves the symptoms. It is suggested that defective invasion of the spiral arteries by cytotrophoblasts, and consequently improper arterial remodeling into low-resistance vessels, is the initial trigger. The increased arterial resistance and the remaining sensitivity to vasoconstrictive substances lead to placental ischemia and hypertension, and thus induce maternal and fetal complications. Although the use of calcium supplementation to reduce the risk of PE is disputed, it was shown that calcium transport in syncytiotrophoblasts isolated from PE placentas was less efficient than in normal placentas. In addition, this impaired calcium transport originates from decreased calcium entry as a result of decreased mRNA and protein expression of TRPV5 and TRPV6, decreased calcium buffering due to reduced CaBP-9k and -28k expression, and decreased calcium extrusion resulting from diminished PMCA expression. This aberrant calcium transfer in PE might be caused by oxidative stress. In contrast, Yang et al.
reported increased mRNA and protein levels of TRPV6 in both the fetal and the maternal preterm placenta of PE patients, while in the central placenta the expression was unchanged. However, in term placenta, only fetal TRPV6 expression was increased in PE patients. TRPV6 was most abundantly expressed in the cytoplasm of syncytiotrophoblasts in the fetal placenta. In addition, one study reported increased mRNA expression and decreased protein expression of TRPV1 in PE placentas. As TRPV1 was mainly localized in the apical membrane of the syncytiotrophoblast cells, it might play a role in the impaired calcium homeostasis found in PE. In addition, long before placental defects were described in pkd knockout mice, it was shown that patients with autosomal dominant polycystic kidney disease (ADPKD) have an increased frequency of hypertension. Interestingly, in line with the findings documented in Pkd1 knockout mice, recurrent pregnancy loss was suggested to be caused by homozygosity for PKD1 mutations. Although it can be proposed that this is a result of embryonic lethality, patients with ADPKD are often born with reduced birthweight, and this is associated with an earlier onset of end-stage renal disease. Despite an increased occurrence of maternal complications, including eclampsia and hypertension, ADPKD patients showed only a slight trend towards increased rates of fetal complications such as intrauterine growth restriction, prematurity and fetal demise. Magnesium deficiency has been reported to be associated with PE. In a similar approach, the expression of TRPM6 and TRPM7 was investigated in preterm and term placentas of PE and control patients. In fetal, central and maternal placental tissue, the expression of both TRPM6 and TRPM7 was significantly reduced in both preterm and term PE placentas. This alteration was mediated by pre-eclamptic stress. TRPM6 and TRPM7 were mainly localized in the cytoplasm of extra-villous syncytiotrophoblasts, but not in the extra-villous cytotrophoblast or the epithelial cells of intra-villous blood vessels. Evidence from rodent models – Although the extent of trophoblast invasion and arterial remodeling in mice does not reach the myometrium as it does in humans, suggesting that preeclampsia caused by shallow invasion does not occur naturally in rodents, mice have been used to study PE. Hypoxic stress is an important factor in the pathophysiology of PE. In placentas of hypoxic rats, which mimicked clinical manifestations of PE such as hypertension, proteinuria and increased levels of soluble fms-like tyrosine kinase-1, both the mRNA and protein expression of TRPV5 and TRPV6 were significantly increased. These findings are in line with what was described for human PE placentae. Another mouse model of PE is the inhibition of catechol-O-methyltransferase. Although increased serum calcium levels were observed in these mice, the placental expression of TRPV5 or TRPV6 was unaltered. These findings indicate that aberrant expression of TRP channels can be detrimental to the developing placenta and consequently to the fetus. However, knowledge concerning TRP channels in the placenta is still limited and additional research is necessary, especially considering the fact that loss of functional TRP channels has been described in many other channelopathies. Taken together, calcium has proven to play a key role in many aspects of the early steps of the reproductive process. Establishing the correct intracellular calcium concentration is required for male and female fertility and for a successful implantation and
placentation process. However, since dysregulation of intracellular calcium concentrations could affect spermatogenesis or interfere with the hypermotility of sperm cells, calcium-permeable ion channels involved in these processes could be promising targets for the development of novel contraceptives. Calcium is also reported to be involved in important processes such as decidualization and placentation. However, the paucity of research makes it difficult to formulate decisive answers concerning the exact role of calcium in these steps. Future studies may reveal the exact mechanisms whereby calcium influx via TRP channels regulates the decidualization and placentation processes. In recent years, evidence has accumulated indicating that TRP channels are widely expressed in male and female reproductive organs. However, it is surprising that only a limited number of trp gene knockout animals showed an infertility phenotype or prenatal lethality. One might argue that strong functional redundancy exists among the various TRP proteins, which may compensate for the loss of function in knockouts. In addition, further research is required to investigate the specific involvement of TRP channels in pathologies such as intrauterine growth restriction, repeated implantation failure, recurrent pregnancy loss and preeclampsia. However, since functional expression of TRP channels has been shown in the myometrium, they are viable targets for interventions to treat preterm labour in human patients. In conclusion, additional fundamental research is required to establish the role of TRP channels during the early stages of embryo implantation. The Transparency document associated with this article can be found in the online version.
Calcium plays a key role in many different steps of the reproductive process, from germ cell maturation to placental development. However, the exact function and regulation of calcium throughout subsequent reproductive events remains rather enigmatic. Successful pregnancy requires the establishment of a complex dialogue between the implanting embryo and the endometrium. On the one hand, endometrial cells will undergo massive changes to support an implanting embryo, including stromal cell decidualization. On the other hand, trophoblast cells from the trophectoderm surrounding the inner cell mass will differentiate and acquire new functions such as hormone secretion, invasion and migration. The need for calcium in the different gestational processes implies the presence of specialized ion channels to regulate calcium homeostasis. The superfamily of transient receptor potential (TRP) channels is a class of calcium-permeable ion channels that is involved in the transformation of extracellular stimuli into the influx of calcium, inducing and coordinating underlying signaling pathways. Although the necessity of calcium throughout reproduction cannot be denied, the expression and functionality of TRP channels throughout gestation remain elusive. This review provides an overview of the current evidence regarding the expression and function of TRP channels in reproduction.
31,612
The Muillean Gaoithe and the Melin Wynt: Cultural sustainability and community owned wind energy schemes in Gaelic and Welsh speaking communities in the United Kingdom
Community involvement in renewable energy generation and in regulating energy consumption has increased during the course of the past decade as concerns regarding climate change and energy prices intensify. Although large-scale, traditional power plants will continue to have a role to play in energy generation, it is becoming increasingly accepted that decentralised and community-owned projects will also have a role in the future energy mix. Community Energy Projects (CEPs) – energy projects that are partly or fully owned by a recognised community of place or interest – are increasingly seen as a means of creating renewable energy in a sustainable way. Indeed, CEPs are being developed across Europe and globally. Community energy is an umbrella term used for a variety of initiatives managed by communities, including projects that focus on the generation of renewable energy, energy conservation, and the bulk-buying of energy for a community. The national government of the United Kingdom has acknowledged a role for CEPs in the country's future energy generating mix, with the administration of 2011–15 pledging support for the sector through the publication of the Community Energy Strategy. This strategy also recognises, to a degree, the connected benefits of including communities in energy generation schemes, such as the creation of stronger communities, opportunities for skills development and education, and financial benefits. Scholarly research has highlighted the ability of community energy groups to contribute beyond purely energy target measurements, towards economic and social sustainability. Indeed, it is becoming widely recognised that the community energy sector 'incorporates a wider range of sustainability objectives' than merely the production of renewable energy. However, beyond the social and economic benefits of the sector, there has been little, if any, in-depth research or acknowledgement of the ability of community energy to contribute towards cultural benefits and sustainability. Despite the call for more 'human-centred' research methods within energy scholarship, such as in the field of cultural anthropology, specific research incorporating such approaches within the community energy field is rare. Culture is an ambiguous term. It is one of the most difficult terms to define, according to even the most experienced cultural studies academics. It can be socially constructed, and imbued with a plethora of different meanings by different people, and as such it is a 'difficult' phenomenon to study. Academics from anthropology and cultural studies define the term as the thing that completes who we are, 'without which we are "incomplete and unfinished animals"'. If considered as an adjective, culture can include worldviews or ways of knowing and interpreting the world, symbols, assets and institutions. Alignment with the symbolic definition is reached by Murphy and Smith, who use it as an umbrella term to include a people's relationship to place, a language, dialect, the traditions of working the land, religion, history, values and heritage. Soini and Birkeland emphasise that the sustainability of cultural attributes should bear as much value as that given to ecological, economic and social sustainability. Such attributes are seen as the basis for social and economic wellbeing. Culture is a term increasingly incorporated within environmental management domains, although its definitions can be broad and complicated. There are campaigns by the United Nations Educational, Scientific and Cultural Organisation, along
with United Cities and Local Governments to ensure that culture is added as the fourth pillar of the sustainable development model. This call requires national governments to be mindful of such matters within national sustainable development measures by including "a cultural dimension in all public policies". A recent example of this is the Well-being of Future Generations (Wales) Act 2015, which recognises the importance of cultural wellbeing within future sustainability goals, and particularly the Welsh language as a facet of Welsh cultural life. This law recognises how the cultural wellbeing of a society can stimulate wider wellbeing, such as health and social cohesion. This issue is reflected in North American indigenous research that has shown a link between cultural, language and identity revitalisation and its beneficial effects on community and health wellbeing. Similarly, in environmental management spheres, there are increased calls to consider cultural worldviews when managing and developing natural resources. Despite an increased understanding of the need to include culture within the sustainable development model and environmental management practices, cultural sustainability tends to be maligned or excluded from more specific public policies, developments and goals. This is despite the damaging effects that neo-liberalism, capitalism, individualism and globalisation seem to have had on cultural communities. Cultural values and sustainability within the energy sector remain an issue rarely discussed and a research area rarely explored. This is despite growing evidence showing that culture, worldviews, native language and history can play a significant role in the way that communities have shaped concerns and goals in relation to the energy sector. Cultural factors, then, can be decisive in shaping 'preferences about energy resource management decisions and energy use'. In this paper, we bring into focus cultural sustainability within the community energy sector by exploring four CEPs within marginalised, peripheral communities in Scotland and Wales. These case studies will be discussed in relation to peripheral communities elsewhere, with the hope that our findings relating to cultural sustainability in connection with community energy developments will have wider international resonance. The remainder of this paper is divided into four sections, beginning with a more detailed review of the cultural dimensions of energy developments. Although literature focused on community energy and culture is scarce, some concepts can be drawn from previous research that touches upon these issues. Firstly, Murphy argues that cultural attributes are a force that fuels opposition to large energy developments, be they fossil fuel or renewable projects. Energy projects can be resisted by communities who draw on their history and collective identity within an area, as evidenced in research based in an Irish-speaking community in County Mayo, west Ireland. Murphy observed how this community interpreted sustainability in predominantly cultural terms through their identity, language, history, and relationship to fellow residents and place. It was this cultural attachment and desire to protect their cultural heritage that guided residents' opposition to the gas refinery proposed for development in their area. Various other indigenous communities across the globe have challenged similar large energy or infrastructural developments by
drawing on cultural, historical, collective identity and language features. One such example is the recent opposition to oil/tar sands development by First Nation communities in northern Alberta, Canada. This particular campaign against the development of a large-scale fossil fuel extraction scheme saw the indigenous Cree, Chipewyan, Dené and Métis people campaign on the grounds of being the traditional, cultural community of the area affected. Large energy infrastructure has also been more recently opposed amongst First Nation Sioux communities at Standing Rock in North Dakota. The proposed Dakota crude oil pipeline threatened Lake Oahe within the reservation in environmental terms, but also ran through culturally important land for the Sioux people. The rejection of this development on the 4th of December 2016 was partly achieved through the involvement of the indigenous Sami people of north Norway, who persuaded a Norwegian bank invested in the development to pull out. Many cultural rights issues were raised during this period, including the values of and rights to land and resources, as well as re-addressing the historical dispossession of indigenous peoples in the USA. However, this case is likely to be revived following President Trump's Executive Order in January 2017, which grants the developer access to the land to progress with the pipeline. A similar case of opposition guided by cultural identification has been demonstrated on the Isle of Lewis in Scotland. Local residents objected to a privately owned, large-scale, 234-turbine wind development on Mòinteach riabhach Leòdhais – the Brindled Moor of the Isle of Lewis. The abundance of expressive Gàidhlig words that the local communities had for describing the moor, and how the proposed windfarm threatened to eradicate this heritage of words through changing the landscape, fuelled protests against the development. Another example of opposition guided by cultural identification is the case of the Roineabhal mountain on the Isle of Harris, which had been targeted as a potential mining opportunity for road chippings. The battle by local residents against the global mining developers was partly guided by cultural and historical underpinnings, particularly the cultural meaning of the Roineabhal to the local, indigenous community. Similarly, on Ynys Môn in north west Wales, cultural drivers have spurred opposition to the re-development and expansion of a nuclear power plant on the site of Wylfa, particularly its possible impact on the nature of the Welsh language communities of the area. Murphy suggests that there is a historical narrative of loss and dispossession within the Gaelic cultural context, along with a specific, Celtic 'place-based vision of sustainability', that could be fuelling opposition to large corporate interest groups. This notion of loss and dispossession could be a narrative shared by other post-colonial, indigenous and smaller world cultures. Such "indigenous and economically marginal communities" have been recognised as peripheral areas in which unjust energy processes can take place, such as the siting of energy infrastructure. These contemporary energy developments might very well be replicating historical experiences of dispossession and disempowerment imposed on peripheral and indigenous communities. Disempowerment is also reflected in some themes explored within the energy justice literature – research that addresses the integrity of the energy sector and the relationship between those who benefit and lose within that
system. It is argued that the energy sector has distributed benefits unequally through past models – economically, socially, spatially and through policy implementation. Cultural injustices can also take place, as reflected in the literature on justice as recognition – justice in relation to recognising and responding to the needs of different identities and groups within a society, including culturally based identities. It is possible that cultural injustices could occur within low-carbon transitions, as they have 'the potential to distribute…costs and benefits just as unequally as past transitions without governance mindful of distributional justice'. Without mindful governance of justice as recognition, energy transitions could be harmful to particular cultural groups within society. This is particularly the case if there is a lack of understanding of the context and settings within which energy justice issues take place. Apart from fuelling opposition towards large energy developments, cultural underpinnings can also lead to greater sympathy being engendered towards smaller, locally owned projects. The successful opposition to the large windfarm development on Lewis discussed above eventually led to the development of the smaller, community-owned Baile an Truseil wind project on the Galson Estate in the north of Lewis. In contrast to the proposed larger wind development, this smaller community wind farm was perceived by local residents to be a more considerate development, in keeping with the socio-cultural qualities of the area. The ability of cultural attributes to inspire the uptake of community energy can be seen in areas where there is a tangible link between place and people with a history of dispossession. The colonialism and dispossession of territorial and natural resource rights experienced by the First Nation peoples of Canada is an interesting case in point. Their culture, language and traditions, coupled with a historic narrative of loss and dispossession, seem to play an intrinsic guiding role in their increasingly active participation in the development of local renewable energy projects in rural Canada. Similarly, research involving the Navajo Nation in the USA, who have experienced a turbulent history of cultural dispossession and invasive infrastructural developments on their historical land, found that cultural sustainability was intrinsic to their views and values regarding the future of energy development. Further research on the development of community-driven wind and solar projects amongst indigenous peoples in North America also shows that cultural identity can drive the uptake of renewable energy in an integrated manner, capturing the need for cultural revitalisation and human well-being. It could be that culture can inform and inspire the take-up of smaller, less invasive and more just energy projects, along with contributing towards cultural revitalisation. The aim of this paper is to look in depth at the cultural context in which community energy groups have been established and are being established in Wales and Scotland. Through a series of interviews, the research also looks at how community energy, and the income stream that it creates, can be a means of bolstering cultural features amongst communities. The paper also examines how community energy projects contribute towards communities' cultural sustainability. Rather than being a force for opposing development, as discussed by Murphy and Smith and MacFarlane, we are open to the idea that culture could play an important role
in propelling communities to pursue energy projects. The following section outlines the methods and methodology used to gather and explore the empirical data upon which this paper is based. Semi-structured interviews were undertaken across four case sites in rural north west Wales and north west Scotland at the end of 2013. Initial contact had been made through a scoping study period, and further illustrative sampling was carried out through introductions and enquiries via email, phone calls and snowballing. CEPs were chosen where Scottish Gaelic and the Welsh language are still used as spoken community languages. In Wales, the two case sites were 'Ynni Llanaelhaearn' in Pen Llŷn, Gwynedd, and 'Ynni Talybolion' in Llanfechell, Ynys Môn. Llanfechell is a village on the outskirts of the coastal town of Cemaes on the north coast of Ynys Môn, the Isle of Anglesey. According to the 2011 Census, the population of the parish of Mechell was 1293. Llanfechell is based in a rural area in the north of the island, where agriculture is one of the main industries, along with employment in the public sector. Anglesey Aluminium had also been one of the major employers for the north west of the island until its closure in 2009. The Wylfa nuclear plant and its possible replacement, Wylfa B, have also provided work for local islanders. Llanaelhaearn is a village on the eastern arm of the Pen Llŷn peninsula. The last census showed that the ward of Llanaelhaearn consisted of 1683 citizens. In Scotland, the two case studies were Tiree Trust on the Isle of Tiree and Horshader Trust in Siabost, Isle of Lewis. Siabost is a township that comprises north Siabost, new Siabost and south Siabost, and is on the west coast of the Isle of Lewis in the Outer Hebrides of Scotland. Siabost lies to the west of Mòinteach riabhach – the Brindled Moor, described as "several hundred square miles of bog, hag, crag, heather, loch and lochan that make up the interior of Lewis". Siabost is approximately 40 miles from Stornoway, the main town of Lewis and Harris. Unlike a traditional village with a distinctive centre, Siabost is a dispersed township whose households are scattered across a few miles of coastline. North, south and new Siabost have a collective population of approximately 280 people. Eilean Tiriodh, the Isle of Tiree, is the most westerly island of the Inner Hebrides. The 2011 Scottish census reported a 15% fall in population, from 770 to 653, since the 2001 census. National Records of Scotland also supplies projections for possible future population and demographic scenarios in Scotland, and predicts that there will be further depopulation and ageing of the communities of Argyll and Bute, including on the island of Tiree. All four case sites were pursuing or had already constructed a community-owned wind turbine. All four projects had been pursued on the initiative of local residents. The four community energy projects were at different stages of developing their respective community wind turbine projects. However, each project aimed to reach similar goals, most centrally the sustainability and long-term viability of their communities. All four had included the retention and development of their cultural heritage, including language, as a clear project aim. Rural west Scotland and rural west Wales have many cultural similarities. The most obvious of these is that Cymraeg and Gàidhlig are community languages in both areas. These languages are repositories within which historical and cultural practices such as poetry, song and
image are encapsulated. Language was therefore a central cultural feature in each community and became a focal point for this research. Thirty-four semi-structured, in-depth interviews were conducted in November and December 2013 amongst active members of the community energy projects and members who were on the periphery of these projects. Interviews in Wales were conducted in Welsh and have been translated for use in this paper. Pseudonyms have been used, and no distinguishing descriptions of interviewees are given, to ensure participants' anonymity. The analysis method was based on bricolage analysis, which allows for a 'free interplay of techniques during the analysis'. This involves the use of a number of different approaches in order to examine a wider array of aspects which make up the interview, including themes, narrative and content. In contrast to a systematic approach, meaning is constructed through an interaction of analysis techniques. Some codes were data-driven and others were theoretically informed. What follows is a discussion of the insights generated through the in-depth interviews. Emotion towards place – a tangibly strong sense of feeling towards a geographic place, its people, culture, common history and language – was acutely felt amongst many of the interviewees. These feelings included a strong bond with the history of the people, literature, artists and photographers, religious leaders, and the local dialect of a language. There were tangible links back to the early history of local saints, and to even more ancient local standing stones and Iron Age hill forts which were within or in close proximity to each community case site in this research. These communities' history spanned back millennia. These deep roots in the past were a central part of the local culture. Certainly, a sense of being different and peripheral to a mainstream, homogenised, anglicised and globalised British culture was acutely sensed. This sense of place is best described by one of the interviewees, here describing the people of the western side of Lewis: "There are roots there that run intuitively back over centuries…there are people living in the landscape…they can relate themselves to people who lived there hundreds of years ago and know intimately the way you know the back of your hand and the appearance of one side of your face…the history and the folklore of the area they live in. Now there are areas, many other areas in Britain I'm sure where that is the case, but they are increasingly isolated and development and change is eradicating that probably at a continuing if not an increasing pace". Calum's point reflects the concept posed that 'where there is a stronger sense of the public or the common, a more anthropological and moral-political way of understanding culture…is still strong'. The Gàidhlig and Cymraeg languages were found to be of central importance when interviewees were asked to describe their local culture – the first and inimitable cultural symbol used by most of the interviewees. Cultural life was often tied up with the language of the community. Language sustainability was also cited as a reason for pursuing each community energy project. Many references were made to the importance of both languages as bearers of other cultural practices, such as poetry, song, history, depictions of the natural environment and traditions. However, these aspects, considered so distinctive within each community, were depicted as eroding. The impact of newcomers on local language and culture was documented in all four case
sites. English was heard more frequently in local schoolyards, and the language of volunteering was also changing, as committees had to accommodate English-speaking members of the community. All of these factors had a cumulative effect on the stability and normality of Welsh and Scottish Gaelic language use. Culture was also described as a glue that kept a community together and its relationship to a geographical place strong. Despite a desire to retain their knowledge of the past and maintain the bond with traditions passed on through history, this glue, which had kept a community together and its relationship to its place strong, was seemingly wearing away. The erosion of cultural life – such as the weakening of the traditional ceilidhs on Tiree, the tradition of calling on neighbours, diminishing storytelling traditions and the dwindling numbers of Welsh and Scottish Gaelic speakers – was repeatedly touched upon in the interviews. There was a recognition that traditions, and the social bond created by these traditions, were weakening. However, there was also a strong desire to maintain and strengthen these features. This desire to preserve traditions included not only the preservation of language, music and poetry, but also relationships with the natural world, such as the practice of crofting in the Scottish case studies. The natural environment was clearly treasured amongst many interviewees, although, concurrently, there was a belief that communities, and the cultural features that they retained, deserved equal protection and support. It was repeatedly proposed by the interviewees that when speaking about the natural environment it was also necessary to consider the economic, historical and cultural aspects of that environment: "The environment is more than just the physical environment, it's an economic environment, it's a cultural environment; social too, and that's where it's important to look at the environment more widely…it would be a dangerous triumph if somebody saved the surrounding landscape and that nobody would live…that you had dying communities at the foothills of the mountain." Culture formed a core part of a sense of place for many interviewees, and was something that carried an emotional weight in the form of duty – a duty to maintain their unique culture. CEPs that generated a new, sustainable income stream were seen to be a means of taking responsibility for a place: "Well, there's a feeling of duty to carry on particular traditions…there's a feeling of duty to look after the place, for the next generation…the emphasis on staying here and making the place better and…improve your own place, and that's what we're trying to do…". The case study sites had correspondingly strong bonds to their cultural history and traditions. These communities should not, however, be over-simplified as marginal, picturesque communities absorbed only with some remnants of 'quaint' cultural practices, history and language. They are areas where real human challenges are being faced. Poverty, perceived ineffective governance and threats to service provision were also a concern. Added to these are additional threats towards their cultural, historical, linguistic and place identities. These threats were compounded by what was described as a rapidly homogenising world. It was perceived that deep-rooted, local cultures were being discarded in order to participate in a new international culture which was, according to Robert from Tiree, "not a bad thing. But…you lose as well as you gain in that choice". Advancement of a
mainstream cultural homogeneity could result in cultural poverty and a disconnection from local 'psychohistory' – the knowledge of one's own history. However, the communities under study in this project saw each of their CEPs, and the potential income stream that could be generated, particularly with the higher Feed-in Tariff rates prior to 2015, as a means of strengthening their indigenous communities. It was believed that there were economic benefits arising from renewable energy project ownership that could contribute to the sustainability of the cultural features of all four communities. The need to strengthen the local economy in order to fortify local cultural aspects was a common goal across the case sites. There was also hope that the social benefits of gathering the community for a common goal, such as cutting peats communally and other 'old' traditions, could be replicated in a new form. There were communitarian benefits as well as more tangible economic benefits. However, there was no desire for a regression towards an 'old' way of life, as Walter and Gladys articulate below: Walter: People don't want to live in a theme park either. Gladys: Or a museum. Walter: They want to be able to live here and now. Striking a balance between cultural sustainability and a liveable economic structure was the intended goal amongst the four case studies. Economic uncertainty threatened the cultural aspects that were of such importance to the interviewees. The link between the two was acutely felt, and was at the heart of each community renewables project's strategy. In the same way that the granite quarries, the tweed mills and agriculture had supported communities of the past in terms of employment, community-led economic development was perceived as being able to create work and provide an economic seedbed for cultural life to thrive. Each respective community wind turbine scheme was seen within this context – as a means of developing a local economy, creating local jobs and allowing for a more prosperous future both socially and culturally. Despite uncertainty about the cultural future of their communities, there was a growing conviction that such aspects ought to be protected, and that CEPs could contribute towards this goal. Inward investment through the wind turbine on Tiree, for example, was seen to make the island a more attractive place to live, subsequently having a beneficial effect on the culture of the island: "If it helps keep people here and not leave, then by default, it's supporting the culture… it's a wider benefit to the culture by making sure that we don't get any smaller and any weaker, or any more fragile." Creating a community income stream allowed groups to provide finance for certain cultural events, community services, job creation and other project development. These included the development of traditional music events, language courses, projects to develop traditional skills, allotments, community museums, nurseries, historical events, a community swimming pool, and community parks. Each community foresaw the capacity to employ project officers to deliver these activities through their new income stream and additional match funding. This was a clear aim for the Talybolion energy project in Llanfechell, which, despite not being able to invest in such activities at the time of interview, intended to do so with its projected income: "…we wanted to keep the values and the cultural pattern that are in this vale. We felt that we could do that if we had our
own income rather than depending on other people." Tiree already had examples of how their new income stream from the wind turbine, distributed through what the Tiree Trust had called the 'Windfall Fund', was being used for cultural stimulation. One such example was donating funds towards the annual Tiree Music Festival, which had been bringing hundreds of people to the small island and "putting Tiree on the map…helping bring people here which is then helping the tourist industry and income stream to the whole island". Another group that benefited from the 'Windfall Fund' was the Tiree and Coll Gaelic Partnership, a charity group that specifically worked on the development of the Gaelic language, historical knowledge and archives on the isles of Tiree and neighbouring Coll: "I think if it hadn't been for the Windfall Fund… would have…gradually ended up tired… it's made a difference between viable and disintegration and when it comes to sort of heritage infrastructure and sort of producing employment for lovely bright young Tiree people…it's a fantastic energy boost to the economy and the… I think the energy of the community." Similarly, the Fèis Thiriodh, a Tiree-based group teaching and learning traditional Scottish and Tiree Gaelic music, received funding from the 'Windfall Fund' to promote 'ar ceòl, ar cànan 's ar dualchas' – our music, our language and our culture. The 'Windfall Fund' also part-funded the post of a staff member at the Tiree Trust responsible for developing cultural projects on the island. Such was the case in Siabost, and the hope in Llanfechell and Llanaelhaearn. Other projects that the 'Windfall Fund' supported on Tiree included a local drama group that had developed Gaelic language performances, a community tapestry project depicting the history of the island, and funding for the Tiree Maritime Trust. Money was given to build a boat house to store the traditional lug boats used on the island, and to retell the maritime traditions of the island: "… do little training courses every now and then on how to restore boats and things like that so…apart from that fact…it's built an asset for the community, a physical asset for the community. It's also helping to promote the culture and heritage side of the sailing on Tiree." There certainly seemed to be more confidence on Tiree due to the new income stream and how it could contribute to the protection and promotion of cultural aspects on the island, particularly for An Iodhlann, the historical centre: "Well it just makes it all a bit more positive doesn't it… knowing that there's this huge pot of money – it will be once the loan's paid off – that all community groups can apply to…to keep them going, instead of everyone having to worry about, oh, where's the money going to come after fund-raising…it's a much, much more positive thing and that…makes you plan more for positive projects that you want to do with your community group…because we know we're in a secure position, where we're not going to have to worry next year about whether we'll be open or not…". Horshader Trust in Siabost, which will be managing the money generated from their community turbine, was already supporting cultural projects. The Tormod an t-Seòladair project developed knowledge about glass plate negatives taken by Dr.
Norman Morrison – a native of Siabost who had used local people as his subjects for photographic negatives taken in the early 20th century. Although not funded by Horshader, in-kind contributions were given, and a tangible desire to pursue similar cultural programmes and projects in future was clear. Using income generated by the wind turbine to build upon the success of existing historical groups, there was a desire for the cultural aspects of the past to be imparted from older generations to younger generations, and a feeling that there was a need to "repatriate these things". Specifically, Horshader hoped that they could fund a museum project in the area to facilitate this repatriation of history and culture for local people and visitors to the area: "The idea is to restore that museum and that really does… brings back to life if you like, the crofting … as well as the language as well, so that's a…that I hope will be supported by the turbine project". Community energy was seen as a way of developing community facilities and amenities, contributing towards turning the tide on depopulation patterns, and thereby allowing local cultural practices a seedbed in which to thrive. Similarly, the community wind turbine projects underway in Scotland, and particularly the activities that these projects could fund, were seen as a way of encouraging more opportunities for people to socialise and come back into contact with each other, and thus again encourage the resilience of traditional cultural activities. Although the Welsh case sites' community energy projects were not operational and generating an income stream at the time of interviewing, there were already many ideas about how the money generated by their proposed wind turbines could be used towards cultural sustainability. In Llanaelhaearn, this had already been the remit of the community cooperative 'Antur Aelhaearn'. Established in 1974, the cooperative aimed to protect and develop the area as a culturally strong Welsh and Welsh-speaking region and to instil a sense of local confidence. The wind turbine project was seen as being of vital importance in the continuation of this vision: "That's why I think the work with this turbine is important and I think that it gives a chance for us to do things that would help in relation to keeping the language…the heritage you know – all kinds of things that we could do with it to help…". Plans to help the community included developing a heritage centre in the chapel building the cooperative had acquired. The heritage centre would include information on local historical and cultural figures, including a section for interpretation of the Tre Ceiri site – the Iron Age hill fort above Llanaelhaearn. The museum was also a cultural focus that could showcase the musical heritage of the area, and a hub for developing language classes. There were also plans to develop a nursery in the village, retaining young families and attracting others – again, seen as a way of bolstering cultural attributes within the community. Llanfechell also had plans to ensure that the cultural heritage of the area would be protected through its community energy project, a vision that was included within its memorandum: "One of the objectives is… 'to utilise revenue to support assistance and development of the linguistic and cultural education and heritage of the communities of Mechell and Llanbadrig'…it's…very important to have that clause in…that Welsh cultural realities would be a prominent part of the thinking". Here too, there was a desire to
develop a community historical hub, musical events and language activities, as well as to develop their allotment site and buy a community shop, partly in order to stimulate their cultural heritage. However, interviewees across the case sites also indicated that it would be disingenuous to presume that a wind turbine alone could 'save' a culture. The struggle to preserve small cultures against the perceived homogenising effects of globalisation is considerable, as highlighted below: "…the forces of…cultural homogenisation are not just felt on Tiree. These are very strong forces…technology has shrunk the world and homogenised the world…and I think you can have a million community turbines but I don't know can compete with that." Nevertheless, there was a clear will amongst interviewees that their community energy project would contribute somewhat towards the cultural growth and sustainability of their communities. Bolstering economic structures was imperative for this aim. The development of the Welsh and Scottish Gaelic languages was also central to how these communities framed their cultural sustainability. Language was repeatedly used to illustrate cultural distinctiveness amongst the interviewees. It was predicted by interviewees that by strengthening the local economy the language in turn would be strengthened, as a strong local economy would allow local people to stay rather than move away, thereby preserving the language amongst community members in both Wales and Scotland as, "…you're allowing Gaelic speakers to stay and use their Gaelic – it keeps it alive…". Tiree was already providing practical support for language initiatives. On Tiree, projects that were supporting language sustainability were seen in the same light as other sustainability measures: "…various criteria that we want the projects to hit and it's – involving young folk, involving Gaelic, involving sustainable environmental things, involving old people, and…it's just a scoring thing that we have…". Indeed, the subsidising of Ulpan courses via the Windfall Fund made courses more affordable for locals on Tiree. Funds were also used to contribute towards the employment costs of a culture officer, now also trained as an Ulpan tutor. Although the connection was not conspicuous at first glance, the turbine was in fact contributing towards supporting the language on Tiree: "You don't see the connection between the turbine there and supporting Gaelic on the island, but that's what it's doing. It's doing it indirectly by being able to fund that project that makes it easier for people that are resident to access courses." Developing the Ulpan courses on the island also had the benefit of attracting further funding. The local Argyll council had shown an interest in sending staff to learn Gaelic on the island. There was also potential for Tiree to develop into a language-learning hub, a vision included in Tiree Trust's Community Growth Plan. A development of this sort was viewed as being able to provide a new economic benefit for the island, as well as encouraging more uptake of the language locally: "…we're now running a project to have Ulpan courses on Tiree so hopefully it could turn Tiree into a bit of a hub for Gaelic learning…that's our long-term plan. As of early next year, we'll be running parent classes for locals…that essentially would lead to a nine-week residential course that hopefully we'll be advertising internationally, so that's the grand plan." However, there remained difficulties in inspiring residents to engage with
language learning itself on Tiree, and money on its own would not be a panacea for language revival: “I mean the problem…is getting people wanting to go to it, ‘cause there is…sort of a large investment in learning a language…I think many people living on Tiree today would say…it’s not worth it…that’s what seems to be the calculation that people are making, whether you’re putting ten or a hundred thousand pounds into that project, that doesn’t make a huge difference. So…it certainly, it’s a positive influence, but it probably needs more than just money. Unless you can conceivably drag Tiree a hundred and fifty miles north – which would be good!” Support was also offered through Horshader in Siabost for groups that were focused on Gaelic language activities. Supporting the language and cultural heritage was a part of their criteria. As Siabost is considered a stronghold of the Gaelic language, it was suggested that the area could benefit from further Gaelic language developments. There was certainly an appetite amongst the interviewees for investment to be made in the Gaelic language, even in practical terms in the running of the project. The language was already being used in Horshader’s offices by the Development Officer, allowing local people to feel comfortable in communicating ideas about developing the area: “I can speak to them in both languages, the elderly like that….so I think it’s easier, I think it’s definitely easier. I think it’s easier for them to also say to me what kind of projects they want…and to speak in both languages.” Support was also planned for developments in the case studies in Wales, despite these not having yet reached the development phases of the examples in Scotland. Firstly, in Llanaelhaearn, Antur Aelhaearn had already contracted a language impact study to show the possible benefits that ownership of a community wind project could entail for the Welsh language. Interviewees believed that the community turbine would contribute towards strengthening the language: “…this is a chance to strengthen the language locally. Certainly, it won’t weaken her and…there’s a chance for the wider strategy to strengthen the language and her foundations, and keep her for many years hopefully.” The idea of funding free Welsh lessons for local people in Llanaelhaearn was mentioned as a direct means of strengthening the language. Many interviewees saw the potential for their community energy projects to contribute towards funding such a venture and thereby support the development and sustainability of their language. This was seen in a wider context of ensuring the community’s economic and social sustainability as a whole. This was also the desire in Llanfechell, which had enshrined language development within its memorandum. These goals reflect the hopes that were held in the Scottish examples. However, it was argued, in all case sites, that before addressing the issue of language protection, community stability had to be achieved. A solid bedrock was needed for the language to develop, as alluded to in the excerpt below: “I think that…you have to build a real community with a real life before you can address the issue of the language in a meaningful way.” Economic stability was seen as the essential foundation that was needed for these communities’ culture, and attached languages, to thrive. In each case site, although their community wind turbines were not considered a panacea, they were seen as meaningful and potentially positive contributors towards cultural
sustainability. The cultural underpinnings of each community under study were of significant importance and value for interviewees. Ensuring a viable future for these cultural traditions, be it language use, traditional practices, repatriation of historical knowledge or reclaiming the relationship between people and land, was considered an imperative. We build upon past community energy research that suggests that CEPs contribute beyond purely energy target measurements, i.e. towards economic and social sustainability. We propose that some CEPs also contribute, or aim to contribute, towards long-term cultural sustainability, as evidenced through the case studies above. It has been argued that the effects of neoliberalism and globalisation have had particularly harmful effects on place attributes such as culture, language, tradition, history, memory and community. This applies not only to the cultures under study here, but also to other indigenous communities across the globe, particularly in North America and Canada. The homogenising effects of these phenomena have been depicted by the case site interviewees, with descriptive analogies of how their communities are changing, local cultural attributes are abandoned, and socialising is becoming rarer in the face of modernity. There are fewer opportunities for communities to come together and create social bonds that can bolster local cultural activities and sustain local attributes such as language use. CEPs, however, seem to present a way of re-kindling some of these social and cultural bonds by offering an opportunity for communities to gather once again for a shared aim, and to create objectives that include the strengthening of local cultural attributes, along with providing a new reason for community members to socialise. Although inspiring engagement is a particularly modern challenge in the face of increasing individuality, community energy is perceived as offering an opportunity to turn the tide on this trend. Furthermore, community energy is perceived as being counter to the history of cultural injustice, dispossession and exploitation experienced at the hands of past large infrastructural energy projects. Rather than being ‘economically marginal communities’ where unjust distributional energy processes take place, peripheral, rural and culturally distinct communities, as illustrated by the case sites in this research, have become the owners and developers of their own local energy projects. CEPs, as extolled by the interviewees across the four case sites in this study, can also invest new income streams into cultural activities such as local language courses, events, and even employment opportunities for local people. These activities combine to create a more resilient community with strengthened facilities and services that encourage people to remain, return or move to the area, which in turn could contribute towards the flourishing of cultural practices and traditions. Community-owned renewable energy projects have been acknowledged as allowing communities to benefit from ‘natural resource wealth gains while simultaneously facilitating holistically sustainable development’. This has been evidenced in this paper. Cultural sustainability was considered by the interviewees to be of as much value as ecological, economic and social sustainability, and was a clear driver and aim for all projects. This acknowledgement of the value of cultural sustainability and justice at community level mirrors efforts in the global policy arena to ensure that
culture is added as the fourth pillar of the sustainable development model. Already, projects in Scotland are investing in initiatives that lead to cultural and language sustainability both directly and indirectly. Language and cultural sustainability are central to the Welsh case sites, and are one of the factors that have driven the projects to develop. Language threat was also a reason for pursuing CEPs. This paper shows that rather than culture being a force for opposing energy developments, it can also be a force that drives communities to develop their own indigenous projects. Culture can be decisive in shaping ‘preferences’ within the energy sector, and lead to the uptake of indigenous, culturally sensitive natural resource use and renewable energy projects. Further research could also reveal the importance of culture to CEPs in other communities of place, such as urban settings where multiple cultures coexist. Further research in this subject area would have significant value to the community energy sector, and be of particular interest to communities whose cultural identity and language are under threat. It would seem evident from this research that communities themselves have always understood the interplay between economic, social, environmental and cultural sustainability. As seen from the case studies included here, communities themselves are best placed to know what they need in cultural terms, and being owners and administrators of their own community energy schemes could allow them to achieve their aims. They acknowledge the need for an economic pathway, offered through developing CEPs, to enable cultural benefits to take place. Scholarship on such issues is yet to catch up. We suggest that research in this vein could be developed within the energy justice literature, i.e. further explorations into the cultural benefits and justices that can arise within the low-carbon transition, be it within the community energy sector or in large-scale developments. This paper has taken an important step to begin addressing this knowledge gap, and opens the door to an avenue of research which recognises that both place and culture matter.
There is a shortage of scholarly research into understanding the cultural values, drivers and outcomes of community renewable developments. This paper contributes towards addressing this gap, by comparing four community renewable projects set in Scottish Gaelic speaking Scotland and in Welsh speaking Wales. Not only do cultural values drive the developments of these community energy projects, but evidence gathered here through qualitative interviews show that these communities aim to contribute towards the long term cultural sustainability of their respective areas. This research paper focuses on how community wind energy projects in Scotland and Wales have contributed towards the retention of cultural attributes, particularly language retention and revitalisation. It also contributes to a deeper understanding of the cultural reasons why historically indigenous communities are turning towards the renewable energy sector (and developing their own local projects) as a way to help achieve cultural sustainability through economic development.
31,613
A Perspective on Smart Process Manufacturing Research Challenges for Process Systems Engineers
Smart manufacturing is a stated priority of most major economies, including those of the United States, China, and the European Union.It is mostly framed in terms of better use of big data—that is, measurements and market data—and intra-machine connectivity, particularly using the Internet of things.While comprehensive and timely data and massive connectivity are necessary conditions for this revolution, they are not sufficient.It is also important to have smart algorithms for intelligent and timely use of the data.This is the domain of process systems engineering.Process manufacturing, in which products are mostly continuous fluids or solid streams with fluid-like properties and molecular differentiation, presents different challenges than those of mechanical manufacturing.This paper reviews perspectives on smart process manufacturing and the potential contribution of and challenges for PSE, its research, and its practice community, in making the most of this revolution.This is a short perspective, so references are very selective and are not meant to be comprehensive.The smart manufacturing revolution is said to have three phases:Factory and enterprise integration and plant-wide optimization,Exploiting manufacturing intelligence, and,Creating disruptive business models.All three phases have resonance in the process industries.The first phase is already underway, and the PSE community has been in the vanguard of providing tools and techniques for facilitating integrated design and operation.Ideas and research results for the second phase suggest that whole supply chains can be integrated more seamlessly in order to provide products more quickly, efficiently, and sustainably; however, such integration certainly remains a major challenge for the industry.Although we have seen little change in business models in the process industry over the past decades, smart manufacturing promises to enable us to develop new business models—for example, to deliver personalized medicine—in an efficient and sustainable way in the future.The current model of long-term contracts for the supply of large amounts between each part of the supply chain, which is common in chemicals, would not be appropriate.We need a model that allows the supply of bespoke products in small amounts, which will likely be of much higher added value and which will require the direct influence of the cost of product development, cost of manufacture, and strength of demand.This remains a major challenge.A set of challenges for smart process manufacturing in the United States was discussed at a workshop in April 2008, resulting in a comprehensive report .A specific test bed was proposed—the steam reforming of methane—in order to demonstrate and benchmark progress .More recently, Li addressed the challenges for the petrochemical industry from an industrial perspective.These challenges are common internationally for countries with a well-developed process industry base.It is clear from both these contributions that PSE lies at the heart of the smart process manufacturing challenge.Over the past 50 years, PSE researchers have been developing methodologies—mostly computational, but not all—to be able to optimize whole systems, whether at the unit, plant, or enterprise level.A recent issue of the American Institute of Chemical Engineers Journal celebrated the work of a pioneer in this field, Professor Roger Sargent, who has been working since the 1950s.Sargent has taught many people around the world and has inspired many more, as reflected 
in the 38 papers in this issue, most of which are relevant to this topic.This paper considers each of the three phases in turn, and then examines some of the key technical challenges that arise.I will particularly consider research progress and challenges that confront the PSE research community in enabling smart process manufacturing to progress more rapidly.I will reflect not only on petrochemical and commodity chemical manufacturing, but also on specialties and medicines, as well as on contributions that consider wider environmental impacts that are part of the system of systems that we influence.While some challenges and opportunities are similar to those in other manufacturing sectors, there are distinctive differences.A key tenet of smart manufacturing is plant-wide optimization, which is not new in process engineering.Process engineers have been considering systems of connected unit operations and looking for better—or even optimized—solutions for a long time, with these activities driving their education.Plant-wide optimization is at the heart of PSE thinking.The routine use of simulation tools with embedded optimization capability has resulted in plants being optimized for profitability and, increasingly, for minimizing environmental impact while seeking sustainable production.Many tools for process integration have been developed, all of which are based on steady-state models.Process integration approaches have been used to design heat-integrated plants and, to some extent, whole sites .Real-time optimization and model-based control have enabled solutions for optimizing the dynamic behavior of operations in short to medium timescales.Their implementation is not universal, but it is common, particularly in petrochemical plants .Enterprise integration has also been a goal through the use of whole supply chain models and business software systems.Many tools are available, and some experience of deploying these tools is discussed in Section 5.Plant-wide optimization is an area that would benefit from more benchmarking and testing to give more confidence.Coordination of multiple enterprises and their customers, most of whom are other businesses within an extended supply chain, remains a challenge.Although this is a technical problem, it is also about relationships: ensuring that the valuable commercial and strategic relationships that have been developed are not disrupted by any proposed technical solutions.Smart manufacturing seeks to involve the customer more closely in order to have a more responsive and agile system.Many supply chains producing domestic products now produce on demand, with very short production and delivery timescales.The process industry typically produces intermediate products that are either processed further or used to produce specific products.For example, the plastics industry produces many polymers and many different grades for different end uses.The manufacture of a raw polymer is followed by various stages of treatment, forming, molding, and assembly before the polymer becomes a final product for the consumer.As a result, most process manufacturers have a remote relationship with the end users of the final products.Each stage has its own dynamics, inventory, uncertainty, and commercial drivers.In order to become more responsive and agile, the process industry will need to incorporate information technology-enabled manufacturing intelligence, with communication occurring between all parts of the supply chain.Clearly, commercial and technical challenges are 
associated with this objective.It will require computational methods that can handle multiple stages within the supply chain that support different types of commercial relationships as well as different dynamics at each stage.It will need to be able to take into account technical constraints on flexible manufacturing at each stage, and incorporate the ability to handle uncertainty in demand and production.The processes will be customer-driven and sensitive to markets, but will include various contractual constraints in dealings between different elements of the supply chain.End-user suppliers will have huge amounts of data on trends in customer demand in order to allow the prediction of expected demands, as they currently do for consumer-oriented product industries.This will shift to incorporating more immediate-demand data, which should rapidly influence manufacturing in all parts of the supply chain.Although immediate-demand data is now common for the fresh food industry and for the processed food industry , it would be quite a departure for the chemical, petrochemical, and pharmaceutical industries.Cao et al. presented a data-driven refinery-scheduling model that can incorporate unexpected events from data over a one-day period; however, this approach is still a long way from the overall system responsiveness that is common in the food industry.The aim of smart process manufacturing is to support an agile, robust, and sustainable process industry that minimizes waste while maximizing profitability.Perhaps the biggest change in chemical plants over the last few decades was the introduction of the coordinated control systems that are now in place.The basic structure of the set of connected unit operations has not changed much for a considerable period of time.Environmental performance has added significant pressure to the industry, resulting in more integrated design and operation, with less end-of-pipe treatment.Smart manufacturing could provide more motivation for significant change through small-scale and microscale local production, for example, which would bring production closer to the consumer.This would be essential for the potential development of personalized medicine, and perhaps also for the manufacture of more individualized personal products and smart materials for specialized use.Changes of this nature would require new process synthesis and intensification methods.We may also see significant changes to the molecules and mixtures that we produce.Perhaps the most significant change could be a broadening of cross-disciplinary research, as engineering interacts more closely with the natural sciences, the social sciences, and medicine in order to provide frameworks and tools for businesses that seek to meet customer demands more quickly and accurately.This phase is inevitably the least clear of the three phases of smart manufacturing.In the discussion above, I considered the three phases of smart manufacturing, as seen from the process industries.I now consider a set of enabling topics and related research challenges.These topics are: flexibility and uncertainty, responsiveness and agility, robustness and security, the prediction of mixture properties and function, and new modeling and mathematics paradigms.A key issue in smart manufacturing is the ability to be flexible and respond to uncertainties in the marketplace and in raw material quality.Since the 1980s, a rich seam of research work has tackled this problem.Based on assumed bounds for uncertainty, optimization-based 
approaches have been proposed to account for uncertainties, beginning with stochastic programming and using a superstructure as the basis of an optimization problem to minimize a quantifiable uncertainty index .A good recent review can be found in Steimel et al. .Most approaches find that a design based on a steady-state analysis will satisfy all expected uncertain conditions, inevitably leading to conservative designs.Steimel et al. demonstrated their two-stage optimization framework on the hydroformylation of dodec-1-ene.We need a way to balance the likelihood of large excursions using a probabilistic approach that can also use historical data and patterns within the data to inform the design.Of course, extreme events may occur, making it necessary to have elements designed in to take account of extreme events and use patterns in the data to provide indications of an approaching extreme event, thus enabling us to avoid taking extreme action that may be environmentally damaging or may even cause shutdown.Although some researchers have considered uncertainty in the dynamic response either through control measures or enhanced design, much is needed to make these efforts comprehensive and useable, given the need for discrete decisions and tradeoffs between many alternatives.Rather than solve the complete dynamic optimization problem, which is intractable for realistic problems, Wang and Baldea use pseudo-random signals to identify a data-driven input/output model.Using process intelligence through simplification, data analysis, or multi-level representations provides a possible way to efficiently solve large-scale problems while allowing the continuous refinement of predictions and actions.Deterministic optimization approaches identify the parameters yielding the smallest operating space that will accommodate the full range of expected uncertainties.These approaches produce conservative results since the outer extremities of uncertainty ranges are very unlikely and may not be critical.The earliest paper listed above uses a stochastic approach with great potential, as discussed in the review by Sahinidis .Stochastic solutions allow a designer to determine what level of risk is acceptable, and then design accordingly; thus, they require some engineering judgment about final design robustness and whether extreme events must be handled or not.All these methods are very expensive computationally, as they require the solutions for many optimization problems.Thus, there is much scope for improving their efficiency, as well as for testing and evaluating methods on substantially sized problems from industrial practice.We can then identify the limitations and weaknesses of these methods more fully.However, we have a fairly comprehensive toolset to use.It is challenging to find ways to make these methods both practical and not overly conservative.As mentioned above, a key element of smart manufacturing is to match production to demand through prediction and real-time control.This has two elements: the ability to decide on a course of action based on the information received, and the ability to achieve that outcome.Within PSE, the second element is tackled by focusing on controllability: Can these actions be achieved according to the model, and how can this actually be done?,Much work has been done on controllability in the PSE community.These methods are not yet adequate for large problems with nonlinearities, and it is difficult to incorporate heuristic knowledge from experienced practitioners.Process 
control is now dominated by model-based control , which permits integrated operations, although the computational burden can become significant.It is typical for real-time optimizers to work with steady-state models to determine optimal strategies and then implement them using model-based controllers to ensure coordinated and responsive systems.Although these are mature technologies, they may not have been tested for the more agile requirements expected in the future, in which customer demands are much more varied and frequently changing.While a considerable amount of historic trend data has been collected on operations, the chemical industry does not incorporate large demand databases directly into their control systems.However, this is being done now in many consumer product industries, including the food industry.The resulting responsiveness is creating new challenges in ensuring robustness .Data repositories provide trends in demand, and are reliable when changes are regular and relatively smooth.However, challenges occur as a result of big events such as failures or large market shifts in response to political changes, for example.Does matching production to demand by using control measures have the potential to make systems more sensitive, or will it make them unstable?,It will also be a major challenge to ensure that the required accuracy of data-driven models is suitable for each specific area.Accuracy requirements will vary considerably for different areas.PSE researchers have performed numerous studies for supply chain research using discrete optimization models.In a recent review, scholars considered supply chain optimization to be particularly relevant for high-value low-volume products .Although they did not identify any single method as the best one, the reviewers concluded that decomposition and hierarchical algorithms have consistently provided good results.The process industries will gradually see more connections between customer data and demand-driven manufacture.Li et al. 
showed how a data-driven global optimization framework can be used for the planning process of an entire petrochemical complex.Sahay and Ierapetritou showed how agent-based technology can be used to optimize multi-enterprise supply chains.Many practical issues of implementation confront user companies.For example, enterprises need methods and tools that are able to handle company interfaces across the supply chain as well as the broad range of commercial and contractual relationships that exist.For example, vaccine production can require rapid response in an emergency, while retaining the safety and quality of the product over an effective timespan.Personalized medicine will require very small-scale production, and may require an entirely new type of business model and technical solution.Along with speed and agility, customers also want certainty: of supply, of quality, and of safety.The discussion in the literature of design under uncertainty addresses part of this issue, in that the designs that are produced allow for all predicted uncertainties, resulting in rather conservative designs.Of course, our models are approximations based on assumptions of the physics and chemistry involved, and have parameters that can be inaccurate or flawed.However, even assuming that we have considered all possible uncertainties, things can still go wrong: Elements in the manufacturing process break down, communication systems fail, predictions are wrong, and so on.There is a rich seam of work exploring fault detection in process plants .Fault detection is likely to become more important as we use increasingly sophisticated instruments to get better quality measurements that also have a greater propensity to either predict with bias or fail altogether.Hazard detection must be incorporated directly into the systems because operating close to optimal conditions usually puts extra pressure on operations, resulting in a greater likelihood of failure.Issues of data robustness and security are a new aspect to be considered.Can we guarantee data accuracy and ensure security from competitors and other agents seeking to cause difficulties?,As instruments increase their local intelligence, and as greater inter-connectedness occurs through the Internet of things, the potential for security breaches also grows, as shown by recent hacking cases.Although PSE researchers have not traditionally worked in this area, computer scientists have made major strides in cybersecurity, as most countries consider cybersecurity to be a major national priority.By working closely with our colleagues in computer science, PSE researchers can ensure that developments in cybersecurity inform our methods and software.The business of the process industries is to manufacture chemical products.For a long time, we focused on producing molecules that were required for further processing, such as methanol and ethylene.The chemical industry was originally rooted in the production of dyestuffs—synthetic colors for the textile industry, used to replace expensive naturally occurring minerals.Dyestuffs produced an effect, or function, that customers were willing to pay for.This function sometimes came from a single molecule and was sometimes produced by a mixture.In fact, we still manufacture products with a specific, well-defined function; for example, gasoline is a complex hydrocarbon mixture with specific functional requirements such as octane number, flash point, and cloud point.The personal products industry has also been seeking to manufacture 
products with a specific function.In the future, will we be able to follow customer demands more closely based on data trends, predictions, and market intelligence?,This problem contains many challenges.One challenge is our limited ability to predict function, and thus our limited ability to design mixtures to achieve specific functional demands by customers.Much progress has been made in the capability to design polymer blends, solvent mixtures, and electrolytes, leading to considerable commercial use of predictive methods .Many challenges still exist regarding predicting the functional effect of complex mixtures of many substances and designing mixtures for specific functions that are desired by consumers, particularly when it is difficult to characterize the function itself using models.Many properties, such as taste, are very personal and difficult to predict.Another challenge is the need to optimize the molecular characterization of the whole supply chain, from primary manufacture, through intermediates, and through to the final product.A supply chain may involve many enterprises that have different systems, different business models, and a need to keep their unique selling point and commercial secrets confidential, particularly around specific products.Can this be achieved?,Finally, the development of personalized medicine is a major upcoming change.Medicines will be tailored to personalized requirements based on diseases and their progression, metabolism, physical condition, and personal needs.Personalized medicine presents many challenges to medicine regulators; however, assuming that these challenges are resolved, it will require a considerably different manufacturing strategy with personalized specifications of function, dosage, and delivery.In order to optimize for customer needs, we can consider our ability to optimize function based on physiological models integrated with production models.High-performance computing and communications have been crucial for PSE developments.However, mathematics has been the key enabler for PSE tools and techniques and will continue to be so for smart process manufacturing.The development of computational optimization techniques in the 1950s and 1960s led to powerful tools and techniques that are now in common use in the process industries and beyond.In the 1980s and 1990s, the development of discrete optimization as a reliable and tractable problem led to the development of mixed-integer nonlinear programming solution techniques , which led in turn to great progress in the whole area.Disjunctive programming now allows us to handle solutions to problems with logical conditions .We still struggle with discontinuities and with finding globally optimal solutions , and handling a full range of dynamic scenarios is still a challenge.We need methods that allow the visualization of large-scale problems to help understand and verify solutions.Another enabler is the ability to model large-scale problems with complex mixtures and complex geometries.Generally speaking, modeling tools are still the domain of experts.Although there has been progress in considering how best to automate the process modeling workflow and the modeling of units and systems , the tools remain difficult to use.Engineering education and training has embraced this problem; however, making the tools more intuitive and robust would certainly help.Model accuracy is important, and strongly relies on the ability to predict the properties and functional performance of complex 
mixtures.Finally, when tools interact with big data repositories, model accuracy can be taken into account systematically, using methods for quantifying uncertainty, for example, in order to account for situations when data is unreliable.A large community of researchers in computer science are studying the issues involved in handling big data.This will involve new forms of data, such as huge volumes of images, text, and so on, and will require the tools of knowledge management .Enabling methods is the work of PSE researchers, so the development of new methods will continue to be a big part of our work.We will also continue to work with and draw from colleagues in other disciplines, including computer science, mathematics, and physics.Smart process manufacturing confronts us with an increasingly cross-disciplinary set of challenges.The process industries have already made progress in smart manufacturing ideas, with PSE researchers and practitioners as key enablers.I have referred to some of the key published research work.Many of these ideas have been put into practice.However, there is very little benchmarking in the public domain.The publication of such information is always controversial.There is a need for a consolidation of reports on the specific beneficial outcomes of plant-wide optimization—perhaps on an anonymized basis, or resulting from application at one or more industrial-scale plants and sites.I have highlighted some of the challenges confronting the PSE research community in achieving the full benefits of smart manufacturing.Many of these challenges revolve around how information is shared and passed between the units, plants, and sites of all the enterprises involved in a specific supply chain.There are also challenges in ensuring that all key aspects are properly modeled, particularly where health, safety, and environmental concerns require accurate predictions of small but critical amounts at specific locations.Although we have some of the technology that is required for a rapid and agile response to customer demands, the process industry’s relationship to the end user makes it a particular challenge.It is difficult to predict whether this shift will bring about entirely new business models.A key message is that in order to enact smart manufacturing, the PSE research and practice community needs to collaborate with other disciplines.For the most part, the process industry is challenge-oriented, and teams are not based on traditional discipline boundaries.The education and training of engineers in universities is also becoming more cross-disciplinary.Collaboration will certainly be a key requirement for bringing about smart process manufacturing.
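As a purely illustrative aside on the flexibility-and-uncertainty discussion above (this sketch is not part of the original article), the following minimal Python example shows the shape of a scenario-based, two-stage design calculation: a single design variable (here, installed capacity) is chosen once, operating decisions adapt to each demand scenario, and the design is sized to an expected cost or a chosen service level rather than to the worst case. All numbers, names, and the cost model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demand scenarios (t/day), standing in for historical/market data.
demand = rng.normal(loc=100.0, scale=15.0, size=1000)

capex_per_unit = 2.0     # illustrative capital cost per unit of installed capacity
shortfall_penalty = 5.0  # illustrative penalty per unit of unmet demand

def expected_cost(capacity):
    """Two-stage cost: capital (first stage) plus expected shortfall penalty (second stage)."""
    production = np.minimum(capacity, demand)   # recourse decision: produce up to capacity
    shortfall = demand - production
    return capex_per_unit * capacity + shortfall_penalty * shortfall.mean()

# Crude one-dimensional search over candidate capacities.
candidates = np.linspace(50.0, 200.0, 301)
costs = [expected_cost(c) for c in candidates]
best = candidates[int(np.argmin(costs))]

# A risk-based alternative: size to a 95% service level instead of the worst case.
p95_capacity = np.quantile(demand, 0.95)

print(f"min-expected-cost capacity ~ {best:.1f}, 95%-service capacity ~ {p95_capacity:.1f}")
```

In a realistic setting the scenario set would come from historical and market data and the inner recourse problem would itself be an optimization, but the structure (design chosen once, operations adapting per scenario, risk level set by engineering judgement) is the one the stochastic approaches discussed above exploit.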
The challenges posed by smart manufacturing for the process industries and for process systems engineering (PSE) researchers are discussed in this article. Much progress has been made in achieving plant- and site-wide optimization, but benchmarking would give greater confidence. Technical challenges confronting process systems engineers in developing enabling tools and techniques are discussed regarding flexibility and uncertainty, responsiveness and agility, robustness and security, the prediction of mixture properties and function, and new modeling and mathematics paradigms. Exploiting intelligence from big data to drive agility will require tackling new challenges, such as how to ensure the consistency and confidentiality of data through long and complex supply chains. Modeling challenges also exist, and involve ensuring that all key aspects are properly modeled, particularly where health, safety, and environmental concerns require accurate predictions of small but critical amounts at specific locations. Environmental concerns will require us to keep a closer track on all molecular species so that they are optimally used to create sustainable solutions. Disruptive business models may result, particularly from new personalized products, but that is difficult to predict.
31,614
Antibacterial and antifungal activities of phenolic compound-enriched ethyl acetate fraction from Cochlospermum regium (mart. Et. Schr.) Pilger roots: Mechanisms of action and synergism with tannin and gallic acid
In traditional medicine, plants are used as folk preparations either alone or in mixtures containing two or more species. In clinical use, herbal medicine preparations are already available as enriched and standardized plant extract products. For both folk and clinical use, therapeutic effects are based on the principle of multi-component therapy, in which pharmacological activity is the product of synergistic interactions among the several constituents present in these preparations. Natural products are consolidated as the main source of agents with antimicrobial activity. The search for new active metabolites from plants therefore stands out as a promising tool in the pharmaceutical development of antibacterial and antifungal agents. Extracts of medicinal plants, combinations of several extracts, enriched extracts and newly isolated natural compounds have been described as active, including against multi-drug resistant pathogens. Thus, a multi-component phytotherapeutic approach may establish a new perspective for the treatment of infectious diseases. Cochlospermum, a tropical genus native to the southwestern United States, Mexico, Central and South America, Africa, the West Indies and Australia, has been described in folk medicine for the treatment of several clinical conditions. In Brazil, Cochlospermum regium (Mart. et Schr.) Pilger, popularly known as ‘algodão-do-campo’ or ‘algodãozinho’, is the principal species employed for medicinal purposes. The roots of this shrub, typical of the Brazilian Cerrado, have traditionally been used as hepatoprotective, analgesic, antihypertensive and anti-inflammatory agents in the treatment of rheumatism, arthritis and acne. An antibacterial effect has also been reported, and C. regium is widely employed in folk medicine against urogenital infections. The hydroethanolic extract of C. regium roots and its ethyl acetate (EtOAc) and butanol fractions have exhibited in vitro activity against Staphylococcus aureus and Pseudomonas aeruginosa, with the polar fractions being more active. Among the constituents isolated from the EtOAc fraction, dihydrokaempferol 3-O-β-glucopyranoside (DHK-glucoside) was the major one, together with minor constituents such as dihydrokaempferol, gallic acid and ellagic acid. All of these are devoid of antimicrobial activity except gallic acid, which shows considerable effect against different species of pathogenic bacteria and fungi. In addition, other phenolic compounds, such as flavonoids and tannins, have also been identified in some studies. However, limited information is available regarding the antimicrobial mechanism of action of extracts from C. regium roots, the synergic effects of the major phytochemical components of the extract, and its antimicrobial spectrum of activity. In this context, we investigated the chemical and biological potential of C. regium roots with a focus on antibacterial and antifungal activities. To verify how the phytochemical compounds of the EtOAc fraction interact to exert antimicrobial activity, we used the fraction alone and enriched with gallic acid and tannic acid, two known antimicrobial constituents of this species. Additionally, the antibacterial spectrum was evaluated against Gram-positive and Gram-negative species, and antifungal activity was assayed against pathogenic yeasts. Lastly, the mode of action of the fraction and of its active metabolites was investigated against known targets of bacterial and fungal cells. The dried and powdered plant parts belonging to the underground system of C. regium (Mart. et Schr.) Pilger were collected in Campo Grande and identified by Ubirazilda M. Resende.
A voucher specimen was deposited in the CGMS Herbarium. The plant material was subjected to extraction by exhaustive maceration in 70% ethanol for 3 days at 25 °C. This procedure was repeated three times, as described in the literature, and the extract was then concentrated under reduced pressure. Subsequently, the extract was filtered and evaporated to dryness under reduced pressure, yielding a brown amorphous residue. This residue was solubilized in methanol:water at a ratio of 9:1 and partitioned to obtain the EtOAc fraction. The EtOAc fraction was evaluated by liquid chromatography, and the concentrations of the main constituent, as well as of gallic acid, were determined. The reference compounds used were gallic acid and DHK-glucoside isolated from the plant with a confirmed purity of 90.7%. A Shimadzu liquid chromatography system with a CBM-20A communication module, two LC20AD pumps with a 20 μL injector loop and an SPD-M20-A diode-array detector adjusted for the 210–600 nm range was used. Separation was achieved using a Kinetex C18 column and a Security Guard pre-column with the same specifications. The chromatograms were monitored at 270, 325 and 294 nm, followed by analysis and integration using LCsolution software. DHK-glucoside, ellagic acid and gallic acid were identified by comparison with authentic samples. Other compounds were identified by comparison of their retention times under identical analysis conditions and of their UV spectra. The tested compounds were eluted using a solvent mixture containing 1% acetic acid, water and methanol, at a flow rate of 0.9 mL·min−1. The gradient was performed as follows: 0–3 min, 10% B; 3–10 min, 10–35% B; 10–40 min, 35–60% B; 40–43 min, 60–100% B; 43–45 min, 100% B; 45–48 min, 100–10% B; and 45–48 min, 10% B. The elution was carried out over 43 min, with the aim of returning to the initial conditions and maintaining stability for the next sample analysis. To detect the phenolic compounds present in the EtOAc fraction, TLC plates were also run. The fraction was applied to silica gel 60 F256 plates and eluted with chloroform:acetic acid:methanol:water (64:32:12:18) and BAW (4:1:5) as solvent systems. Compounds previously isolated from the EtOAc fraction, as well as commercially available tannic acid and gallic acid, were also applied. Detection was carried out using polyethylene glycol spray reagent, and the plates were observed under a 365 nm UV lamp. In addition, 1% ethanolic FeCl3 spray reagent was also used. The EtOAc fraction was solubilized in 10 mL of 50% ethanol at a concentration of 10 mg·mL−1. An aliquot of 5 mL was used for the quantification of tannins; this sample was treated with 0.05 g of skin powder for 60 min under stirring. The sample was further diluted to 0.1 mg·mL−1 because of the high concentration of phenolic compounds. Both the lower- and higher-concentration solutions were submitted to quantification of total phenols by the conventional method described by Herald et al. The analyses were made in triplicate using 0.1 mL of the extracts, 1 mL of distilled water and 0.1 mL of Folin–Ciocalteu reagent; after 6 min, 0.8 mL of sodium carbonate solution (75 g·L−1) was added. The mixture was kept in the dark at room temperature for 90 min and the absorbance was read using a spectrophotometer at 760 nm. The concentrations of phenolic compounds and of tannins in the ethyl acetate fraction were expressed as μg·mg−1 of gallic acid equivalents, using the equation of the standard curve obtained.
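As an illustration of the gallic acid equivalent calculation referred to above, the short sketch below converts a Folin–Ciocalteu absorbance reading into µg of gallic acid equivalents per mg of fraction via a linear standard curve. The slope, intercept and sample values are hypothetical placeholders, not data from this study.

```python
# Hypothetical gallic acid standard curve: A760 = slope * conc(µg/mL) + intercept
slope, intercept = 0.0125, 0.021

def gae_ug_per_mg(a760, sample_conc_mg_per_ml):
    """Convert an absorbance reading into µg gallic acid equivalents per mg of fraction."""
    gae_ug_per_ml = (a760 - intercept) / slope      # back-calculate from the standard curve
    return gae_ug_per_ml / sample_conc_mg_per_ml    # normalise by the fraction concentration assayed

# Example: fraction assayed at 0.1 mg/mL giving A760 = 0.93 (illustrative numbers only)
print(f"{gae_ug_per_mg(0.93, 0.1):.1f} µg GAE per mg of fraction")
```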
The microorganisms used in this study originated from the American Type Culture Collection (ATCC) and were kindly provided by the Reference Microorganisms Laboratory of the Oswaldo Cruz Foundation. Antimicrobial activity was determined against four Gram-positive species: Staphylococcus aureus 29213, Staphylococcus epidermidis 12228, Streptococcus agalactiae 13813 and Listeria monocytogenes 15313; four Gram-negative species: Pseudomonas aeruginosa 25619, Klebsiella pneumoniae 43816, Acinetobacter baumannii 19606 and Escherichia coli 25922; and four yeasts: Candida albicans 10231, Candida krusei 34135, Candida glabrata 2001 and Candida tropicalis 28707. According to the ATCC specifications, Candida albicans 10231 and Candida tropicalis 28707 are highly resistant to antifungal agents of the azole class. Antibacterial activity was evaluated by determination of the minimal inhibitory concentration (MIC) using the microdilution assay described in document M07-A9 of the Clinical and Laboratory Standards Institute (CLSI), with modifications. A stock solution was prepared by diluting the ethyl acetate fraction in 2% dimethylsulfoxide (DMSO) and was stored at −20 °C until use. A 20 μL aliquot of this solution was diluted in microplates containing Mueller-Hinton broth to give a concentration range of 15.62–1000 μg·mL−1. Subsequently, 100 μL of bacterial inoculum was added to the wells of the microplates, which were then incubated at 35 ± 2 °C for 18 h. The MIC value was defined as the lowest extract concentration at which the well was optically clear. Streptomycin and chloramphenicol were included as positive controls, and 2% DMSO solution was used as the solvent control. The minimum bactericidal concentration (MBC) was determined from the MIC results according to Lyu et al. From each well with no visible growth, 10 μL was aliquoted, transferred to Mueller-Hinton agar plates and incubated at 35 ± 2 °C for 24 h. The MBC was defined as the lowest sample concentration that killed at least 99.9% of the initial inoculum. All tests were made in triplicate in three independent experiments. Catalase enzymatic activity was measured by the intensity of the degradation of hydrogen peroxide into water and oxygen. Overnight cultures of the bacterial strains were diluted with MHB to give a final inoculum of approximately 10⁶ CFU·mL−1. Microbial cultures previously treated with sub-inhibitory concentrations of the EtOAc fraction, gallic acid or tannins were transferred to a microplate containing 100 μL of 3% H2O2. Bubble formation was assessed by assigning scores to its intensity. Control cultures were prepared similarly with untreated inoculum, and 2% DMSO was employed as the solvent control. S. agalactiae ATCC 13813, a species that does not express catalase, was used as the negative control.
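The two-fold microdilution series and MIC read-out described above can be summarised in a short sketch; the growth readings here are a placeholder list, not data from the study.

```python
def twofold_series(top=1000.0, n=7):
    """Two-fold serial dilution from the top concentration, e.g. 1000 down to 15.62 µg/mL."""
    return [top / 2**i for i in range(n)]

def mic(concentrations, visible_growth):
    """MIC = lowest concentration with no visible growth (None if growth at every level)."""
    inhibited = [c for c, grew in zip(concentrations, visible_growth) if not grew]
    return min(inhibited) if inhibited else None

concs = twofold_series()                                   # [1000.0, 500.0, 250.0, 125.0, 62.5, 31.25, 15.625]
growth = [False, False, False, True, True, True, True]     # hypothetical well readings, highest to lowest
print(concs, mic(concs, growth))                           # MIC would be 250 µg/mL in this example
```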
The antifungal potential was evaluated by MIC determination using the microdilution assay described in document M27-A2 of the CLSI, with some modifications. Stock solutions of the EtOAc fraction, gallic acid and the tannin fraction were diluted in Sabouraud-Dextrose broth over the same concentration range used in the antibacterial tests. After the addition of 100 μL of fungal inoculum to each well, the microplates were incubated at 35 ± 2 °C for 48 h. The concentration at which no visible growth was observed was defined as the MIC. Ketoconazole and nystatin were used as positive controls, while 2% DMSO was used as the solvent control. The minimum fungicidal concentration (MFC) was determined by plating 10 μL from each well showing no visible growth onto Sabouraud-Dextrose agar. After incubation at 35 ± 2 °C for 48 h, the MFC was defined as the lowest sample concentration that killed at least 99.9% of the initial inoculum. MIC and MFC determinations were carried out in three independent experiments, with three replicates per experiment. To evaluate the effect of C. regium, gallic acid and tannin on the fungal membrane, we used the exogenous ergosterol binding assay. The MICs were determined against C. albicans, C. glabrata, C. tropicalis and C. krusei by the standard broth microdilution procedure described above. Duplicate plates were prepared: one contained the compounds plus exogenous ergosterol and the other contained the compounds alone. If the activity is related to the ability to bind ergosterol, the exogenous sterol prevents binding to the ergosterol of the fungal membrane and the MIC value increases. Nystatin, an antifungal known to bind ergosterol, was used as the positive control. The assay was carried out in three independent experiments. The synergic effect of the EtOAc fraction from C. regium roots combined with tannin and with gallic acid was studied using the checkerboard assay. Stock solutions of the EtOAc fraction were tested at concentrations ranging from 7.81 to 1000 μg·mL−1. Solutions of tannin and gallic acid over the same concentration range were then combined with the fraction in a 1:1 ratio to evaluate the antimicrobial effect resulting from the interaction with the EtOAc fraction. The fractional inhibitory concentration (FIC) index was determined as the sum, for each component, of the ratio between its MIC in the combination and its MIC alone. For the interpretation of the results, the interaction was classified as synergistic, additive, indifferent or antagonistic. The MBC and MFC of the combinations of the EtOAc fraction with gallic acid and with tannin were also determined, as described above.
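A minimal sketch of the fractional inhibitory concentration index used in the checkerboard assay above. The MIC values are hypothetical, and the interpretation cut-offs follow the commonly used convention (synergy ≤ 0.5, additivity > 0.5–1, indifference > 1–4, antagonism > 4), which is assumed here since the text does not state the thresholds applied.

```python
def fic_index(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """FICI = MIC(A in combination)/MIC(A alone) + MIC(B in combination)/MIC(B alone)."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fici):
    # Commonly used cut-offs (assumed; the study does not spell out its thresholds).
    if fici <= 0.5:
        return "synergistic"
    if fici <= 1.0:
        return "additive"
    if fici <= 4.0:
        return "indifferent"
    return "antagonistic"

# Hypothetical example: EtOAc fraction (A) combined with tannin (B)
fici = fic_index(mic_a_alone=500, mic_b_alone=250, mic_a_combo=125, mic_b_combo=62.5)
print(fici, interpret(fici))   # 0.5 -> "synergistic"
```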
The HPLC profiles were recorded at 270, 325 and 294 nm, and the compounds were identified by comparison with the retention times and UV spectral data available for gallic acid, DHK-glucoside and ellagic acid. In addition, from the extensive data available in the literature for phenolic and flavonoid compounds, it was possible to characterize the presence of other dihydrokaempferol and kaempferol derivatives. The chromatographic profiles of the constituents present in this fraction of C. regium are shown in the Supplementary data. As observed in Table 1, quantitative analysis of gallic acid and DHK-glucoside showed that the concentrations of these compounds were 22.61% and 3.07%, respectively. The results for the EtOAc fraction submitted to TLC and revealed with NP/PEG reagent accord with those described in the literature, i.e., they confirmed the presence of gallic acid and flavonoid derivatives in the fraction, while visualization with FeCl3 solution showed the presence of tannins. Total phenolic compounds and tannins were determined at concentrations of 725.5 μg·mg−1 of fraction and 214.9 μg·mg−1, respectively. The antibacterial activities of the EtOAc fraction from C. regium roots and of the compounds tannin and gallic acid are summarized in Table 2. The EtOAc fraction showed its highest antibacterial effect against the non-fermenting Gram-negative species P. aeruginosa and A. baumannii; against the latter, its activity was 2-fold higher than that of chloramphenicol and streptomycin. The EtOAc fraction was also active against Gram-positive cocci, with MICs ranging from 250 μg·mL−1 against S. epidermidis to 500 μg·mL−1 against S. aureus and S. agalactiae. However, the antibacterial activity against Enterobacteriaceae and Gram-positive bacilli was not satisfactory. Similar results have been reported for the ethanolic extract and the EtOAc fraction of C. regium roots against S. aureus and P. aeruginosa. On the other hand, in contrast to the results of the present work, E. coli has been reported as susceptible to the EtOAc fraction, and S. aureus of bovine origin showed no growth inhibition on exposure to hexane, chloroform and methanol fractions. Genetic and environmental variation in the plant species, the different extraction and fractionation methods used, as well as the origin of the microorganisms might, in part, explain these differences. The isolated phytocompounds, tannin and gallic acid, were also tested. Tannin was active against all bacteria, with A. baumannii and S. epidermidis showing the highest sensitivity to the bacteriostatic effect of this compound. Against A. baumannii, for example, tannin was 8-fold more active than the antibiotics chloramphenicol and streptomycin. Furthermore, a bactericidal effect was shown against A. baumannii, S. aureus and S. epidermidis at a concentration of 1000 μg·mL−1. In fact, tannins are traditionally associated with antibacterial activity, and plant products rich in tannins show the strongest effects on pathogenic bacteria. The ability of this phytochemical class to chelate essential metals that act as co-factors in microbial metabolic pathways, to directly inhibit several enzymes and to complex structural proteins of the cell membrane may contribute to the inhibition of bacterial proliferation. Gallic acid, in turn, exhibited inhibitory activity against S. aureus at the same level as the EtOAc fraction. However, unlike the fraction, gallic acid was also active against E. coli. The absence of antibacterial activity of the EtOAc fraction against E. coli may be explained by the sub-inhibitory concentration of gallic acid in this material: considering a gallic acid content of 3.05 mg/g in the EtOAc fraction, the microplate wells contain only about 3.7 μg·mL−1 of this phytocompound, which is lower than the MIC for E. coli.
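To make the sub-inhibitory argument above explicit, a small sketch of the dilution arithmetic: the concentration of an individual constituent in a well depends on its mass fraction in the extract and on the extract concentration tested. The values below reuse the figures quoted in the text as placeholders and are illustrative only; they do not exactly reproduce the 3.7 μg·mL−1 value reported.

```python
def constituent_conc(fraction_conc_ug_per_ml, constituent_mg_per_g):
    """Concentration (µg/mL) of a single constituent in a well containing the whole fraction."""
    mass_fraction = constituent_mg_per_g / 1000.0   # mg per g -> g per g
    return fraction_conc_ug_per_ml * mass_fraction

# E.g. gallic acid at ~3 mg per g of EtOAc fraction, fraction tested at 1000 µg/mL:
print(constituent_conc(1000, 3.05))   # ~3 µg/mL, well below the gallic acid MIC for E. coli
```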
Gallic acid also showed a considerable bactericidal effect against E. coli. This is important because compounds with bactericidal activity are strong candidates for clinical use, since the complete elimination of pathogens is the safest option. In agreement with this study, gallic acid has been reported to be most active against Gram-negative species. This phenolic compound has the ability to damage the outer membrane, changing the hydrophobicity, charge, ionic transport and molecular composition of this structure. Disruption of outer-membrane homeostasis in Gram-negative bacteria decreases pathogenicity, inhibits growth or induces bacterial death, depending on the species considered. The EtOAc fraction from C. regium roots, as well as tannins and gallic acid, were investigated for their antibacterial mechanism of action using catalase inhibition assays. As observed in Table 3, the EtOAc fraction inhibited the catalase activity produced by K. pneumoniae and S. aureus at a concentration of 250 μg·mL−1. In addition, C. regium was able to reduce the activity of this enzyme in P. aeruginosa, S. epidermidis and L. monocytogenes relative to the growth control. These effects might be attributed to tannins, which blocked catalase activity in K. pneumoniae, S. aureus and S. epidermidis at concentrations 4-fold lower than the MIC for each species. Moreover, tannin reduced H2O2 degradation in A. baumannii and P. aeruginosa. In turn, gallic acid inhibited only the catalase of E. coli; the activity of this enzyme remained the same as in the growth control for the other microorganisms tested. Excessive H2O2 is harmful for many pathogens. It is converted into highly reactive oxygen species (ROS), such as hydroxyl radicals and the halogenated acids hypochlorous, hypobromous and hypoiodous acid, by enzymes of the immune system. ROS promote oxidative and non-oxidative damage, which results in the elimination of the microorganism. Thus, rapid and efficient ROS removal is essential for pathogen survival. In prokaryotic organisms, H2O2 mainly stimulates the production of antioxidant enzymes such as catalase. Compounds with the ability to inhibit this enzyme therefore not only induce the accumulation of microbicidal ROS but also prevent the microorganism from escaping the host immune response. In this context, the EtOAc fraction from C. regium roots and tannins exert an antibacterial effect in part because they are able to block the catalase activity produced by some pathogens, thereby promoting the accumulation of ROS. The antifungal effect of the EtOAc fraction, tannins and gallic acid was determined against common pathogenic Candida species; the MIC and MFC values are summarized in Table 2. All Candida strains evaluated were susceptible to the EtOAc fraction from C. regium roots, with MICs in the range of 125–1000 μg·mL−1. C. albicans was particularly susceptible to the fungistatic activity of the fraction, tannin and gallic acid, with the activity of the latter being at a level 4-fold higher than that of the positive control. C. krusei and C. glabrata were most susceptible to tannin and also had their growth inhibited by the EtOAc fraction, but they were not susceptible to gallic acid. C. tropicalis, in turn, was only weakly susceptible to all the treatments studied. However, gallic acid and tannin showed a fungicidal effect against this species. Inácio et al. have reported that C. regium roots show good antifungal activity against C. albicans and that this effect is directly influenced by environmental and phenological factors, with adult plants cultivated in poor soils and collected during the fall or winter displaying higher antifungal action. The ethanol extract of C. regium leaves, in contrast, showed good activity against C. krusei but was inactive against C. albicans and C. tropicalis.
albicans and C. tropicalis.Antifungal activity of tannin and gallic acid were widely described, and our results suggest that this phytocompounds are anti-Candida bioactive substances presents in EtOAc fraction from C. regium roots.One of the proposed mechanisms for antifungal agents is their binding to membrane ergosterol.Polyenes, an antifungal class represented by nystatin and amphotericin B, showed activity by this mechanism, which leads to fungal membrane disruption and loss of intracellular content.To determine whether the EtOAc fraction from C. regium roots, tannin and gallic acid bind to the fungal membrane sterol, the MIC was determined with and without the addition of exogenous ergosterol.If the activity was caused by binding to ergosterol, MICs values will be higher with the addition of fungal sterol.In according with results summarized in Table 4, the effect of ergosterol exogenous was specie-specific to the EtOAc fraction, which induce an increase at MIC of C. krusei but not change the MIC of C. glabrata, C. albicans and C. tropicalis.This is the first evidence on antifungal action mechanism of C. regium.MIC for Gallic acid were change against C. albicans in the presence of exogenous ergosterol, indicating the biding this phenolic compound to membrane.Two mechanisms of antifungal action involving ergosterol are known, so the compounds may: bind to membrane ergosterol forming pores in this structure, or inhibit enzymes involved in the synthesis of ergosterol, thereby reducing the content of that macromolecule.In addition to ergosterol binding showed in this study, Li et al. showed that the gallic acid also decrease ergosterol content in filamentous fungus Tricophytum rubrum by inhibit the enzymes sterol 14α-demethylase and squalene epoxidase.The exposition to exogenous ergosterol increase MICs of tannins against Candida species evaluated, except to C. tropicalis.In fact, tannins are known to bind to cell membrane structures of different pathogens.Lopes et al., showed that phlorotannins extracts from Cystoseira nodicaulis M. Roberts significantly reduce the ergosterol amount in Candida cells, and phlorotannins extracts from C. usneoides M. Roberts and Fucus spiralis Linnaeus decrease ergosterol content in dermatophytes.Thus, tannin has affinity by ergosterol, and the binding this polyphenol to fungal membrane is also possibly involved in your antifungal action.The MIC of antifungal nystatin increased in the presence of ergosterol for all species tested, validating the experimental conditions this study.The antibacterial activity of combination between EtOAc fraction, tannins and gallic acid ware evaluated against the selected microorganisms using Checkerboard method.As showed in Table 5, taninns presented synergic interaction with EtOAc fraction against the bacterial S. aureus and K. pneumoniae, as well as against the yeast C. krusei, C. glabrata and C. albicans.With this combination, tannins reduced by almost 2-folds the bactericidal concentration for S. aureus.In addition, the EtOAc fraction renders the tannins capable of killing the pathogens K. pneumoniae and C. glabrata, which were insensitive to the microbicidal effects of this polyphenol alone.On the another hand, interaction between EtOAc fraction from C. regium and gallic acid was few satisfactory, showed indifferent interaction against S. aureus and an antagonic effect to C. albicans.However, gallic acid showed bactericidal effect in association with EtOAc fraction, with MBC of 312.5 μg·mL−1 against S. 
aureus.These results indicated that tannin-enriched EtOAc fraction from C. regium roots show the best combination vising antimicrobial effect.Medicinal use of enriched extracts is based in synergic effect produced by combination of differ constituents, which can increase considerable the biologic activity.Moreover, synergistic combinations could translate in the binomial of efficacy and safety, since they could lead to better dose responses with lower concentrations of each component, thus reducing the likelihood of toxicity.In conclusion, our dates demonstrated that tannin-enriched EtOAc fraction from C. regium roots showed considerable potential therapeutic against infection of medical interest.Sub-inhibitory concentrations of EtOAc fraction and tannin are able to inhibit catalase in several bacteria.In addition, ergosterol exogenous reduces antifungal activity, indicating that C. regium, tannin and gallic acid, as well as the polyenes, it bind to fungal membrane.Thus, we results showed that C. regium roots is a promisor search to source of new bioactive compound against pathogenic bacterial and fungal, and future studies should be conducted in that direction.All authors have none to declared.
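Checkerboard results of the kind reported above are conventionally classified through the fractional inhibitory concentration index (FICI), computed from the MICs of each agent alone and in combination. The short Python sketch below illustrates that calculation under commonly used thresholds (FICI ≤ 0.5 synergy, 0.5 < FICI ≤ 4 indifference, FICI > 4 antagonism); it is an illustrative aid rather than the authors' analysis, and the MIC values in the example are invented placeholders, not data from this study.

```python
# Minimal sketch of checkerboard interpretation via the FIC index (FICI).
# FICI = MIC_A(combination)/MIC_A(alone) + MIC_B(combination)/MIC_B(alone).
# The MIC values used below are hypothetical placeholders.

def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """Return the fractional inhibitory concentration index for two agents."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def classify(index):
    """Apply the commonly used FICI breakpoints."""
    if index <= 0.5:
        return "synergy"
    if index <= 4.0:
        return "indifference"
    return "antagonism"

# Hypothetical example: EtOAc fraction (A) combined with tannin (B).
idx = fici(mic_a_alone=500.0, mic_b_alone=250.0, mic_a_combo=125.0, mic_b_combo=62.5)
print(f"FICI = {idx:.2f} -> {classify(idx)}")  # FICI = 0.50 -> synergy
```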
Cochlospermum regium is a shrub used in folk medicine to treat infectious diseases. However, limited information is available on its antimicrobial spectrum, the antimicrobial mechanism of action, and the synergic effects of the major phytochemical components of this species. Here, we investigated the antimicrobial activity of the ethyl acetate (EtOAc) fraction from its roots together with constituents previously isolated and commercially available, to understand the possible pharmacological interactions between these components. The EtOAc fraction of C. regium and commercially available tannin and gallic acid, alone or in combination, were assessed for antimicrobial activity against eight bacteria and five yeasts. The EtOAc fraction showed a broad spectrum of antimicrobial activity and gallic acid had better antifungal activity, while tannin was active against all microorganisms tested. The antibacterial effect was attributed to inhibition of catalase activity. For yeasts, the results indicated that the EtOAc fraction, gallic acid and tannic acid bind to the ergosterol of the fungal membrane. The checkerboard assay showed that combining the EtOAc fraction with tannin results in a synergic effect against six microorganisms. These findings support the pharmacological use of C. regium as a tannin-enriched extract for the treatment of infectious diseases. Together, the data reveal the antimicrobial potential of C. regium, supporting its use in the folk medicine of southwestern Brazil.
31,615
Highly efficient transformation system for Malassezia furfur and Malassezia pachydermatis using Agrobacterium tumefaciens-mediated transformation
Malassezia is a genus of yeasts characterized by their lipid dependence. It is part of the mycobiome of sebum-rich human skin and has also been isolated from many other niches. Currently, 17 species have been defined based on phenotypic and molecular data. Dermatological diseases such as dandruff/seborrhoeic dermatitis, pityriasis versicolor, and atopic dermatitis in humans have been associated with Malassezia globosa, Malassezia restricta, Malassezia sympodialis and Malassezia furfur, while Malassezia pachydermatis has been associated with otitis externa and dermatitis in dogs. In addition, M. furfur and M. pachydermatis have been associated with bloodstream infections in patients who received parenteral lipid supplementation. The increasing interest in Malassezia as a pathogen has urged the development of molecular tools for efficient transformation and genetic modification. Agrobacterium tumefaciens-mediated transformation (AMT) is based on the capacity of this bacterial plant pathogen to transfer DNA into a host cell. This method combines the use of a binary vector system with a plasmid containing the T-DNA and a plasmid containing the virulence genes that are involved in the transfer of the T-DNA to the host. This methodology was first described in fungi for Saccharomyces cerevisiae. Since then, it has been implemented successfully in yeasts and filamentous fungi including the pathogens Candida spp., Paracoccidioides brasiliensis, Cryptococcus neoformans, Coccidioides immitis, and Trichophyton mentagrophytes. Recently, AMT was used to transform Malassezia and to inactivate genes by homologous recombination. In this study, we have adapted AMT from the protocols reported for A. bisporus and C. neoformans to transform M. furfur and M. pachydermatis. We tested different co-cultivation parameters, including temperature and time. We used the hygromycin B phosphotransferase gene as a selection marker and evaluated the use of GFP as a reporter protein in these yeasts. The improvements we obtained compared to the published transformation system will enable molecular studies to reveal mechanisms underlying the pathogenicity of Malassezia. Frozen stocks of M. furfur CBS 1878 and M. pachydermatis CBS 1879 were reactivated for 4 to 5 days at 33 °C on modified Dixon (mDixon) agar. For liquid shaken cultures, Malassezia was grown in 150 mL Erlenmeyer flasks at 180 rpm and 33 °C using 150 mL mDixon broth. To determine the minimum concentration of hygromycin B that abolishes yeast growth, 100 μL of Malassezia suspension was incubated in triplicate for 7 days at 33 °C on mDixon agar supplemented with 6.25–100 μg mL−1 of the antibiotic. The minimal inhibitory hygromycin B concentrations were 25 and 50 μg mL−1 for M. furfur and M. pachydermatis, respectively. This assay was performed with each new hygromycin batch. Plasmid pBHg contains the hpt gene from Escherichia coli under the control of the A. bisporus glyceraldehyde-3-phosphate dehydrogenase promoter. Vector pBH-GFP-ActPT was constructed to express the green fluorescent protein gene gfp from Aequorea victoria under the control of the regulatory sequences of the actin (act) gene of A.
bisporus. To this end, primers 1 & 2 and 3 & 4 were used to amplify the act promoter and terminator, respectively. The products were cloned in pGEMt and reamplified with primers 5 & 6 and 7 & 8. The fragments were cloned in PacI/AscI-digested pBHg-PA using In-Fusion cloning, resulting in plasmid pBHg-ActPT, which contains PacI and AscI sites between the act promoter and terminator. Gene gfp from Aequorea victoria was amplified using primers 9 & 10, digested with PacI/AscI and inserted in PacI/AscI-digested pBHg-ActPT, resulting in the 10,704 bp pBH-GFP-ActPT plasmid. The transformation procedure was adapted from protocols for transformation of A. bisporus and C. neoformans. Briefly, A. tumefaciens strain AGL-1 was transformed with vectors pBHg and pBH-GFP-ActPT by electroporation, applying 1.5 kV with the capacitance set at 25 μF. Transformants were selected at 28 °C in Luria broth supplemented with 50 μg mL−1 kanamycin and 100 μg mL−1 hygromycin. After 2 days, transformants were transferred to minimal medium supplemented with 50 μg mL−1 kanamycin and grown overnight on a rotary shaker at 28 °C and 250 rpm to OD600 0.6–0.8. Cells were collected by centrifugation for 15 min at 1248g and resuspended in induction medium containing 200 μM acetosyringone. The bacterial suspension was incubated for 3 h at 19 °C with shaking at 52 rpm. Malassezia cells were harvested from liquid shaken cultures by centrifugation for 5 min at 2432g, washed twice in milliQ H2O with Tween 80, and suspended in induction medium at a density of 10⁷ cells mL−1. Equal volumes of yeast and A. tumefaciens cells were mixed and 20 mL of the mix was filtered through a 0.45 μm pore cellulose membrane using a 13 mm diameter syringe filter holder. The membrane filters were placed on co-cultivation medium with 200 μM acetosyringone and incubated at 19 °C, 24 °C, or 28 °C for 3, 5, or 7 days. The membranes were then washed with 0.1% Tween 80 and transferred to mDixon agar containing 50 μg mL−1 hygromycin B, 200 μg mL−1 cefotaxime, 100 μg mL−1 carbenicillin, and 25 μg mL−1 chloramphenicol to select transformants. Individual colonies were transferred to a fresh selection plate. Experiments were performed in duplicate using biological triplicates. GFP fluorescence was monitored using a confocal microscope with a 63× ACS APO oil objective. Fluorescence was detected using the spectral band 500–600 nm. The Fiji image processing package of ImageJ was used for image analysis and processing. Genomic DNA of wild-type strains and transformants of M. furfur and M. pachydermatis was extracted as described. Presence of the hygromycin cassette was analyzed by PCR using primers Hy-Fw & Hy-Rv. Mitotic stability of 30 transformants was assessed by sub-culturing 10 times on mDixon agar without hygromycin, followed by culturing in the presence of the antibiotic. The number of transformants obtained at the different growth conditions was analyzed by two-factor ANOVA in order to assess the effect of temperature and days of incubation. Normality and homoscedasticity of the data were evaluated with R using the Shapiro-Wilk test and Bartlett's test, respectively. The best condition for the transformation was determined using Student's t-test between the means of the repeated experiments. A. tumefaciens containing the vector pBHg or pBH-GFP-ActPT was co-cultivated with M. furfur and M.
pachydermatis at 19 °C, 24 °C, and 28 °C for 3, 5, and 7 days. The optimal co-cultivation time and temperature for transfer of pBHg were 5 and 7 days at 19 °C or 24 °C, and for the GFP construct, 5 days at 19 °C. Transformation efficiencies were 0.75–1.5% and 0.6–7.5% for M. furfur and M. pachydermatis, respectively. Transformants were examined by PCR analysis to confirm integration of the T-DNA. PCR products of the expected sizes of 1049 and 774 bp for the hpt and the gfp gene, respectively, were obtained from 30 out of 30 M. furfur and M. pachydermatis transformants. In no case was a fragment amplified from the wild-type strains. Sequencing of the PCR products confirmed the presence of both genes in the Malassezia transformants. Microscopy showed GFP fluorescence in M. furfur and M. pachydermatis transformants, with the wild-type strains showing some background autofluorescence. A total of 30 M. furfur and 30 M. pachydermatis transformants were subcultured 10 times on mDixon plates in the absence of hygromycin. Of these transformants, 80% were mitotically stable, as shown by replating on hygromycin. M. furfur and M. sympodialis were recently transformed using A. tumefaciens. Here, A. tumefaciens-mediated transformation was optimized, resulting in a highly efficient transformation system for M. furfur and M. pachydermatis. Several changes were introduced into the AMT protocol to improve transformation efficiency. First, a filtration step of the mixed A. tumefaciens and Malassezia suspension was introduced instead of placing this suspension directly onto induction medium or onto a filter, as is usually done. Possibly, filtration facilitates the contact between the bacterial and yeast cells. Second, minimal medium was used as the co-cultivation medium. Notably, Malassezia spp. were able to recover growth after a co-cultivation period of up to 7 days in this medium, despite the fact that these yeasts are lipid-dependent. Third, a concentration of 200 μM acetosyringone (AS) was used instead of the 100 μM reported for basidiomycete yeast transformation. This result is in line with previous work showing that high transformation frequencies are obtained when sufficient AS is present during the Agrobacterium pre-culture and during co-cultivation. Fourth, a mixture of 10⁸ bacterial cells mL−1 and 10⁶ Malassezia cells mL−1 resulted in the highest transformation efficiency. This ratio corresponds to 100 bacterial cells per yeast cell. A correct ratio of A. tumefaciens cells relative to fungal cells is important to prevent the bacterium from overgrowing the fungus and to obtain optimal transformation efficiency. Finally, the optimal temperature and co-cultivation time were 5 and 7 days at 19 °C and 24 °C, respectively, for the two constructs that were tested. These co-cultivation temperatures agree with those of the yeasts C. neoformans and Candida albicans but not with that of P. brasiliensis, which was most efficiently transformed at 28 °C. These differences have been related to the growth rate of the fungi and to differences in their susceptibility to A. tumefaciens. An overall transformation efficiency of 0.75–1.5% and 0.6–7.5% was obtained for M. furfur and M. pachydermatis, respectively. These efficiencies are substantially higher than those reported for M. furfur and M. sympodialis or for other yeasts such as C. neoformans and P. brasiliensis, which showed efficiencies of 0.2% and 0.0003%, respectively. On the other hand, the transformation efficiency of C.
albicans was similar to that obtained in our study. The hygromycin resistance was mitotically stable, as 80% of the transformants remained resistant after 10 rounds of sub-culturing in the absence of the antibiotic. This is similar to other fungi and yeasts. M. furfur transformants showed consistently high fluorescence signals using the act promoter of A. bisporus. Signals were lower in the case of M. pachydermatis but still sufficient for detection. These results, and those obtained with the hpt gene, show that regulatory sequences from A. bisporus are active in Malassezia. In this study, a highly efficient Agrobacterium-mediated transformation system is described for M. furfur and M. pachydermatis. The efficiency would even enable marker-free transformation. GFP was shown to be expressed in Malassezia, enabling localization and expression studies aimed at understanding the lifestyle of these fungi. No conflict of interest declared.
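The statistical treatment described in the methods (normality and homoscedasticity checks followed by a two-factor ANOVA of transformant counts against co-cultivation temperature and time) was carried out in R and JMP in the original study. As an illustration only, the Python sketch below reproduces the same structure of analysis with SciPy and statsmodels; the transformant counts are invented placeholders, not the experimental data.

```python
# Minimal sketch (hypothetical data): assumption checks and a two-factor
# ANOVA of transformant counts versus co-cultivation temperature and time.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Placeholder counts: three replicates per temperature x days condition.
df = pd.DataFrame({
    "temperature": [19, 19, 19, 24, 24, 24, 28, 28, 28] * 3,
    "days": [3] * 9 + [5] * 9 + [7] * 9,
    "transformants": [4, 5, 3, 6, 7, 5, 1, 2, 1,        # 3 days
                      12, 14, 11, 10, 12, 9, 3, 2, 4,   # 5 days
                      13, 11, 15, 9, 11, 10, 2, 3, 2],  # 7 days
})

# Checks analogous to the Shapiro-Wilk and Bartlett's tests used in the study.
print(stats.shapiro(df["transformants"]))
groups = [g["transformants"].values for _, g in df.groupby(["temperature", "days"])]
print(stats.bartlett(*groups))

# Two-factor ANOVA with interaction (temperature x days of co-cultivation).
model = smf.ols("transformants ~ C(temperature) * C(days)", data=df).fit()
print(anova_lm(model, typ=2))
```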
Malassezia spp. are part of the normal human and animal mycobiota but are also associated with a variety of dermatological diseases. The absence of a transformation system has hampered studies to reveal the mechanisms underlying the switch from the non-pathogenic to the pathogenic lifestyle. Here we describe a highly efficient Agrobacterium-mediated genetic transformation system for Malassezia furfur and M. pachydermatis. A binary T-DNA vector with the hygromycin B phosphotransferase (hpt) selection marker and the green fluorescent protein gene (gfp) was introduced into M. furfur and M. pachydermatis by combining the transformation protocols of Agaricus bisporus and Cryptococcus neoformans. Optimal temperature and co-cultivation time for transformation were 5 and 7 days at 19 °C and 24 °C, respectively. Transformation efficiency was 0.75–1.5% for M. furfur and 0.6–7.5% for M. pachydermatis. Integration of the hpt resistance cassette and gfp was verified using PCR and fluorescence microscopy, respectively. The T-DNA was mitotically stable in approximately 80% of the transformants after 10 rounds of sub-culturing in the absence of hygromycin. Improved transformation protocols contribute to studying the biology and pathophysiology of Malassezia.
31,616
Pentastomids of wild snakes in the Australian tropics
Pentastomids are long-lived endoparasites of the respiratory system of vertebrates, and are arguably the oldest metazoan parasites known to science.Prehistoric larvae closely resembling extant primary larvae appeared in the fossil record approximately 100 million years prior to the vertebrates they now parasitize.Pentastomids mature primarily in carnivorous reptiles, but also infect toads, birds, and mammals.Pentastomids generally have an indirect life cycle, utilising at least one intermediate host; suitable intermediate hosts for pentastomids span diverse taxa but for most species the intermediate host is unknown.Larval pentastomids enter their definitive host when it consumes an infected intermediate host.These larvae tunnel out of the digestive system and through their definitive host to the lungs, generating lesions and scars along their migration path.In intermediate or accidental hosts these larvae can establish widespread visceral infections.In humans, pentastomiasis is most commonly caused by Linguatula serrata or Armillifer armillatus; these parasites may be transmitted via food or water contaminated with their eggs, or in the case of A. armillatus, particularly via consumption of undercooked snake flesh.Adult pentastomids feed primarily on blood from host capillary beds in the lungs and can cause severe pathology resulting in death.Adult pentastomids reach large body sizes, physically occluding respiratory passages and inducing suffocation.The two pairs of hooks they use for attaching to lung tissue can cause perforations and haemorrhaging, and degrading moulted cuticles shed into the lung lumen by growing pentastomids may induce putrid pneumonia.The class Pentastomida comprises two orders: Cephalobaenida and Porocephalida, both of which are represented in Australia.Within the order Cephalobaenida is the family Cephalobaenidae, that contains the largest genus of pentastomids: Raillietiella, comprised of ∼39 species and known from all continents where reptiles occur.Raillietiellids are small pentastomids that mature primarily in the lungs of reptiles; most commonly, snakes and small lizards serve as definitive hosts.For the only raillietiellid where the life cycle has been experimentally elucidated, the eggs shed by infected definitive hosts are consumed by coprophagous insect intermediate hosts, develop to the infective stage, and are then eaten by the definitive host, thus completing the life cycle.For raillietiellid definitive hosts that do not eat insects, intermediate hosts may be snakes, lizards, and/or amphibians.Several species of Raillietiella are known in Australia: Raillietiella amphiboluri in the dragon Pogona barbata; Raillietiella frenata in the toad Rhinella marina, the treefrog Litoria caerulea, and the geckos Hemidactylus frenatus and Gehyra australis, and Raillietiella scincoides in the skink Tiliqua scincoides and the gecko Nephrurus laevissimus.An additional species, Raillietiella orientalis, previously known only from Asian snakes, was recently recorded in the lungs of two individual R. 
marina in the Northern Territory, but toads were considered an incidental host in this instance.Mature unidentified raillietiellids have also been reported in two Australian snake species, the elapids Pseudechis australis and Pseudonaja textilis.Within the order Porocephalida is the family Sambonidae that contains the genus Waddycephalus, comprised of 10 species known from Asian, Fijian, and Australian snakes.Seven species of Waddycephalus have been recorded in Australia, occurring across a vast geographic range from Tasmania to Cape York, and infecting several taxa of snakes.Six of these seven species were first described in 1981, highlighting the need for taxonomic work on this genus – yet more than thirty years have passed with no further research published on this topic, except new host and intermediate host records.The study of pentastomids has been neglected because of the risk of zoonoses, difficulties in species identification, and life cycle complexity hampering experimental manipulation.Most pentastomid research concerns evolutionary classification, taxonomy, new species descriptions, new host reports, records of prevalence and intensity, and occasional veterinary and clinical case studies on pathology or death due to pentastomiasis.Considering the often adverse consequences of infection in captive and wild hosts, the zoonotic potential of these parasites, and the possibility that these parasites are being inadvertently introduced to Australia, we urgently need data on the identity of these parasites, the prevalence and intensity of infections, and the host species that are likely to be infected.However, delineating species of parasites using morphological criteria alone can be difficult, and can often lead to misidentifications, particularly in taxa with few or variable distinguishing morphological characters.Here we combine traditional analyses of morphological appearance with molecular analyses to clarify the species of pentastomids infecting wild snakes in the Australian tropics.Road-killed snakes were collected on roads surrounding Middle Point in the tropics of the Northern Territory, Australia, between November 2008 and July 2011.Only snakes that were freshly killed with relatively intact airways were collected.Snakes were stretched straight along a ruler to measure total length from snout to tail tip to the nearest cm.Demansia species were distinguished by ventral scale counts following the procedure outlined in Dowling; specimens with ⩽197 ventral scales were identified as D. vestigiata and those with ⩾198 ventral scales were identified as Demansia papuensis.The mouth, trachea, bronchi, lungs, and air sac were inspected for pentastomids.All pentastomids were removed and placed immediately into cold 70% ethanol.We sequenced the mitochondrial cytochrome C oxidase subunit 1 for 40 pentastomes from snakes and included in analyses three additional sequences that have been published previously.Ethanol-preserved samples were first air-dried and then extracted using the Chelex method as described previously in Kelehear et al.For some samples that failed to yield products from the Chelex extraction method, we extracted total DNA using a modified salting out method.Briefly, a small sample of an ethanol-preserved specimen was first air-dried and then incubated overnight at 56 °C in 100 μL of TEN buffer, 1% SDS and 0.2 mg/mL of proteinase K. 
To this, 75 μL of ammonium acetate was added and the sample chilled at −80 °C for 10 min before spinning at maximum speed in a microcentrifuge.The supernatant was drawn off and added to 2 volumes of ethanol, chilled and centrifuged as before and the supernatant discarded.The pellet was washed twice in 70% ethanol and resuspended in 50 μL of ddH20.Purified DNA was used in PCR reactions, and the remainder stored at −20 °C.PCR amplifications were performed as described previously in Kelehear et al.Forward and reverse sequence reactions were performed by Macrogen Inc. and aligned using Mega 5.10 and by eye.Regions of ambiguous alignment were excluded and gaps were treated as missing.We constructed a neighbour joining dendrogram, using Kimura 2-parameter distances and tested the tree topology with 1000 bootstrap replicates.To visualise diagnostic characteristics, pentastomids were cleared in lactophenol, then sufficiently small specimens were mounted on slides and cover-slipped to hold them flat.Larger specimens were dissected to better visualise their anatomical features: their heads were removed, the cephalic end was flattened and the hooks were dissected out, generally from one side of the body only.All specimens measured were either males with fully formed copulatory spicules or females containing eggs.Body length was measured from the tip of the head to the end of the caudal segment; body width was measured at the widest point.When specimens were intact, counts of annuli were made under the 10X objective lens of a compound microscope.Dimensions of hooks from raillietiellids were measured as distances from the hook tip to the inside corner of the anterior fulcrum, and from the back corner of the anterior fulcrum to the outside corner of the posterior fulcrum.For comparison with other raillietiellids, we extracted raw measurements of AB/BC of posterior hooks from Fig. 3 in Ali et al.Unfortunately, Ali et al. did not provide corresponding body size data.Lengths of raillietiellid copulatory spicules were measured along the centre-line of the spicule from start to base; width of copulatory spicules was measured at the widest point of the base.Where possible, all four hooks and both copulatory spicules were measured, though for larger pentastomids only one spicule could be measured.Hook dimensions for Waddycephalus spp. were measured as per Riley and Self.We extracted raw hook measurements of AD/BC for Waddycephalus spp. from Figs. 
6 and 11 in Riley and Self to enable direct comparison of our specimens; again, corresponding body size data were unavailable for most specimens.Our terminology follows that of Bush et al.We report prevalence as the number of hosts infected with pentastomids divided by the total number of hosts inspected.We report intensity as the number of individual pentastomids within infected hosts.To examine the influence of host species on pentastome prevalence and intensity, we included only those host taxa where sample sizes were >7.We applied square root transformations to our count data prior to analyses but we report raw data in the text, tables, and figures.Analyses of pentastome morphology were performed on mean values for all measurements taken on paired structures.Analyses were based on measurements of only one paired structure in large and poorly cleared specimens where it was not possible to measure all structures.For each aspect of pentastome body size we performed one-way ANOVAs with pentastome sex and host species as independent variables.For each aspect of pentastome hook morphology we performed one-way ANOVAs with pentastome body size, sex, and host species as independent variables.We included only those host species for which we had obtained measurements from >5 pentastomids.We used Tukey’s HSD post hoc tests to determine where the differences lay.All analyses were performed in JMP Pro 9.0 with alpha set at <0.05.We dissected 81 snakes of 10 different species, and recovered pentastomids from 48 of these snakes.These infections comprised a total of 359 individual pentastomids from the genera Raillietiella and Waddycephalus.Most snakes contained pentastomids of only one genus but six snakes had infections of both genera.Pentastomids were recovered from the mouth, trachea, lungs, and air sacs of dissected snakes.Large lesions surrounded the mouthparts of Waddycephalus spp. where they were embedded in lung tissue; in some cases these lesions were present even in the absence of current Waddycephalus infections, likely indicating prior infection with this pentastomid.Raillietiella orientalis were recovered from six snake species: the colubrids D. punctulatus and T. mairii, the elapids A. praelongus, D. papuensis, and D. vestigiata, and the python L. fuscus.Overall, 31 of 81 snakes were infected with R. orientalis; infection intensity ranged from 1 to 77.The likelihood of being infected with R. orientalis varied among host species but did not vary with host body length.Demansia vestigiata were more likely to be infected than were any other snake species.Demansia vestigiata was the only host species for which we had sufficient data to analyse whether the intensity of R. orientalis infections vary with host body size.Larger snakes tended to have more R. orientalis, but this relationship fell short of statistical significance.Waddycephalus spp. were recovered from five snake species: the colubrids D. punctulatus, S. cucullatus, and T. mairii, and the elapids A. praelongus and D. vestigiata.Overall, 23 of 81 snakes were infected with Waddycephalus spp.; infection intensity ranged from 1 to 10.The likelihood of being infected with Waddycephalus sp. varied with host species but not with host body length.D. punctulatus and S. cucullatus were significantly more likely to be infected with Waddycephalus sp. 
than were any other snake species, but the two host species did not differ from one another in infection likelihood.We analyzed sequences from 43 pentastomids: 13 raillietiellids, taken from seven snakes, and one toad; and 30 individual Waddycephalus spp. taken from 17 individual snakes.Forty sequences were new for this study, and three were already available.After excluding regions of ambiguous alignment and gaps, the complete data set consisted of 486 bp of COX1 sequence.Our data suggest that one species of raillietiellid and five species of Waddycephalus are present in the snakes examined.Of the 30 Waddycephalus that we sequenced, three species were recorded from single snake specimens; sp. 3 primarily infected D. punctulatus and, to a lesser extent, S. cucullatus; sp. 5 primarily infected S. cucullatus but was also recovered from one D. vestigiata.Our molecular data suggest that each Waddycephalus infection was generally comprised of a single Waddycephalus species: for the eight snakes from which we sequenced multiple pentastomids, only one snake possessed two species of Waddycephalus.To assess within and between species divergence, we calculated a matrix of mean K2P genetic distances and a neighbor joining dendrogram.Within species for which there was more than one sample, K2P distances were low.Between Waddycephalus species, the K2P distances ranged from 0.053 substitutions per site between sp. 1 and sp. 2, to 0.159 between sp. 1 and sp. 4; between the two species of Raillietiella, one from snakes and one from toads, the genetic distance was 0.212.The mean K2P distance between Waddycephalus and Raillietiella was 0.422 substitutions per site.We identified raillietiellids as R. orientalis based on the distinctive morphology of their large, flared, and highly ornamented copulatory spicules.We measured 31 adult R. orientalis from a total of four snake species.Specimens of R. orientalis were cylindrical to fusiform in shape and were generally long and thin.Body length ranged from 4.0 to 65.2 mm, and width from 0.19 to 2.37 mm.Pentastome length differed between males and females: females were longer than males.Raillietiellids infecting D. vestigiata were significantly longer than those infecting T. mairii and A. praelongus.Pentastome body width did not vary with pentastome sex.All anterior hooks of R. 
orientalis were sharp, as is characteristic of pentastomids in the genus Raillietiella.Hook size data are given in Table 3.We first performed one-way ANOVAs with pentastomid body length as a continuous independent variable and hook morphology as the dependent variable.Hook size was positively correlated with body length in all cases.We then conducted more thorough analyses correcting for pentastome length, sex, and host species.AB of anterior hooks did not vary significantly among host species or change with pentastome body length, but male pentastomes had shorter anterior hook AB lengths than did females.BC of anterior hooks did not vary significantly with host species, pentastome body length, or pentastome sex.AB of posterior hooks did not vary significantly among host species, or change with pentastome body length, but male pentastomes had shorter posterior hook AB lengths than did females.BC of posterior hooks did not vary with host species, pentastome length or pentastome sex.Male copulatory spicules were strongly ornamented and flared at the base and were 900–1600 μm long, and 371–1500 μm wide.The mean length and width of copulatory spicules did not differ with host species or pentastome length.In keeping with methods employed in previous studies of pentastomid morphology, we first plotted AB against BC of posterior hooks to visualize distinct clusters indicative of separate species.Measurements grouped into two distinct clusters, implying two separate species.However, when we included body size as a covariate, these discrete clusters disappeared, implying a single species.We compared our raw measurements of AB and BC of posterior hooks with those taken on 52 R. orientalis measured by Ali et al.Overall, AB of posterior hooks differed with pentastome sex and with study.Measurements of AB were significantly larger in female vs male pentastomes and in our study vs in Ali et al.BC of posterior hooks also varied with pentastome sex but did not differ between the two studies.Data on relative hook size has recently been shown to be important in eliminating false species clusters; unfortunately, raw body size data were not presented in Ali et al., precluding any comparison of relative hook sizes between studies.Pentastomids of the genus Waddycephalus sp. are large, bright red, and cylindrical.We measured 38 Waddycephalus spp. 
from a total of five snake species.Waddycephalus adult body length ranged from 5.3 to 42.0 mm, and width from 0.3 to 5.5 mm.Body length and width were positively correlated; female pentastomes were longer than males but there was no significant difference in relative body width between sexes.As with raillietiellids, hook morphology is important for identifying pentastomids of the genus Waddycephalus.AD of anterior hooks and posterior did not vary with pentastome body length, but male pentastomes had shorter anterior and posterior hook AD lengths than did females.BC of anterior and posterior hooks did not vary with pentastome body length, or pentastome sex.In keeping with methods employed in previous studies of pentastomid morphology, we plotted raw AD against BC of posterior hooks to visualize distinct clusters, which are generally indicative of separate species.Four potential clusters were initially visible in our raw data, though when we included body size as a covariate, two of these clusters combined.However, when we applied our molecular species groupings to our morphological data there was no correspondence between the two, indicating that hook measurements alone are unreliable for distinguishing between species of the genus Waddycephalus.We compared our raw measurements of AD and BC of posterior hooks with those taken on 28 Waddycephalus spp. measured by Riley and Self.Corresponding sex and body size data were not available in Riley and Self so our analyses did not correct for either of these variables.Measurements of AD of posterior hooks given in Riley and Self were significantly larger overall than in the current study.BC of posterior hooks did not differ between the two studies.The majority of hook dimensions of the Waddycephalus spp. measured in the present study were smaller than in previously described species of Waddycephalus.Overall, our individuals did not cluster well with previous species groupings and instead, were scattered across clusters, with two conspicuous outliers.There was substantial overlap in hook morphology between different molecular species groupings.One individual from S. cucullatus conformed closely to the hook morphology of Waddycephalus superbus, however this individual falls into Waddycephalus sp. 5, the remainder of which do not cluster morphologically around W. superbus.Indeed, one individual from Waddycephalus sp. 5 has almost identical posterior hook morphology to the single individual that comprised Waddycephalus sp. 2, reiterating that hook morphology is not a reliable characteristic for distinguishing species of the genus Waddycephalus.Voucher specimens are lodged in the Australian National Wildlife Collection, CSIRO Ecosystem Sciences, Canberra.Accession numbers are as follows: R. orientalis: P140–144; Waddycephalus sp. 2: P153; Waddycephalus sp. 3: P155–157; Waddycephalus sp. 4: P154, Waddycephalus sp. 5: P148–151.Overall, 59% of the tropical Australian snakes that we dissected were infected with at least one species of pentastomid.The pentastomids were of the genera Raillietiella and Waddycephalus and infected a range of host taxa, encompassing seven snake species from three snake families.Prevalence of infection was highest in the terrestrial elapid D. vestigiata and in the arboreal colubrid D. 
punctulatus.All of the seven snake species that were infected represent new host records for pentastomids of the genera Waddycephalus and/or Raillietiella.We found no pentastomids infecting the colubrids Boiga irregularis and Enhydris polylepis, nor the python Antaresia childreni.Given small sample sizes for these three taxa, an apparent absence of pentastomids may not reflect a lack of infection for the taxa overall.Two additional genera of pentastomids known to infect Australian snakes were not encountered during our surveys of snakes in the tropics of the Northern Territory: Parasambonia and Armillifer.The genus Parasambonia is unique to Australia and includes two species: Parasambonia bridgesi and Parasambonia minor.Parasambonia bridgesi has been recorded in the elapids Demansia psammophis, Pseudechis porphyriacus, and Tropidechis carinatus in Queensland and New South Wales, and P. minor has been recorded in the elapid Austrelaps superbus from unknown localities.Considering that we did not detect any pentastomids of the genus Parasambonia, and that the only known hosts occur on the east coast of Australia, it is plausible that the genus Parasambonia is not present in the Northern Territory.The other genus that was not represented in our surveys was Armillifer, a genus that is represented by two species in Australia: Armillifer australis and Armillifer arborealis.This genus is responsible for many human cases of pentastomiasis.Armillifer australis is known only from large Australian pythons from Queensland, and A. arborealis has been recorded in B. irregularis from Darwin.Due to a lack of available specimens, our survey omitted the only two known hosts for Armillifer that occur at our study site, and included only three individual B. irregularis.Therefore, both species of Armillifer may well be present but were not represented in our survey.All snakes sampled were road-killed animals.If pentastomes influence snake respiration and consequently locomotion, infected snakes may be more susceptible to being run over by motor vehicles, and thus, our sampling method might have overestimated pentastome prevalence.Even so, the ubiquity of pentastomid infections in snakes of the Australian tropics sampled in this study is perplexing, considering the often-adverse consequences of infection and the recognized zoonotic potential of these parasites.The large body sizes and high infection intensities attained in the current study reveal that the biomass of pentastomid infections can be substantial, significantly reducing the volume of the lung lumen and occluding respiratory passages, thus reducing host aerobic capacity.Waddycephalus spp. in particular were very large relative to the lungs they inhabited, and pentastomid body size did not vary with host species, therefore slender hosts may be particularly affected.Future studies could usefully evaluate the effects of these pentastomes on host performance.Pentastomids were recovered from the mouth, trachea, lungs, and air sacs of dissected snakes; however for Waddycephalus infections, large lesions were only observed in lung tissue.Because the specimens that we examined were road-killed, it is likely that pentastomids discovered in the mouth and trachea moved there post-mortem.Pentastomids often exit the lungs of sick or dying hosts and in one instance we observed a large R. orientalis crawling out of the mouth of its dead D. 
vestigiata host.Other than the lesions noted above, we did not assess the pathology associated with infections in these snakes.Several case reports describe snake death and disease in association with pentastomiasis.Necroscopy of a deceased Nigerian royal python revealed adult pentastomids in the lungs, and suspected larval pentastomids calcified within the liver.The death of four Dominican boa constrictors was attributed to infection with Porocephalus dominicana.Pneumonia in Gaboon vipers has been linked to infection with A. armillatus.Severe pulmonary tissue damage was reported in a Brazilian colubrid snake with a heavy pentastomid infection.Raillietiella orientalis occurred in the elapids A. praelongus, D. papuensis, D. vestigiata, the colubrids T. mairii and D. punctulatus, and the python L. fuscus.It was most prevalent in D. vestigiata, with 100% of snakes infected.Infection intensities ranged from 1 to 77 per infected host and severe occlusion of the lung was apparent in those snakes on the higher end of the intensity spectrum.The scarcity of ecological studies on R. orientalis makes it difficult to discern whether the levels of infection we encountered in Australian snakes are similar to those in other regions.Norval et al. reported that 4 of 6 mountain wolf snakes were infected with R. orientalis in Taiwan, with intensity ranging from 2 to 59 pentastomids per infected host.Body size of R. orientalis varied with pentastome sex and with host species.Females were longer than males and pentastomids from T. mairii were smaller than those from D. vestigiata.This may reflect differences in host size: T. mairii is smaller than D. vestigiata and consequently, lung size should differ accordingly.Host-dependent parasite morphology has been reported in R. frenata, with pentastomes reaching larger body sizes in the gecko H. frenatus than in the toad R. marina.In this instance morphology was not dependent on lung size.Our measurements of key morphological characteristics distinguishing species of the genus Raillietiella indicated that morphology of R. orientalis differs between Australia and Asia.Specifically, AB of posterior hooks and copulatory spicules were larger in the present study than in Ali et al., highlighting the need to look beyond morphological measurements alone to identify species.Further, the relationship between pentastome body size and hook size varied depending on how many covariates were included in the analysis.The most basic analysis showed a positive correlation between body size and AB and BC of anterior and posterior hooks, reiterating the importance of considering pentastomid body size in morphological comparisons between species of the genus Raillietiella.Raillietiella orientalis has previously been recorded from the snake families Colubridae, Elapidae, Viperidae, and Pythonidae in Asia.Recently, we found three specimens in two cane toads at our study site in the tropics of the Northern Territory, Australia.Only two of 2779 cane toads were infected with R. orientalis in those studies,implying that R. marina is an incidental host and may occasionally acquire infections when feeding at infected D. 
vestigiata carcasses.Birds of prey commonly feed on road-killed carcasses; in particular, black kites and whistling kites are common at our study site, where they feed on carcasses of Demansia spp., and presumably other snakes also.Recently, pentastomes were recorded in the air sacs of two terrestrial birds, both of which are scavengers: a black vulture infected with Hispania vulturis in Spain, and an oriental white-backed vulture infected with Raillietiella trachea in Pakistan.The authors hypothesized that these pentastomids possessed a direct life cycle; nonetheless, the scavenging habits of these hosts, and the fact that the oriental white-backed vulture was infected with a raillietiellid, suggest that these parasites may have been ingested with prey.The potential for scavengers to become infected with pentastomids via this pathway warrants further research.The life cycle of R. orientalis is unknown but has been hypothesized by Ali et al. to include one or more lizard and/or snake intermediate hosts.Presumably this parasite entered Australia from Asia when an infected intermediate host was accidentally introduced.When considering potential intermediate hosts for trophically transmitted parasites, such as pentastomids, it is imperative to consider the diet of the hosts, especially those with specialized diets.Of the ∼20 known ophidian definitive hosts of R. orientalis, few have specialized diets.One that does is the highly aquatic Asiatic water snake that primarily eats fish and to a lesser extent, frogs.None of 232 Asiatic water snakes examined by Brooks et al. contained a snake in their stomach contents.Therefore, snakes are an unlikely intermediate host for R. orientalis.Snakes are not commonly consumed by any of the definitive host species from which we recovered R. orientalis.Fish are not consumed by Demansia spp. which exhibited 100% prevalence of infection, hence it is unlikely that they are the primary intermediate host for R. orientalis.Frogs are the most plausible intermediate host for this pentastomid.All snake species that were infected with R. orientalis consume frogs and T. mairii preys almost exclusively on frogs.We detected only one R. orientalis in one individual of the arboreal D. punctulatus which preys primarily on tree frogs in the Northern Territory.Thus, a ground dwelling frog is the most plausible intermediate host for R. orientalis in the Australian tropics.We found one R. orientalis in one water python.These pythons feed primarily on native rats, Rattus colletti, at the only site where this species has been studied intensively.However, dissections of water pythons from other regions have revealed a broader diet, with significant numbers of avian and reptilian prey.Declines in rat abundance caused by flooding events may induce the pythons to feed upon a wider range of species.Future studies could usefully examine ground dwelling frogs for presence of infective R. orientalis larvae.We detected adult Waddycephalus spp. in the elapid D. vestigiata, and the colubrids S. cucullatus, T. mairii, and D. punctulatus; the former three species are new host records for pentastomids of the genus Waddycephalus.We recovered nymphal Waddycephalus sp. from the elapids A. praelongus and D. 
vestigiata, both of which are new host records for Waddycephalus nymphs.Our molecular results suggest that there are five species of Waddycephalus present in the snakes sampled in the Northern Territory.We were unable to obtain complete morphological data for all Waddycephalus specimens that we sequenced, firstly because hosts were road-killed and specimens were often damaged, and secondly because the chemical used to clear pentastomids for morphological investigation rendered the samples unsuitable for molecular analyses.Because three of our five species of Waddycephalus were represented by a single pentastomid, these data are too scant to warrant substantial discussion.Instead, we focus on Waddycephalus sp. 3 and sp. 5, for which sample sizes were larger.Adult Waddycephalus spp. were most prevalent in D. punctulatus, in which infection intensities were as high as 10 pentastomids per infected host.Dendrelaphis punctulatus is the type host for W. punctulatus described from the east coast of Queensland, Australia and to date no other pentastomids from this genus have been recorded from this host.We recovered two species from D. punctulatus: Waddycephalus sp. 3 and Waddycephalus sp. 4.Waddycephalus sp. 3 occurred primarily in D. punctulatus, and to a lesser extent, in S. cucullatus; the sole member of Waddycephalus sp. 4 infected D. punctulatus.The maximum body size of female Waddycephalus sp. 3 exceeded that of W. punctulatus and male Waddycephalus sp. 3 were all smaller than those from W. punctulatus.Total annuli counts from the only Waddycephalus sp. 3 female we could assess was within the range of annuli number for W. punctulatus.However, given the discrepancies in body size and the marked variability in hook size for Waddycephalus sp. 3 we do not consider it to be W. punctulatus.We were unable to obtain data on body size or annuli for the sole specimen of Waddycephalus sp. 4.Stegonotus cucullatus were infected with Waddycephalus sp. 3 and sp. 5 at a prevalence of 60%, representing the first records of pentastomids infecting this host species.Waddycephalus sp. 5 occurred primarily in S. cucullatus, although one individual was recovered from D. vestigiata.Waddycephalus sp. 5 is a relatively small species with a maximum size of 21 mm.Hook dimensions for this species were again highly variable.Several Waddycephalus sp. 5 conformed closely to the hook morphology of Waddycephalus scutata, but others varied substantially.The primary distinguishing feature of W. scutata is narrow body width which distinguishes it from other small species of Waddycephalus whose body width exceeds 5.0 mm.Waddycephalus sp. 5 were relatively thin individuals with a maximum body width of 4.0 mm so this species may be W. scutata.Adult W. scutata are known from the tiger snake on islands off the coast of South Australia and from the yellow-faced whip snake at a Park in the Northern Territory.The latter authors also reported nymphs from the northern quoll.The life cycles of species in the genus Waddycephalus are yet to be elucidated; frogs, lizards or mammals are likely intermediate hosts.The terrestrial and arboreal snakes infected with Waddycephalus spp. in the current study all prey primarily on frogs and lizards, implying an ectothermic intermediate host for Waddycephalus.Our dissections revealed three Waddycephalus nymphs, two in the lungs of D. vestigiata, and one attached to the exterior surface of the lungs in A. praelongus.One Waddycephalus nymph has also been recovered from one H. 
frenatus near our study area.Nymphs of Waddycephalus spp. have been recovered from diverse taxa in other Australian localities, including dasyurid marsupials, a sooty owl, a small-eyed snake, a Bynoe’s gecko, and a three-toed earless skink.Waddycephalus nymphs were also recovered from the remote froglet in Papua New Guinea.The nymphs were in diverse localities within the hosts and were often encapsulated, implying that either they were infective and waiting to be consumed by a definitive host to complete maturation, or that they had entered accidental hosts and would progress no further in their life cycle.Future research is needed to elucidate the life cycles of Waddycephalus pentastomids.Clearly, substantial taxonomic work remains to be done on pentastomids in Australia – particularly with respect to the genus Waddycephalus.Future work should employ a combination of molecular and morphological techniques and aim to unearth morphological characteristics that may be useful for species identification.
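The species delimitation above rests on Kimura two-parameter (K2P) distances computed from the aligned COX1 sequences (calculated with MEGA 5.10 in this study). For readers unfamiliar with the model, the sketch below shows how a pairwise K2P distance is obtained from transition and transversion proportions; it is a minimal Python illustration with invented sequence fragments, not the pipeline used here.

```python
# Minimal sketch: Kimura two-parameter (K2P) distance between two aligned
# sequences, d = -0.5*ln[(1 - 2P - Q) * sqrt(1 - 2Q)], where P and Q are the
# proportions of transition and transversion differences, respectively.
import math

PURINES = {"A", "G"}  # A<->G and C<->T changes count as transitions

def k2p_distance(seq1, seq2):
    """K2P distance for two equal-length aligned sequences (gaps skipped)."""
    assert len(seq1) == len(seq2)
    transitions = transversions = compared = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a not in "ACGT" or b not in "ACGT":
            continue  # ignore gaps and ambiguous sites, as in the trimmed alignment
        compared += 1
        if a == b:
            continue
        if (a in PURINES) == (b in PURINES):
            transitions += 1
        else:
            transversions += 1
    p = transitions / compared
    q = transversions / compared
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))

# Invented placeholder fragments, not COX1 data from this study.
s1 = "ATGGCATTTGTAGGACTAACCCTAGCAATCGTA"
s2 = "ATGGCGTTCGTAGGTCTAACTTTAGCGATCGTA"
print(round(k2p_distance(s1, s2), 3))
```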
Pentastomids are endoparasites of the respiratory system of vertebrates, maturing primarily in carnivorous reptiles. Adult and larval pentastomids can cause severe pathology resulting in the death of their intermediate and definitive hosts. The study of pentastomids is a neglected field, impaired by risk of zoonoses, difficulties in species identification, and life cycle complexities. We surveyed wild snakes in the tropics of Australia to clarify which host species possess these parasites, and then sought to identify these pentastomids using a combination of morphological and molecular techniques. We detected pentastomid infections in 59% of the 81 snakes surveyed. The ubiquity of pentastomid infections in snakes of the Australian tropics sampled in this study is alarmingly high considering the often-adverse consequences of infection and the recognized zoonotic potential of these parasites. The pentastomids were of the genera Raillietiella and Waddycephalus and infected a range of host taxa, encompassing seven snake species from three snake families. All seven snake species represent new host records for pentastomids of the genera Raillietiella and/or Waddycephalus. The arboreal colubrid Dendrelaphis punctulatus and the terrestrial elapid Demansia vestigiata had particularly high infection prevalences (79% and 100% infected, respectively). Raillietiella orientalis infected 38% of the snakes surveyed, especially frog-eating species, implying a frog intermediate host for this parasite. Raillietiella orientalis was previously known only from Asian snakes and has invaded Australia via an unknown pathway. Our molecular data indicated that five species of Waddycephalus infect 28% of snakes in the surveyed area. Our morphological data indicate that features of pentastomid anatomy previously utilised to identify species of the genus Waddycephalus are unreliable for distinguishing species, highlighting the need for additional taxonomic work on this genus.
31,617
Numerical and experimental benchmarking of the dynamic response of SiC and TZM specimens in the MultiMat experiment
In the upcoming high-luminosity upgrade of the LHC (HL-LHC), the energy stored in the circulating beams will increase from 360 to 680 MJ, while for the proposed Future Circular Collider the beam energy is projected to reach up to 8500 MJ. This increase in beam energy brings the need for high-performing components, especially those at risk of accidental beam impacts, such as collimators and other beam intercepting devices. In such cases, for the HL-LHC, accidental impacts due to asynchronous beam dumps or injection errors may result in thermal loads exceeding energy densities of 10 kJ/cm³ on the collimator jaws, producing intense pressure waves that propagate through the material and risk plasticity, fracture, and spallation of material, as well as melting and vaporization of the impacted region. With this in mind, the materials utilised for such components and exposed to such extreme conditions require extensive experimental testing and validation in order to derive the constitutive models required to numerically simulate high-energy particle beam impacts. Such tests are conducted in CERN's HiRadMat facility. Full-scale collimator jaws have been tested in the facility, while the HRMT-14 experiment, performed in 2012, tested specimens with simple geometries in order to benchmark results obtained through numerical analyses performed with codes such as ANSYS and Autodyn. In October 2017, the HRMT-36 “MultiMat” experiment was conducted at the HiRadMat facility. This experiment built on the experience gathered in previous experiments and aimed to offer a reusable platform to test recently developed high-performance materials for beam intercepting devices impacted by particle beams with energy densities close to HL-LHC levels. In this work, material constitutive models implemented in implicit thermo-mechanical numerical simulations performed in ANSYS Workbench are benchmarked against experimental results from the MultiMat experiment. The work focuses on two materials tested in the experiment, namely silicon carbide (SiC) and the titanium–zirconium–molybdenum alloy (TZM), and considers longitudinal and flexural dynamic phenomena provoked by the particle beam impacts in the experiment. For both materials, one or more specimens fractured during the experiment. The constitutive models initially adopted in the numerical analyses consisted of temperature-dependent thermo-mechanical properties independent of strain rate, with the main goal of the study being to verify and extend the adopted constitutive models through benchmarking against experimental data. The material models adopted from the literature were deemed suitable to model the various phenomena and conditions tested in the experiment, with additional material testing at high strain rates projected to further bolster the models and allow the simulation of complex, high-energy phenomena. The study further investigates the effects of boundary conditions on the flexural response, material damping in the experimental signal and its application in numerical analyses, as well as failure of the specimens and its simulation in the implicit numerical code – paving the way for more complex analyses on non-linear, anisotropic materials tested in the MultiMat experiment. As can be seen in Fig.
As can be seen in Fig. 1, the test bench hosted 16 target stations, each having a total length of 1 m. The stations were mounted on a rotatable barrel, with each station hosting a number of material specimens in series, each in the form of a slender bar with lengths of 120 or 247 mm, and with cross-sections varying from 8 × 8 to 12 × 11.5 mm². The setup utilised an actuation system which allowed vertical and horizontal adjustment of specimens via stepper motors, as well as rotation achieved via a Geneva mechanism, allowing each target station to be brought into shooting position to be impacted by the incoming beam. In total, 79 specimens were set up and 17 different materials were tested. The specimens were extensively equipped with strain gauges and thermal probes, along with remote instrumentation including a Laser Doppler Vibrometer and a radiation-hard high-definition camera. In the experimental setup, the specimens were placed on graphitic elastic supports at the two extremities and kept in contact with an upper support via a preloaded compression applied on the spring. The steel springs had an original length of 14 mm and were compressed to 9 mm, for a preload of 13.57 N. Modal analyses performed prior to the experiment indicated that the system was close to being simply-supported at each end; with this in mind, initial structural analyses performed in this study considered this scenario, modelled by restricting the bottom edges at the two extremities of each sample from moving in the vertical direction. As will be discussed later, experimental results indicate that in some cases the beam impact resulted in specimens losing contact with the support, causing a change in boundary conditions. The specimens were extensively equipped with longitudinal strain gauges on the bottom face, with up to a maximum of 5 strain gauges on the most loaded specimens. A schematic showing the face denomination with respect to the adopted coordinate system, along with the typical position of longitudinal and transverse strain gauges and temperature probes, can also be seen in Fig. 1.
Several materials were tested in the MultiMat experiment. These can be grouped into six categories, mainly pure carbon materials, metal carbide–graphite composites, titanium alloys, copper–diamond, silicon carbide, and heavy alloys. Some of the carbon-based specimens were coated with copper, molybdenum or TiN thin films. A wide range of scenarios was tested, with pulse intensity ranging from 1 to 288 bunches and nominal beam RMS sizes of 0.25 × 0.25, 0.5 × 0.5 and 2 × 2 mm². Three different types of impact were tested, namely axially centred impacts, offset impacts, and grazing impacts. This study focuses on the first two types of impacts, with the former resulting in a longitudinal response, and the latter provoking an additional flexural response. Such impact scenarios result in the generation of different signals of interest, having distinct timescales which can be easily detected and decoupled: the signal rise time, related to the pulse duration, is in the order of 10 μs, following which a longitudinal wave with a distinct trapezoidal shape is generated, with a period in the order of 100 μs. When the impact has a transverse offset across the specimen's cross-section, flexural oscillations are also excited, having a period in the order of 1 ms. A large amount of data was gathered from the experiment, requiring post-processing and comparison with numerical simulation results. This paper focuses on the results for two materials, Silicon Carbide and Titanium Zirconium Molybdenum, which, as detailed, both exhibited failure during the experiment. For both materials considered, the setup consisted of a station housing four specimens in series, each having dimensions of 10 × 10 × 247 mm³, with no coatings. The SiC grade tested was manufactured by chemical vapour deposition by Microcertec SAS. A linear elastic model was used for SiC, utilising temperature-dependent material properties supplied by the manufacturer. The temperature-dependent material properties adopted for the two materials can be seen in Fig. 2. The bilinear kinematic hardening parameters and tensile strength values adopted for TZM are shown in Table 1, along with the SiC flexural strength values provided by the manufacturer. In the experiment, each station was subject to a number of particle beam impacts with varying levels of deposited energy and beam position, including central, offset and grazing shots. In this study, two shots were considered for each material: one with a vertical offset and having caused no visible damage, and one which resulted in the failure of one or more specimens in the target station. For the first scenario, a vertical beam offset was considered for both materials in order to allow the study of both longitudinal and flexural effects. For the SiC line, the shot considered consisted of a 300 ns pulse with 12 bunches, a total intensity of 1.4 × 10¹² protons, 0.5 mm beam sigma, and a vertical offset of 0.5 mm. For TZM, a 600 ns shot with 24 bunches, a total intensity of 2.64 × 10¹² protons, 2 mm sigma, and a vertical offset of −2 mm was initially considered. In addition to these beam pulses which caused no visible damage to the samples, the shots resulting in the failure of the specimens were also considered in the last section of the study, detailing the failure scenario. All the shots considered in the study are summarised in Table 2. Note that shot 1 and shot 2 refer to the SiC and TZM shots respectively defined in the previous paragraph, whilst shots 1F and 2F refer to the shots causing failure in each respective material. Additionally, it is important to note that shot 2F is the last of a series of three shots with increasing bunch intensity, and chronologically took place before shot 2. In fact, the effects of the failure caused by shot 2F can be observed in results probed from the failed specimen in shot 2. Given that each shot impinges on all four specimens in the target station, following the failure of the second specimen the specimens were still affected normally by subsequent shots. It is also interesting to note that shot 1F resulted in a relatively large amount of energy and peak energy deposited in the SiC specimens when compared to shot 1. For TZM, shot 2F can be seen to have a lower total energy but a significantly higher peak compared to shot 2. Particle beams interacting with matter cause part of their energy to be transferred to the material in the form of a heat deposition, with a consequent sudden increase in temperature of the impacted material. Depending on the power density deposited, different effects may result from such interaction. Beam losses in particle accelerators result in a continuous energy deposition, which can last from a few seconds up to hours. In accidental beam impacts, the energy deposition time is in the order of nano- to microseconds, resulting in a dynamic response of the impacted structure. The topic of the dynamic response of slender rods subject to an almost instantaneous temperature increase is thus of particular interest for high energy particle physics applications, given the slender nature of many beam intercepting devices. As detailed, the MultiMat experiment was designed to analyse such a response, with the geometry of the samples chosen to decouple as much as possible the different phenomena occurring during the impact: as already stated, this study focuses on dynamic longitudinal and flexural phenomena. Analytical solutions for the longitudinal response of slender rods under such conditions have been proposed by Bargmann and Sievers, while Bertarelli et al. have proposed solutions for both the longitudinal and flexural responses. The analytical solutions simplify the setup by assuming a constant energy deposition over the longitudinal length of the rod and a Gaussian distribution across the cross-section, as well as neglecting material inertia in the radial direction. The effect of a non-constant energy deposition over the longitudinal coordinate of the rod was discussed by Carra, who described how this can result in longitudinal waves originating from the two extremities of the specimen having varying amplitudes based on the distribution of energy along the length. The effect of radial inertia on the longitudinal response was further studied by the author, based on considerations made by Graff, indicating that phenomena related to radial inertia, such as wave dispersion and Poisson ratio effects, also have a non-negligible effect on the longitudinal signal. The typical dynamic longitudinal stress plot at a quarter and half the length of the rod scaled to the reference stress, as well as a typical bending stress plot, as computed in Bertarelli's study, can be seen in Fig. 3. Note the different time scales for each phenomenon. Table 3 shows the analytically calculated first longitudinal and flexural harmonics for the SiC and TZM specimens used in the experimental setup, evaluated with material properties at room temperature. As can be seen, when the specimens are in a free condition, the flexural frequency is expected to be more than twice that of a specimen in a simply-supported configuration.
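As a rough cross-check of the tabulated harmonics, the standard thin-rod and Euler-Bernoulli beam relations can be evaluated directly. The sketch below does this in Python; the Young's modulus and density values are placeholder figures of the kind found in datasheets, not the validated models of this study.

```python
import math

def bar_frequencies(E, rho, L, b, h):
    """First longitudinal and flexural harmonics of a slender rectangular bar.
    Longitudinal: f = c / (2L) with c = sqrt(E/rho).
    Flexural (Euler-Bernoulli): simply-supported and free-free first modes."""
    c = math.sqrt(E / rho)                      # thin-rod wave speed
    A, I = b * h, b * h ** 3 / 12.0             # cross-section area and second moment of area
    f_long = c / (2.0 * L)
    f_flex_ss = (math.pi / (2.0 * L ** 2)) * math.sqrt(E * I / (rho * A))
    f_flex_free = (4.730 ** 2 / (2.0 * math.pi * L ** 2)) * math.sqrt(E * I / (rho * A))
    return f_long, f_flex_ss, f_flex_free

# Placeholder room-temperature properties (assumed): CVD SiC ~460 GPa, 3210 kg/m3; TZM ~320 GPa, 10220 kg/m3
for name, E, rho in [("SiC", 460e9, 3210.0), ("TZM", 320e9, 10220.0)]:
    fl, fss, ff = bar_frequencies(E, rho, L=0.247, b=0.010, h=0.010)
    print(f"{name}: longitudinal ~{fl / 1e3:.1f} kHz, "
          f"flexural SS ~{fss:.0f} Hz, free-free ~{ff:.0f} Hz (ratio ~{ff / fss:.2f})")
```

With these placeholder values the SiC figures come out close to the 24 kHz longitudinal and roughly 880 Hz simply-supported flexural frequencies quoted later in the text, and the free-to-simply-supported ratio of about 2.3 is consistent with the "more than twice" observation above.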
The experimental scenario was modelled with a weakly coupled thermo-mechanical simulation performed using an implicit finite-element model implemented in ANSYS Workbench 18.1. To model the energy deposition due to the impacting particle beam, energy deposition maps were generated with FLUKA, a Monte-Carlo-based statistical code which outputs the energy deposition for one proton at each mesh element across the specimen volume. This map was then imported into ANSYS and scaled according to the total particle intensity of the beam, and is used as an input to the thermal simulation in order to determine the thermal load in each specimen. During impact, a high energy density is induced in high density and high atomic number materials such as TZM, while less energy is deposited in materials having a relatively lower stopping power, such as SiC. The thermal analyses were modelled adiabatically, given that the temperature field in the specimen evolves on a significantly longer timescale than the dynamic phenomena being observed. The thermal analysis consisted of two steps: the first having a duration equal to the energy deposition, and the second having a longer duration to model the temperature evolution in the specimen following the initial temperature increase – in both cases, a load step equal to 3 periods of flexural oscillation was considered. For both modelled impacts, the beam offset is in the vertical X direction, with the beam being symmetrical across the XZ plane, and therefore symmetry on this plane was implemented in both thermal and structural analyses. The mesh size for the thermal analysis was determined to accurately interpolate the energy density map prescribed by the FLUKA simulations – in the case of SiC, the mesh consisted of 25 longitudinal divisions, 81 divisions in the X-direction, and 41 divisions in the Y-direction, implementing the coordinate system shown in Fig. 1. Similarly, the model implemented for TZM consisted of 50, 41 and 21 divisions in the Z, X and Y directions respectively.
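The scaling step described above amounts to multiplying the per-proton FLUKA map by the pulse intensity and converting the resulting energy density into an adiabatic temperature rise. The minimal sketch below illustrates this; the map values, their units (GeV/cm³ per primary) and the constant specific heat are assumptions for illustration, not the actual MultiMat input data.

```python
import numpy as np

GEV_TO_J = 1.602e-10   # joules per GeV

def adiabatic_temperature_rise(edep_gev_cm3_per_p, n_protons, rho_g_cm3, cp_j_gK):
    """Scale a per-proton energy-density map to the full pulse intensity and convert
    it to an adiabatic temperature rise, dT = E / (rho * cp), per mesh element."""
    edep_j_cm3 = edep_gev_cm3_per_p * n_protons * GEV_TO_J
    return edep_j_cm3 / (rho_g_cm3 * cp_j_gK)

# Dummy 25 x 81 x 41 map matching the SiC binning quoted above; the uniform value
# of 0.5 GeV/cm^3 per proton is purely illustrative.
edep = np.full((25, 81, 41), 0.5)
dT = adiabatic_temperature_rise(edep, n_protons=1.4e12, rho_g_cm3=3.21, cp_j_gK=0.69)
print(f"peak adiabatic temperature rise ~ {dT.max():.0f} K")
```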
Fig. 4 shows the power deposited in each longitudinal division in each station for shots 1 and 2. As can be seen, for SiC, the third specimen is the most loaded specimen, whilst for the TZM station – which, due to the material's high density, absorbs significantly more energy than SiC – the first specimen is the most loaded. By Eq., the amplitude of the longitudinal wave being studied is a function of the total energy deposited in the specimen, and therefore, where possible, the most loaded specimen for each respective material was considered in this study in order to maximise the signal-to-noise ratio. The thermal analysis results at the end of the energy deposition, showing the temperature along the length of each material station, can also be seen in Fig. 4. It is interesting to note that, in the case of SiC, the maximum temperature is actually experienced in the second specimen; however, a higher total energy is deposited throughout the third specimen's volume. The longitudinal wave amplitude is a function of the total energy, rather than the peak energy deposition in a specific location. Additionally, a temperature gradient can be observed along the length of various specimens, resulting in longitudinal waves of varying amplitude originating from the rod's extremities – this phenomenon is of particular importance when investigating possible causes of failure of the specimens. The resulting temperature field from the thermal analysis was imported into the structural analysis, which featured three subsequent steps: the first covering the energy deposition and consisting of 10 substeps, the second considering the dynamic longitudinal response, and the third encompassing the dynamic flexural response. The time step for each phase was set accordingly to resolve each phenomenon of interest. The Courant-Friedrichs-Lewy condition was adopted to define the required element size for the structural analyses, considering the speed of sound in each material and the respective time step. The results from the thermo-mechanical simulations were compared against the experimental data acquired from the strain gauges positioned on the specimens, with the final aim of benchmarking the proposed numerical studies and possibly extending the material models currently available in literature. The cut-off frequency considered when importing the experimental data was set to maximise the signal-to-noise ratio, considering the frequency of the phenomena of interest, namely flexural, longitudinal and transverse oscillations.
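The element-sizing criterion mentioned above can be sketched as follows. Note that in an implicit solver the Courant number acts as an accuracy rather than a stability requirement, and the property values and time step used here are placeholders.

```python
import math

def wave_speed(E, rho):
    """Thin-rod longitudinal wave speed, c = sqrt(E / rho)."""
    return math.sqrt(E / rho)

def element_size_for_step(c, dt):
    """Courant-style sizing: the wave should not cross more than one element
    per time step, i.e. dz ~ c * dt."""
    return c * dt

dt = 1.0e-6   # s, placeholder time step for the longitudinal phase
for name, E, rho in [("SiC", 460e9, 3210.0), ("TZM", 320e9, 10220.0)]:
    c = wave_speed(E, rho)
    print(f"{name}: c ~ {c:.0f} m/s -> element length ~ {element_size_for_step(c, dt) * 1e3:.1f} mm for dt = 1 us")
```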
In Fig. 5, the longitudinal strain response for the third SiC specimen for shot 1 is reported, along with the longitudinal response for the third TZM specimen for shot 2. For both cases, the trapezoidal axial wave shape shown in Fig. 3 is clearly visible in both experimental and numerical results, with a good agreement in longitudinal frequency for SiC and TZM, as can be seen in Table 4. This suggests that the materials' Young's Modulus and density are well replicated in the material models used since, as shown in Eq., the longitudinal frequency is a function of these two material properties. For SiC the wave was heavily distorted by the high noise-to-signal ratio in the experimental signal, given that the longitudinal wave amplitude in this case is an order of magnitude lower than for the most loaded TZM specimen, and therefore further filtering was applied to better distinguish the underlying longitudinal wave. By Eq., for a given power deposition, slight variations in longitudinal wave amplitude between experimental and numerical results are a function of the Young's Modulus, the coefficient of thermal expansion, and/or the specific heat capacity of the material. For TZM, one can see a frequency difference of 0.6 kHz between analytical and numerical results, which can be attributed to the temperature dependency of the material's properties, which is unaccounted for in the analytical model. Given the limited information available from the manufacturer for this material, a thermo-mechanical characterisation campaign is to be conducted at CERN facilities to consolidate the available material data. Preliminary impact excitation technique tests conducted at room temperature resulted in a calculated Young's Modulus of 320 GPa, suggesting an underestimation of the property in the currently available model. It is interesting to note that, in the case of TZM, experimental measurements on the second specimen yielded a longitudinal frequency of 14.5 kHz, indicating that the specimen had failed upon beam impact or in a previous shot, as will be discussed further on in the section detailing the modelling of specimen failure.
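The impact-excitation cross-check mentioned above can also be run in reverse: a measured first longitudinal harmonic fixes the dynamic Young's modulus through E = ρ(2Lf)². The numbers below are placeholders (an assumed frequency and a handbook density), not the tabulated experimental values.

```python
def youngs_modulus_from_bar_frequency(f_hz, length_m, rho):
    """Invert f = c/(2L), c = sqrt(E/rho): E = rho * (2 * L * f)**2."""
    return rho * (2.0 * length_m * f_hz) ** 2

# Assumed first longitudinal harmonic of an intact 247 mm TZM bar (placeholder value)
E_est = youngs_modulus_from_bar_frequency(f_hz=11.3e3, length_m=0.247, rho=10220.0)
print(f"estimated dynamic Young's modulus ~ {E_est / 1e9:.0f} GPa")
```

With these placeholder inputs the estimate lands close to the 320 GPa figure obtained from the preliminary impact excitation tests.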
Numerical modal analyses carried out prior to the experiment to test the support system implemented indicated that the system behaved in a simply-supported manner, with experimental data matching this assumption in most cases. In some cases, including shots 1 and 2 considered in this study for TZM and SiC, the bending frequency measured in the initial stages of the signal was close to the one expected from a free-free condition. This can be attributed to the impacting beam, which caused some specimens to momentarily lose contact with the supporting structure, resulting in this “free” configuration. This brief loss of contact resulting in the free condition is a result of the supporting spring preload being exceeded once the beam impacts the specimen, resulting in the spring being forced to compress further, and the specimen losing contact with the top graphitic support. This phenomenon is observable in the experimental results for shots 1 and 2, as can be seen in Fig. 6. For SiC, a free condition can be observed in the first 2 ms, before transitioning into a simply-supported configuration. In this case, the time at which the specimen impacts the top support and returns to a simply-supported state is characterised by a sudden jolt in the strain reading at approximately 2 ms, following which a frequency associated with a simply-supported configuration can be observed. Similarly, for TZM, for which the results for the third specimen are presented, the first 5 ms of the signal are characterised by a frequency of 1050 Hz, corresponding to a free-free condition, before transitioning into a lower frequency corresponding to a simply-supported configuration. It was deemed important to verify this behaviour by replicating it in a simulation. To this end, two numerical models were implemented in ANSYS. The first model features the “simply-supported” configuration implemented for the longitudinal wave analysis, which simplifies the experimental setup by restricting vertical movement of the bottom edges at the extremities of each specimen. The second configuration aims to model the experiment's boundary conditions more accurately, particularly the loss of contact between the specimen and the support and the subsequent transition of boundary conditions, implementing compression-only supports at the top of the specimen and a preloaded spring at the bottom edge of both extremities. The compression-only support acts as a rigid constraint, thus impeding vertical displacement of the specimens' top edges beyond the support. The specimen can therefore only move in the negative X-direction, resulting in loss of contact with the compression-only support if the reaction force imposed by the spring is exceeded. Additionally, it is important to note that the simply-supported and free-free conditions considered refer to the vertical motion at the edges of the support and therefore only affect the bending response of the system. The longitudinal wave propagation is reflected at the free surfaces at the boundaries, which feature in both configurations, and therefore the longitudinal response is identical in both configurations considered. For this reason, in the previous section detailing the longitudinal response, due to the smaller time step required to simulate longitudinal wave propagation, the less computationally intensive option was adopted. Numerical results for the simulations modelling the preloaded spring setup in the experimental scenario for shots 1 and 2 are shown in Fig. 7. For SiC, the first millisecond of the strain signal is characterised by an underlying flexural frequency corresponding to a “free” condition due to the loss of contact between the specimen and the support. This transitions into a frequency of 880 Hz corresponding to a simply-supported configuration. Similarly, for TZM, the signal is initially characterised by a high frequency corresponding to the free state of the specimen, which settles to a frequency of approximately 380 Hz as the specimen settles in position. A summary of the frequencies calculated and measured for both configurations can be seen in Table 5.
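The statics behind the compression-only support and preloaded spring described above are simple to reproduce: the quoted compression fixes the spring stiffness, and contact at the top support is lost once the upward force transmitted by the specimen exceeds the preload. The peak force used in the example check below is an assumed value.

```python
def spring_stiffness(preload_N, free_length_m, compressed_length_m):
    """Linear stiffness implied by the quoted preload and compression."""
    return preload_N / (free_length_m - compressed_length_m)

def contact_lost(upward_force_N, preload_N=13.57):
    """Compression-only support: the specimen separates from the top support
    once the upward reaction exceeds the spring preload."""
    return upward_force_N > preload_N

k = spring_stiffness(13.57, 14e-3, 9e-3)
print(f"implied spring stiffness ~ {k:.0f} N/m")       # ~2714 N/m
print(contact_lost(upward_force_N=25.0))               # True -> momentary 'free' condition
```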
This change in configuration is better understood when considering the vertical displacement of the top edge of the specimen, shown superimposed in red in Fig. 7. A negative reading corresponds to a loss of contact between the edge and the top support, indicating that the spring preload is exceeded, forcing the spring to compress further. For both materials, it can be seen that contact is lost upon beam impact, followed by a transitionary phase where contact is temporarily restored. In both cases, the specimen eventually settles back into its original configuration, at which point the vertical displacement of the edge is close to zero. For the results in question, for SiC, a damping ratio of ζ = 0.08 was extracted from the logarithmic decay of the experimental bending signal. Similarly, for TZM a damping ratio of ζ = 0.05 was derived from experimental data and implemented in the simply-supported configuration of the analysis, to benchmark the damping response of the experimentally acquired signal characterised by a complete transition to the simply-supported condition at the specimen's boundaries. Fig. 8 shows a comparison of experimental and numerical results following the implementation of damping, with results shown in a time period at which the experimental setup had settled into a simply-supported configuration. The damped numerical signal was therefore scaled to the experimental signal's amplitude at this point in time. One should note that the introduction of damping at the bending frequency results in the elimination of higher-frequency components in the simulation, resulting in numerical results having differing characteristics to the experimental results. For centred shots, this method can be similarly implemented on the axial frequencies, where damping can also be observed.
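A minimal sketch of the damping extraction described above: successive peak amplitudes of the decaying bending signal give the logarithmic decrement and hence ζ, which can then be converted into a stiffness-proportional Rayleigh coefficient tuned to the bending frequency (one common way of prescribing a single-frequency damping ratio). The peak values here are placeholders, not the measured signal.

```python
import math

def damping_ratio_from_peaks(x0, xn, n_cycles):
    """Logarithmic decrement over n cycles: delta = ln(x0/xn)/n,
    zeta = delta / sqrt(4*pi^2 + delta^2)."""
    delta = math.log(x0 / xn) / n_cycles
    return delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)

def rayleigh_beta(zeta, f_hz):
    """Stiffness-proportional coefficient giving damping ratio zeta at frequency f
    when the mass-proportional term alpha is set to zero."""
    return 2.0 * zeta / (2.0 * math.pi * f_hz)

zeta = damping_ratio_from_peaks(x0=1.0, xn=0.08, n_cycles=5)   # placeholder peak readings
print(f"zeta ~ {zeta:.3f}")                                    # ~0.08 for this example decay
print(f"beta ~ {rayleigh_beta(zeta, 880.0):.2e} s")            # tuned to the SiC bending frequency
```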
As detailed, both materials experienced failure in a number of specimens. For SiC, failure was observed in the last implemented pulse for the material, which consisted of a 36-bunch central shot with a pulse length of 900 ns, 1.175 × 10¹¹ protons per bunch, and a beam sigma of 0.5 mm. This resulted in failure of the two most loaded specimens, namely the second and third specimens, as can be seen in Fig. 9. For TZM, evidence of brittle failure was observed following a pulse consisting of 12 bunches with a bunch intensity of 1.325 × 10¹¹ protons, a pulse length of 300 ns, a beam sigma of 0.5 mm and no offset. The fractured specimen can be seen in the X-ray tomography tests described in the following section, which also revealed that the first TZM specimen experienced various internal cracks along its length, despite no damage being observed upon the initial visual inspection of its exterior. Along with the available photos of the broken specimens, evidence of failure can be gathered from the experimental strain results. The longitudinal and flexural frequencies being measured are functions of the length of the specimen, and are therefore affected by a specimen breaking into two or more separate pieces. The longitudinal response is especially indicative of the change in length given that, following failure, each part of the specimen has free surfaces at the boundaries which allow for regular longitudinal wave propagation and reflection. On the other hand, the flexural response is more complicated to analyse due to the complex boundary conditions in place, with a failure resulting in a free condition at one end of the rod and a dynamically varying condition at the opposing end. For SiC, failure occurred in two specimens, as shown in Fig. 9, with the last pulse for the station. While the third specimen broke at various points along its length, the second specimen can be seen to have fractured in a symmetrical manner, at approximately 1/4 and 3/4 of the specimen's length. The specimen therefore broke into three separate parts, with the two shorter ends having a length of approximately 61.8 mm and the middle section having a length of 123.5 mm. The original length has an analytically calculated longitudinal frequency of 24 kHz, and therefore the new lengths result in frequencies of 96 kHz for the shorter parts and 48 kHz for the middle part. The strain gauges were located conveniently on the specimen, with gauges at 49.4, 98.8, 148.2 and 197.6 mm, i.e. one gauge on each of the shorter ends of the specimen, and two gauges on the middle section. The failure of the specimens was modelled in ANSYS Workbench by using the element birth and death functions. The program gives the option to perform element birth and death between load steps, thus allowing for the elimination of certain elements or contact points based on element results or load step number. This feature can be used to model the failure of an object once a maximum limiting value is reached or at a specific moment in time. The element death function was implemented in ANSYS and used to “kill” two slices of elements at lengths of 61.8 mm and 185.3 mm at the time of failure. Another approach is to slice the geometry into three separate sections and apply a bonded contact between the different parts, which is subsequently eliminated at the time of failure. Both methods were tested, achieving comparable results. Fig. 10 shows the experimental result for the gauge at a length of 98.8 mm, plotted with the numerical results implementing element killing at 20 μs. The experimental signal can be seen to distort at 20 μs, following which a signal frequency of approximately 47 kHz can be observed, corresponding to a specimen having half the original specimen length. The simulation can be seen to replicate the new longitudinal frequency adequately, with a frequency of 47.5 kHz. The two results can be seen to differ in terms of amplitude, with the simulation retaining the original wave amplitude and not accounting for the loss of energy as a result of the failure. For shot 1F, simulation results for specimen 2 indicated temperatures ranging from room temperature up to 316 °C. At the observed maximum temperature, flexural strength values for the material are in the order of 475 MPa, while equivalent stress values up to 480 MPa can be observed in the structural analyses. One should note that the flexural strength values available were obtained through static 4-point bending tests, while relatively high strain rates can be observed in the experimental results. As discussed, the symmetrical failure of the specimen indicates that it was caused by the passing of the two longitudinal waves originating from each extremity of the rod. Simulation results indicate that each respective longitudinal wave reaches its maximum amplitude before reaching the points of failure, and therefore it has proven challenging to characterise the reason for the specimen to fail at these specific positions.
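The quoted piece lengths and frequencies are mutually consistent, as the short arithmetic check below shows: the intact-bar frequency fixes the longitudinal wave speed, and the first harmonic of each fragment then scales with the inverse of its length.

```python
c = 2 * 0.247 * 24e3   # wave speed implied by the intact 247 mm bar at 24 kHz (~11.9 km/s)

for name, L in [("intact bar", 0.247), ("short end", 0.0618), ("middle section", 0.1235)]:
    print(f"{name:14s} L = {L * 1e3:5.1f} mm -> f1 ~ {c / (2 * L) / 1e3:5.1f} kHz")
```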
In the case of TZM, the change in longitudinal frequency can be seen in the shots causing the failure of the specimen as well as in later shots. Similar to the results observed in SiC, a higher longitudinal frequency is measured in the different parts of the broken specimen, corresponding to the new respective length of each part following failure. The specimen can be seen to have fractured at a length of approximately 185 mm, resulting in two parts: one having a length of 185 mm and another having a length of 62 mm. Strain gauges on each separate piece of the specimen should therefore measure longitudinal frequencies of 14.5 kHz and 45 kHz respectively, by Eq. This can be seen in Fig. 11, which shows the experimentally acquired longitudinal strain signal for the second TZM specimen for shot 2F. Results are shown for two longitudinal strain gauges, gauges 3 and 4, positioned on the bottom face of the specimen at longitudinal lengths of 148 mm and 198 mm respectively, meaning that they were positioned on opposite sides of the point of failure. One should note that for the specimen in question there is a temperature gradient along its length, resulting in the longitudinal waves propagating from each extremity of the rod having differing amplitudes. The right half of the rod has less energy deposited, and therefore the wave originating from this extremity has a lower amplitude compared to that originating from the left end of the specimen. The longitudinal wave period for the rod with its original length of 247 mm is approximately 90 μs. As can be seen by the change in frequency in Fig. 11, the specimen breaks well before the longitudinal wave propagates through the specimen and is reflected back to its original position. The position of the fracture is at a length of approximately 185 mm, whilst the speed of sound in TZM is approximately 5500 m/s. Following the particle beam impact, the wave with smaller amplitude propagating from the edge closest to the fracture location takes 11 μs to traverse 62 mm and reach the point of eventual failure, whilst the higher-amplitude wave propagating from the left-hand side takes 34 μs to reach the same point. As can be seen in Fig. 11, the high-frequency wave detected by the 4th strain gauge has a relatively high amplitude compared to the wave detected by the third gauge. The element death function was again implemented in ANSYS and used to “kill” a slice of elements at a length of 185 mm at the time of failure. Using this approach, two scenarios were considered – one where the specimen fractures at 11 μs, and one where the specimen fractures 45 μs into the simulation. The results for these two scenarios, for the strain gauges positioned at lengths of 148 mm and 198 mm, can be seen in Fig. 12. In the first case, with failure at 11 μs, the high-amplitude wave travelling from the left-hand side never reaches gauge 4, due to the specimen breaking before this wave reaches the point of failure. One can see that the results displayed in Fig. 12 feature a very small amplitude measured by gauge 4, differing from the experimental scenario shown in Fig. 11.
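The arrival times and post-failure frequencies quoted above follow directly from the ~5500 m/s wave speed and the fracture position; the short check below reproduces them, with the last two figures sitting close to the quoted 14.5 kHz and 45 kHz.

```python
c = 5500.0        # approximate longitudinal wave speed in TZM, m/s
L = 0.247         # original specimen length, m
x_frac = 0.185    # fracture position measured from the left end, m

print(f"intact-bar wave period ~ {2 * L / c * 1e6:.0f} us")                              # ~90 us
print(f"right-hand wave reaches the fracture after ~ {(L - x_frac) / c * 1e6:.0f} us")   # ~11 us
print(f"left-hand wave reaches the fracture after ~ {x_frac / c * 1e6:.0f} us")          # ~34 us
print(f"185 mm piece: f1 ~ {c / (2 * 0.185) / 1e3:.1f} kHz")                             # ~14.9 kHz
print(f"62 mm piece:  f1 ~ {c / (2 * 0.062) / 1e3:.1f} kHz")                             # ~44.4 kHz
```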
In Fig. 12, results from the analysis modelling failure at 45 μs are shown. For the first 45 μs, the results are identical to a simulation where no failure is simulated. One can clearly see the lower-amplitude longitudinal wave coming from the right end of the sample reaching gauge 4 at approximately 11 μs, before the maximum amplitude is reached once the higher-amplitude longitudinal wave reaches this location and the two waves superimpose. Failure is simulated at 45 μs, resulting in the two gauges measuring distinctly different frequencies, dependent on the length of each respective broken part of the specimen. The simulation results probed at gauge 4 for this scenario are more comparable to the experimental results than in the case with failure at 11 μs, suggesting that the specimen fails on the arrival of the higher-amplitude wave from the left-hand side, with some of the energy from this wave still being transferred to the right-hand side of the specimen, where it is subsequently “trapped”, resulting in the high-amplitude signal seen in both simulation and experimental results at gauge 4. This hypothesis suggests that the cause of failure of this specimen could have been a defect or void in the sample, which exhibits failure only on the arrival of the higher-amplitude stress wave. As discussed, this specimen – which is the second most loaded in terms of thermal load – failed, while the most loaded specimen remained intact. For shot 2F, simulation results indicate temperatures up to 400 °C in the core of the most loaded specimen, whilst for the failed specimen the maximum temperature is 362 °C, reducing to a maximum of approximately 200 °C at the point of failure. A cause for the failure of the second most loaded specimen could therefore be the higher brittleness of the specimen due to its lower temperature, a result of TZM's ductile-to-brittle transition temperature (DBTT). In a study by Hiraoka et al. investigating the ductile-to-brittle transition characteristics of Molybdenum and its alloys, including TZM, the DBTT in impact tests was found to be higher than that in static tests, indicating that a lower loading rate could result in a brittle failure at lower temperatures. As detailed, a post-irradiation examination consisting of computed tomography imaging was conducted in the CERN laboratories to assess the condition of the specimens, with particular focus on the failed specimens.
Fig. 12 summarises the propagation of the waves originating from each end of the rod, and how each wave is affected by the failure of the TZM specimen. The top figure shows the position of each wave-front at 11 μs, at which point the wave propagating from the right-hand side reaches the eventual point of failure. The middle diagram shows the position of the two waves at 34 μs, at which point the wave travelling from the left-hand side – having a larger amplitude – reaches the length of 185 mm and causes the specimen to fail. The final diagram indicates that some of the wave's energy is transferred to the right-hand side of the specimen, which subsequently separates from the rest of the specimen, while some of the energy is reflected back at the newly formed boundary. The positions of the two strain gauges at 148 mm and 198 mm are marked in black and blue respectively. The bilinear kinematic constitutive law for TZM implemented in the numerical model of the MultiMat experiment indicates stress levels in the order of 900 MPa, exceeding the non-dynamic yield and ultimate tensile strength values specified for TZM by the manufacturer. While the quasi-static strength is exceeded in both the first and second TZM specimens, only the second specimen exhibited failure in the experiment, suggesting that the material model requires further refinement to include strain rate and temperature dependency. The strain rate observed in the experimental results is in the order of 1000 s⁻¹, at which point the material behaviour might differ significantly from steady-state conditions. While information on the material behaviour at such strain rates is not provided by the manufacturer, literature investigating the mechanical modelling of pure molybdenum at strain rates and temperatures in this order of magnitude is available, detailing the mechanical response of the material across a wide range of temperatures and strain rates, including the types of failure observed. Lacking any other means of comparison, the information available on Mo was taken as a qualitative reference, keeping in mind that the two materials can exhibit substantial differences in high strain rate behaviour despite the small differences in composition. Results for pure Mo indicate that brittle failure is observed at high strain rates and low temperatures, which is also the type of failure observed in the MultiMat experiment for TZM, indicating that the material could be in a range of temperature and strain rate compatible with brittle failure. In addition to quasi-static tests, additional tests are therefore required in order to accurately model the material behaviour across a wide range of impact scenarios. The experimental campaign will consist of a direct Hopkinson bar setup, allowing the investigation of the mechanical behaviour at varying strain rates and temperatures. Initial visual inspections of the TZM specimens following the experimental campaign only showed damage on the second most loaded specimen, as discussed. Following the completion of the experiment, the test bench was transferred to a storage area and was opened once the dose rate had reduced to safe levels in early 2019. A post-irradiation X-ray tomography campaign was subsequently conducted on various specimens, revealing that the most loaded TZM specimen – which, given the failure of the second most loaded specimen, was also expected to fail – does in fact exhibit a number of internal cracks.
As can be seen in Fig. 13, a long crack is clearly visible along the whole length of the specimen, along with various cracks across the cross-section. Images from the scans of the second TZM specimen clearly show the point of failure, while a top view of the most loaded SiC specimen allows a clear view of the various cracks near the centre of the sample. The MultiMat experiment was conducted in the HiRadMat facility at CERN at the end of 2017, with the aim of exploring previously unknown material properties of materials used in beam intercepting devices. Material constitutive models, required to simulate dynamic phenomena generated by particle beam impacts, are not widely available in literature, and therefore dedicated testing such as that performed in the HiRadMat facility is required to explore material states which are not attainable with standard testing methods. This study focuses on the results generated for two materials, namely Silicon Carbide and Titanium Zirconium Molybdenum, benchmarking the currently available material models with experimental data. The analysis focuses on the longitudinal and flexural dynamic phenomena observable following the particle beam impact, discussing factors such as the frequency and amplitude of the observed oscillations, material damping, and boundary-condition effects on the generated signal. In addition, a discussion on the eventual failure of certain specimens is presented, detailing possible causes of failure. Studying such phenomena allows for the benchmarking of the material models, which need to be able to model the material's thermal and mechanical behaviour across a wide range of scenarios. The study simulated the experimental setup by implementing a weakly coupled thermo-mechanical analysis in ANSYS Workbench. The thermal analysis modelled the energy deposition in the material specimens, with the resulting temperature field imported into structural analyses to model the dynamic response of the system. The longitudinal response was initially modelled, with the presented experimental results matching well with the computational results in terms of wave shape, frequency and amplitude. The bending response of the specimens was similarly modelled. In this case, attempts were made to simulate the exact boundary conditions of the experimental setup, which consisted of graphitic supports kept in position by a preloaded spring. This resulted in a change of boundary conditions upon beam impact, due to a brief loss of contact between the specimens and the support, which could be observed as a change in frequency in the flexural response. These boundary conditions were modelled in ANSYS, successfully simulating the loss of contact and the subsequent transition in flexural frequencies. The damping observed in the experimental signal was also successfully modelled through the application of Rayleigh damping in ANSYS Workbench. Finally, the failure of various specimens was discussed and modelled. Element birth and death functions were applied in ANSYS in order to simulate the failure of TZM and SiC specimens due to the propagating longitudinal wave. As observed in the experimental results, this resulted in a range of measured longitudinal frequencies, dependent on the length of each respective part of the specimen following failure. The symmetrical nature of the failure in SiC specimens suggests that it was caused by the passing of the two longitudinal waves originating from the extremities of the specimen; however, it is not yet understood why the specimen specifically fractured at a quarter length from each respective side.
A post-irradiation X-ray tomography campaign aided in identifying various cracks along the length and cross-section of the most loaded TZM specimen, which was previously believed to have remained intact following the experimental campaign. The material models implemented, a summary of which is shown in Table 6, using a combination of properties provided by the respective manufacturers and literature, proved to be sufficient to successfully model a number of the experimental scenarios considered. The available strength data were compared with stress results computed in the numerical analyses, which indicated that the specified values had been exceeded with the material models implemented. A post-mortem analysis consisting of X-ray microtomography measurements has been conducted on a number of material samples which experienced failure during the experiment, allowing for additional observation of internal crack propagation within the specimens which had previously gone undetected with visual inspection. For both materials, further metrology measurements and thermo-mechanical testing are expected to be conducted at CERN to verify the data obtained from literature. This includes Impact Excitation Technique tests, which are to be conducted across a range of temperatures, allowing the calculation of elastic properties, as well as testing for the calculation of the specific heat capacity, coefficient of thermal expansion, density, and conductivity of the materials. The main limitation of the currently available models was found to be the lack of inclusion of strain rate effects on the mechanical properties of the two materials. The adoption of equations of state, strength and failure models is required to accurately model the full material behaviour across the wide range of pressures, strain rates, temperatures, and geometries commonly encountered in particle beam impact scenarios. In this regard, Hopkinson bar tests are scheduled to be conducted to complement the quasi-static material characterisation and the currently available material models. The main conclusions of the study are summarised below: the currently available material models for the Silicon Carbide and Titanium Zirconium Molybdenum grades tested in the MultiMat experiment successfully replicated longitudinal and flexural phenomena when implemented in implicit numerical simulations in ANSYS Workbench; the complex boundary conditions present in the experiment were also successfully modelled, allowing for the simulation of the transition from a “free” to a “simply-supported” condition, as observed in experimental results; and further experimental testing is required to fully model the material behaviour at high strain rates and temperatures, along with failure scenarios.
The MultiMat experiment was successfully conducted at CERN's HiRadMat facility, aiming to test novel high-performance materials for use in beam intercepting devices, allowing the derivation and validation of material constitutive models. This article provides an analysis of results for two materials tested in the experiment, namely Silicon Carbide and Titanium Zirconium Molybdenum, with the aim of benchmarking the material constitutive models currently available in literature with experimental results. The material models were implemented in numerical simulations, successfully modelling dynamic longitudinal and flexural phenomena. The article further studies the modelling of the complex boundary conditions present in the experiment, the internal damping characteristics of the materials, and the failure of certain specimens. The strength and failure models proved adequate to model a number of experimental scenarios tested, but require further study to describe the material behaviour at the high strain rates and temperatures induced by accidental particle beam impacts. A post-irradiation examination of the tested specimens was also performed to study the nature of failure in the specimens, and is to be coupled with quasi-static and high strain rate tests for both materials, allowing for the validation of the currently available models and the description of material behaviour across a wide range of strain rates and temperatures.
31,618
Injury severity data for front and second row passengers in frontal crashes
The data contained here were obtained from NHTSA's ftp portal for the National Automotive Sampling System – Crashworthiness Data System (NASS-CDS), which documents crash conditions and occupant injuries for crashes occurring in the United States. These data are obtained through a sampling framework that is described in the data set and supporting documentation. Utilizing the sampling frame and weights, one can obtain national estimates of the frequency of various motor vehicle crash-related events. Only those occupants seated in the right front seat position and those seated in the outboard positions of the second row, who were indicated to have been using both a lap and shoulder belt by the NASS crash investigator, are included here. The data in the raw files are coded; the codes are documented in files located on the NHTSA website. Frontal crash cases were segregated from crashes documented in the NASS-CDS data sets for the years 2008–2014. The General Vehicle data file was utilized in conjunction with the External Vehicle file for each year. The files were joined using the primary sampling unit, case identifier, and vehicle number codes. The resulting data file was filtered to select only those vehicles that were indicated to have experienced a primary crash force direction corresponding to 11:00–1:00 (where 12:00 would indicate an impact force directed perpendicular to the vehicle's front). Within these frontal force cases, only those vehicles that exhibited a primarily "frontal" damage field were included. Any vehicle that experienced a rollover was excluded. Further, any vehicle with a model year before 2000 was excluded. Lastly, the data were filtered to include only sedans, Sport Utility Vehicles, minivans and trucks. The vehicle and crash data were joined to the Occupant data file. Occupant physical characteristics, overall injury severity, seat position, and other occupant-specific data were obtained. The occupants included in the files provided in this brief are those in the right front seat position and the outboard seat positions in the second row. Only occupants who were 13 years or older, and who were also utilizing the lap and shoulder belts, were retained in the final data set. Each row of these data files describes a single occupant and the vehicle/crash they experienced. The file for the right front row occupants contains 1771 entries that represent approximately 661,000 persons involved in motor vehicle crashes. The second row occupant file contains 436 entries that represent approximately 132,000 persons. Injury rates in front row seated occupants were higher than those for second row seated occupants. The "MAIS" is the maximum abbreviated injury score, where scores rate fatality risk and range from 1 to 6. The "ISS" is the injury severity score, which utilizes up to 3 injury scores for an occupant. An ISS level of 9 was suggested by Palmer as a threshold for serious injury. Subsets of older occupants composed of only those in vehicles that were produced in 2006 or later were compared. In these cases the older occupants in the second row exhibited an injury rate that was higher than that of the front right occupants. However, the injury rates were obtained considering all crashes, which included many non-injurious crashes. In addition, these occupants were not necessarily exposed to the same crash, as they were not occupant pairs. Atkinson et al.'s analysis showed a higher injury rate in front row seated occupants when non-injurious crashes were removed from the data sets.
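The joining and filtering steps described above can be sketched with pandas as follows. The column names are hypothetical stand-ins (the actual NASS-CDS variable names and codes are documented on the NHTSA website), so this illustrates the workflow rather than being runnable against the raw files as-is.

```python
import pandas as pd

# Hypothetical column names used for illustration only.
gv = pd.read_csv("general_vehicle_2008.csv")      # General Vehicle file
ve = pd.read_csv("external_vehicle_2008.csv")     # External Vehicle file
oc = pd.read_csv("occupant_2008.csv")             # Occupant file

keys = ["PSU", "CASEID", "VEHNO"]                 # primary sampling unit, case identifier, vehicle number
veh = gv.merge(ve, on=keys, how="inner")

frontal = veh[
    veh["PDOF_CLOCK"].isin([11, 12, 1])           # principal crash force direction 11:00-1:00
    & (veh["DAMAGE_AREA"] == "F")                 # primarily frontal damage field
    & (veh["ROLLOVER"] == 0)                      # exclude rollovers
    & (veh["MODELYR"] >= 2000)                    # exclude model years before 2000
    & veh["BODYTYPE"].isin(["SEDAN", "SUV", "MINIVAN", "TRUCK"])
]

occ = frontal.merge(oc, on=keys, how="inner")
occ = occ[
    (occ["AGE"] >= 13)
    & (occ["BELTUSE"] == "LAP_AND_SHOULDER")
    & occ["SEATPOS"].isin(["FRONT_RIGHT", "SECOND_ROW_LEFT", "SECOND_ROW_RIGHT"])
]
front_right = occ[occ["SEATPOS"] == "FRONT_RIGHT"]
second_row = occ[occ["SEATPOS"] != "FRONT_RIGHT"]
```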
The source files for these data sets are provided online in the Supplementary Material. A second data set consisting of occupant pairs was created by joining the right front occupant to a second row occupant in the same vehicle. Where there was more than one second row occupant available, a second pair was generated. Each line of the data file represents a pair of occupants. The occupant seated in the second row is listed first, along with any occupant-specific data, and the data associated with the matched front right seat occupant are listed second. The paired data file includes 238 pairs (Fig. 1). The source file for the paired data set is provided online in the Supplementary Material. The paired data were reduced to those pairs that were at most 15 years different in age. The crash and occupant factors suspected to play a role in moderating the likelihood of injury were examined using logistic regression. The specific outcome studied was the event of a second row seated occupant having an injury severity exceeding that of the paired right front occupant.
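A minimal sketch of the paired analysis described above, assuming hypothetical column names for the paired file: the binary outcome flags pairs in which the second row occupant's MAIS exceeds that of the matched front occupant, and a logistic model relates it to example crash and occupant factors.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical column names; one row per occupant pair (second row occupant
# listed first, matched right front occupant second).
pairs = pd.read_csv("paired_occupants.csv")

# Restrict to pairs within 15 years of age of each other, as described above.
pairs = pairs[(pairs["age_second_row"] - pairs["age_front_right"]).abs() <= 15]

# Outcome: second row injury severity exceeds that of the paired front occupant.
pairs["rear_worse"] = (pairs["MAIS_second_row"] > pairs["MAIS_front_right"]).astype(int)

# Example predictors only; the full candidate set would follow the study design.
model = smf.logit("rear_worse ~ delta_v + age_second_row + model_year", data=pairs).fit()
print(model.summary())
```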
The data contained here were obtained from the National Highway Transportation Safety Administration's National Automotive Sampling System - Crashworthiness Data System (NASS-CDS) for the years 2008-2014. This publicly available data set monitors motor vehicle crashes in the United States, using a stratified random sample frame, resulting in information on approximately 5000 crashes each year that can be utilized to create national estimates for crashes. The NASS-CDS data sets document vehicle, crash, and occupant factors. These data can be utilized to examine public health, law enforcement, roadway planning, and vehicle design issues. The data provided in this brief are a subset of crash events and occupants. The crashes provided are exclusively frontal crashes. Within these crashes, only restrained occupants who were seated in the right front seat position or the second row outboard seat positions were included. The front row and second row data sets were utilized to construct occupant pairs from crashes where both a right front seat occupant and a second row occupant were available. Both unpaired and paired data sets are provided in this brief.
31,619
A review on factors affecting the Agrobacterium-mediated genetic transformation in ornamental monocotyledonous geophytes
The global trade in ornamental perennials and garden plants is in excess of eight billion dollars annually, highlighting its economic importance. Over one billion ornamental plants are produced through micropropagation. These figures emphasize that ornamental floriculture is a very important aspect of horticulture. One feature of floriculture is that it encompasses cut flowers, pot plants and bedding plants. Unlike agriculture, where crops such as maize and rice are planted yearly and sometimes even twice a year, floriculture is dynamic and consumers require new varieties regularly. Since this is a global industry in which many countries such as Colombia, Ecuador, Kenya, Israel and South Africa have developed large industries, it is important to consider research studies aimed towards the development of genetically modified ornamental products. Bulbous plants are not only desirable for their ornamental value, but also for their benefits in traditional medicine, which quite often renders them endangered in the wild. Thus, the applications of genetic transformation, micropropagation and other biotechnological tools remain important. While the above facts remain pertinent, micropropagation protocols of many other plants have been brought forward for the ease of genetic transformation of species of interest. Although excellent protocols on micropropagation of ornamental geophytes have been and are still being published, there are only a few reports on their genetic modification. Therefore, as the interest in developing new cultivars increases, a review on this issue will be of considerable value. Micropropagation applications for ornamental geophytes are mostly aimed at mass propagation, germplasm conservation, as well as forming a solid foundation for developing new cultivars through recombinant DNA techniques. These, among others, include developing cultivars for disease and viral resistance, color and scent enhancement or high-throughput production of medically recognized phytochemicals, inflorescence yield, corolla size and flower longevity. For many geophytes, the most pertinent challenge is that of flowering time. The switch from the vegetative to the flowering phase is caused by floral induction, which is dependent on endogenous signals such as age as well as environmental signals like day length and temperature. Most geophytes undergo a long juvenile phase of vegetative development which may last years before flowering. Hence, shortening of juvenility in ornamental geophytes through genetic transformation can be of immense biotechnological interest and horticultural benefit. The success of any genetic transformation strategy entirely depends on the regeneration capability of the explant. Although many regeneration protocols have long been established for ornamental monocotyledonous geophytes, the regeneration response has been mostly through direct organogenesis. As an alternative to a regeneration pathway through direct organogenesis, somatic embryogenesis has been used in micropropagation systems to assist genetic transformation. Therefore, a regeneration response via somatic embryogenesis in monocotyledonous geophytes will greatly facilitate their transformation. This highlights the need for somatic embryogenesis protocols to attain successful transformation. The molecular concepts underlying genetic transformation of plant cells by Agrobacterium are well known. Briefly, this involves the transfer of T-DNA, found within the tumor-inducing (Ti) plasmid, from the Agrobacterium to the plant nuclear genome.
This process is assisted by virulence genes carried by the Ti plasmid. It is well recorded that the use of Agrobacterium for genetic transformation greatly facilitates stable integration of a single copy of the transgene in the plant genome with minimal or no re-arrangements of the foreign DNA structure. It is therefore known as a method with fewer complications such as transgene instability, gene silencing or co-suppression, and this could greatly benefit transformation of monocotyledonous geophytes. During transformation, various virulent effector proteins are conveyed from the Agrobacterium to the host plant cells through the cell wall and the plasma membrane. Agrobacterium possesses some sensors that enable it to recognize signals emitted by the host tissue and thus enable virulence in response to these signals. Initially, acetosyringone was identified as one of the plant cell exudates shown to act as a Vir inducer, with varying efficiencies depending on plant species. For instance, transformation frequency in Tricyrtis hirta, an ornamental plant, was increased when acetosyringone was added to the co-cultivation medium. The type of strain used can affect transformation frequency. In Iris germanica, LBA4404 gave remarkably higher transformation rates than EHA105. In Agapanthus praecox, the same LBA4404 was found to be more effective than EHA101. The activity of LBA4404 is attributed to the super-binary vector pTOK233, which has VirB, VirC and VirG genes derived from the ‘supervirulent’ Ti-plasmid pTiBo542. Many geophytes are monocotyledonous plants that have previously been thought to be non-hosts of A. tumefaciens. This is mainly due to the fact that monocotyledonous cells may sometimes produce unfavorable phenolic compounds in response to wounding.
However, Danilova et al. found that an extract of sterile tobacco leaves and stems increased maize transformation more effectively than acetosyringone. The stimulatory effect of tobacco was attributed to the fact that tobacco contains a wide range of favorable phenolic compounds, sugars and amino acids which induce the Vir genes responsible for T-DNA transfer. Tobacco extract could equally be beneficial in the transformation of ornamentals. The co-cultivation period can also bring about the success or failure of transformation of a given plant. This period needs to be pre-determined to avoid a low frequency of transformation or Agrobacterium overgrowth due to prolonged co-cultivation time. A co-cultivation period of 2–3 days provided the best results in Agapanthus praecox, while in Typha latifolia, a three-day co-cultivation resulted in the highest level of GUS expression. Since T-DNA transfer from Agrobacterium into the plant genome occurs during the S-phase of the cell cycle, it is essential to establish optimum co-culture conditions of explants and the Agrobacterium at the very beginning of the genetic transformation protocol. Regulated promoters allow control of gene expression and facilitate the genetic improvement of important plants. Therefore, successful genetic modification of flowering bulbs with genes of interest requires the availability of promoters that can be characterized and expressed at functional levels. The most common and widely used, the cauliflower mosaic virus 35S (CaMV35S) promoter, has resulted in lower levels of expression in some plants, while in others the results were satisfactory. This promoter was found to be the best for transformation of Iris germanica. This outcome further confirmed that Agrobacterium-mediated transformation using CaMV35S can be applied to other important monocotyledonous ornamentals. Another well-known promoter, ubiquitin, is used in genetic engineering of monocotyledonous species since it promotes high levels of expression in most plant tissues. Two ubiquitin promoters were isolated from Gladiolus, namely GUBQ2 and GUBQ4. It was shown that levels of GUS expression were higher with the GUBQ4 promoter than with GUBQ2. The GUBQ1 promoter isolated from maize gave the highest level of transient GUS expression in Gladiolus, while in Ornithogalum transformation this promoter was less efficient in expressing GUS when compared to the CaMV35S promoter. Identification and use of efficient promoters in genetic modification of monocotyledonous geophytes must therefore be taken into consideration. Intensive research is needed to isolate and use promoters from each plant species, so that each species can be transformed with its own active promoters. Research involving the discovery and characterization of new enhanced promoters with higher levels of constitutive expression is needed. The source of the explant can determine the failure or success of transformation. The meristematic tissue whose cells receive the transgene must be able to recover from any shock inflicted by the transformation treatment and quickly regenerate into mature plants. Some reports have revealed that younger explants such as immature embryos can be transformed more efficiently than mature plants. Shoot tips were used as explants in Agrobacterium-mediated transformation of Gladiolus. Leaf explants also resulted in successful transformation of Hyacinthus orientalis and Narcissus tazetta. The most common way of achieving stable transformation of monocotyledonous geophytes has been through the use of callus.
species including; Crocus sativus, Crocus heuffelianus, Fritillaria meleagris and Merwilla plumbea.This calls for more research in the development of somatic embryogenesis protocols in monocotyledonous geophytes to assist transformation.The most challenging aspect in the genetic manipulation of geophytes is establishing subsequent regeneration of plants after every transformation event.The post-agroinfection phase of explants involves their exposure to two forms of antibiotics; one for eliminating Agrobacterium and the other for selection of transformed plants.The concentration of these antibiotics has a significant impact on the regeneration and transformation efficiencies.The concentration of kanamycin used for selection of putative transformants varies with cultivars and explant types.Since selective agents such as kanamycin have been shown to interfere with regeneration, and that monocotyledonous geophytes are not natural hosts of the Agrobacterium, it is sometimes beneficial to involve a delay period of 2 to 10 days before inoculating explants onto the selection medium; thus allowing the transformed explants to recover from the infection process and to express selectable marker genes.Despite genetic engineering methods available, it has been observed that genes that could enhance the quality of ornamental geophytes are many but only a few have been characterized in ornamental geophytes.To date, only a few studies have involved the application of genetic transformation techniques other than reporter genes.Lilium longiflorum plants were transformed for resistance against cucumber mosaic virus via particle bombardment.Phytoene synthase is a regulatory enzyme for carotene biosynthesis and therefore important for color formation.The PSY gene was used in the Agrobacterium-mediated transformation of N. tazzeta var.chinensis.H. orientalis cv.Chine Pink transformed with the thaumatin II gene showed a significant level of resistance to the pathogenic fungus; Botrytis cinera.Gladiolus plants transformed with a defective replicase and protein subgroup II gene were found to be resistant to cucumber mosaic virus, while Azadi et al. 
established that the integration of a defective CMV replicase gene resulted in virus resistant Lilium plants.Since in ornamental floriculture, more emphasis is laid on flower quality and related characteristics, genetics in floral development has become an important discipline.Molecular genetic studies have identified many genes and other regulators that play important roles in floral development.These studies have yielded important insights into the control of flower development, thereby adding to the widely available genetic database for well-established models.For instance, a flower regeneration system was set up for Saussurea involucrata.This was to facilitate basic biological studies of flower development by introducing heading-date 3a; the gene responsible for early flower induction.Most systems currently or previously used are Agrobacterium-mediated based methods of transformation.Agrobacterium-mediated transformation is a widely utilized method of gene delivery.It can assume many forms or systems as outlined below.These systems can be employed individually or concurrently.Table 2 gives some of the examples where these systems are or have been employed.Most approaches to improve bacterial penetration in monocotyledonous plants involve wounding before or during co-culturing.This is to facilitate transfer of Agrobacterium genes across the plant cell walls.However, these mechanical treatments may sometimes damage or deteriorate the physiological state of the explants to an extent that growth retardation and reduced regeneration capacity may result.This system uses a long co-cultivation period of plant tissues and Agrobacterium to provide a high possibility of transformation, while at the same time not imposing adverse effects on tissue regeneration.To prepare a monolayer, the Agrobacterium is grown overnight and about 1 ml of the suspension is transferred and spread evenly on Petri dishes containing agar-solidified nutrient medium.Petri dishes are then left under laminar flow for 10–15 min for slight drying.Plant tissues are then inoculated over the bacterial monolayer and co-cultured.This system was successful for maize transformation.It was used in Agrobacterium-mediated transformation of Dierama erectum; a monocotyledonous ornamental geophyte and results have shown that it is the second best gene delivery system and could be applied for most monocotyledonous geophytes.This is an in planta transformation procedure in which the basal medium containing Agrobacterium carrying constructs of interest, is pipetted into open plant florets during anthesis.In monocotyledonous plants, best results are obtained when spikes have not yet emerged from the sheaths.Florets are then covered to create enough humidity and later uncovered and air dried.The mature T1 seeds are then screened for transformation.This method has been applied for stable transformation of wheat and could be easily applied on flower buds for monocotyledonous geophytes.In short, this involves the direct delivery of exogenous DNA into plant cells.The genetic material is precipitated onto micron-sized tungsten or gold particles.These are placed within a barrel designed to accelerate them to velocities needed to penetrate the cell wall.The limiting factors in developing transgenic ornamental bulbs can be overcome by direct DNA transfer methods; thus by-passing the barriers imposed by Agrobacterium–host specificity and monocotyledonous plant cell constraints.Some advantages offered by this system include: transformation of organized 
tissue, rapid discovery of transformed T1 seeds, transformation of recalcitrant species and also offering the basis for studying many plant developmental processes.This technique has been applied to obtain transgenic plants of tulip, L. longiflorum and Ornithogalum dubium.Recently, a successful genetic transformation protocol for Gladiolus; a monocotyledonous flower bulb using particle bombardment has been reported.Genetic transformation in monocotyledonous geophytes is impeded by availability of somatic embryogenesis protocols for specific plant species.The standardization of somatic embryogenesis helps maintain and enhance the multiplication of elite clones of interest for not only high productivity, but also the establishment and utility of a given transformation protocol in genetic engineering.Somatic embryogenesis has been considered as the basic tool for transformation studies especially in genetic transformation methodologies involving Agrobacterium and biolistics.It may well be stated that this is a necessity in the genetic modification of ornamental geophytes.The long juvenility phase of these geophytes is the other factor prolonging their genetic modification, since in situ methods such as pollen transformation would have to be performed only after flowers have emerged.Identification of new genes of interest together with the application of those that have proven successful in other plant species can be of immense horticultural benefit.For instance; the MADS-box genes which encode transcription factors involved in transition from vegetative to reproductive growth, determination of floral organ identity, senescence and many other developmental processes in plants can be utilized.The ectopic expression of OsMADS1 in transgenic tobacco plants resulted in early flowering plants.Another MADS-box gene isolated from silver birch; BpMADS4, prevents normal senescence, winter dormancy in Populus tremula and promotes early flowering in apple.In recent years, the FLOWERING LOCUS T gene has been the most widely used and effective in promoting early flowering in various plants.Its homologous genes such as PtFT1, CiFT, Hd3a and SFT have recently been isolated from poplar, citrus, rice and tomato, respectively.The discovery of the FT gene raised interest in the study of FT genes in different species.FT homologous genes have been isolated and their roles have been studied extensively.For instance, it was found that under short day conditions in Kasalath, an ortholog of FT; Hd3a promotes early flowering.In another study Tamaki et al. showed that the same Hd3a induces flowering in rice.Further investigations on the activity of these genes on ornamental geophytes, together with other programs aimed towards identification of beneficial genes are therefore valuable.Future research on these plants can also involve those aspects of genetic transformation which have not yet been explored in ornamental monocotyledonous geophytes.This includes utilization of the methods described below.Another approach that has recently received more attention is one which involves gene expression that does not require tissue culture; Agrobacterium gene transfer into seeds.It is considered to be a faster and less laborious approach of generating transgenic plants.To transform the seeds, the Agrobacterium containing a gene of interest is prepared.Dormant seeds are aseptically decontaminated, trimmed and co-cultivated with A. 
tumefaciens.This technique allows the Agrobacterium to penetrate intracellular spaces in the seed tissue and finally transform the embryo cells.Germinated seedlings are then transferred to soil for growth and further analyses are performed accordingly.Although there are no reports on the application of this technique in ornamental monocotyledonous geophytes, results obtained from transformation of Brachypodium distachyon, show that this system could be applied to other monocotyledonous species in future.Due to some transformation difficulties encountered in monocotyledonous species as mentioned earlier, the Agrobacterium may fail to reach the target cells.Trick and Finer described sonication-assisted Agrobacterium transformation as a tool that allows for effective delivery of T-DNA from Agrobacterium to a large number of cells in the plant tissues.This includes diverse groups of plants; dicotyledons, monocotyledons and gymnosperms.The technique simply involves exposure of plant tissue to ultrasound for a short duration in the presence of Agrobacterium.It was found that SAAT treatments produce fissures that assist the Agrobacterium to easily reach internal plant tissues thereby increasing chances for transformation events.In some studies, SAAT has proven to be effective even at low Agrobacterium optical density.This technique has great potential to be applied in genetic modification of monocotyledonous geophytes.Stable transformation of some monocotyledonous plants has been achieved through another non-tissue culture based technique; pollen transformation.This involves production of transgenic plants by inoculating florets with Agrobacterium at or near anthesis.This procedure leads to the production of embryos with enhanced resistance to antibiotics during the selection phase for transformants.It has been successful in cereal crops such as barley, maize and wheat.Successful horticultural trade with ornamental geophytes can be improved through application of multidimensional approaches towards the genetic enhancement of existing crops and further development of new ones.Genetic transformation has become an effective tool and thus future research on ornamental geophytes will utilize this technique.As more genes are being isolated, more options are becoming available for the application of genetic modification in ornamental geophytes.Krens and Kamo have listed more than 30 genes isolated and characterized from geophytes themselves.Some of these genes are involved in virus-, fungi- and insect-defenses as well as carotenoid biosynthesis.The majority of genes such as the MADS box genes that play a role in flower and color development, have also been isolated and successfully characterized.Strategies for flower architecture, color, scent modification and control of florigenesis via genetic engineering with special attention on metabolic engineering of the flavonoid pathway can also be applied.Important traits such as vase life are also open to genetic modification.Ethylene is involved in senescence in many flowers and vase life can be lengthened by blocking ethylene biosynthesis.Another important aspect that has shown great potential lately is the manipulation of biochemical pathways leading to production of highly valued plant secondary metabolites.Recent developments in the Agrobacterium-mediated transformation in monocotyledonous species can benefit from greater involvement in genetic modification of ornamental geophytes.Zhang et al. 
have discovered that weakening defense responses in plants can enhance Agrobacterium-mediated transformation.The ability to successfully establish micropropagation protocols for monocotyledonous geophytes is critical, as it serves as a pre-requisite to genetic transformation.With sustained interest in ornamental plants and the availability of techniques such as Agrobacterium-based systems of gene delivery and biolistic methods, together with continuous research in gene isolation, there is a promising future for the success of genetic modification of monocotyledonous ornamental geophytes.This review presents an outline of examples in which most of the monocotyledonous species listed have been successfully transformed through Agrobacterium-based methods.However, most of the protocols given are either basic or were developed to induce disease resistance.In view of the recent promising prospects, the floriculture industry still has the potential to offer more satisfactory products to consumers.While these facts remain true, the identification of genes and promoters that can be expressed at a functional level in monocotyledonous geophytes is still challenging.The knowledge of monocotyledonous host cell cycles, the establishment of somatic embryogenesis protocols and their correlation with mechanisms for T-DNA transfer need to be explored in depth.
Genetic improvement of ornamental geophytes especially the monocotyledonous type; is often restricted by failure of Agrobacterium to reach competent cells as well as a lack of efficient regeneration systems. Despite all these limitations, it has recently been shown that the use of efficient promoters, super-virulent strains, and the utilization of systems such as an agrobacterial monolayer, Agrobacterium-mediated pollen and seed transformation, floral dip method and sonication-assisted Agrobacterium-mediated transformation (SAAT) will ensure success in the genetic transformation of these ornamentals in the near future. This article outlines factors affecting transformation of monocotyledonous geophytes. Special emphasis is laid on measures that have been employed to alleviate various difficulties. The need to develop somatic embryogenesis protocols for the ease of transformation is highlighted. In addition, perspectives in view of future research are also given. This information is crucial for biotechnological improvement of ornamental geophytes that are proving difficult to transform.
31,620
Corpus callosum volumes in the 5 years following the first-episode of schizophrenia: Effects of antipsychotics, chronicity and maturation
The neurobiology of schizophrenia is not yet fully elucidated.In the last decades, neuroimaging studies have reported several structural and functional brain abnormalities associated with the disorder, some of which seem to be directly related to clinical course and symptom dimensions.Among the findings of such investigations, convergent evidence of structural MRI studies has corroborated the existence of cerebral white matter abnormalities in schizophrenia.The WM contains myelinated axonal fibers that interconnect gray matter structures all over the brain, possessing a crucial role in the integration of neural information.In addition to structural MRI investigations, post-mortem studies found reductions in size and density of oligodendrocytes and abnormal myelin structure and compactness in the WM of schizophrenia patients, which may interact with synaptic abnormalities to produce disrupted brain connectivity.Such findings are in line with the notion that abnormal integration of brain networks is central to the neurobiology of schizophrenia, as postulated in the disconnection hypothesis.A major subject of debate in schizophrenia research is whether the structural brain changes observed in MRI investigations are static or progressive over time.Progression of volumetric GM abnormalities has been consistently documented after the onset of schizophrenia, particularly in patients with a non-remitting course of illness.There is also evidence suggesting that WM abnormalities might be progressive in schizophrenia.One meta-analysis of longitudinal studies using region-of-interest analysis found significant reduction of frontal, temporal and parietal WM over time in schizophrenia, although no significant effect for total WM was observed.The corpus callosum, which is the largest bundle of axonal fibers in the brain and the main commissure connecting the cerebral hemispheres, seems to be one of the most affected WM structures in psychosis.Reductions of CC area or volume, mainly affecting its anterior portions, have been consistently described in schizophrenia, including first-episode patients.Also, smaller genu of the CC has been associated with conversion to psychosis in ultra-high risk subjects.However, to date only a few studies specifically examined the longitudinal course of CC abnormalities in adults with schizophrenia.Mitelman et al. 
found that patients with chronic schizophrenia had a faster decline in absolute CC size than healthy controls over 4 years of follow-up, and this effect was stronger in poor-outcome patients.Only one longitudinal investigation studied patients with first-episode of schizophrenia-related psychoses and found no differences relative to healthy controls in the rates of change of CC volumes over a 1-year follow-up period.So far, no study has evaluated progression of CC volumes over a fairly long period of time after illness onset.Moreover, the potential impact of antipsychotic use and illness course/prognosis on CC volumes has not been investigated using a longitudinal design in the initial years of the disease.As the brain is under constant change due to maturation and aging, another relevant question in neuroimaging research is whether findings of structural brain abnormalities in schizophrenia might actually be related to deviations in these processes.MRI investigations evaluating the age-related brain changes suggest that schizophrenia is associated with both a dysmaturational pattern and accelerated brain aging.Studies with schizophrenia patients examining the effects of age on the WM are still scarce and inconclusive.Regarding the effects of age on the CC structure, one cross-sectional investigation found that, whereas healthy adults exhibited an age-related increase in total CC area, this pattern was absent in treatment-naïve FESZ patients.Nevertheless, it is important to notice that the authors assessed linear correlations only and more recent evidence suggests that the CC matures/ages in a nonlinear fashion, which might have limited the sensitivity of their analyses.In the present study, a population-based cohort of young adults with FESZ and epidemiological controls were studied with structural MRI scanning at baseline and after 5 years of naturalistic follow-up.The FESZ patients enrolled here are part of a larger sample of first-episode psychosis that exhibited volumetric reductions affecting the right frontal WM and the genu and splenium of the CC at study entrance.We employed both voxel-based morphometry and volume-based morphometry analyses aiming to examine: The occurrence of volume changes affecting the CC over 5 years of follow-up in FESZ patients; The potential influence of clinical course and AP use in CC volumes over time; Abnormalities in the trajectories of maturation/aging of the CC in FESZ patients.This is particularly relevant considering that nonlinear changes related to maturation/aging processes may obscure the investigation of both static and progressive brain differences between patients and controls.The same analyses were carried out for the total WM compartment in order to allow interpretation of findings in the CC in the light of comprehensive changes in brain WM over time.We hypothesized that FESZ patients would present a global reduction of CC volumes over time, which might be more pronounced in the portions that presented reductions at baseline, as well as an accelerated age-related volume loss of the CC relative to controls.We also hypothesized that the longitudinal changes observed would be more pronounced in patients with non-remitting course.The FESZ patients examined herein were selected from a large sample of FEP individuals who took part in a population-based case-control study investigating the incidence of psychotic disorders in a circumscribed region of Sao Paulo city, as previously described.In the original epidemiological investigation, cases were 
identified by active surveillance of all people that made contact for the first time with the mental healthcare services attending that region due to psychotic symptoms.In order to obtain a population-based control sample at the time of the baseline assessment, next-door neighbors were actively contacted and enrolled as volunteers.In Chaim et al. and Colombo et al., our group examined baseline differences in, respectively, CC and WM volumes between FEP and controls drawn from this cohort.In the present investigation, we opted to include only those patients who fulfilled criteria for a first-episode of schizophrenia, schizophreniform disorder or schizoaffective disorder according to the Diagnostic and Statistical Manual for Mental Disorders, 4th edition and who completed the 5-year follow-up evaluation.A subsample of controls who completed the 5-year follow-up assessment, matched for age and gender to the FESZ group, was also drawn from the original control group.Other inclusion criteria for both FESZ patients and controls at baseline were: age between 18 and 50 years; residing for 6 months or more in defined geographic areas of Sao Paulo.Exclusion criteria for both cases and controls at any time of the study were: history of head injury with loss of consciousness; presence of neurological disorders or any organic disorders that could affect the central nervous system; moderate or severe mental retardation; and contraindications for MRI scanning.Exclusion criteria specific for controls were: diagnosis of any DSM Axis I disorders; and personal history of psychotic symptoms, assessed with the Psychosis Screening Questionnaire.Following baseline evaluation, patients were referenced to treatment at the health services located in the geographical regions where they lived in.Both FESZ patients and controls included in the present study were followed-up naturalistically over a 5-year period, with re-interviews carried out for diagnostic confirmation and assessment of prognosis.From the original sample studied at baseline, 32 FESZ and 34 controls completed the 5-year follow-up protocol and were included in the present analyses.Information on data attrition can be found in Rosa et al.
and in the Supplementary Table 1.Local ethics committees approved this investigation and all participants provided informed written consent before the study assessments both at baseline and at the 5-year follow-up.All participants were evaluated both at baseline and at 5-year follow-up with the Structured Clinical Interview for DSM-IV for the assessment of psychiatric diagnosis.For FESZ patients, the presence and severity of psychotic symptoms was measured with the Positive and Negative Syndrome Scale.In addition, a general medical history and information about the use of psychotropic medications through medical records and interviews with patients and/or family was also obtained.At 5-year follow-up, patients were categorized into remitting or non-remitting courses according to the DSM-IV course specifiers, assessed with the SCID: remitting course meaning a single psychotic episode in full remission, without new psychotic episode during the follow-up, and a non-remitting course meaning the presence of either continuous or episodic symptoms, or residual/negative symptoms.To investigate the effects of AP treatment on longitudinal WM changes, FESZ patients were divided into two groups based on AP use during the follow-up period: patients who had been on continuous and regular treatment, and those who had quit treatment with AP after the baseline evaluation.Imaging data were acquired both at baseline and at follow-up using two identical 1.5 T MRI scanners.In order to test the reliability between devices, analyses of brain measures of six controls who were examined in both scanners in the same day were conducted, demonstrating a very good consistency between them; for instance, an intraclass correlation coefficient of 0.98 was observed for CC measures.The same acquisition protocol was used in both scanners: a T1-spoiled gradient recall sequence providing 124 contiguous slices, slice thickness = 1.5 mm, matrix size = 256 × 192, echo time = 5.2 ms, repetition time = 21.7 ms, flip angle = 20.At baseline, all MRI scannings were collected with the same field of view, resulting in a voxel with the following dimensions: 0.86 × 0.86 × 1.5 mm.At the follow-up MRI scanning session, the same FOV and voxel dimension parameters were set initially, but the technical team that conducted the MRI acquisitions were allowed to adjust the FOV before image acquisition in cases where fittings were judged to be needed due to inter-subject differences in head size.Thus the final voxel sizes at the follow-up scanning session were: the same as initially set in 8 patients and 9 controls; 0.94 × 0.94 × 1.5 mm in 5 patients and 1 control; 1.02 × 1.02 × 1.5 mm in 16 patients and 22 controls, 1.05 × 1.05 × 1.5 mm in 2 patients, and 1.09 × 1.09 × 1.5 mm in 1 patients and 2 controls.No difference between groups regarding the distribution of voxel sizes of follow-up MRI acquisitions was observed.All images were resampled to an isotropic voxel as part of the processing protocol adopted here.An experienced neuroradiologist visually inspected all images with the purpose of identifying artifacts during image acquisition and the presence of silent gross brain lesions.VBM was performed using Statistical Parametric Mapping 8.0, executed in Matlab platform.First, follow-up images were registered to baseline images and reoriented; the mm coordinate of the anterior commissure matched the origin xyz, and the orientation approximated Montreal Neurological Institute space.Then, all images were segmented and classified into GM, WM and 
cerebrospinal fluid using the unified segmentation implemented in SPM8, which provides both the native space versions and Diffeomorphic Anatomical Registration Through Exponentiated Lie algebra imported versions of the tissues.A customized template was created from the subjects using the DARTEL protocol.The deformation field was applied to the segmented images in sequence.Finally, the images created in the previous step were standardized to MNI space, re-sliced to 1.5 × 1.5 × 1.5 mm voxels and smoothed using an 8-mm full width at half maximum Gaussian kernel.The total WM volumes were obtained from the modulated images.The choice to use an 8 mm Gaussian filter size in the current study was based on the fact that such a degree of smoothing provides an optimal increment in signal-to-noise ratio and conformation of MRI data to a normal distribution, as well as compensating for some of the data loss incurred by spatial normalization.Small Volume Correction analyses were performed by masking the WM volumetric map with CC ROIs derived from a DTI-based WM atlas.By constraining the total number of voxels included in the analyses, the SVC approach improves the statistical power of hypothesis-driven analyses within specific CC regions.The predefined ROIs were projected onto each individual image to circumscribe total CC limits as well as to divide it into anterior, mid and posterior portions; such portions were arbitrarily defined to contain the CC genu, body and splenium, respectively.To further explore abnormal patterns of age-related volume change affecting the CC in FESZ, we also conducted VolBM analyses of total WM and CC volumes.Differently from the SVC method described above, here the volumes of total WM and of total, anterior, mid and posterior CC were extracted for each and every subject to be analyzed using statistical tests not available in the SPM8 package.The measure of total WM volume was obtained with the "get_totals" script implemented for SPM8 on the native space of WM segmentation.For the CC measures, we applied the spatially normalized DTI-based ROI masks on each subject's registered image and then extracted the volumes using a script from the SPM mailing list.Between-groups comparisons of demographic and clinical data were carried out using the Statistical Package for Social Sciences.We employed the Chi-square or Fisher's exact tests for categorical variables and the Student's t-test or an analysis of variance for continuous variables.Statistical significance was set at p < 0.05, two-tailed.A repeated-measures analysis of covariance contrast, with group and time as factors, was chosen to assess between-group differences in longitudinal WM changes.The following voxel-by-voxel comparisons of CC volumes were performed using the SPM8 statistical package: (i) total group of FESZ patients versus controls; (ii) remitted patients at T5 versus non-remitted patients at T5 versus controls; and (iii) patients on continuous AP use at T5 versus patients without AP versus controls.Only voxels with values above an absolute threshold of p < 0.05 entered the analyses.A measure of the total amount of WM was entered as a covariate in all analyses, given by the sum of voxels within the corresponding WM compartment of each subject.Resulting statistics at each voxel were transformed to Z scores, and between-group differences were displayed as statistical parametric maps of the group x time interactions into standard anatomical space, thresholded at the one-tailed p < 0.001 level of statistical significance.
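The smoothing and volume extraction just described were run with the SPM8/MATLAB tools and the "get_totals" script; purely as an illustration of the underlying arithmetic (converting the 8-mm FWHM to a Gaussian sigma in voxel units, and summing a tissue-probability map into a volume), a minimal Python sketch is given below. The synthetic array, grid size and variable names are hypothetical stand-ins and are not part of the study's pipeline.

import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative stand-in for a modulated WM probability map (values 0-1)
# on the 1.5 mm isotropic grid used after DARTEL normalization.
rng = np.random.default_rng(0)
wm_prob = rng.uniform(0.0, 1.0, size=(121, 145, 121)).astype(np.float32)
voxel_size_mm = 1.5                      # isotropic voxel edge length
fwhm_mm = 8.0                            # smoothing kernel quoted in the text

# Convert FWHM to the Gaussian standard deviation, then to voxel units.
sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
sigma_vox = sigma_mm / voxel_size_mm
wm_smoothed = gaussian_filter(wm_prob, sigma=sigma_vox)  # map entering voxelwise stats

# "Total WM volume" in millilitres: sum of tissue probabilities times voxel volume.
voxel_volume_mm3 = voxel_size_mm ** 3
total_wm_ml = wm_prob.sum() * voxel_volume_mm3 / 1000.0
print(f"sigma = {sigma_vox:.2f} voxels, total WM ~ {total_wm_ml:.0f} ml")

The sketch only mirrors the two formulas involved; in the study these steps were applied to the actual segmented and modulated images.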
As described above, the SVC tool of SPM8 was employed with the purpose of restricting the comparisons to voxels located within the CC and its three main portions.Clusters with a minimum of 10 contiguous voxels showing significant findings within each of those volumes of interest were reported only if they survived family-wise error correction for multiple comparisons over that region.Significant ANCOVA findings were followed-up with two-group post-hoc t-tests.The same comparisons were repeated including age and gender as nuisance variables.Subsequently, the SPM maps were inspected again on an exploratory basis aiming to identify possible regions of WM volume change over time in FESZ patients versus controls.Such findings were reported as statistically significant only if surviving FWE correction for multiple comparisons over the whole brain.In all analyses, MNI coordinates from the voxel of maximal statistical significance were converted to the Talairach and Tournoux System.Longitudinal changes in brain structure might be influenced by intra-individual factors; such random effects are not weighted by the statistical models commonly used in conventional voxelwise methods.Mixed effects models were devised to deal with this issue, thus being more sensitive to detect between-group differences that might be overshadowed by individual variability.For this reason, MEM analyses were performed on the volumes of total WM and of total, anterior, mid and posterior CC extracted from the preprocessed images, as described above.Linear MEM was run in the SPSS package, with total WM volume and each ROI of the CC as a dependent variable.In order to select the best fitting model, different models were tested using a three-step approach: first only fixed factors were included; then the random intercept was added; and lastly random slopes were included.For all analyses and steps, the fixed factors were: age, gender, group, time point, age X group, time point X group, and total WM volume.At step three, the interactions of age X group and time point X group were also set as random slopes.For the FESZ group only, MEM analyses were performed including clinical status and AP use at T5, as well as their interactions with time point, as fixed factors; at step three, both interactions were set as random slopes.In all analyses, values of p were corrected for multiple comparisons with the false discovery rate approach.For the evaluation of age-related trajectories of total WM and CC volume change in FESZ patients and controls, the ROIs of total WM and of total, anterior, mid and posterior CC previously extracted from follow-up images were analyzed with SPSS 20.0.Regression analyses were used to determine the most appropriate trajectory model to describe the relationship between each ROI measurement and age in each group.CC volumes were normalized by total WM volume.First, second and third order polynomial expansions were assessed and the best fitting models were reported.FESZ patients and controls did not differ regarding age, gender and interval between MRI scans.However, patients attained fewer years of education and had a higher rate of comorbid substance abuse/dependence compared to controls, as commonly reported by other studies.FESZ patients who remained remitted and those with a non-remitting course over the follow-up period had similar clinical characteristics, including interval between MRI scans, frequency of substance use disorder and exposure to AP in days.Patients in use of AP and those not using AP at follow-up had similar characteristics as well.
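Returning briefly to the statistics described above: the false discovery rate correction was applied within SPSS and the text does not specify the exact routine used; as an illustration only, one common choice, the Benjamini-Hochberg procedure, is sketched below in Python with made-up p-values.

import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of p-values declared significant at FDR level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Largest k with p_(k) <= (k/m) * q; reject hypotheses with ranks 1..k.
    thresholds = (np.arange(1, m + 1) / m) * q
    below = ranked <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Hypothetical p-values from five ROI-level tests.
print(benjamini_hochberg([0.001, 0.012, 0.030, 0.041, 0.20]))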
No significant longitudinal differences were observed between the whole group of FESZ subjects and the control group with regard to total CC volumes.Also, there were no significant differences in the comparison between remitted FESZ patients at T5 versus non-remitted FESZ patients at T5 versus controls on the change of CC volumes over time.Nevertheless, FESZ patients on continuous AP use showed greater volumetric increase over time in the posterior CC relative to controls, in a cluster involving both brain hemispheres.Results did not change significantly when the analyses were repeated including age and gender as confounding variables/fixed factors.No longitudinal differences were observed for total, anterior and mid CC volumes.To explore the influence of including total WM volume as a covariate in the final results, we repeated the voxelwise comparisons without controlling for total WM volume, which likely increases the sensitivity of the analyses.After repeating the voxelwise comparison without controlling for total WM volume, the following results emerged: (i) the negative result regarding longitudinal differences between the whole group of FESZ subjects and controls in CC volumes remained unchanged; (ii) FESZ patients on continuous AP use also showed greater volumetric increase over time in the posterior CC relative to controls; (iii) one additional cluster of greater volumetric increase was observed in the anterior CC of FESZ patients on continuous AP use relative to controls; and (iv) non-remitted FESZ patients showed greater volume increase over time in both posterior and anterior CC relative to controls.These results did not significantly change when the analyses were repeated including age and gender as confounding variables/fixed factors.In the voxelwise comparison between the whole group of FESZ patients and controls, no significant differences in regional WM volumes over time were observed.Also, clinical status of FESZ patients as well as AP use at follow-up did not significantly influence longitudinal changes in WM volumes, i.e.
no significant differences emerged in these comparisons.For total WM and total CC volumes, the best fitting model based on Akaike's Information Criteria was determined using a heterogeneous AR structure of variance-covariance matrix and including only fixed effects.For anterior and mid CC volumes, the best fitting model was determined using an AR structure of variance-covariance matrix and including only fixed effects.Regarding the posterior CC volume, the best fitting model was determined using a heterogeneous AR structure of variance-covariance matrix, and adding random intercept and slopes to the model reduced its error.No significant interaction between age and group or time point and group was observed for any of the dependent variables tested.For total WM, gender and time point were significant determinants of total WM volumes.For total, anterior, mid and posterior CC volumes, only total WM was a significant factor.In the analyses restricted to the FESZ group, the same pattern of model fitting was observed, with the exception of the posterior CC portion, for which the best model was determined by an AR structure of variance-covariance matrix and including only fixed effects.No significant interaction between clinical course and time point or AP intake and time point was observed for total WM or CC volumes.For total WM, age was the only significant predictor, indicating that the older the patient, the smaller the volume of brain WM. For total, anterior, mid and posterior CC volumes, similarly to what was observed for the whole sample, total WM was the only significant determinant.Also, for mid CC volume, a strong tendency was observed for time point, pointing to increasing volumes over time in the FESZ group.In FESZ patients, a pattern of linear WM volume decline with age was observed.Differently, controls showed a trend toward a non-linear pattern of WM aging, with total volume increase until the fourth decade of life and volumetric loss from then onwards.A significant linear volumetric loss with age was found in the anterior CC region of FESZ patients, whereas relative volume preservation during non-elderly adulthood was observed in mid and posterior CC portions of FESZ subjects and in all CC portions of controls.These results were obtained by evaluating the volumes extracted from follow-up images; nonetheless, the same pattern of findings emerged when we analyzed volumes extracted from baseline images.In order to better interpret our results on CC maturation and to increase sensitivity, we conducted post-hoc analyses for the trajectories of CC without controlling for total WM.
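Before turning to the outcome of these fits, the curve-estimation logic described in the methods (fitting first-, second- and third-order polynomials of ROI volume against age and keeping the best-fitting model) can be illustrated with a short, self-contained Python sketch. The data are synthetic, and selecting the model by AIC under a Gaussian-residual assumption is an illustrative choice rather than the exact SPSS procedure used in the study.

import numpy as np

rng = np.random.default_rng(1)
age = rng.uniform(18, 50, size=34)                      # hypothetical ages of 34 subjects
volume = 0.9 - 0.004 * (age - 35) ** 2 / 100 + rng.normal(0, 0.01, age.size)

def aic_for_polynomial(x, y, degree):
    """AIC of a least-squares polynomial fit, assuming Gaussian residuals."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    n = y.size
    rss = np.sum(residuals ** 2)
    k = degree + 2                      # polynomial coefficients plus error variance
    return n * np.log(rss / n) + 2 * k

# Compare first-, second- and third-order trajectories and keep the best one.
aics = {d: aic_for_polynomial(age, volume, d) for d in (1, 2, 3)}
best_degree = min(aics, key=aics.get)
print(aics, "-> best degree:", best_degree)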
The best fitting curve for all CC volumes of FESZ patients was represented by a linear reduction of volumes by age.For controls, no fitted curve was statistically significant.To our knowledge, this is the first structural MRI study to examine longitudinal morphometric changes in the CC of FESZ patients over the first 5 years after disease onset, as well as to assess age-related trajectories of CC volume change in schizophrenia patients.In the present investigation, while no significant changes in CC volumes were observed in FESZ patients relative to controls, continuous AP use was associated with greater increase of the posterior CC region over the 5 years of follow-up.Also, the regression analyses focusing on age-related effects on total WM and CC volumes found a different maturation/aging pattern in FESZ patients, who presented a linear decline of total WM and anterior CC with age, whereas controls showed an inverted U-shape pattern for total WM volume and relative preservation of CC volumes during non-elderly adulthood.Differently from our initial hypothesis, no significant changes over time in CC volumes were observed in FESZ patients relative to controls in voxelwise analyses using the SVC approach.Previous studies investigating the effects of illness course on CC morphometry in schizophrenia yielded variable results.In a meta-analysis of 28 cross-sectional investigations, CC area reductions were found to be more pronounced in FEP patients, while patients with chronic schizophrenia exhibited relatively greater CC area.The few longitudinal studies published so far mostly showed findings in a opposite direction, reporting reductions of CC measures over time, particularly in patients with poor prognosis.However, these studies have evaluated chronic schizophrenia patients and employed CC midsagittal area as the main measure.The only longitudinal study published to date to assess CC volumes in FESZ also failed to find significant changes over a 1-year follow-up period compared to controls.It is important to notice that the repeated-measures ANCOVA contrast used for voxelwise comparisons here and the statistical analyses conducted by Del Re et al. 
are based on linear statistics, and would likely not detect nonlinear patterns of volumetric changes over time.Nevertheless, these findings suggest that no progression of CC volume deficits occur in the first years after schizophrenia onset.We also failed to find significant longitudinal changes in regional WM volumes in the whole-brain voxelwise comparison between FESZ and control groups, which was corroborated by the MEM analyses.Contrary to our finding, one large longitudinal investigation of FESZ reported reductions in total, frontal, temporal and parietal WM of patients over time, which were more pronounced in the first years of follow-up.One possible explanation for the negative result observed here is that the population-based approach used to recruit participants for the present investigation could have led to the enrollment of cases with milder severity of illness than in previous longitudinal studies.Interestingly to this regard, clinical status of FESZ patients at T5 did not influence CC volumes over the 5 years of follow-up.Nonetheless, due to the modest size of our study groups, we cannot fully rule out the possibility of type II statistical errors.On the other hand, as illness severity seems to be linked to the degree of brain changes over time, the fact that we studied milder cases of FESZ may have improved our sensitivity to detect effects of AP exposure in WM morphometry.Our VBM comparisons showed that FESZ patients on continuous AP use had greater increase in posterior CC relative to controls over time.The effects of AP agents on GM volumes have been well documented by several studies, but less is known regarding the impact of such medications in the WM. In a study with chronic and FEP patients, Molina et al. found that exposure to AP lead to decreases in parietal and occipital WM volumes.Ho et al. reported an association between AP intake and volume reductions in total, frontal, temporal and parietal WM in a large cohort of FEP.Bartzokis et al., in contrast, found increased frontal WM in chronic schizophrenia patients receiving long-acting risperidone.Regarding the CC, a cross-sectional study with FESZ and chronic schizophrenia patients did not find significant correlations between AP load and CC measures.In the only longitudinal investigation of Del Re et al., no significant correlation was observed between rate of CC change over time and load of AP intake in FESZ.However, the very small size of study groups and the relatively short period of follow-up of Del Re et al. work might have limited its sensitivity to detect CC volume changes related to AP use.Regarding WM microstructure, DTI studies on the effects of AP are also inconclusive, reporting both increases and reductions in anisotropy following exposure to medication.Animal studies investigating the effects of exposure to AP on brain WM volumes are rare: in macaques, exposure to haloperidol and olanzapine caused non-significant decrements of parietal WM; in rodents, exposure to the very same AP did not influence CC volumes.One should consider that AP drugs actions on brain tissue might vary depending on medication class.Some studies have indicated a differential protective effect of atypical versus typical AP on brain volume changes in schizophrenia.Regarding the WM, Ho et al. 
demonstrated that higher doses of non-clozapine atypical AP were associated with enlarged parietal WM volumes over time.Another study observed that schizophrenia patients previously treated with typical AP and that were switched to olanzapine had a 25%-increase in the anterior internal capsule volume relative to controls in a 1-year follow-up.In this regard, due to the naturalistic follow-up design of our study, we could not control our analyses in terms of neuroleptic type and lifetime load; thus, we cannot rule out that the exposure to a specific profile of AP type might have modulated our findings.As molecular pathology in schizophrenia is still under investigation, effects of AP on brain structure are difficult to interpret.A possible explanation would be the restoration of damaged myelin, as postmortem investigations indicate that schizophrenia is associated with myelin/oligodendrocyte abnormalities and animal and co-culture studies have demonstrated a potential role for AP in oligodendrocyte regeneration and myelin repair.Also, it is proposed that WM volume changes observed over aging in healthy individuals are caused by myelination/demyelination.Therefore, one could postulate that the reversal of pathological processes could be indirectly observed from longitudinal morphometric studies, which would be demonstrated by the stability or even the regression of reductions in WM volumes in patients continuously exposed to AP medication, as previously proposed.Our results for the voxelwise comparisons without controlling for total WM reinforce our initial findings that the chronic exposure to medication induces the enlargement of volumes over time.We also found new clusters showing increasing volumes over time in patients with poor-prognosis.Such clusters seem to overlap with the clusters found for continuous AP use, although they are smaller and present lesser statistical significance.It is conceivable that patients with worse prognosis should receive more AP medication over time; in fact, we observed a non-significant association between continuous use of neuroleptic medications and non-remission status.In other words, there is some collinearity between the exposure to AP drugs and poor clinical outcome.Therefore, we consider that such new findings that emerged when we conducted the analyses without controlling for total WM are possibly false-positives, indirectly reflecting the effect of the exposure to AP medication rather than a true relationship
between poorer prognosis and larger CC volumes.Another issue that should be considered in the interpretation of our results is the possibility of brain maturation abnormalities over the lifespan: when comparing FESZ patients to controls, differences in the trajectories of WM aging may obscure the examination of longitudinal changes directly associated to the disorder.Also, brain maturation might follow non-linear patterns, which are not detected by the statistical models generally used in MRI investigations.Indeed, our analyses of aging effects on total WM and CC volumes pointed to an abnormal maturation/aging pattern in FESZ.In our study, controls showed a non-linear trend of WM maturation and aging, with an ascending curve until the fourth decade of life and volumetric loss subsequently.Despite not reaching statistical significance, such finding is in accordance with a number of previous studies on WM maturation in healthy volunteers that demonstrated quadratic curves with a peak in mid to late adulthood.In contrast, FESZ subjects showed a linear decline of total WM volume with age.Employing a different metric, another 5-year follow-up study also observed an abnormal pattern of WM aging in schizophrenia, showing a faster decline in WM enlargement naturally observed in early adulthood.Bose et al., investigating solely linear differences in WM maturation in schizophrenia, found an accelerated WM loss with age in patients, in accordance with Andreasen et al., who demonstrated that FESZ patients present faster WM reductions in total, frontal, temporal, and parietal regions with aging.Therefore, our results reinforce such previous findings indicating that abnormal aging of brain WM in schizophrenia patients might be present since the first outbreak of the disease.Evidence suggests that the maturation and normal aging of the CC and its sub-regions in healthy subjects are also non-linear during young adult life, showing the same inverted U-shape pattern as for total WM.Schizophrenia seems to also disturb such normal process.A previous study examining the effect of age on callosal thickness and area demonstrated that chronic schizophrenia patients in the early adulthood showed a reduction of callosal thickness with age, in opposition to the expected expansion observed in healthy controls.Another cross-sectional investigation showed that the expected expansion of CC area with age that occurred in controls was not observed in FEP patients.Nevertheless, so far no study has specifically evaluated non-linear volumetric trajectories of the CC in individuals with schizophrenia.In our study, in the analyses controlling for total WM, FESZ patients showed a linear reduction in anterior CC volume with age; when absolute volumes were examined, all CC measures showed linear reduction with age.Thus, CC seems to follow the pattern observed for global WM in patients, with even more accelerated volume reductions in its anterior part.Conversely, controls displayed overall volumetric stability of CC during non-elderly adulthood; this was also confirmed by the examination of absolute volumes.It is conceivable that pathological process affecting brain WM in schizophrenia might produce the same effect on all brain regions, indistinctly, causing widespread accelerated shrinkage of WM volumes.Regarding controls, no clear pattern was detected.Perhaps the restricted number of subjects might have precluded the observation of the normal non-linear maturational pattern; also, the short age range evaluated in our study might 
have limited our analyses, as CC aging seems to peak in early adulthood and remain stable till late life.In summary, the patterns observed in our study are in line with previous morphometric investigations of CC and aging in schizophrenia.The present investigation evaluated an enriched sample of FESZ, based on an epidemiological design with a naturalistic follow-up.Such approach aimed to control for unmeasured environmental effects and to favor the recruitment of a group of FESZ that was more representative of community, in opposition to commonly used convenience samples of patients assisted in tertiary services.Notwithstanding, some limitations of this study may have influenced our results and should also be weighted in the interpretation of our findings.Firstly, as considered above, the final sample of FESZ patients and controls who completed the 5-years follow-up protocol can be considered modest, which increases the risk of both type I and II statistical errors.Therefore, our results may not represent trends of the overall schizophrenia spectrum, which is highly heterogeneous, but they might be limited to specific groups of patients.Secondly, a substantial proportion of our patients present comorbid substance misuse, which can affect WM and CC volumes.Lastly, we have included patients whose psychosis onset was relatively late in life.This might have added to the heterogeneity in our analyses, as the neurobiological mechanisms rendering vulnerability for developing psychosis might change with older age.Our longitudinal investigation demonstrated that chronic exposure to AP medication is related to greater increase in posterior CC region in FESZ patients over time.Our analyses also revealed dysmaturational pattern in FESZ patients, who presented a linear decline of total WM and anterior CC volumes with age, while controls had a non-linear pattern of total WM maturation/aging and volumetric stability in the CC.Differences on brain maturation may interfere on the longitudinal effects of the disease, therefore obscuring conclusions on the static versus progressive issue.Replication of the present findings with a larger sample of patients is warranted.The following are the supplementary data related to this article.Sociodemographic and clinical characteristics of FESZ patients which completed follow-up evaluation and of FESZ dropouts.Supplementary Material - Table 1,Sociodemographic and clinical characteristics of FESZ patients according to remission status.Supplementary Material - Table 2,Sociodemographic and clinical characteristics of FESZ patients according to AP use over follow-up.Supplementary Material - Table 3,Trajectory of CC aging/maturation in FESZ patients and controls without controlling for total WM.Supplementary Material - Table 4,Supplementary data to this article can be found online at https://doi.org/10.1016/j.nicl.2018.03.015.This study was supported by the American Psychiatry Association award “APA/AstraZeneca Young Minds in Psychiatry International Award” to Dr. Maristela S. Schaufelberger and FAPESP scholarships no. 2014/00481-1 and 2013/03905-4 to M.T.M.M. and M.V.Z., respectively.Baseline MRI data was collected with the support of the Wellcome Trust.M.H.S. receives a research scholarship from the Centro de Aperfeiçoamento de Pessoal de Nível Superior.
Background: White matter (WM) structural changes, particularly affecting the corpus callosum (CC), seem to be critically implicated in psychosis. Whether such abnormalities are progressive or static is still a matter of debate in schizophrenia research. Aberrant maturation processes might also influence the longitudinal trajectory of age-related CC changes in schizophrenia patients. We investigated whether patients with first-episode schizophrenia-related psychoses (FESZ) would present longitudinal CC and whole WM volume changes over the 5 years after disease onset. Method: Thirty-two FESZ patients and 34 controls recruited using a population-based design completed a 5-year assessment protocol, including structural MRI scanning at baseline and follow-up. The linear effects of disease duration, clinical outcome and antipsychotic (AP) use over time on WM and CC volumes were studied using both voxelwise and volume-based morphometry analyses. We also examined maturation/aging abnormalities through cross-sectional analyses of age-related trajectories of total WM and CC volume changes. Results: No interaction between diagnosis and time was observed, and clinical outcome did not influence CC volumes in patients. On the other hand, FESZ patients continuously exposed to AP medication showed volume increase over time in posterior CC. Curve-estimation analyses revealed a different aging pattern in FESZ patients versus controls: while patients displayed a linear decline of total WM and anterior CC volumes with age, a non-linear trajectory of total WM and relative preservation of CC volumes were observed in controls. Conclusions: Continuous AP exposure can influence CC morphology during the first years after schizophrenia onset. Schizophrenia is associated with an abnormal pattern of total WM and anterior CC aging during non-elderly adulthood, and this adds complexity to the discussion on the static or progressive nature of structural abnormalities in psychosis.
31,621
Population characteristics of submicrometer-sized craters on regolith particles from asteroid Itokawa
Solar system objects without atmospheres are continuously exposed to hypervelocity impacts.Impact processes are considered to be among the fundamental agents causing the modification of surface geological features of airless bodies.They include impact cratering, regolith formation, regolith mixing, and migration.In addition, microscopic meteoroid impacts contribute to changes in the optical properties, chemical composition, and structures of regolith surface material through impact melting, vaporization, and condensation processes, so-called 'space weathering'.Surface evolution caused by hypervelocity impacts can offer important clues to understanding the history of airless bodies in the solar system.The Hayabusa spacecraft touched down on the S-type near-Earth asteroid 25143 Itokawa and recovered regolith particles from its surface.Mineralogical and oxygen isotope properties of Itokawa particles are consistent with those of LL5–6 chondrites.Itokawa particles contain solar-wind gases and cosmogenic nuclides, implying that they resided on the asteroid's surface.Partially crystalline rims containing nanoparticles provide evidence of space weathering effects on S-type asteroids, where solar-wind irradiation damage and implantation are the major causes of rim formation, whereas micrometeoroid impacts play only a minor role.In addition, regolith activity on Itokawa—probably driven by impact processes—has been identified based on grain motion, fracturing, and abrasion.These previous studies have shown that Itokawa particles contain a record of the collective processes of regolith evolution on this small asteroid.In previous studies, submicrometer-sized impact craters have been found on Itokawa particles.These natural, small-scale craters can offer insights into the process of small-scale hypervelocity impacts, for which impact experiments in the laboratory have not yet yielded a comprehensive picture.In addition, these small craters can provide information about the unknown origin of the micrometeoroids bombarding Itokawa and the possible contribution of submicrometer impacts to space weathering.Craters with diameters from micrometers to a few tens of nanometers are generally observed on lunar regolith, as well as on the meteoritic regolith breccias Kapoeta and Murchison.Therefore, understanding submicrometer-sized cratering processes on Itokawa particles can contribute to a general interpretation of small-scale impact processes on airless bodies.Previous studies have reported that the abundance of submicrometer craters on Itokawa particles is very low compared with similar features on lunar regolith; detailed surface observations identified only 24 submicrometer craters on 32 Itokawa particles with sizes from 20 µm to 50 µm.
Nakamura et al. suggested that submicrometer craters can be formed through direct impacts of nanometer-sized interplanetary dust particles.In contrast, the submicrometer craters found on Itokawa particles may have formed through impacts of secondary ejecta created by primary impacts on Itokawa, because submicrometer craters appear to be concentrated on only a limited number of specific Itokawa particles.So far, statistical analysis of these craters has been limited because of the low crater abundances observed.Therefore, it is not clear whether the few observed craters represent the whole evolutionary picture of submicrometer cratering processes on Itokawa.In addition, the detailed abundances and production rates of submicrometer craters are not understood.At the Extraterrestrial Sample Curation Center of the Japan Aerospace Exploration Agency (JAXA), Itokawa particles with an average diameter of 30 µm were retrieved from a sample catcher in the Hayabusa sample container.Thus far, surface features of Itokawa particles larger than 100 µm have never been examined in detail.These large Itokawa particles have large surface areas and are the most suitable particles for extensive investigations of impact craters.The objective of the present work is to reveal accurate abundances of submicrometer craters and to determine whether secondary impacts are representative agents driving the small-scale cratering processes on Itokawa.In the present study, we performed surface observations of 51 Itokawa particles ranging in size from approximately 15 µm to 240 µm and obtained unprecedented information on the areal distributions and morphologies of the craters on Itokawa particles.We report approximately 900 craters on Itokawa particles and compare the crater population with the flux of interplanetary dust particles and the lunar dust environment.The Itokawa particles investigated in this study are listed in Table 1.The listed particles were collected from rooms A and B of the sample catcher, captured during the second and first touchdowns in the MUSES-C Region of Itokawa, respectively.The particles from room A were retrieved from a pure quartz disk after gently tapping on the exterior of the sample catcher, causing particles to drop onto the disk.The particles from room B analyzed in the present study were retrieved from the cover of room B.We examined 29 particles from room A and 22 particles from room B.The mineral phases and average diameters of the Itokawa particles were investigated based on their initial description.Itokawa particles retrieved from the quartz disk and the cover of room B were first analyzed with the field emission scanning electron microscope (FE-SEM) at the JAXA curation center to enable an initial description and preliminary identification of particle size and major mineral phases.The particles were placed on a gold-coated holder using an electrostatically controlled micromanipulation system.In this study, we observed the surface morphologies of Itokawa particles using the FE-SEM after the routine initial description procedure.The particles were observed without conductive coating.As described by Yada et al., Itokawa particles observed in this study have never been exposed to an atmospheric environment, thus suppressing contamination and alteration of the particles.We performed secondary electron image observation at accelerating voltages of 1.5 kV and/or 2 kV in high vacuum with an electron beam current of approximately 10 pA.To assess their surface concavity or convexity, Itokawa particles were imaged
from two angles, with a difference in tilt of 5°, to create stereograms.The particle surfaces were scanned initially at magnifications of 2000–10,000 in order to identify the presence of craters.Surfaces containing craters were re-examined at magnifications of up to 150,000.One crater-rich Itokawa particle was allocated for further research as part of JAXA's quota.The particle was adhered onto carbon-conductive tape to secure sufficient electrical conduction for SEM analysis.This sample handling procedure was performed in a nitrogen-filled glove box at the JAXA curation center.Next, we transferred the particle to the Institute for Molecular Science (IMS).The particle was stored in a vacuum desiccator during transportation.We determined the elemental composition of its surface through energy-dispersive X-ray spectroscopy (EDS) using an FE-SEM equipped with a Bruker XFlash® FlatQUAD detector at the IMS.The accelerating voltage for SE imaging was 1.5 kV, while for EDS analysis we used 5 kV and 10 kV.We identified numerous submicrometer-sized craters on 13 of the 29 Itokawa particles from room A.On the other hand, we did not identify craters on Itokawa particles obtained from the cover of room B.The size distribution of Itokawa particles observed here is shown in Fig. 3.In the present study, all Itokawa particles on which craters were found are larger than 80 µm.In contrast, craters were not identified on Itokawa particles smaller than 80 µm.Detailed SEM observations showed that 11 of the 13 Itokawa particles containing craters had more than 30 craters on their surfaces.The other two particles contain few craters.On 8 of the 11 crater-rich particles, the craters are widely dispersed across the particle surfaces, as shown in Fig. 4A. However, craters tend to concentrate locally on two of the 11 crater-rich particles.One of the 11 crater-rich particles is an assemblage of small, fractured fragments.The crater density on RA-QD02-0272 varies between the fragments, and the craters are particularly concentrated on a specific fragment.The front and rear faces of several Itokawa particles were observed when they were turned over during the SEM observations.The distribution of craters on the different surfaces of individual Itokawa particles differed among particles: craters were widely distributed on the opposite and side surfaces of RA-QD02-0275, similarly to its front surface, whereas there are few craters on the RA-QD02-0273 surface opposite to the front surface.The surfaces of Itokawa particles can be divided into two types on the basis of their formation processes.One type is a fractured surface, which is formed by impacts and/or possibly thermal fatigue.Another type is a surface with concentric polygonal steps and/or euhedral grains, which is thought to have formed through thermal annealing in pores on Itokawa's parent body.Pores with euhedral–subhedral crystals are often observed in porous ordinary chondrites and were termed 'micro-druses' by Matsumoto et al.In the present study, the surfaces of Itokawa particles can also be classified as belonging to one of these two types.Craters can be identified on both surface types.Typical submicrometer craters on Itokawa particles observed in this study are shown in Figs. 1 and 2.
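The text does not spell out how relief is quantified from the 5° stereo tilt pairs described above; a commonly used first-order relation for eucentric stereo SEM imaging converts the measured parallax p between the two images into a height difference h = p / (2 sin(α/2)), where α is the tilt difference. The sketch below assumes this standard relation and uses a hypothetical parallax value purely for illustration.

import math

def height_from_parallax(parallax_nm, tilt_deg=5.0):
    # First-order stereo-photogrammetry relation for a eucentric tilt pair:
    # h = p / (2 * sin(alpha / 2)); adequate for small tilt differences.
    alpha = math.radians(tilt_deg)
    return parallax_nm / (2.0 * math.sin(alpha / 2.0))

# Hypothetical example: a feature shifted by 20 nm between the two images
print(round(height_from_parallax(20.0)))   # ~229 nm of relief

Because sin(2.5°) is only about 0.044, a modest 5° tilt difference amplifies small height differences into measurable parallax, which is why such a tilt pair suffices to judge whether a feature is a concave pit or a convex blister.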
Craters from several tens of nanometers to 100 nm in diameter are characterized by a pit and a surrounding rim.Craters larger than 200 nm in diameter are frequently covered with melted material from the bottom to the rim of the craters.These morphologies are similar to those of submicrometer craters on both lunar rocks and fragments of the Murchison carbonaceous chondrite.These melted materials occasionally flow radially over the edge of the craters.Occasionally, craters exhibit highly elongated outlines, as shown in Fig. 2D.As regards craters on troilite grains, cracks appear at their bottom, which are not observed in craters on silicate grains.The largest crater found in this study has an average size of 2.8 µm.It consists of an irregularly shaped melted object with concave features.Convex, fine spotted structures of 10–100 nm in size are frequently distributed on surfaces containing craters.The spotted structures are probably blisters, which are thought to have formed through the accumulation of solar-wind hydrogen and helium implanted within the uppermost 60–80 nm.The blisters might have developed through solar wind irradiation over timescales on the order of 10³ years.In many cases, craters are devoid of blisters on their surfaces, although blisters may have developed near them.Occasionally, blisters appear at the bottom of the craters.Melt drops and melt splashes are generally observed on surfaces with craters.Craters in Fig. 2B–D are dispersed on the same particle surface as in Fig. 4A; craters with or without blisters and elongated craters coexist on the same surfaces.Enlarged images of locally concentrated craters correspond to Fig. 2A. Rims of concentrated craters tend to disappear in the same direction.We measured the cumulative number of craters on Itokawa particles as a function of crater diameter.Six Itokawa particles with abundant craters were selected.They consist of particles characterized by different minerals, surface types, and crater distributions, as summarized in Table 2.Regarding the observation of RA-QD02-0273 and RA-QD02-0278, some craters might have been overlooked because these surfaces were not scanned at the same magnification as RA-QD02-0275, RA-QD02-0283, RA-QD02-0286, and RA-QD02-0301.Crater diameters were defined as the distance from the top of the crater rim to the top of the diametrically opposite crater rim.Crater diameters were estimated as the average values of the major and minor axes.Long axes were used as representative crater diameters when craters were observed under oblique angles.In this study, we identified craters ranging from approximately 10 nm to 2.8 µm in diameter.As craters become smaller, it becomes more difficult to identify them unless they have distinct rims.In particular, few craters with sizes of tens of nanometers have been identified where blisters had developed extensively in their vicinity.The obscuration of crater features by blister development can lead to incomplete crater counts and may cause a decrease in the slopes in Fig. 5 below a crater size of 80 nm.Power-law fits for craters larger than 80 nm are therefore considered representative of the size distribution.The presence of one or more large craters affects the best-fitting slopes considerably.Therefore, the largest crater on each particle was excluded for the purpose of our fits.Impact experiments have generally shown that the number of impact fragments larger than r behaves as N ∝ r^(−D), where the exponent D, usually called the fractal dimension, is found to lie in the range D ∈ .The D values of craters on the six Itokawa particles range from D = 1.3 to 2.3.The cumulative number of all craters on the six Itokawa particles is characterized by D = 2.2, which is within the range of D values of typical impact fragments.
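The fitting procedure itself is not given in the text; the sketch below illustrates one straightforward way to estimate D from a list of measured crater diameters by a least-squares fit of the cumulative size distribution in log-log space, applying the 80 nm completeness cutoff and excluding the single largest crater as described above. The diameter values in the example are hypothetical and are generated only to demonstrate the calculation.

import numpy as np

def fractal_dimension(diameters_nm, d_min=80.0, drop_largest=True):
    # Estimate D in N(>r) proportional to r^(-D) from crater diameters (nm).
    d = np.sort(np.asarray(diameters_nm, dtype=float))
    if drop_largest:
        d = d[:-1]                      # the largest crater dominates the slope
    d = d[d >= d_min]                   # counts below ~80 nm are incomplete
    n_cum = np.arange(len(d), 0, -1)    # cumulative number of craters >= d
    slope, _ = np.polyfit(np.log10(d), np.log10(n_cum), 1)
    return -slope

# Hypothetical diameters drawn from a distribution with D close to 2.2
rng = np.random.default_rng(0)
demo_diameters = 80.0 * (1.0 - rng.random(60)) ** (-1.0 / 2.2)
print(round(fractal_dimension(demo_diameters), 2))   # recovers a value near 2.2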
We analyzed elemental maps of 17 craters with sizes from 200 nm to 690 nm on an olivine grain.Fourteen of the 17 craters show no difference in elemental composition from the host olivine grain.The EDS spectra of the remaining three craters show Al Kα and Ca Kα peaks.The elemental maps of Al Kα and Ca Kα in Fig. 6 show that Al and Ca are distributed from the crater floor to the rim.We also analyzed elemental maps of 12 melted objects coexisting with craters on RA-QD02-0275.Eleven melted objects show excesses of Na, Al, and Ca, and/or Fe and S with respect to the host olivine surface.Adhering small fragments on the host olivine grain are composed of olivine, pyroxene, plagioclase, FeS, Fe–Ni metals, chromite, and phosphate, which are the major mineral components of Itokawa particles.The surface of Itokawa is characterized by two regions.The first region is rough and covered by boulders with sizes from a few to 50 m.The second region is smooth and rich in coarse, 1–5-cm-sized pebbles.The smooth and rough terrain might typically be characterized by centimeter- and meter-sized surface unevenness, respectively.This unevenness of the terrain might be accompanied by depressions, which have widths and depths comparable to the unevenness.Primary impacts by interplanetary dust in a depression can produce impact ejecta, which can be blocked by nearby regolith or rocks and form secondary craters, provided that a primary impact crater is much smaller than the depression.We determined whether primary impacts can occur within depressions on Itokawa during the expected surface exposure time for regolith particles of 10³ yr.Using the interplanetary dust flux model at 1 AU, we estimated the expected number of primary interplanetary dust impacts accumulated over 10³ yr within areas ranging from 1 cm² to 1 m², which correspond to the expected size scale of the depressions in the smooth and rough terrain on Itokawa, respectively.In Fig. 8, we plotted the cumulative impact number as a function of the mass of the interplanetary dust particles and the corresponding impact crater diameters calculated using Eq.Fig. 8 shows that one or more primary craters smaller than 200 µm can be formed within an area of 1 cm² in 10³ yr.Similarly, primary craters smaller than 6 mm can be produced within an area of 1 m² in 10³ yr.In both cases, the maximum primary crater diameters of 200 µm and 6 mm are much smaller than the expected sizes of the depressions in the smooth and rough terrain of 1 cm² and 1 m², respectively.Hence, secondary ejecta that escaped from primary craters can be captured easily by the depression walls.
Hypervelocity oblique impact experiments by Fravill and McDonnell and by Fravill et al. showed that ejecta from primary impact craters with an average diameter of 14 µm produced numerous secondary submicrometer craters.In addition, Zook reported oblique impacts of milligram glass and basalt projectiles, which may produce millimeter-scale primary craters, creating numerous submicrometer craters.These experiments suggest that primary impact craters larger than 10 µm can produce ejecta with sufficient velocity to result in submicrometer secondary cratering.We estimated the number of primary impacts that can produce secondary submicrometer craters, assuming that the formation of primary impact craters larger than 10 µm can lead to the production of submicrometer craters.Fig. 8 shows that approximately 3 × 10¹ and 3 × 10⁵ primary impact craters larger than roughly 10 µm can be produced within 1 cm² and 1 m², respectively.Oblique primary impact experiments indicated that the number of secondary craters is more than two orders of magnitude higher than the number of primary impacts.Therefore, we conclude that secondary submicrometer craters can be formed during regolith exposure to primary impacts of interplanetary dust in the depressions of either smooth or rough terrain on Itokawa.Since global granular convection is expected to segregate and migrate regolith particles from rough to smooth terrain, submicrometer cratering observed on Itokawa particles might have occurred in either smooth or rough terrain.If the surface unevenness of centimeter to meter scales is a significant factor driving the abundance of submicrometer secondary cratering, it is possible that the secondary particle flux can attain similar values among airless bodies, regardless of the overall shapes or sizes of those bodies, when the flux of the incident interplanetary dust particles is similar.This can explain the similar flux values for Itokawa and the Moon.Craters without blisters on the blistered surface of RA-QD02-0275 might have been formed after the initial blister formation period on the grain surface, while craters with blisters dispersed on the same grain surface might have been generated prior to blister development.These features indicate that craters of various formation epochs may coexist on the same grain surface.The common appearance of craters without blisters suggests that most craters might have formed within a shorter time frame than the exposure time required for blister formation.Elongation and partial disappearance of crater rims are characteristic features of oblique impacts on micrometer scales.Elongation in the impact direction and the absence of crater rims on the incident side of the crater become apparent at impact angles greater than 15°–35°.Sub-micrometer-sized elongate and shallow craters were also identified on Al foils of the Stardust Interstellar Dust Collector.They are thought to have been formed by secondary ejecta through oblique impacts.The locally concentrated craters tend to exhibit oblique impact features; the top sections of crater rims are absent in Fig. 2A, which may provide an indication of the incident direction of the oblique impacts.The possible impactor trajectories of concentrated craters are displayed as arrows in Fig. 2A and Fig. 4B, based on the direction of crater elongation and the absence of crater rims.The trajectories originate from similar directions, suggesting that impactors from a common impact event were a major contributor to the locally concentrated crater distribution.
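A minimal back-of-the-envelope check of the secondary-cratering estimate above, combining the primary crater production quoted from Fig. 8 with the experimentally indicated secondary-to-primary ratio; the ratio of exactly 100 is a conservative assumption for illustration, not a value given in the text.

primaries_per_cm2 = 3e1    # primary craters > ~10 µm per cm² over 10³ yr (Fig. 8)
primaries_per_m2 = 3e5     # primary craters > ~10 µm per m² over 10³ yr (Fig. 8)
secondary_ratio = 1e2      # conservative lower bound: secondaries outnumber primaries by >100x

print(primaries_per_cm2 * secondary_ratio)   # >~ 3e3 secondary craters per cm²
print(primaries_per_m2 * secondary_ratio)    # >~ 3e7 secondary craters per m²

Even with this conservative ratio, depressions in both the smooth and the rough terrain would accumulate abundant secondary submicrometer craters within the assumed 10³ yr exposure.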
Harries et al. identified 15 craters larger than 200 nm in diameter on a small Itokawa particle, corresponding to a crater number density of 0.08 µm⁻², the most heavily cratered surface previously reported.In this study, the number density of locally concentrated craters larger than 200 nm reaches 0.1 µm⁻² across areas of approximately 40 µm² on RA-QD02-0272, while widely distributed craters larger than 200 nm have number densities of approximately 2.6 × 10⁻³ µm⁻² to 4.5 × 10⁻³ µm⁻².The cratered surface reported by Harries et al. may correspond to concentrated crater areas on large Itokawa particles, such as those shown in Fig. 4B. Harries et al. proposed that the craters on RA-QD02-0265 were formed simultaneously by a nearby, small-scale impact.Our observations support the conclusion that locally concentrated craters were formed by projectiles from a single primary impact.The melted drops and splashes that commonly coexist with craters could have been closely related to the impact phenomena causing the secondary craters.A low impact velocity for the formation of the submicrometer craters on Itokawa particles was proposed by Harries et al. because of the craters' shallow depths and the absence of spallation zones.Dobrică and Ogliore calculated that micrometer-sized melt splashes on Itokawa particles could have cooled and solidified within 200 µs, based on radiative cooling considerations.Assuming an ejecta velocity from the primary impact of less than 5 km s⁻¹, the melts might have been excavated within a distance of 1 m from the impact.This traveling distance is consistent with the idea that the impactors of secondary craters were transported within centimeter- to meter-sized depressions.The Ca- and Al-rich components detected in the craters are probably residues of the impactors, while craters indistinguishable from the host olivine surfaces on the basis of EDS mapping could have residues composed of the same elements as those of the host olivine.Since Ca and Al in the residues are major elements of chondritic materials, corresponding to the composition of both Itokawa and interplanetary dust, the crater residues may be either remnants of primary interplanetary dust impacts, as proposed by Harries et al., or Itokawa's own material.Pulse laser irradiation experiments have been performed to simulate micro-impacts, because laser irradiation causes local heating that forms nanometer-sized iron particles and reproduces the optical properties of space weathering on S-type asteroids and the Moon.Fazio et al. reported that a pulse laser shot produced crater-like morphologies on olivine surfaces and that many cracks appeared at the laser spot.The cracks induced by pulse laser irradiation on various types of ceramics may be caused by thermal stress, solidification stress after melting of the surface layer, and/or volume changes owing to a solid–solid phase transition.Cracks in craters on troilite grains could have been generated by processes similar to those causing the pulse laser-induced cracks, through impact heating followed by radiative cooling.The development of cracks only on troilite grains could reflect the different thermal properties of silicate and troilite.
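The one-meter travel distance quoted above follows directly from the two assumed quantities; a minimal worked check, with the input values taken from the text and the calculation itself serving only as an illustration:

ejecta_velocity = 5.0e3       # m/s, upper bound assumed for primary-impact ejecta
solidification_time = 200e-6  # s, radiative cooling time of micrometer-sized melt splashes
print(ejecta_velocity * solidification_time)   # 1.0 m

A melt droplet launched at up to 5 km s⁻¹ therefore lands within roughly one meter before it has solidified, which is compatible with the secondary impactors being captured by the walls of centimeter- to meter-sized depressions.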
In the present study, craters were not identified on Itokawa particles smaller than 80 µm from either room A or room B.Thus far, craters have rarely been identified based on surface observations of small Itokawa particles.To determine the particle size dependence of the crater population, we modeled the crater abundance on Itokawa particles larger than 80 µm and applied the model to predict crater abundances on smaller particles.The crater abundance model assumed that 40% of Itokawa particles are surrounded by crater-rich surfaces with a crater density of 2.6 × 10⁻³ µm⁻² for craters larger than 200 nm.This crater density corresponds to surfaces covered with widely dispersed craters.Craters larger than 200 nm can be clearly identified from surface observations.In the grain size histogram of the population of examined Itokawa particles smaller than 80 µm, we show the expected number of particles that contain at least one crater larger than 200 nm in each bin of the size histogram.Fig. 3 also includes the grain size histogram resulting from previous observations.From a re-examination of the SEM data obtained by Matsumoto et al., two 200-nm-sized crater candidates were identified on a 50-µm-sized particle on which no craters had been noticed so far.This result has been added to Fig. 3.The crater abundance on the small Itokawa particles from room A is comparable to or slightly lower than that expected from the model for larger particles, indicating that the size dependence of the crater abundance is not distinct for room A particles.The slightly lower abundance of craters on small Itokawa particles could have been caused by regolith fragmentation from larger to smaller particles through continuous meteoroid bombardment or cyclic thermal fatigue.The regolith fragmentation could have produced smaller particles with fresh surfaces during the time they were present on Itokawa, because the timescale of regolith fragmentation could have been shorter than that of blister formation.Alternatively, small particles could have a tendency to escape from Itokawa, resulting in a loss of long-exposed particles.The crater abundance of room B particles is distinctly lower than the modeled abundance of the large Itokawa particles in room A. Note that a fraction of room B particles may have been crushed by an Al₂O₃ glass plate set in room B of the sample catcher for contamination monitoring.Therefore, we cannot determine whether the low crater abundance of room B particles reflects the original crater abundance or whether it was reduced by artificial fragmentation.The common appearance of the submicrometer craters on Itokawa particles enables extensive future analysis.Further statistical studies of impact residues in submicrometer craters can constrain the chemical composition of the primary impactors, which in turn can provide insights into the origin of interplanetary dust.
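The expected counts per size bin can be reproduced, under assumptions the text does not state explicitly, by treating each particle as a sphere and the craters as randomly (Poisson) distributed over crater-rich surfaces; the sketch below only illustrates that logic and is not the authors' actual model code.

import numpy as np

CRATER_DENSITY = 2.6e-3   # craters (>200 nm) per µm² on widely cratered surfaces
F_CRATER_RICH = 0.40      # assumed fraction of particles carrying such surfaces

def expected_cratered_fraction(diameter_um):
    # Expected fraction of particles of a given diameter (µm) showing at least
    # one crater larger than 200 nm, assuming spherical particles and
    # Poisson-distributed crater counts.
    surface_area = np.pi * diameter_um ** 2            # µm²
    p_at_least_one = 1.0 - np.exp(-CRATER_DENSITY * surface_area)
    return F_CRATER_RICH * p_at_least_one

for d in (20, 40, 60, 80):
    print(d, round(expected_cratered_fraction(d), 2))

Multiplying this fraction by the number of examined particles in each bin of the size histogram gives the expected number of cratered particles per bin, which is the quantity compared with the observations in Fig. 3.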
Harries et al. proposed that secondary submicrometer cratering can contribute to space weathering of asteroids.Because submicrometer cratering can occur anywhere on Itokawa, it can promote space weathering on Itokawa globally.Further nanometer-scale observations of submicrometer craters are necessary to clarify the effectiveness of space weathering through secondary micro-impacts.The abundance of submicrometer craters on Itokawa particles implies the ubiquitous generation of submicrometer impact ejecta.Although the majority of impact ejecta will escape to the space environment owing to Itokawa's low gravity, some will form impact craters and could be retained as levitating dust sustained by electric charge and as submicrometer ejecta dust clouds, which have been found around Jupiter's moons and are expected to surround all celestial bodies without atmospheres.Studies of ejecta properties based on submicrometer craters will shed light on the dust behavior and evolution of airless bodies in micro-gravity.Although cratering structures on Itokawa particles were previously considered rare, we identified numerous submicrometer-sized craters on Itokawa particles with sizes of up to 240 µm.The crater abundance does not vary significantly with particle size.Their morphologies are similar to those of craters of similar size on lunar regolith and fragments of carbonaceous chondrites.Craters produced during various formation epochs may coexist on the same grain surface.The estimated range of flux for craters on Itokawa particles is higher than the interplanetary dust flux and comparable to that for the submicrometer craters on lunar rocks.The higher flux at the lunar surface relative to the interplanetary flux was explained by high-speed secondary ejecta impacts rather than by primary interplanetary dust impacts.Therefore, we conclude that secondary ejecta impacts are probably the dominant cratering process in the submicrometer-sized range on Itokawa regolith particles, as well as on the lunar surface.We demonstrated that secondary submicrometer craters can be produced anywhere in centimeter- to meter-sized depressions on Itokawa's surface by primary interplanetary dust impacts, while regolith particles remained on Itokawa.If surface unevenness on centimeter to meter scales is an important factor driving the generation of submicrometer secondary cratering, the secondary impact flux would be largely independent of the overall shapes or sizes of celestial bodies, which can explain the similar flux values for Itokawa and the Moon.
We investigated impact crater structures on regolith particles from asteroid Itokawa using scanning electron microscopy. We observed the surfaces of 51 Itokawa particles, ranging from 15 µm to 240 µm in size. Craters with average diameters ranging from 10 nm to 2.8 µm were identified on 13 Itokawa particles larger than 80 µm. We examined the abundance, spatial distribution, and morphology of approximately 900 craters on six Itokawa particles. Craters with sizes in excess of 200 nm are widely dispersed, with spatial densities from 2.6 × 10⁻³ µm⁻² to 4.5 × 10⁻³ µm⁻²; a fraction of the craters is locally concentrated, with a density of 0.1 µm⁻². The fractal dimension of the cumulative crater diameter distribution ranges from 1.3 to 2.3. Craters of several tens of nanometers in diameter exhibit pit and surrounding rim structures. Craters of more than 100 nm in diameter commonly have melted residue at their bottom. These morphologies are similar to those of submicrometer-sized craters on lunar regolith. We estimated the flux of impactors forming craters on Itokawa regolith, assuming that the craters accumulated during direct exposure to the space environment for 10² to 10⁴ yr. The range of impactor flux onto Itokawa particles is estimated to be at least one order of magnitude higher than the interplanetary dust flux and comparable to the secondary impact flux on the Moon. This indicates that secondary ejecta impacts are probably the dominant cratering process in the submicrometer range on Itokawa regolith particles, as well as on the lunar surface. We demonstrate that secondary submicrometer craters can be produced anywhere in centimeter- to meter-sized depressions on Itokawa's surface through primary interplanetary dust impacts. If surface unevenness on centimeter to meter scales is a significant factor determining the abundance of submicrometer secondary cratering, the secondary impact flux could be independent of the overall shapes or sizes of celestial bodies and could have similar values on Itokawa and the Moon.
31,622
Virtual user communities contributing to upscaling innovations in transitions: The case of electric vehicles
Widespread adoption of sustainable innovations helps to address persistent problems such as global climate change, local air and water pollution, and fossil fuel dependence.In the past few years, a number of these innovations have begun to move out of their early market niches and scale up into larger markets.However, this diffusion seems still focused on particular user segments.A main reason for the limited uptake is that, in contrast to conventional products, sustainable innovations represent systemic innovations, which require the building up of an entire support system of actors, networks, infrastructure and institutions, across geographical scales and contexts.This systemic nature also implies that consumers or end-users often play a different role in sustainability-oriented products.In conventional products, users can be portrayed as passive evaluators, judging the suitability of new offerings, in terms of serving established preferences and routinized use patterns.In the context of sustainability-oriented innovation processes, users have often been described as much more proactive contributors to the shape and meaning of new technologies.Especially in the early development phases, their contributions could span the creation of new supply structures, the shaping of specific technology characteristics, the development of new use patterns and preferences of prospective users, and the shaping of the social image of specific use forms.Beyond these early formation phases, user communities typically encounter a number of inherent limitations.Strong ties to a specific local milieu often limit scaling into other user segments.A major challenge is also the lack of appropriate capabilities to scale up the business model from arm’s length interactions in early community-based operations to managing rapidly scaling companies.However, some recent studies have stressed that user communities can play an important role in maturing innovation systems, as they mobilize support for the innovation, align different actors, and embed the innovation in their daily practices.In addition, the advent of the internet and new social media facilitates interaction between like-minded pioneer users all over the world with no time delay.This holds promise to overcome some of the essential limitations that traditional user communities encounter and enables the leveraging of user-generated resources in the innovation process much more quickly and at a larger scale.The emergence of virtual user communities therefore is interesting as it potentially enhances the role of users in the innovation process.To assess the role that virtual user communities can potentially play in the upscaling of innovations requires a broad understanding of how new socio-technical systems emerge and mature.This has been the main occupation of the field of sustainability transitions studies.The present paper aims to identify potential roles that virtual user communities can play in the upscaling of systemic innovation.Drawing on an extensive case study, we identify the agency and core mechanisms of change enacted by a virtual community, in the context of an overall system perspective on socio-technical innovations.We describe virtual community characteristics and specify detailed mechanisms of user contributions to three core development dimensions of socio-technical transitions: system build-up, geographical circulation, and reconfiguration of incumbent socio-technical regimes.The case study focuses on a virtual user community that formed 
for electric vehicles, and we conduct an "internet ethnography".Threads from the Tesla Motors Club online forum are analyzed.Discussions on this forum are not limited to Tesla cars but include a broad range of general EV-related topics.The focus of this study is on EV users from The Netherlands.This is a suitable case as the Netherlands has seen increasing adoption of EVs beyond initial niche experiments.Importantly, the studied interactions on the Tesla Motors Club forum are not exclusively between these Dutch users, but also between Dutch users and users of other countries.The paper is structured as follows.In Section 2, we discuss relevant literature on sustainability transitions and user communities.Section 3 describes the methodology.Section 4 presents the results.Section 5 discusses the findings critically, outlines avenues for further research and concludes.In order to identify potential contributions that virtual communities may make to the upscaling of innovation in transitions, we first have to specify the vantage point from which such an assessment can be undertaken.Drawing on a blend of Science and Technology Studies and evolutionary and institutional thinking, the sustainability transitions field has extensively studied the development of systemic innovations.We draw on recent insights from the sustainability transitions field in order to derive basic upscaling dimensions that should be considered.Studies in the sustainability transitions field also pay increasing attention to users.Accordingly, we review the literature on users in transitions following the identified upscaling dimensions.We then draw on studies into virtual user communities to explore how the virtual aspect of the community could potentially change user contributions.This provides us with a starting approach for analyzing user contributions from our case study.The sustainability transitions literature has provided a large number of detailed explanations about the success conditions of early formation processes of sustainable technologies.The analysis of rapidly scaling innovations has started more recently, in parallel with the rapid growth of a number of renewable energy technologies such as wind and solar PV.Unlike conventional diffusion processes observed for discrete products, exhaustively captured by the classical diffusion literature, the upscaling of systemic innovations needs to take account of an entire socio-technical system of actors, supporting infrastructures and institutions across geographical contexts.Institutions are defined here as "sets of common habits, routines, established practices, rules, or laws that regulate the relations and interactions between individuals and groups".Regarding the analysis and assessment of growing and maturing socio-technical systems, the technological innovation systems (TIS) literature has been most explicit within the transitions tradition.The TIS approach posits that successful innovation processes are characterized by balanced system-building, which can be captured in a number of core processes or functions: the generation of new knowledge, entrepreneurial experimentation, guidance of search, the mobilization of different kinds of resources, the formation of appropriate markets and the creation of legitimacy for the technology.The multi-level perspective (MLP) on transitions also describes endogenous processes fostering system-building, including learning economies, the development of complementary infrastructures and the development of positive cultural discourses.When developments in the different processes are
well-aligned, they result in patterns of cumulative causation or “momentum” driving upscaling.User contributions to system-building in upscaling have been described in most detail for the specific process of market formation.Dewald and Truffer, for instance, have studied how civic cooperatives were instrumental in the build-up of the German PV sector in the 1990s and 2000s.They challenge a linear conception of market development with an account of interacting complementary processes in various market segments.Randelli and Rocchi find, for the case of organic food in Italy, that users can provide contributions to all system-building processes identified in the TIS literature.Zooming further out, Schot et al. provide a framework of the changing roles of users during the entire “life cycle” of a technology.In the upscaling phase, users contribute to system-building by acting as user-intermediaries, which help aligning the various elements of socio-technical systems.They also act as user-consumers, embedding the technology in daily practices.The limitations of system-building by users have received more attention in the literature on innovation by grassroots communities – idealistic user communities working on all sorts of sustainable alternatives.Hossain presents a long list of challenges that grassroots communities can face, ranging from network building, to capabilities and the ability to attract financial resources.These are also reflected in the work of Truffer on user-initiated car sharing initiatives in Switzerland.Here, the user-based organizational set-ups were increasingly challenged in the upscaling phase.Some members voiced strong preference for continuing as small cooperatives instead of accepting a more market oriented organizational form and growth strategy.Hence, there also seem to exist some inherent restraints to the contributions users can make to system-building.In a sustainability transitions perspective, upscaling of new socio-technical systems does not occur in a void, but in interaction with various context structures.We consider the geographical context in which the innovation develops and the prevailing socio-technical regime in a sector as particularly important in relation to user contributions.Regarding geographical context, scholars have noted how innovations are often deeply rooted in specific local environments.A common geographical upscaling pattern described in the sustainability transitions literature consists of the development of aggregated forms of knowledge that are then circulated and recontextualized to fit different circumstances.Sengers and Raven describe how in the diffusion of Bus Rapid Transit global and local processes are intrinsically connected.Place-specific factors influence the local implementation of the innovation, as well as the overall global upscaling journey.In the case of electric vehicles, Bakker et al. 
show the importance of connections between localized niches in arriving at charging standards.These studies point to processes of geographical circulation as part of upscaling in systemic innovation.These are not linear processes of abstraction, but iterative processes in which the innovation is continuously decontextualized and recontextualized in different localities.The role of users in geographical circulation in upscaling is often described as limited.User collectives have developed remarkable and successful sustainable alternatives in a variety of domains such as renewable energy and sustainable buildings.Yet the socio-technical configurations they develop are often tailored to a specific geographical context and rooted in local communities.Some organizations have emerged that aim at connecting local user initiatives, such as the Transition Towns Network.However, such organizations are often lacking, and those that exist typically face difficulties in aggregating highly contextualized forms of user knowledge.The second form of context in which the innovation emerges is the socio-technical regime, the current set of rules guiding actor behaviour in a sector.Apart from endogenous system growth, upscaling also entails a certain degree of reconfiguration of existing socio-technical regimes.In the phase of upscaling, the innovation may be confronted with barriers to growth that relate to the current dominant system.These regime barriers include vested interests or institutions that are not compatible with the innovation.For example, the status users attach to personal car ownership can hamper the transition to car-sharing.Accordingly, for the innovation to upscale, changes in institutions are required.Smith and Raven identify two mechanisms of interaction between innovations in niches and regimes: "fit-and-conform", in which the niche innovation aims to become competitive within the existing environment, and "stretch-and-transform", in which it fundamentally alters the regime.Geels and Johnson find a temporal sequence in these patterns.They describe a change from "regime-to-niche" towards "niche-to-regime" influences over time in their case of biomass diffusion in Austria.In later stages of upscaling, a powerful lobby was initiated, and actors related to the innovation were increasingly able to reconfigure the existing socio-technical regime.Some instances of users contributing to reconfiguration of existing socio-technical regimes have been described, but there also seem to be considerable challenges for users to contribute to lasting regime change.Dewald and Truffer show for the case of photovoltaics in Germany how citizen groups provided initial legitimation and continuing political support for the German feed-in tariff, the installation of which further accelerated PV diffusion.Kanger and Schot describe for the case of the fossil fuel car how users set up powerful user organizations lobbying for the interests of automobile drivers in an initially hostile environment.In the literature on grassroots communities, barriers to users' contribution to regime change are emphasized.These user communities often develop in relative isolation from existing socio-technical regimes, hence lacking the connections and resources to meaningfully change them.If user communities aim at diffusing their projects into the mainstream, they face challenges related to watering down sustainability ideals and losing control.All told, we identified three major domains in which users could contribute to the growth and
maturation of socio-technical systems: providing support and resources for system build-up, enabling the geographical circulation of the innovation, and reconfiguring the existing socio-technical regime to reduce barriers to development.In the following, we elaborate on the ways in which virtual communities in particular can contribute to these areas.The rise of virtual communities seems to increase, or at least alter, the role of users in upscaling.Virtual user communities are communities of geographically dispersed users that share an interest in a certain technology and use the internet as their primary mode of interaction.Here we review studies that have taken an in-depth look at innovation in virtual user communities, following the main domains of upscaling as described above: system-building, geographical circulation and reconfiguration of the current socio-technical regime.The virtual community seems to allow for some new roles in system-building during upscaling.One of the few described cases of virtual communities in sustainability transitions is that of heat pump forums in Finland.Hyysalo et al. show that during upscaling virtual community users provide information about the workings of the technology, not available from other market actors, and that they help articulate market demand.The provision of information reduces uncertainty, which is important in market expansion beyond environmental enthusiasts.In contrast to accounts of strongly embedded and homogeneous local user groups working on innovation, more heterogeneity is reported for virtual communities.Virtual user communities have a core of highly skilled expert users who engage most heavily.Yet, there is also an influential peripheral group, able to steer the direction of innovation.Then there is a large group of "lurkers" who do not actively contribute but are still aware of developments and also share these in their own networks.Füller et al. discuss how this variety of users build on each other's contributions in a constant improvising process of trial and error.This resembles the process of "bricolage", defined by Baker and Nelson as "making do by applying combinations of the resources at hand to new problems and opportunities".In sum, user communities seem to contribute to system-building by developing knowledge among a variety of participants.The virtual nature of the community is expected to change the role of users in the geographical circulation of the innovation during upscaling.As described above, traditionally many user collectives have been locally organized, with their embeddedness becoming a barrier in the upscaling process.The virtual community can here form a bridge between localities, for example by connecting geographically dispersed users and serving as a centralized knowledge archive.Conversely, the virtual community is also helpful for recontextualizing innovations in local environments.Users here build on the generalized solutions of others and tailor them to their specific contexts of use.Patterns of geographical circulation and the role of virtual communities herein likely differ across innovation types.Hyysalo et al.
describe an innovation, heat pumps, that is stationary and for which local contexts and national institutions such as energy regulations are highly important.As a reflection of this, in the virtual communities they study there is a strong focus on local and national issues, such as changes that can be made to heat pumps to make them work in the cold Finnish climate.In contrast, the electric vehicles central to this study are also used across countries.Consequently, it can be expected that more international issues are addressed on the forums.There might also be a higher prevalence of interactions between online users from different countries.It is unclear to what extent the virtual nature of the community might influence the ability of users to engage in reconfiguration of the current socio-technical regime.Hyysalo et al. observe that forum users discuss heat pumps in "neutral" economic and technological terms, stressing their conformity with the existing regime.An important point to note, though, is that virtual communities are often "hybrid" communities encompassing both lay users and professional actors.The occupational background of forum participants is not directly shown, which reduces formal status and power differences that normally influence conversations.Grabher and Ibert give the example of doctors and patients interacting on more equal terms on a drug development internet forum than they would in a traditional conference setting.These more equal interactions between different actor types may also facilitate discussion between users and actors associated with the socio-technical regime, such as companies or governments.To sum up, although many conventional grassroots user communities seem to struggle in upscaling, there are clear indications that users can play important roles in the upscaling phase of transitions.Specifically, the rise of virtual communities seems to increase the potential role that users can play in the upscaling phase.We therefore aim to describe the composition of types of actors that participate in these virtual communities.Based on this, we will empirically investigate how virtual user groups contribute to the core domains of system maturation that we derived from the transition studies review, namely their contribution to internal system-building, the facilitation of geographical circulation, and their impact on reconfiguring existing socio-technical regimes.In line with our research aims, we considered the systemic innovation of the electric vehicle.The electric vehicle contributes to environmental sustainability by reducing greenhouse gas emissions as well as local pollution.Its widespread adoption involves a transition of infrastructures, markets and institutions.The focus of this study is on EV users from The Netherlands.The studied interactions are not only between Dutch users, but sometimes also between Dutch users and users of other countries.The Netherlands is a frontrunner country in EVs.Both qualitative and quantitative indicators demonstrate that the EV in The Netherlands has grown beyond the "start-up phase" characterized by precarious and dispersed niche experiments.Qualitatively, multiple mass production models are available and European standardization processes are ongoing.Quantitatively, there has been large growth in plug-in hybrid electric vehicles, and the number of regular EVs has also increased considerably.The total number of EVs in The Netherlands grew from 7410 in December 2012 to 119,332 in December 2017.The average market
share was 5.6% over the 2013–2017 period.The main data source of this study is internet forum threads.The forum analyzed is the Tesla Motors Club forum.Virtual communities are very fuzzy social phenomena, with fluid boundaries.They often span multiple internet forums, as well as other media such as Facebook and Twitter.We thus had to choose the medium to analyze carefully.Eventually, the Tesla Motors Club forum was chosen for three reasons.First, it is a large forum with ample daily activity as well as international reach.At the start of the analysis in June 2016, it counted approximately 60,000 threads, 1.5 million messages and 40,000 members.Its international reach is exemplified by subsections dedicated to EV developments in specific countries, in various languages.Second, it is a well-established forum, enabling longitudinal analysis.It has been around since 2006 and has attracted substantial numbers of members for years.This makes it preferable to alternatives such as EV Facebook groups, which have mostly been initiated much more recently.Finally, in terms of content, it is not strictly limited to a discussion of Tesla cars.Topics cover all kinds of EV developments, charging infrastructure and wider discussions about sustainability.There is no comprehensive overview of forum user demographics.From names, introduction discussions and pictures it is clear that the users are overwhelmingly male.Importantly for our analysis, the geographic location of almost all users is known at the city level.To get some indication of the participation of EV users in internet forums, we included two questions on forum use in a general EV survey that was sent out to members of an organization concerned with placing public charging points for EV owners in April and May 2015.Of the 251 respondents, 67% had been active as readers and 28% as writers on an internet forum about electric vehicles.Reading internet forums, in particular, can hence be regarded as an activity that is common among electric vehicle owners in The Netherlands.The second data source for this study was 13 interviews with forum participants, conducted at the end of 2017 and the beginning of 2018.The interviewees were selected partly based on the forum analysis.They were sampled in order to obtain diversity in participation levels as well as roles fulfilled on the forum.As a third data source we drew on sector reports and scientific articles about EVs in The Netherlands.For the in-depth analysis of the internet forums we conducted a virtual ethnography, tracing back the role of the online community in upscaling.Compared to traditional ethnography, the internet forum as a research site has the advantage that it is both an archive and a site of live communication.It is also well-annotated, as the exact date and time of each post, as well as the location of the poster, are noted.This was particularly helpful for our reconstruction of user activity in time and space.It should be noted that the majority of forum visitors are not actively posting on the forum, but merely "lurking".However, these visitors do follow the activity on the forum.Considering the vast size of the forum, a pre-analysis was completed to identify those threads that were most related to the development and use of EVs, had a substantial number of replies and views, and involved Dutch EV users.In certain cases, threads are short with only a few replies to the thread starter, but they can also consist of
hundreds of pages and remain active for multiple years.To identify relevant threads, we went through the headers and first posts of all threads active in the subsection "Belgium and The Netherlands" between January and May 2016, as well as the most-replied threads in this section since its inception in 2012.This forum section contains posts from Dutch users, as well as from users in the Dutch-speaking part of Belgium.Before 2012, there was no Dutch subsection and Dutch EV users replied in international threads.To find these we selected some "dinosaur" Dutch contributors and went through the headers and first posts of the threads they contributed to.Based on this pre-analysis, 26 threads were selected for in-depth analysis.We categorized these into 10 broad themes related to upscaling, such as charging points and technical issues.We then went back to the titles and first posts of the 410 threads from the initial selection to see whether they fitted into one of these themes.They did, which assured us that the 26 threads selected for in-depth analysis provided an apt overview of the breadth of upscaling topics discussed on the forum.We then proceeded with the in-depth analysis using NVivo 11.The analysis followed a constant iterative process between data and theory.We first used an open coding strategy to get an overview of the topics discussed in the forum threads.Per group of related threads we also made summaries and notes about the topics discussed, to get a better grasp of user activity in the often long threads.Here we also made our initial links with theory, using the meso-level upscaling processes of system-building, geographical circulation and adaptation of the existing socio-technical system as sensitizing concepts.With these concepts in hand we went back to the data, to see to what extent users were active in these processes.We interpreted the data in terms of the content of the contributions of users and the specific strategies they employed.In this way we could identify specific mechanisms such as institution building in practice and quasi-effortless knowledge production and sharing, linked to the meso-level processes of upscaling.Eventually, we identified seven core mechanisms through which the user community contributes to the upscaling processes.Apart from these core mechanisms, we also observed specific characteristics of the virtual community actor.Because of their importance for understanding the role of the virtual community, we decided to include them prior to the mechanisms in the results section.The interviews were used as a source of data triangulation.The round of interviewing occurred after the main analysis of the forums.The interviews were coded using the same codes as in the forum analyses, with some additions.In general, the findings from the forum analysis were confirmed, but small adaptations were made.The interviews also allowed for the inclusion of additional examples related to the emerging concepts.They allowed for greater exploration of the community aspect of the forum and the relationships between online and offline activities, which are harder to understand from the forum analysis alone.We also used "member checking" and asked two forum participants to reflect on our findings and concepts, in order to increase the reliability of this research.Finally, the sector reports and scientific articles were used throughout the research to assist in understanding the context of the forum discussions, as well as to track the general developments in the scaling of EVs in The Netherlands.
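A short, purely illustrative sketch of the screening criteria described above; the Thread record, its field names and the reply-count threshold are hypothetical assumptions, and the actual selection in the study was carried out manually by reading thread headers and first posts.

from dataclasses import dataclass
from datetime import date

@dataclass
class Thread:                      # hypothetical record for one forum thread
    title: str
    subsection: str
    last_active: date
    replies: int

def candidate_threads(threads, min_replies=50):
    # Mirror the two screening criteria described in the text: activity in the
    # "Belgium and The Netherlands" subsection between January and May 2016,
    # or belonging to the most-replied threads of that subsection since 2012.
    recent = [t for t in threads
              if t.subsection == "Belgium and The Netherlands"
              and date(2016, 1, 1) <= t.last_active <= date(2016, 5, 31)]
    most_replied = [t for t in threads
                    if t.subsection == "Belgium and The Netherlands"
                    and t.replies >= min_replies]
    return list({id(t): t for t in recent + most_replied}.values())

In the study itself, relevance and reply counts were judged qualitatively before the final set of 26 threads was chosen, so the threshold here only stands in for that qualitative judgement.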
in The Netherlands.Before elaborating on the core mechanisms by which the virtual community participates in upscaling, we will first describe some key characteristics of the virtual community we studied.In the community, we observed a relatively large participant diversity, albeit on some specific dimensions.We also observed a strong sense of community.In terms of socio-demographic background of the user participants, there is little diversity.The introduce yourself thread gives an impression of the demographics of the virtual community.The participants are predominantly white males.The average age is around 40 years old.Virtual community members are often entrepreneurs or small business owners.Of the users who share their profession, about half work in the IT sector.Diversity was particularly marked in terms of the “professionality” of participants.On one end of the spectrum “pure” users can be placed, on the other end professional actors.Of the latter actors, let us first consider Tesla.In our analyzed threads, it was very rare that a Tesla employee would publicly intervene in the discussion.However, there is ample evidence that employees of Tesla both read the forum and act upon the observed discussion.For example, a user posts on the forum for help with some problem with his car and is then phoned by the Tesla Service Center to make an appointment to have his car checked.Charging point companies are active in similar ways.The virtual user community is particularly useful for Tesla and other system actors, because it discusses not only systematic barriers but also possible solutions.Policy-makers are surprisingly absent from the forums, as we did not find any instance of a policy-maker intervening in the discussions.During the interviews we spoke to a policy consultant who reads the forums to inform himself about charging locations as well as barriers related to charging.Officially, commercial activity on the forum is not allowed.However, there are various users who take up a “hybrid” role.Apart from being a Tesla user, they also have a side business that sells, for example, charging points or car accessories.Oftentimes, these hybrid roles emerge from the forum: users discuss upscaling barriers, which are then perceived by some as entrepreneurial opportunities.For example, a user who already has online shops becomes aware of difficulties with the delivery of charging cables.He then organizes an initial group purchase via the forum, and later opens a charging point shop.He does not advertise on the forum, but community members know how to find him.Here, the forum stimulates users to become system-builders in the upscaling process.A second dimension of diversity concerns the knowledge levels regarding the EV.Especially the earlier threads, such as the one about the international standardization of charging plugs, contain long, high-level technical discussions.Here many users with expert technical knowledge participate.Users also become more knowledgeable about EV technology while reading the forum.During the process of upscaling, however, the forum is joined by more users with lower levels of technological and topical knowledge of EVs.They also have more mainstream expectations of an EV.Experienced community members however continue to be active and provide knowledge to newcomers.Among the online users, we find a strong sense of community, which at times leads to the organization of “real world” events.Yet, the community aspect is limited to a specific group of forum members.Online users 
can easily opt out of community activities and use the forum as a functional source of knowledge only.This even holds for members with a high number of posts.The sense of community on the forum centers around shared feelings about pioneering and experienced barriers.Users feel part of a group of pioneers who take part in a development that will have major influence on society.Users perform joint activities such as road trips and enthusiastically report about them on the forum and blogs.They make fun of fossil fuel cars and occasionally call them “dino-juice-burners”.Not everything is rosy, however.A large share of the threads in the virtual community starts with some barrier experienced by a user.In these cases, the community is like a support group for EV drivers.Users try to help each other in finding solutions for experienced problems, online and offline.For example, a relative of a forum member drops someone off at an airport in the South of the Netherlands.The charging poles there are out of service and he risks running out of range.A message is sent to forum members.A member who lives nearby replies and the person can charge at his house and gets a coffee.Such actions are illustrative of the community spirit found amongst core forum members.Influenced by its distinctive actor characteristics the virtual user community takes some form of action.Seven core mechanisms through which the virtual user community participates in the upscaling process have been identified in the empirical analysis.Three of these are linked to system-building: quasi-effortless knowledge production & sharing, infrastructural bricolage and institution building in practice.An important part of the contribution of the virtual community to upscaling concerns knowledge.In the knowledge development by the virtual community during upscaling we observe two developments.First, digital technology has made it very easy to make minor changes to products or to monitor their performance under various circumstances.Examples of such small user actions are changing the settings of the vehicle and see what happens to its electricity use.Second, the internet forum facilitates sharing these small changes and their effects with the wider user community.All in all, the effort of performing and sharing an activity that makes a meaningful contribution to the process of upscaling is drastically lowered.This can be illustrated with the example of energy use.Given the limited range of the electric vehicle, a main issue among users is finding out factors that influence their energy use, as well as ways to improve their vehicle’s range.Table 1 lists a number of activities the users of the virtual community have conducted in this regard, ordered in terms of the dedicated effort they put in: Developing knowledge during day-to-day driving, by trial and error during driving, by performing a systemic analysis during driving, by participating in joint testing events, and by building tools and models.This list illustrates the relative ease with which knowledge production and sharing in the online community can occur.Action 4 and 5, respectively a joint testing session and the building of a model, still require dedicated time and effort.However, the first two actions, in which people drive around for trips they would make anyway and make some notes about the performance of their EV, can hardly be classified as dedicated innovative user actions.These are simple actions that users conduct during their normal routines without much additional 
effort.Because the results of these actions are now so easily shared and discussed within the community at large, they still affect the emerging driving practices of other users, and consequently the upscaling process.The forum users refer to the knowledge they develop as “true” or “real” knowledge, because it is based on experience.They see it as a middle way between the, according to them, optimistic knowledge provided by EV and charging point companies, and the arguably negative stories about EVs that appear in the press.This holds particularly for more controversial issues, such as the range of the EV in different circumstances, and technical problems of the Tesla.Three dominant types of knowledge sharing in the virtual community can be distinguished.Firstly, there is request-based knowledge sharing.This is by far the most common form of knowledge sharing on the forum.A user has a specific problem with their electric vehicle and asks the community for help.For example:“Someone experience with charging in Spain?,Can it be compared to France?,I see you can get there easily with super-chargers, but charging locally would also be nice”2,Users from the community then answer based on their own experience.Second, there is spontaneous sharing of knowledge that a user has developed with regard to EVs.Third, there is synthesizing knowledge sharing.Users then try to create a systematic overview of the knowledge produced by other users on the forum.The dominance of request-based knowledge sharing exemplifies some of the limitations of virtual communities.At least two obstacles complicate the process of knowledge sharing and learning.First, the forum threads are often very long, due to all the requests by individual users.Less active users complain on multiple occasions that they have had to read many pages before finding appropriate answers.Even creating a synthesis of knowledge on a certain topic is not always helpful, because users have to find it first.For overviews of knowledge, users therefore resort to using separate blogs or websites, for example with a user’s guide for new Tesla owners.Second, the nature of the knowledge users share is often very context-specific as well as subjective.On the one hand, this is an asset because it is often practical and experiential knowledge, that non-users cannot easily develop.On the other hand, the practical and experiential nature of the knowledge also makes it hard to decontextualize and aggregate the knowledge developed on the forum.The virtual user community does not engage in large-scale, coordinated efforts to build charging infrastructure.Yet, in their day-to-day lives and with whatever resource they have available, the members of the virtual community try to improve the material elements of the EV system.First, users engage in day-to-day charge point lobbying.If they go to a certain place, they approach hotels or other organizations beforehand with requests about charging facilities.On the internet forum they discuss strategies for convincing companies to place chargers.At a certain point, they have a list of organizations approached for charging facilities.However, these actions are largely ad-hoc and targeted at companies that users encounter at some point in their daily life, and not a structured infrastructure-building effort.Second, the virtual community has a dedicated “plug club” thread in which they share plugs that are needed when going abroad.Virtual community members in this way do not solve the problem of different charging standards 
in Europe, but work around this problem by carrying an array of plugs on holidays and sharing them with each other. Third, users help each other with the installation of home chargers. These are still not off-the-shelf products. In particular, they require a specific installation and configuration in the house, often together with other devices such as a smart meter. On the internet forum, the virtual community members discuss possible configurations. Users with greater expertise in electricity comment on the safety and feasibility of proposed solutions. As a sign of the community spirit on the forum, users offer each other the chance to try out their charging points before their own cars are delivered, enabling them to start driving without hiccups straightaway. As part of the upscaling process, institutions are created to enable coordination between the actors engaged with the systemic innovation. The virtual user community contributes by institution-building in practice. As an important institution, users work out the "optimal" way of driving their electric vehicle. When the EV users go on business trips and holidays in Europe, the limitations of their car in terms of range and charging time become most apparent. In the "holidays" thread, users post detailed accounts of their trips with the chargers used, charging times and conditions such as driving speed, weather and number of passengers. Some users start taking stock, and "rules" for driving an EV emerge, which are endorsed by other forum members. The user-developed knowledge thus forms the basis for the development of practice rules. However, as an indication of the instability of the practice-in-development, the emerging rules are also continuously challenged by forum members. The driving context is also changing with, for instance, more charging points becoming available. In the development of rules, forum members also use simple modeling tools, with input parameters based on their experience. For instance, one user has developed and shared an Excel sheet for calculating the optimal speed for driving long distances, taking into account various parameters. The sheet yields the general "rule" for charging during long-distance journeys in Europe, which is then accepted in the community: "With a distance of 200 km between super chargers the optimum is charging till 70% and driving at 120 km/h". Dedicated parking places for EVs, which are equipped with chargers, are not yet a strong institution. Within the community of electric vehicle drivers, the main discussion is whether someone should free up their charging point when they have completed charging, to make it available to others. In the virtual community, new users are informed about the accepted practice, a process that contributes to strengthening the institution. "…but then the question remains whether I should wake up to move my car in the middle of the night, when it is fully charged. Is that the common practice?" "I do that. If you would have a gasoline car you wouldn't park at the gasoline station for a night either. It also depends on circumstances. At fast chargers one should not park after charging. In a public parking area you can use the card, and people can always call you if needed. If you can move your car after charging, I would always do that."
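The optimal-speed spreadsheet and the 200 km / 70% / 120 km/h rule quoted above rest on a simple trade-off: a higher cruise speed shortens driving time but raises consumption, and the extra energy must be recovered at superchargers whose power drops once the battery is fairly full. The user's actual sheet is not reproduced in the threads analysed here, so the following Python sketch only illustrates the kind of calculation such a tool performs; the consumption curve, charger power and taper, battery size and reserve margin are all illustrative assumptions rather than figures taken from the forum.

# Illustrative model of the trade-off behind the community's "optimal speed" rule:
# time per 200 km leg between superchargers as a function of cruise speed and the
# state of charge one charges to. All parameters are hypothetical assumptions.

def consumption_wh_per_km(speed_kmh, base=120.0, drag=0.0055):
    """Toy consumption curve: a base load plus an aerodynamic term growing with speed squared."""
    return base + drag * speed_kmh ** 2

def charge_minutes(kwh_needed, start_soc, battery_kwh=85.0):
    """Toy supercharger: ~100 kW below 70% state of charge, tapering off above it."""
    minutes, soc, remaining = 0.0, start_soc, kwh_needed
    while remaining > 1e-9:
        power_kw = 100.0 if soc < 0.70 else max(20.0, 100.0 * (1.0 - soc) / 0.30)
        step_kwh = min(remaining, power_kw / 60.0)          # energy added in at most one minute
        minutes += step_kwh / (power_kw / 60.0)
        soc += step_kwh / battery_kwh
        remaining -= step_kwh
    return minutes

def leg_time_minutes(speed_kmh, target_soc, leg_km=200.0, reserve_soc=0.10, battery_kwh=85.0):
    """Steady-state time for one leg: charge up to target_soc, then drive leg_km."""
    kwh_leg = consumption_wh_per_km(speed_kmh) * leg_km / 1000.0
    arrival_soc = target_soc - kwh_leg / battery_kwh
    if arrival_soc < reserve_soc:
        return None                                         # target leaves no safety margin
    driving = 60.0 * leg_km / speed_kmh
    charging = charge_minutes(kwh_leg, start_soc=arrival_soc, battery_kwh=battery_kwh)
    return driving + charging

if __name__ == "__main__":
    for target in (0.70, 0.80, 0.90):
        for speed in (100, 120, 140):
            t = leg_time_minutes(speed, target)
            label = f"{t:5.1f} min" if t is not None else "infeasible"
            print(f"charge to {target:.0%}, drive {speed} km/h: {label} per 200 km leg")

Sweeping speed and charge target over a model of this kind is the sort of exercise from which a community rule of thumb can be distilled.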
The virtual user community is active in processes of geographical circulation during upscaling, by means of the core mechanisms of trans-local interactions & facilitating use across geographical contexts. In the upscaling of electric vehicles, different charging standards form an important barrier. When rumors emerged in 2011 that the Tesla Model S would not support the European form of 3-phase charging, virtual community users took action. Fig. 2 shows the geographical location of the users involved in the forum thread on the potential compatibility of the Tesla with the European 3-phase standard. The thread was started by a Dutch Tesla user. American users on both coasts joined in, and the virtual community became a platform of exchange between users from different geographical niches. Long discussions followed about the different electricity and charging systems in various countries, as well as about how the Tesla should be able to deal with them. Eventually a collective letter was sent to Tesla, with supporters from multiple geographical regions, in order to ensure that the Tesla Model S would become compatible with European charging systems. In the same internet forum thread, a vice-president of Tesla then confirmed it would. The interaction patterns between users from different geographical contexts change during the upscaling process. Initially, users from various localities around the globe reply in the same threads. As local user bases grow, national subforums are gradually created. Most users in this study are only active in the Dutch subforum. However, certain users are active in both the international forum and the national subforum. They perform a gatekeeper function by sharing topics between subforums as well as other mediums such as WhatsApp and YouTube. The differences in charging standards between countries, as well as the large variety of charging passes to be used across countries and regions in Europe, pose barriers to Dutch EV users going on holidays. One of the most active threads on the forum is the "holidays" thread, alongside separate threads dedicated to specific countries. The virtual community works hard to enable EV users to also drive their EV in other geographical contexts, by sharing information, charge plugs and passes. In terms of circulation, the practice of EV driving is hence brought to places where it is less or differently embedded. This also affects the local establishment of other system elements, for example when users lobby for specific charging points at their holiday destinations throughout Europe. Notably, users send requests for the placement of specific Tesla-sponsored chargers to hotels throughout Europe. No particular local charging passes or plugs are needed for these chargers. The electric vehicle emerges in the context of the existing socio-technical regime of the fossil fuel car. Some regime parts simply carry over to the EV. Other regime elements, however, are in conflict with the EV. The virtual community participates in reconfiguration through the mechanisms of empowerment to challenge the regime and regime-adapting activities. The virtual community users are often reminded that the innovation they have adopted does not conform to the existing regime. For example, they are told that their car is not a real car, because it makes a different sound or only has a limited range. The EV users often get into discussions with conventional vehicle owners about how EVs fare compared to fossil fuel cars. The forum then empowers them by offering a series of possible arguments that can be used in such discussions. For example, a forum user develops an Excel sheet that can be used to demonstrate that EVs are not more expensive than comparable fossil fuel cars if costs in the long run are considered.
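The cost argument works the same way in spreadsheet form: purchase price, energy cost per kilometre, maintenance and resale value are combined over a multi-year holding period. The user's own sheet is not available here, so the short Python sketch below only shows the shape of such a long-run comparison; every number in it is a hypothetical placeholder, not data from the forum or from Dutch market statistics.

# Illustrative total-cost-of-ownership comparison of the kind forum users build in
# spreadsheets. All input values are hypothetical placeholders, not forum data.

def total_cost(purchase_price, energy_price_per_km, maintenance_per_year,
               annual_km=20000, years=8, resale_fraction=0.35):
    """Simple undiscounted cost over the holding period, net of resale value."""
    running = years * (annual_km * energy_price_per_km + maintenance_per_year)
    return purchase_price - resale_fraction * purchase_price + running

if __name__ == "__main__":
    # Hypothetical EV: higher purchase price, cheap electricity, low maintenance.
    ev = total_cost(80000, energy_price_per_km=0.05, maintenance_per_year=400)
    # Hypothetical petrol car: cheaper to buy, dearer to fuel and maintain.
    petrol = total_cost(60000, energy_price_per_km=0.12, maintenance_per_year=900)
    print(f"EV:     {ev:9.0f} EUR over 8 years")
    print(f"Petrol: {petrol:9.0f} EUR over 8 years")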
Another user develops a detailed post about the CO2 emissions of electric vehicles, after media reports appeared claiming that they have no or only limited benefits for the environment compared to fossil fuel cars. Users discuss strategies for answering critical questions about EVs. "I use the following answer for questions about charging time: 'I don't know. It is just always full when I need it.' This is not entirely true, but it comes closer to the truth than when I say: 'well, if it is totally empty, about 9 h.' Because people then start looking worried, and you have to explain that it occurs very rarely that you arrive at your destination totally empty. Most of the time charging only takes a few hours at the end of the day." Forum members employ these strategies in their own social circles, but also to comment on other sites and blogs, as well as by sending letters to newspapers that, according to them, are too much on the side of the fossil fuel car regime. As EVs scale up, some conflicts with the existing regime emerge. Good examples are dedicated parking spots for EVs, equipped with chargers. These were mostly parking places for fossil fuel cars before. Fossil fuel car users still park on these spots, preventing EVs from charging. Formal signifiers such as road signs are often not yet available, making the parking places even more contested. On the internet forum the virtual community develops activities to make these parking places more accepted. Leaflets are shared to make conventional car owners aware of their behaviour. Such leaflets are printed out by virtual community members and put on fossil fuel cars that block EV parking places whenever a virtual community member sees one. Additionally, a wide variety of activities and their effectiveness are discussed. The virtual community also works on reconfiguring some highly institutionalized associations of the current regime. Most notably, this concerns the idea that a car that is able to drive long distances has to be a fossil fuel car. Virtual community members organize various activities that demonstrate that EVs are also able to drive long distances. For example, there is a yearly EV rally, which attracts national media attention. Then there are various international road trips organized among forum members, which are enthusiastically reported on the forum and blogs. These mostly do not receive large-scale media attention but can help people interested in EVs "cross the line". They are also found to stimulate existing EV owners in the virtual community to make longer trips, including their yearly holiday, with an EV instead of a fossil fuel car. Approached from a socio-technical perspective, a case study has been conducted of an Electric Vehicle user community. The results describe key community characteristics as well as core mechanisms by which users participate in upscaling. Distinctive characteristics of the virtual community are not only a strong sense of community but also large diversity in terms of participants, ranging from "pure user" to professional actor, as well as in terms of knowledge levels about EVs. Three core mechanisms have been identified by which the user community contributes to system-building. The introduction of digital technology has facilitated knowledge-related activities for users, and hence quasi-effortless knowledge production and sharing is a main virtual community occupation. Although they do not engage in large-scale charging point development, users provide their contribution to developing the material dimension of the innovation system in the form of
infrastructural bricolage.By engaging in institution-building in practice, users contribute to the development of shared rules, for instance about how an EV should be used, that help to hold the scaling system together.In terms of geographical circulation, on the internet forums fruitful trans-local interactions occur and users are active in facilitating use across geographical contexts.Two ways were identified in which users play a role in reconfiguration of the existing socio-technical regime.Community members are empowering each other in the process of challenging the existing regime.Additionally, various regime-adapting activities, aimed at changing some taken-for-granted institutions of the regime, emerge from the forum.These findings add to the emerging work on virtual communities in sustainability transitions.One of the few cases hitherto described in this regard is that of virtual communities related to heat pumps in Finland.The quasi-effortless knowledge development and sharing process we observe is well in line with Hyysalo et al.’s findings on the heat pumps case.It demonstrates how the community provides more balanced market information, develops solutions to upscaling barriers, and articulates demand to other market actors.Hence, it reduces uncertainties for more mainstream users during upscaling.As a difference, in the case of the Finnish heat pumps, the virtual community shared much knowledge regarding physical adaptations to the heat pumps.Such classical “tinkering” was hardly observed in the virtual community we studied.This is probably due to the EV being a highly technologically advanced product.Tinkering might also lead to loss of warranty, deterring users of expensive products.There was nonetheless some digital tinkering going on, with users making apps for the Tesla screen.It should be noted that on the American Tesla forums, which we did not study in detail, classical tinkering occurs, enabled by the presence of some highly technologically skilled users and the availability of spare Tesla parts.For the virtual community we studied, it was most noticeable that users contributed beyond simple knowledge sharing, and participated in system-building activities in the domains of infrastructure and institutions.In their own ad-hoc way, which we describe as infrastructural bricolage, the user community contributes to infrastructure development, for example by lobbying for chargers when people go on holidays.These findings fit well with recent work that stresses the importance of bottom-up processes driving infrastructure development, contrasting the dominant view on infrastructure development as resulting from top-down and centralized steering processes.Most users in our study had a Tesla model S, which is a luxury EV with higher performance than other EVs.This being a limitation of our research, we performed a basic cross-check with the Nissan Leaf internet forum and included people active on other forums among the interviewees.It emerged that the issues addressed in other EV communities are largely similar, except that for other EVs, there is more activity related to infrastructure, because the low range of these EVs makes charging a more important issue.By means of institution-building in practice, the virtual community helps to establish and disseminate rules and practices among EV users.This role has particular relevance in the up-scaling phase, in which many new users join.It also seems hard to be taken up by other intermediary actors, which lack the shared experiences 
and trust that emerge in the user community.More user activity in infrastructure and institution-building was reported than in the case of heat pumps in Finland.These differences can be related to the nature of the technologies involved.The heat pumps are stationary and their deployment is heavily influenced by national institutions such as energy laws.Electric vehicles, on the other hand, can and do cross national borders while in use.As a result, international institutions, such as charging standards, are highly important for further diffusion of EVs during upscaling.In line with this, we also observe online user activity regarding these issues, even lobbying across countries.In general, the relative importance of local, national and supranational institutions will differ per innovation type.These differences are likely to be reflected in the roles online communities take up and the degree of interaction between users across geographical locales.Future comparative research would be useful for investigating the relationship between, on the one hand, the relative importance of local, national and supranational institutions for the innovation, and on the other hand, the role of online user communities and geographical patterns of interactions between users.The virtual user community also participates in the reconfiguration of the existing socio-technical regime.In their study of Finnish heat pump forums, Hyysalo et al. note there is hardly discussion about sustainability.The heat pump is instead discussed in economic and technological terms.On the EV forum, this is only partly the case.Although EV users go to great lengths to demonstrate that their EV is on par with fossil fuel cars in terms of price and performance, they also discuss environmental sustainability and question the sustainability of regime actors such as car companies.Yet, following the user typology of Schot et al., we had expected to see more activity of the EV community in “hollowing out” the existing regime than we actually observed.This lower activity might reflect that the electric vehicle transition is still in an early stage of the upscaling phase and has not yet enough momentum to take on the existing regime.The underlying question here is about the extent to which virtual communities activities in regime reconfiguration tend more to what Smith and Raven call “fit and conform” to the existing regime or “stretch and transform” of its fundamental values and structures.This is a valuable question to explore further, as it will give insight into the extent to which users are able to accelerate large-scale sustainability transitions rather than contributing to incremental changes.Our case study suggests that a virtual user community differs in two ways from other user communities in terms of ability to engage in upscaling.First, digital platforms facilitate knowledge sharing and thus the collective production of knowledge among a wide variety of participants.In our case, the knowledge developed is taken up by users who already have an EV, prospective EV users and market actors active in the sector.It is also remarkable that during the influx of new users a relatively strong sense of community is maintained for a considerable group of users.This contrasts with accounts of tensions between initial users and more mainstream users in the literature on local grassroots communities.An explanation might be that, paradoxically, in a virtual community it is much easier to opt out of community activities if one is not interested, 
which is also accepted.Second, as expected, the virtual community is able to build bridges between otherwise geographically isolated user groups.This is a major difference with local communities as described in the grassroots literature, which are heavily embedded in specific socio-spatial contexts and hence have difficulties to engender more widespread sustainability transitions.It is noticeable that the trans-local interactions on the internet forum do not only concern knowledge sharing but can also result in international collective action, for example related to charging standards.During upscaling, national subforums become more dominant, which reduces the frequency of international contacts between users.On these national forums, there is still considerable activity related to the use of EVs in different geographical contexts.Certain users take up a gatekeeper role as they are active on multiple forums with different geographical focus areas, or on multiple mediums, such as Facebook, WhatsApp, YouTube and blogs.Before the virtual user community is hailed as the panacea for the upscaling of systemic innovation, two main limitations of its role have to be acknowledged.The first of these concerns the development and sharing of knowledge.While the virtual community is undeniably a valuable source of knowledge for EV drivers and other system actors, the knowledge sharing process is also far from perfect.This is partly for technical reasons.In contrast to what Grabher and Ibert have observed, the “archival” function of the internet forum does not function well, and users have to go through large amounts of texts to find useful knowledge.,Also, the context-specific and subjective nature of the practice-based and experiential knowledge shared hampers its aggregation during upscaling.Second, users mostly do not solve upscaling barriers permanently.They are relatively unorganized and rather work around the problems with whatever resources at hand in their everyday lives.In the Dutch context of our study there are policy-makers who recognize the potential of virtual communities, but the unorganized, ad-hoc, and subjective nature of these communities makes it very hard to better include them in the policy development process.At a more general level, our study provides some insights in overall sustainability transition dynamics and the role of users herein.Following the growth of various sustainable innovations beyond initial niche exploration, transition scholars have started to explore upscaling.If there is one thing on which these studies agree, then it is that niches after initial development will not simply continue to grow smoothly, as suggested by the diffusion of innovation model of Rogers.To highlight differences with diffusion, Hyysalo et al. 
proposed to use the term “innofusion”, defined as “the development of the sociotechnical characteristics of technology during its diffusion”.Even more than in the innofusion pattern as described by Hyysalo et al., the activities we observed the user community performing concerned the system, practices and institutions around the technology rather than the technological artefact of the vehicle itself.Accordingly, the dimensions of the upscaling pattern we used as starting point, namely system-building, geographical circulation, and reconfiguration of the existing socio-technical regime proved useful for capturing the variety in upscaling activity.At a more general level, the sustainability transition perspective allowed for analyzing the breadth of user activities as well as the way users handle the embeddedness of the innovation in existing geographical and institutional environments.Regarding the latter, the sustainability transitions lens particularly had value in enabling us to point out how EV users deal with the socio-technical regime of the existing technology of fossil fuel cars.As an explicit sustainability-oriented perspective was adopted in this study, it is worthwhile to reflect on the applicability of our findings to non-sustainable technologies.User involvement has also been observed for transitions towards non-sustainable innovations.For example, Kanger and Schot demonstrate the involvement of users in the transition towards automobility.Regarding the internal dynamics of the online community we studied, there is a large similarity to “non-sustainable” virtual innovation communities as described by Grabher and Ibert.A difference might be the extent of user activity that is devoted to institutional barriers.Such barriers are higher for sustainable innovations, as sustainability values are not yet deeply engrained in society.Hence, we observe user activity in demonstrating that EVs are on par with normal cars in economic and performance terms, as well as attempts to promote more sustainable living in general.A lively debate has emerged regarding the possibility of accelerating sustainability transitions.In this debate, users still overwhelmingly figure as passive actors or as actors that hamper transitions.However, as we have shown, users, empowered in online communities, can also contribute to acceleration processes.Another point to note is that new roles and actors emerge during upscaling.As Schot et al. 
have described, users take up various roles in the different phases of a transition. Additionally, we should not forget that broad societal trends and technological developments change divisions of roles between actors and create new ones. In our case, the virtual community we studied emerged because of the coming together of the rise of the EV from niches and developments in social media technology. It altered the role of users in upscaling, most notably by further blurring the roles of user-producer and user-intermediary and by enhancing the role of the user-intermediary. Following increased debate in recent years over the role of users in sustainability transitions, as well as the virtual nature of user communities, this paper set out to explore the role of the virtual user community in the upscaling of systemic innovations. An internet ethnography was conducted of a large community of electric vehicle users. A socio-technical perspective was taken to identify upscaling dimensions: system-building, geographical circulation, and reconfiguration of the existing socio-technical regime. Our research demonstrates the participation of the virtual user community in the upscaling of innovation in sustainability transitions. The virtual user community makes a distinctive contribution to the work needed in the upscaling process. It is able to perform a broad scope of upscaling work, ranging from infrastructure development to institution-building. Knowledge is more easily developed and shared among a wide variety of participants than in local user communities. In terms of geography, the virtual community enables interactions between dispersed users in a variety of geographical contexts. The virtual user community also empowers its members to challenge the socio-technical regime of fossil fuel cars. At the same time, the virtual community acts in an ad hoc manner, is unorganized and subjective, and generates solutions to upscaling barriers that are often only workarounds. Still, the uncertainty around EVs that it reduces, the driving practices that it establishes and explains, and the empowerment vis-à-vis fossil fuel car proponents that it brings mean that more mainstream users can "cross the line" and become EV users as well.
Users are increasingly acknowledged as important actors fostering those fundamental socio-technical innovations needed to achieve a sustainable society. In the literature, users have so far been portrayed mostly to play a role in early phases of technology formation. However, more recently users have become important players in the upscaling of various innovations. With the advent of new social media, users may interact effortlessly across large distances, exchange knowledge and so increase their contribution to upscaling. We investigate the new potential of virtual user communities. Conceptually, we build on recent insights from socio-technical transition studies to identify different upscaling dimensions. We conduct an internet ethnography of a large virtual community that formed around the Electric Vehicle (EV). Based on these data, we present virtual community characteristics and core mechanisms of participation in upscaling. We find that the community plays an important and distinctive role in fostering electric vehicle use.
31,623
Peanut allergens
Peanut allergy is one of the most severe food allergies which usually is not outgrown.Symptoms can be triggered by tiny amounts of allergens and even manifest as severe anaphylaxis.A survey made in the USA registered an increase of the prevalence of peanut allergy among children from a rate of 0.4% in 1997 to 1.4% in 2008.This phenomenon was confirmed in the UK where an increase of the prevalence of peanut allergy was also registered.The pattern of sensitization to peanut allergens varies among populations in different geographical regions.The major peanut allergens Ara h 1, Ara h 2, and Ara h 3 are the main elicitors of allergic reactions in the USA and are often associated with severe symptoms.Spanish patients recognized these peanut allergens less frequently and were more often sensitized to the lipid transfer protein Ara h 9.Swedish patients detected Ara h 1 to 3 more frequently than Spanish patients but had the highest sensitization rate to Ara h 8, a cross-reactive homologue of the major birch pollen allergen Bet v 1.In a study involving peanut allergic subjects from 11 European countries sensitized to Ara h 1, Ara h 2 and Ara h 3 since childhood, Ara h 2 was identified as the sole major allergen.Geographical differences were observed for Ara h 8 and Ara h 9, which were major allergens for Central/Western and Southern Europeans, respectively.In a study of peanut allergic patients from the Netherlands, the most frequently recognized allergen was also Ara h 2.Peanut profilin, Ara h 5, is another allergen responsible for pollen-associated peanut allergy.IgE reactivity to Ara h 5 was shown in a Swedish cohort of peanut allergic individuals to be associated with that of the profilins from grass and birch pollen, Phl p 12 and Bet v 2, respectively.In a study of individuals from the Swedish BAMSE birth cohort, children sensitized to both peanut and birch pollen were less likely to report symptoms to peanut than children sensitized to peanut but not to birch pollen at 8 years.Sensitization to peanut oleosins was associated with severe systemic reactions.No data are available on the prevalence or allergenic activity of Ara h 7.More studies are also needed to address the immunological properties of Ara h 12 and Ara h 13, the peanut defensins, which were recently found to be reactive with IgE from patients with severe peanut allergy.Seed storage proteins are present as one or more groups of proteins in high amounts in seeds to provide a store of amino acids for use during germination and seed growth.Ara h 1 and Ara h 3 are bicupin seed storage proteins.They belong to the cupin superfamily, a functionally highly diverse protein superfamily which contains at present 61 member families.In legumes, such as the peanut, the globulin type seed storage proteins are present in two forms, the 7S trimeric vicilins and the 11S hexameric legumins.Experiments performed by Viquez and colleagues revealed that Ara h 1 had trypsin inhibitory activity indicating that the protein might play a role in plant defense against insects.Interestingly, the peptide that is cleaved off at the N-terminus to yield mature Ara h 1 contains six cysteine residues that might stabilize its structure against digestive denaturation.The peptide resembles a class of antifungal oligopeptides from plant seeds such as Rs-AFP2, a defensin isolated from radish seeds.Ara h 2, Ara h 6, and Ara h 7 are 2S albumin seed storage proteins which are members of the prolamin superfamily.Non-specific lipid transfer proteins form another family of the 
prolamin superfamily.They are present as type 1 and type 2 nsLTPs in plants and involved in stabilization of membranes, cell wall organization, signal transduction, and plant growth and development as well as in resistance to biotic and abiotic stress.Ara h 9 and Ara h 17 are type 1 nsLTPs while Ara h 16 is a type 2 nsLTP.Plants contain actin-binding proteins which regulate the supramolecular organization and function of the actin cytoskeleton, including the monomer-binding profilins.Profilins regulate cytoskeletal dynamics and membrane trafficking.The peanut allergen Ara h 5 is a member of the profilin family.The major birch pollen allergen Bet v 1 is the founding member of the Bet v 1 family of proteins.Bet v 1 isoforms show an individual, highly specific binding behavior for differently glycosylated flavonoids, the physiological ligands of Bet v 1.Isoform and ligand mixtures have been suggested to act as fingerprints of the pollen from distinct trees and thus to play an important role in recognition processes during pollination.Ara h 8, the Bet v 1 homologous allergen from peanut, was shown to bind the isoflavones quercetin and apigenin as well as resveratrol with high avidity.Lipids are stored in oil seeds in specialized intracellular structures called oil bodies which are involved in various aspects of lipid and energy metabolism.They consist of a core of neutral lipids surrounded by proteins embedded is a phospholipid monolayer.Oleosins, amphiphilic structural proteins, are the most abundant oil body proteins.Ara h 10, Ara h 11, Ara h 14, and Ara h 15 are the peanut oleosins.Plant defensins are small, cysteine-rich peptides that possess biological activity towards a broad range of organisms, their activity being primarily directed against fungi.Ara h 12 and Ara h 13 are allergenic peanut defensins.The antimicrobial activity of the amphiphilic peanut defensins Ara h 12 and Ara h 13 is solely antifungal.The peanut defensins showed inhibitory effects on mold strains of the genera Cladosporium and Alternaria.To date, the WHO/IUIS Allergen Nomenclature Sub-Committee, the only body of experts authorized to assign official allergen designations, recognizes 16 peanut allergens.The allergen Ara h 4 was renamed Ara h 3.02 and the number 4 is not available for future peanut allergen designations to avoid confusions with the already existing literature.Ara h 1 is a bicupin storage protein of the vicilin type.The cDNA sequences of two Ara h 1 encoding clones, 41B and P17, were published in 1995.Both clones showed a sequence identity of greater than 97% and encoded proteins of around 68 kDa.Both proteins have an N-terminal 25 amino acid residue signal peptide and a single glycosylation site at amino acid positions 521–523.A genomic Ara h 1 clone, capable of giving rise to the mRNA for the cDNA of clone 41B, consisted of four exons and three introns.Its open reading frame encoded a protein of 626 residues.The first report of an N-terminal sequence of mature Ara h 1 indicated that, depending on the length of the isoallergen, 78 or 84 residues in total are cleaved off at the N-terminus during post-translational processing of Ara h 1.In SDS-PAGE, the two Ara h 1 isoforms appear as two closely spaced bands at 69 and 66 kDa.These sizes are consistent with the removal of a 25-residue signal peptide as well as the removal of an N-terminal propeptide.Ara h 1 is translated as a pre-pro-protein.The signal peptide directs the nascent protein to the storage vacuole where the propeptide is cleaved off to yield 
the mature Ara h 1 found in peanuts.The cleaved-off N-terminal propeptide contains three allergenic epitopes, of which two are major.The Ara h 1 monomer, which forms stable trimers held together by non-covalent interactions, occurs in peanuts as larger oligomers.The bicupin storage protein Ara h 3 was originally identified as a 14 kDa peanut protein by Eigenmann and coworkers.Its N-terminal sequence was determined and used to design degenerate oligonucleotides for screening a peanut cDNA library.The open reading frame of the Ara h 3 cDNA identified in this screen coded for a protein of around 60 kDa.The 14 kDa protein appeared to be an N-terminal breakdown product of the larger allergen.A genomic clone encoding Ara h 3, AF10854, revealed the presence of four exons.The deduced protein of 538 amino acid residues has a calculated molecular mass of 61.7 kDa.Ara h 3 has a leader peptide of 20 amino acid residues that is important for protein translocation to the storage vacuole.The deduced amino acid sequence showed 93% and 91% identity with the peanut allergens Ara h 3 and Ara h 4 indicating that these proteins were in fact variants of the same gene.Ara h 4 was later renamed Ara h 3.02.Ara h 3 is post-translationally cleaved into a 43 kDa acidic and a 28 kDa basic subunit that are covalently linked by a disulfide bond.In summary, several fragments of Ara h 3 can be observed, even under extraction conditions that inhibit proteases.This illustrates that Ara h 3 is proteolytically processed in peanuts.An additional isoform, iso-Ara h 3, only shares 70–85% sequence identity with the other reported Ara h 3 isoforms.In fact, five different genes were described to encode isoforms of Ara h 3.Ara h 2, a 2S albumin, can be purified as a doublet as described by Burks et al. and de Jong et al., both bands having the same N-terminal sequence.An almost complete cDNA sequence of Ara h 2.01 was published in 1997 by Stanley et al., and in 2003 complete cDNA sequences for both Ara h 2.01 and Ara h 2.02 were made available by Chatel and colleagues.The isoform Ara h 2.02 is characterized by a 12 amino acid residue insertion at position 75 in comparison to the isoform Ara h 2.01.The deduced amino acid sequence of a full length intron-free genomic clone of Ara h 1.01 comprises 207 residues and includes a signal peptide of 21 residues.The two isoforms of Ara h 2 are expressed from different genes.Furthermore, Ara h 2 undergoes proteolytic processing by peanut proteases resulting in the removal of the dipeptide RY at the C-terminus.Consequently, Ara h 2 is a mix of two isoforms as well as slightly truncated forms of both isoforms.The first cloning of a cDNA coding for an Ara h 6 isoform, another 2S albumin, was reported in 1999 by Kleber-Janke and colleagues.This cDNA expression product was identified by phage surface display technology.Ara h 6 contains 10 cysteine residues.Suhr et al. 
identified three different isoforms of natural Ara h 6 isolated from crude peanut extract by N-terminal amino acid sequencing.The presence of a signal peptide was predicted by comparing the N-terminus of the natural protein with the deduced amino acid sequence from the cDNA clone AF092846.Another nAra h 6 isoform from crude peanut extract was reported by Koppelman et al.Bernard and colleagues published two additional nAra h 6 isoforms.The native form of this Ara h 6 isoform occurs together with its naturally processed form in peanut.The processing results in the loss of the dipeptide IR at positions 46 and 47 of the mature protein and the formation of two peptide chains of 5.44 and 9.14 kDa linked together by disulfide bonds.Arachis hypogaea, being an allotetraploid with an AABB genome constitution, carries three copies of the arah6 gene, one of them located in the A genome and the other two in the B genome.An Ara h 7 cDNA sequence was first cloned by using the pJuFo phage display system and deposited in the database with the accession number AF091737.Ara h 7 is related to the other two 2S albumin allergens, Ara h 2 and Ara h 6, but the isoform Ara h 7.0101 only possessed 6 cysteine residues.Ara h 7 was later recloned and its sequence was accepted as the 17.7 kDa isoallergen Ara h 7.0201.Ara h 7.0201 possessed the conserved cysteine skeleton of 8 cysteine residues.Ara h 7.0201 could also be identified as a natural protein present in peanuts, while the previously annotated Ara h 1.0101 could not be found.A third isoallergen, Ara h 7.0301 was identified by expressed sequence tag analysis.The first cDNA coding for the peanut profilin Ara h 5 was obtained from a pJuFo phage surface display library that had been derived from a λ-ZAPII library.The cDNA’s coding region comprised 396 nucleotides, predicting a protein of 131 amino acid residues with a calculated mass of 14 kDa.The sequence was deposited in the GenBank database with the accession number AF059616.This approach was expanded in a follow-up study where 25 clones carrying Ara h 5 cDNAs ranging in length from 450 to 750 base pairs were isolated.All 25 clones carried cDNA inserts coding exclusively for one protein whose amino acid sequence is available under the accession number AAD55587.Ara h 5 was recloned in 2010 in Japan resulting in the protein sequence GU354312 that was 94.7% identical to the one previously published by Kleber-Janke et al.A cDNA encoding the Bet v 1-homologous Ara h 8 was amplified by PCR using degenerate primers designed on the basis of the sequence of a soybean PR-10 protein, Gly m 4.The full-length cDNA sequence harbored a 471 base pair open reading frame coding for a protein of 157 amino acid residues with a predicted molecular weight of 16.9 kDa.The protein sequence resulting from this sequence was assigned the isoallergen designation Ara h 8.0101.The characterization by micro-sequencing of a natural Ara h 8 protein isolated from peanuts revealed differences to the deduced amino acid sequence AY328088.The cDNA of this new Ara h 8 isoallergen was cloned, its sequence deposited in the database under the accession number EU046325, and the corresponding allergen designated as Ara h 8.0201.There is a similarity of only 51.3% between the two isoallergens.Analysis of genomic DNA obtained with Ara h 8.0101-specific primers revealed the presence of one intron.Full-length cDNAs of two Ara h 9 isoforms, non-specific lipid transfer proteins, were cloned using a combination of molecular biology and bioinformatics tools.A 
signal peptide of 24 amino acid residues was predicted for both isoforms, which was confirmed by N-terminal sequencing of natural peanut nsLTP. The two nsLTP isoforms shared 90% sequence identity and were named Ara h 9.0101 and Ara h 9.0201. The sequences were made available under the GenBank accession numbers EU159429 and EU161278. Two additional non-specific lipid transfer proteins were accepted by the WHO/IUIS Allergen Nomenclature Sub-Committee, but the respective papers have not yet been published. Ara h 16.0101 is a type 2 nsLTP with a calculated molecular weight of 7 kDa, and Ara h 17.0101 is a type 1 nsLTP with a calculated molecular weight of 9.4 kDa. Oleosins are the major oil body stabilizing proteins. At least eight peanut-derived oleosins have been identified at the DNA level and by proteomic approaches. Schwager and coworkers identified eight allergenic peanut oleosins by using N-terminal sequencing and peptide mass fingerprinting. These eight oleosins were classified into four allergen groups, i.e. Ara h 10, comprising the isoforms Ara h 10.0101 and Ara h 10.0102; Ara h 11, comprising the two isoforms Ara h 11.0101 and Ara h 11.0102; Ara h 14, comprising the three isoforms Ara h 14.0101, Ara h 14.0102, and Ara h 14.0103; and Ara h 15, with the isoform Ara h 15.0101. N-terminal sequencing of IgE-reactive proteins extracted by chloroform/methanol from roasted peanuts, followed by a homology search in the expressed sequence tag database, led to the identification of defensins as peanut allergens. Two groups of peanut defensins with sequence identities of 43% to 45% were found. Ara h 12 is represented by one isoform with a molecular mass of 5.2 kDa. Two Ara h 13 isoforms, with determined molecular masses of around 5.4 kDa, were identified. Ara h 13.0101 and Ara h 13.0102 differed by only three amino acid residues. Peanut allergens have been expressed in various pro- and eukaryotic expression systems. An overview of these expression systems, the plasmids used and the yields is given in Table 2. Hurlburt and colleagues back-translated the Ara h 1 protein sequence using optimized codons for expression in E. coli. The authors established a reproducible protocol to produce large quantities of pure rAra h 1. Mature Ara h 1 and the Ara h 1 core domain were expressed in BL21 cells using the pET-9a vector. A little more than half of the expressed protein appeared in the soluble fraction. Compared to natural Ara h 1, both recombinant Ara h 1 forms showed lower IgE binding with the majority of patients' sera used in the study. The presence of codons in an allergen's cDNA that are rarely used in E. coli leads to decreased expression levels, frame shifts and mistranslations. The BL21 E. coli strain is a convenient and effective host for heterologous protein expression. However, when the cDNAs coding for Ara h 1, Ara h 2 and Ara h 6 inserted in pET-16b were expressed in BL21 cells, production was very inefficient. In contrast, the peanut profilin Ara h 5 was successfully expressed. Ara h 1 has a content of 5.4% of the arginine codons AGG and AGA that are least used in E. coli, Ara h 2 has 9.6%, and Ara h 6 has 8.1%. The AGG/AGA content of Ara h 5 is only 0.8%. Transfer RNAs that recognize these codons are extremely rare in E. coli.
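The rare-codon figures cited above are straightforward sequence statistics, namely the fraction of codons in a coding sequence that are AGG or AGA. The short Python sketch below shows how such a value can be computed; it is not the original authors' analysis, and the example sequence is a made-up placeholder rather than a real peanut allergen cDNA.

# Minimal sketch of how the rare-codon content reported above can be computed from a
# coding sequence: the fraction of codons that are AGG or AGA (arginine codons decoded
# by scarce tRNAs in E. coli). Not the authors' original analysis; the sequence below
# is a short placeholder, not the real Ara h 2 cDNA.

RARE_ARG_CODONS = {"AGG", "AGA"}

def rare_codon_fraction(cds: str) -> float:
    """Fraction of codons in an in-frame coding sequence that are AGG or AGA."""
    cds = cds.upper().replace("U", "T")
    if len(cds) % 3 != 0:
        raise ValueError("coding sequence length must be a multiple of three")
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return sum(codon in RARE_ARG_CODONS for codon in codons) / len(codons)

if __name__ == "__main__":
    placeholder_cds = "ATGAGACAGCAGTGGGAACTGCAGGGAGACAGAAGATGCCAGAGCCAGCTGGAAAGAGCC"  # 20 codons, illustrative only
    print(f"AGG/AGA content: {rare_codon_fraction(placeholder_cds):.1%}")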
A high level of expression of these allergens could be achieved by using E. coli BL21-Codon Plus RIL cells that carry extra copies of the argU tRNA gene. The expression yield of Ara h 5, which was entirely present in the soluble fraction, was 30 mg/L of culture. The yield of Ara h 2.01 from the soluble fraction was 4.5 mg/L of culture and from inclusion bodies 24 mg/L. Lehmann and colleagues reported a method for large-scale production of properly folded Ara h 2. The full-length coding sequence of Ara h 2.01 was inserted into the pET-32a expression vector, fused at the N-terminus with the sequence encoding the 109 residues of the thioredoxin tag. E. coli Origami cells were cotransformed with the expression vector and a plasmid carrying the argU tRNA gene. The thioredoxin fusion tag was used to enhance the solubility of rAra h 2 and to catalyze the formation of disulfide bonds in the oxidizing cytoplasm of this modified E. coli strain. rAra h 2 was expressed at 19 mg/L of culture and was shown to be identical to natural Ara h 2 as judged by immunoblotting, analytical high-performance liquid chromatography and circular dichroism spectra. A codon-optimized gene of Ara h 2.01 was inserted into the pMALX_E plasmid, resulting in the expression of Ara h 2 fused to a mutated version of the maltose binding protein. E. coli Origami B cells were serially transformed with a pACYCDuet-1 plasmid encoding thioredoxin followed by pMALX_E Ara h 2 plasmids. The purified MBP-Ara h 2 protein was then used for crystallization trials. Lew et al. expressed a codon-optimized Ara h 2.02 gene inserted into pET-28a as an N-terminally hexahistidine-tagged protein in E. coli BL21 cells. The insoluble recombinant protein was isolated under denaturing conditions by Ni-affinity chromatography and refolded by dialysis in decreasing urea concentrations. The rAra h 2.02, produced at a yield of 74 mg/L of culture, showed a significantly decreased reactivity with specific IgE when compared to natural Ara h 2. Lactococcus lactis is a gram-positive lactic acid bacterium with food-grade status, free of endotoxins, and able to secrete heterologous proteins together with only few other proteins. The Ara h 2.02 gene, codon-optimized for expression in L. lactis, was inserted into the expression vector pAMJ399 as a fusion to a signal peptide to enable its secretion into the culture medium. Correctly processed full-length Ara h 2.02 was isolated at 40 mg/L of culture. Heterologous proteins can be produced in chloroplasts of the unicellular eukaryotic green alga Chlamydomonas reinhardtii. C. reinhardtii can be rapidly transformed into stable transgenic strains and grown in large quantities in minimal media in photobioreactors. The Ara h 1 and Ara h 2.02 genes were codon-optimized for the codon usage of C. reinhardtii chloroplasts and cloned into pJAG15, and the transformed algae were grown in large-scale cultures. Although the core domain of Ara h 1 and the full-length Ara h 2.02 were produced in algal chloroplasts, the recombinant proteins displayed reduced binding to IgE from peanut allergic patients as compared to the native allergens. This could indicate a possible lack of post-translational modifications. E. coli Rosetta2 is a BL21 derivative designed to alleviate codon bias when expressing heterologous proteins in E.
coli.The gene encoding Ara h 5 was cloned into the pET-21d plasmid and expressed in Rosetta2 cells.The total yield of the purified protein was 29 mg/L culture.When tested in a microarray together with birch and timothy profilins, rAra h 5 displayed good cross-reactivity with the two pollen profilins.One of the advantages of Pichia pastoris over E. coli is its ability to produce more properly formed disulfide bonds.Ara h 6 contains 10 cysteine residues that form 5 disulfide bonds required for correct folding and allergenic activity.The Ara h 6 gene, codon-optimized for yeast, was cloned into the pPink-HC expression vector and transformed into PichiaPink cells.Compared to natural Ara h 6, rAra h 6 produced in Pichia had intact effector functions while Ara h 6 produced in E. coli BL21-Codon Plus RIL cells via the pET-32 Ek/LIC vector had significantly reduced functions.Due to the difficulties of obtaining soluble oleosins in E. coli, the peanut oleosins Ara h 10, Ara h 11, and Ara h 14 were expressed in soluble form in the insect cell-baculovirus system.The coding genes of the peanut oleosins were inserted into the pENTR4 vector which was then recombined with BaculoDirect Linear DNA.Recombinant baculovirus constructs were then transfected into Spodoptera frugiperda Sf9 cells.All three oleosins were expressed in soluble form in the insect cells.The final protein yield was 0.9 mg/L of culture for Ara h 10, 0.8 mg/L for Ara h 11, and 1.3 mg/L of culture for Ara h 14.Natural allergens, too often disregarded, are the authentic counterpart of recombinant allergens.They are absolutely required for the validation of the quality of newly produced recombinant allergens.However, purification of natural allergens from peanut can be quite challenging.In addition, food allergens undergo modifications during food processing which are not present in recombinant allergens.Peanuts are consumed after thermal processing, such as boiling, frying or roasting, and as a consequence, the physicochemical properties of their allergens change impacting on their allergenicity and IgE-binding capacity.Peanut proteins make up 22–30% of the total protein in peanut seeds.Sixteen proteins, belonging to 7 protein families, are at present classified as allergens.Many peanut allergens are seed storage proteins that, being closely related, are challenging to purify.Other allergens are much less abundant in seeds and are hardly obtained in decent amounts to work with.Several studies have contributed to the identification and characterization of peanut allergens in crude extracts, and many provided us with a detailed purification method for obtaining pure allergens, sometimes in high yields.Here, we describe some of the purification methods made available for most of the allergens, or their identification in crude peanut extracts.Table 3 gives an overview on purification procedures.The peanut seed storage allergens, Ara h 1 and Ara h 2 make up 12–16% and 5.9–9.2% of the total peanut protein content, respectively.Ara h 1 is a glycoprotein with a molecular mass of 63.5 kDa, forming trimers of 180 kDa.It can be purified from crude peanut extract by affinity chromatography to concanavalin A which binds the mannose residues present on the glycan chain.A higher purity level of Ara h 1 can be achieved by size exclusion chromatography following affinity chromatography.Ara h 3, with a molecular mass of 60 kDa for the monomer, occurs in peanuts as hexamer of 360 kDa.Koppelman et al. 
developed a strategy for the purification of natural Ara h 3 from crude peanut extract and described a post-translational processing of Ara h 3 that affects its IgE-binding properties.Ara h 2, described as the most potent peanut allergen and the best predictor of peanut allergy in adults, was purified by Burks and coworkers by ion exchange chromatography.There are two isoforms of Ara h 2, Ara h 2.0101 and Ara h 2.0201, and their molecular masses are 16.7 and 18.0 kDa, respectively.Ara h 2.0201 differs from the other isoform by an insertion of 12 amino acids residues.In 2008, Marsh and colleagues obtained pure Ara h 2 from peanut seeds by fractionating a peanut protein extract with ammonium sulfate followed by size exclusion chromatography, anion-exchange and preparative C18 RP-HPLC.A simplified procedure was proposed by Masuyama and colleagues which enabled the purification of Ara h 2 without the use of chromatography.Ara h 6 is of high clinical importance as, together with Ara h 2, it is the best predictor of severe peanut allergy.Ara h 6 was isolated from peanut extracts by size exclusion chromatography, in which Ara h 6 was co-eluted with Ara h 2, followed by an anion-exchange chromatography.In 2008, Marsh et al. proposed a different protocol for the purification of both Ara h 2 and Ara h 6 from peanut seeds that allows a clear separation of the two allergens, described above in more detail for Ara h 2.Ara h 7, like Ara h 2 and Ara h 6, is a 2S albumin with a molecular mass of 17.3 kDa.Due to their cross-reactivity and their similar physiochemical properties, the identification of this natural allergen was challenging.Schmidt et al. were able to identify only Ara h 7.0201 in peanut extract from a pool of size exclusion chromatography-enriched 20 kDa proteins separated by 2D gel electrophoresis.Ara h 8, the Bet v 1 homologue from peanut, was identified in 2004 by Mittag and coworkers.However, only 4 years later, a purification strategy for obtaining pure Ara h 8 from peanut extract was published.Ara h 8, with a molecular mass of 17 kDa is not abundant in peanut seeds.Thus, the authors developed a protein extraction method using an acidic buffer which resulted in an Ara h 8-enriched extract.A different purification system for Ara h 8 was proposed by Petersen et al., based on a lipophilic extraction of proteins from peanuts, to achieve a yield of 20 and 8 μg from 1 g of unroasted and roasted peanut flour, respectively.The peanut nsLTP Ara h 9, an allergen important for the Mediterranean population, was purified by Lauer et al. by a two-step purification procedure.It is a small 9.1 kDa protein with a basic pI of 9.3.The authors obtained 1.3 mg of pure Ara h 9 starting from 110 g of peanuts.Two additional peanut nsLTPs are listed in the WHO/IUIS allergen nomenclature database, named Ara h 16 and Ara h 17, but so far no data on their purification strategies are available.Oleosins are very abundant in peanut seeds.In 2015, Schwager et al. 
published a method for the simultaneous purification of all known peanut oleosins.The purification method is based on the isolation of oil bodies from peanut, which were delipidated and then peanut oil body proteins were separated by preparative electrophoresis.In this study, the authors identified Ara h 14 and Ara h 15, which were then officially accepted as allergens.Recently, Ara h 12 and Ara h 13, two defensins with antifungal activity, were identified as novel peanut allergens and purified after a lipophilic extraction.Peanuts undergo thermal processing before consumption.They are eaten boiled, fried or roasted according to various culinary traditions.These preparation methods seem to have an impact on the prevalence of peanut allergy.In fact, a lower incidence of peanut allergy is reported in countries where peanuts are consumed after boiling.Thermal processing has been shown to affect peanut proteins in different manners.Boiling of peanuts, a cooking method widely used in China, decreases the IgE binding capacity of Ara h 1, Ara h 2, and Ara h 3 compared to roasting.This is probably due to a loss of allergens into the boiling water.Moreover, the IgE-binding of sera from patients tolerant to boiled peanuts was reduced and mostly limited to the peanut 2S albumins Ara h 2, 6 and 7.The process of roasting was shown to increase peanut allergenicity and the IgE-binding capacity of peanut allergens.The Maillard reaction which occurs during dry-roasting between the allergens’ amino groups and reducing sugars present in peanuts contributes to this effect.Consequently, the allergens Ara h 1 and Ara h 2 undergo chemical modifications which increase their IgE-binding capacity, produce more stable structures and confer resistance to heat and digestion.The presence of advanced glycation end products in roasted peanuts might explain the higher levels of IgE-binding compared to boiled or fried peanuts.Furthermore, roasting of peanuts was necessary to induce sensitization to Ara h 6 in mice as unroasted Ara h 6 could not.The origin of this difference might be found in the formation of a stable protein complex between Ara h 6 and Ara h 1 in roasted peanut extracts.This effect on increased IgE-binding due to roasting was also described for the peanut Bet v 1 homologue Ara h 8.Ara h 8 turned out to be more stable to heat and enzymatic digestion, possibly as a consequence of the association with lipophilic ligands stabilized by roasting.Allergenicity of peanut proteins is also affected by structural changes induced by thermal processing.Frying of peanuts, but not boiling or roasting, altered the secondary structure of Ara h 2 dramatically by decreasing the molecule’s content of α-helices and increasing its β-sheets, β-turn and random coil, thus altering Ara h 2 epitopes and reducing its allergenicity.In addition, boiling of Ara h 1 induced a partial loss of secondary structure of the molecule which then assembled into branched complexes with a reduced IgE-binding capacity due to decreased epitope availability.Although boiled peanuts had a reduced allergenicity, they could not be regarded as hypoallergenic peanuts.Although many studies investigated the effect of thermal processing on the IgE-binding properties of peanuts, the contribution of food processing to the immunological characteristics of peanut allergens and peanut matrix components needs to be further elucidated.Post-translational modifications refer to covalent modifications of proteins once they have been fully translated.PTMs such as 
phosphorylation, acetylation or glycosylation control many biological processes.As many as 300 PTMs of proteins are known to occur physiologically.The study of PTMs and their functions in plants is an emerging field.Glycosylation is the major PTM of peanut allergens that has been studied to date.Ara h 1 contains one glycosylation site that bears mainly xylosylated N-glycans of the composition Man3XylGlcNAc2.While the predicted glycosylation of Ara h 2 was not found, site-specific hydroxylations were identified of 2/8 and 3/11 proline residues of Ara h 2.01 and Ara h 2.02, respectively.In contrast to Ara h 2, no consensus sequence for N-glycosylation could be identified in the sequence of Ara h 6.Ara h 6 was also assumed not to be a glycoprotein based on data obtained by Zhuang et al.Although putative glycosylation sites are present in Ara h 12 and Ara h 13, mass spectrometry proved that these peanut defensins were not glycosylated.X-ray crystallography structures are available for five peanut allergens.While structures were obtained for four peanut allergens, Ara h 1, 2, 5, and 8, from recombinant proteins, the structure of the hexameric Ara h 3 was obtained from a natural protein.Fig. 1 shows the ribbon representations of the architecture of these allergens.While the full-length recombinant protein did not exhibit a fully native structure and was partially unfolded, the crystal structure of the Ara h 1 core could be determined at resolutions of 2.71 and 2.35 Å.The core region of Ara h 1, which revealed the typical bicupin fold, formed homotrimers and existed as higher molecular weight assemblies in solution.This was in agreement with the observations made by van Boxtel and colleagues.A second study on the crystallization of Ara h 1 was performed by Cabanos and colleagues.Comparable to the Chruszcz study, a full-length Ara h 1 complete with the N-terminal extension and the C-terminal flexible region did not yield crystals with a good diffraction quality.However, the crystal structure of an Ara h 1 core region was determined at a final resolution of 2.43 Å.Ara h 1 was found to exist in trimeric form and its structure and topology were very similar to other known structures of 7S globulins.Ara h 2 was crystallized as a fusion protein that used an engineered maltose-binding protein as a carrier to improve the likelihood of crystallization.Hence, in the crystal structure deposited in the PDB database the Ara h 2 residues are numbered 1028–1148.The crystal structure determined at a resolution of 2.7 Å revealed a bundle of five alpha-helices held together by four disulfide bonds typical for the small molecular weight members of the prolamin superfamily.Diffraction quality crystals of Ara h 3 were obtained by purifying one isoform from dry peanut kernels.Ara h 3 is a bicupin hexamer, the trimer to hexamer transition is only possible after the cleavage of the peptide bond between the acidic and the basic subunit.Due to the purification of the natural protein, the crystallization of a homohexameric Ara h 3 at a resolution of 1.73 Å was achieved.Simply expressing the coding sequence of Ara h 3 in an expression vector would not have yielded a recombinant protein in its native hexameric state.The structure of a peanut profilin was determined at a resolution of 1.10 Å for recombinant Ara h 5 produced in E. coli.The structure displayed the typical profilin-like fold consisting of a seven-stranded antiparallel β-sheet with two α-helices on one side and one α-helix on the other side.Although the H. 
brasiliensis latex profilin Hev b 8 had the highest percentage of sequence identity with Ara h 5, the structural alignment with the birch pollen profilin was much better.This indicates that a model-based prediction of IgE epitopes needs to be approached with caution.Recombinant Ara h 8 was expressed in E. coli and the structure of the purified protein was determined at a resolution of 1.60 Å.Its overall fold consisted of three α-helices that face a seven-stranded anti-parallel β-sheet illustrating the high similarity to the birch pollen allergen Bet v 1 architecture despite a low sequence identity of 48%.Like Bet v 1, Ara h 8 contained a large hydrophobic cavity.The double-blind placebo-controlled food challenge is and remains the gold standard of food allergy diagnosis.However, when testing for the presence of peanut allergy, the DBPCFC bears an inherent risk of inducing life-threatening reactions.In many cases, the presence of measurable levels of specific IgE to a food will help establish a diagnosis of food allergy.Hence, the availability of purified allergens allows the possibility of multiple testing and determination of specific IgE levels against allergens in one single measurement.For peanut allergic patients, who often manifest severe reactions to minute amounts of allergens, the so-called component-resolved diagnosis is a valuable diagnostic tool.Instead of exposing patients to crude allergen extracts, CRD utilizes purified natural or recombinant allergens to identify the specific molecule involved in sensitization or allergy.The allergens available for CRD on the ImmunoCAP® ISAC are Ara h 1, Ara h 2, Ara h 3, Ara h 6, Ara h 8, and Ara h 9.The major peanut allergen Ara h 2 is associated with the severity of symptoms of peanut allergy.It was shown that sIgE to Ara h 2 could be used as a good predictor of suspected peanut allergy, for both children and adults, among allergic populations of several geographic regions.A similar diagnostic value of sIgE was shown for the peanut allergen Ara h 6, probably due to its homology to the 2S albumin Ara h 2.In contrast, studies performed with the peanut allergens Ara h 1 and Ara h 3 showed that the predictive diagnostic value of their sIgE was quite low and depended on the geographical area of the origin of the study populations.Some of the peanut allergens, such as Ara h 5 and Ara h 7, are neither available nor have they been investigated in clinical studies which may represent a limit of the CRD.The same applies to the peanut oleosins, which were only recently associated with severe peanut allergic reactions.It is also important to consider whether a patient might be at risk of developing symptoms when eating peanuts because of cross-reactivity of certain allergens with other homologous allergens.This is the case for Ara h 8 which cross-reacts with IgE induced by the birch pollen allergen Bet v 1 and for Ara h 9 which cross-reacts with other nsLTPs.At present, no routine therapeutic approaches are available for the treatment of peanut allergy.Strict avoidance of the culprit food is the only available option.A possible therapeutic line of action might be the modulation of the immune response to peanut allergens and the induction of oral tolerance.However, one of the major problems during attempts of treating food allergies is the occurrence of adverse reactions.Recombinant hypoallergenic allergen variants offer an alternative approach to avoid such IgE-mediated adverse reactions.In 2001, Bannon and colleagues produced hypoallergenic 
variants of Ara h 1, Ara h 2 and Ara h 3 in E. coli, showing a reduced IgE-binding capacity of the newly produced molecules by immunoblotting with peanut allergic patients' sera. Due to the clinical relevance of Ara h 2, more efforts were made to produce a recombinant hypoallergenic variant of Ara h 2. All linear IgE-binding epitopes of Ara h 2 were modified to reduce its IgE-binding capacity. The modified Ara h 2 displayed a reduced capacity to trigger the release of β-hexosaminidase from an RBL-2H3 cell line passively sensitized with IgE from peanut allergic individuals. A hypoallergenic derivative of Ara h 3 was produced by introducing point mutations into four critical IgE-binding sites while the ability to stimulate T-cell proliferation was retained. In a mouse model of peanut allergy, three modified peanut allergens, Ara h 1–3, were subcutaneously co-administered with heat-killed Listeria monocytogenes bacteria. This treatment markedly decreased peanut-specific IgE and histamine levels in the plasma and offered protection from anaphylaxis. The exact mechanism conveying protection was not clear, but the effect was accompanied by a switch from a Th2- to a Th1-biased response. In 2013, a mix of recombinant hypoallergenic variants of the peanut allergens Ara h 1, 2 and 3, called EMP-123, entered a phase I open-label clinical trial. The three recombinant allergens were encapsulated in heat/phenol-inactivated E. coli cells used as adjuvant. The candidate vaccine EMP-123 was administered rectally to 10 peanut allergic adults in weekly doses over 10 weeks, followed by 3-weekly doses. Of the 10 patients enrolled, 5 experienced adverse reactions and dropped out of the study. The observed adverse reactions might have been due to incomplete removal of the IgE-binding epitopes from the allergens. The first multicenter, randomized, double-blind placebo-controlled study of peanut sublingual immunotherapy (SLIT) was published by Fleischer and colleagues in 2013. Forty subjects were enrolled in this study, with ages ranging from 12 to 37 years. After 44 weeks of SLIT, 14 of 20 subjects receiving peanut SLIT were responders compared to 3 of 20 subjects receiving placebo. In the responders, the median successfully consumed dose increased from 3.5 to 496 mg. Peanut SLIT induced a modest level of desensitization in the majority of the subjects compared to the placebo group. The LEAP study was a clinical trial conducted to evaluate whether peanut consumption or avoidance would be more effective in preventing the development of peanut allergy in children at high risk. Six hundred and forty atopic children with eczema or egg allergy, not younger than 4 months and not older than 11 months, were enrolled in this study and randomly assigned to avoidance or peanut consumption. In the consumption group only 1.9% of the children developed peanut allergy by the age of 60 months, whereas 13.7% developed peanut allergy in the avoidance group. According to the LEAP study, the early introduction of peanuts into the diet of children at high risk is beneficial for reducing the frequency of the development of peanut allergy. In a single-center clinical trial, 37 children who were aged 9–36 months and who had a reaction during the entry open food challenge received either low-dose or high-dose early intervention oral immunotherapy (E-OIT). Overall, 78% of the subjects receiving E-OIT demonstrated sustained unresponsiveness to peanut four weeks after stopping E-OIT, after a median of 29 months of treatment, enabling them to
reintroduce peanut-containing foods into the diet ad libitum. Interestingly, 300 mg/d was as effective as 3000 mg/d at regulating the allergic immune response. In accordance with the LEAP study, this E-OIT trial suggested that allergic responses may be more easily corrected in young children. A Viaskin Peanut patch, an epicutaneous delivery system containing a dry deposit of a formulation of peanut extract, was applied in a multicenter, randomized DBPCFC phase II study. Of the 74 peanut allergic participants, 24 were assigned to the group treated with Viaskin Peanut 100 μg (VP100), 25 were treated with Viaskin Peanut 250 μg (VP250), and 25 subjects were assigned to the placebo group. Treatment success after 52 weeks was defined as passing a 5044 mg peanut protein OFC or achieving a 10-fold or greater increase in peanut protein consumption from baseline. Forty-six percent of the VP100 participants and 48% of the VP250 participants, but only 12% of the placebo group, achieved treatment success. A phase IIb DBPC dose-ranging study of a peanut patch was performed in 221 peanut allergic patients, aged 6 to 55 years, from 22 centers. Patients received epicutaneous peanut patches containing 50 μg, 100 μg, or 250 μg peanut protein or a placebo patch for 12 months. Following the therapy, changes in the eliciting dose were established for each patient by DBPCFC. The 250 μg peanut patch resulted in a significant treatment response versus the placebo patch. The largest effect was seen in children, with approximately 50% achieving the primary endpoint at 12 months, defined by either a 10-fold increase in the challenge threshold or an increase of the symptom-eliciting dose to 100 mg peanut protein or more. Recently, the first phase II randomized, double-blind, placebo-controlled study was carried out in eight US centers to assess the safety and efficacy of AR101, a novel oral biologic drug. AR101 consisted of encapsulated, defatted, lightly roasted peanut flour, well characterized in terms of peanut allergen composition and potency. Fifty-five peanut-sensitized individuals between 4 and 26 years old were enrolled in the study, with symptoms triggered by less than 143 mg/day as assessed by DBPCFC. Patients were exposed daily to AR101 or placebo, with the dosage increased from 0.5 mg to 300 mg/day. AR101 significantly improved patients' outcomes by reducing the severity of symptoms following DBPCFC. Patients tolerated 18-fold higher amounts of peanut proteins after the treatment. Peanut allergens will continue to generate research interest. The recombinant production of peanut allergens will encompass the more recently described peanut allergens such as the oleosins, defensins, and nsLTPs. At present, no structures are available for these allergens, and it will prove quite challenging to complete the full panel of structures. Likewise, the allergenic characteristics and immunological properties of the peanut oleosins, defensins, and nsLTPs will have to be determined in broader studies. It will be advisable to validate the qualities of the recombinant peanut allergens using their purified natural counterparts. It is at present unclear how much post-translational modifications contribute to allergenic potency and whether such modifications have been overlooked. The effect of thermal processing on the more recently described peanut allergens is yet another area that will have to be investigated. Last but not least, the contribution of the matrix, i.e.
the peanut lipids, to the sensitization process is not well understood.Studies of the effects of peanut matrix components in association with the allergens might shed more light on the mechanisms that ultimately result in the production of allergen specific IgE.
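As a small computational aside to the codon-usage discussion in the expression section above (where the content of the rare E. coli arginine codons AGG and AGA was reported for Ara h 1, Ara h 2, Ara h 5 and Ara h 6), such figures can be reproduced for any coding sequence with a few lines of code. The sketch below is a minimal, hypothetical Python example: the function name and the toy sequence are invented for illustration and are not taken from the studies cited.

```python
# Minimal sketch: estimate the fraction of E. coli rare arginine codons
# (AGG/AGA) in a coding sequence. The sequence below is a toy example,
# not a real allergen gene.
RARE_ARG_CODONS = {"AGG", "AGA"}

def rare_arg_codon_fraction(cds: str) -> float:
    """Return the percentage of codons in `cds` that are AGG or AGA."""
    cds = cds.upper().replace("U", "T")
    if len(cds) % 3 != 0:
        raise ValueError("coding sequence length must be a multiple of 3")
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    rare = sum(1 for codon in codons if codon in RARE_ARG_CODONS)
    return 100.0 * rare / len(codons)

if __name__ == "__main__":
    # Toy 10-codon ORF containing two rare arginine codons (expected: 20.0%).
    example_orf = "ATGAGGGCTAGAACCGGTTTAAAAGAATAA"
    print(f"rare AGG/AGA content: {rare_arg_codon_fraction(example_orf):.1f}%")
```

A high value from such a screen would, under the reasoning given above, suggest expressing the gene in a codon-bias-corrected host (for example a strain carrying extra argU tRNA copies) or codon-optimizing the synthetic gene.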
Peanut allergens have the potential to negatively impact on the health and quality of life of millions of consumers worldwide. The seeds of the peanut plant Arachis hypogaea contain an array of allergens that are able to induce the production of specific IgE antibodies in predisposed individuals. A lot of effort has been focused on obtaining the sequences and structures of these allergens due to the high health risk they represent. At present, 16 proteins present in peanuts are officially recognized as allergens. Research has also focused on their in-depth immunological characterization as well as on the design of modified hypoallergenic derivatives for potential use in clinical studies and the formulation of strategies for immunotherapy. Detailed research protocols are available for the purification of natural allergens as well as their recombinant production in bacterial, yeast, insect, and algal cells. Purified allergen molecules are now routinely used in diagnostic multiplex protein arrays for the detection of the presence of allergen-specific IgE. This review gives an overview on the wealth of knowledge that is available on individual peanut allergens.
31,624
The effect of temperature in moringa seed phytochemical compounds and carbohydrate mobilization
Moringa is a perennial plant known for its high antioxidant content. It is an important food commodity, with all plant parts, including leaves, flowers, fruits, and immature pods, possessing nutritive value. The seeds of M. oleifera are known to predominantly produce phenols and different fatty acids. In plants, phenolic antioxidants are known to exist in the free phenolic form and are usually stored in the vacuole. Furthermore, free phenolics are polymerized to lignans and lignins in the plant cell wall. The produced free phenolics can be polymerized on the cell walls of developing seedlings. Apart from the antioxidant potential evident in different organs of M. oleifera in response to growing conditions, its adaptation and tolerance to extreme environmental conditions are favored by its unique physico-chemical and physiological characteristics. Seeds of M. oleifera are reported to have multifunctional roles. The seeds are in some instances used as natural coagulants, possess antimicrobial and antioxidant properties, and as a result are used for the purification of water. The seeds are also known to provide valuable nutrients for the human diet and contain oil ranging from 49.8% to 57.25%. The oil extracted from moringa seed is reported to be rich in unsaturated fatty acids, with oleic acid as the major component. It is also regarded as a very stable oil, since its linolenic acid content is at an undetectable level. In addition, the seeds are known to possess approximately 18.9%–21.12% carbohydrate and 23.8%–33.25% protein. Plants activate several adaptive strategies in response to abiotic environmental stresses such as temperature fluctuations, dehydration, and osmotic pressure. These adaptive mechanisms include changes in physiological and biochemical processes. Adaptation to these stresses is associated with metabolic adjustments that lead to the accumulation of several organic solutes such as sugars, polyols, phenols, and proline. The physiological and biochemical status of the seeds of M. oleifera, including primary and secondary metabolites, may also play important roles in seed germination as well as in post-germination seedling establishment and plant development. Their adaptation to harsh conditions, which impact plant development, involves the mobilization of different antioxidants. Previously, Muhl reported that a 30/20 °C regime was the optimum temperature for seed germination and post-germination seedling establishment when compared to 25/15 °C and 20/10 °C. However, the physiological and biochemical processes underlying seed germination and seedling establishment at this optimum temperature remained speculative. It is imperative to understand the seed germination process in relation to the antioxidant system and carbohydrate mobilization during germination at different temperature regimes. Thus, the current study aimed at evaluating the effect of these temperatures on germination rate, antioxidant enzymes, and phytochemical and carbohydrate contents in the seeds of M. oleifera. All chemicals were obtained from Sigma-Aldrich®, Saarchem®, Fluka®, Separations®, or Glycoteam GmbH. M.
oleifera Lam. cultivar originally from Sudan; the seeds were generously donated by a commercial farmer and cultivated for leaf production at the Ukulinga experimental farm, Pietermaritzburg, KwaZulu-Natal. Species identification and authentication were done by a taxonomist at the Bews Herbarium, School of Life Sciences, University of KwaZulu-Natal, South Africa. Furthermore, a voucher specimen was prepared to be deposited in the Bews Herbarium for future reference. Moringa seeds were selected based on their size and color. A total of 1350 seeds was sub-divided into three batches containing 450 seeds each, replicated three times. For the germination test, 100 seeds were arranged in moist germination paper towels and allowed to germinate in dark rooms under three different temperature regimes. Seed samples were collected every 24 h for 8 weeks until radicle emergence, and the seeds were freeze-dried and stored at −75 °C for further biochemical analysis. The experiment was terminated after 8 weeks for statistical analysis to determine temperature treatment effects on the mobilization of seed biochemicals. Mean germination time was calculated as MGT = Σ(n × D) / Σn, where MGT is the mean germination time, n is the number of seeds which germinated on day D, and D is the number of days counted from the beginning of germination. Total antioxidant capacity was determined according to Benzie and Strain with slight modifications. These authors developed the FRAP assay, which is based on the reduction of the ferric tripyridyltriazine (Fe3+–TPTZ) complex to the ferrous tripyridyltriazine (Fe2+–TPTZ) complex by a reductant, thereby determining the combined antioxidant capacity of the antioxidant molecules present in the tissue under investigation. Aliquots of 0.1 g freeze-dried plant material were extracted with 1 N perchloric acid, vortexed and centrifuged at 12,400g for 10 min at 4 °C. A fresh FRAP reagent solution (containing TPTZ prepared in 40 mM HCl and 20 mM FeCl3·6H2O) was prepared prior to measurement. Subsequently, an aliquot of the samples was mixed with 900 μl FRAP reagent solution, and the absorbance was measured at 593 nm after 10 min. The total antioxidant capacity was expressed as mg FeSO4·7H2O equivalents per g DW. Free soluble proline was extracted according to Bates et al. with slight modifications. Briefly, approximately 0.1 g of plant material was homogenized in 10 ml of 3% aqueous sulfosalicylic acid and the homogenate was filtered through Whatman no. 2 filter paper. Two milliliters of filtrate were reacted with 2 ml acid-ninhydrin and 2 ml of glacial acetic acid in a test tube for 1 h at 100 °C, and the reaction was terminated in an ice bath. The reaction mixture was extracted with 4 ml toluene and mixed vigorously with a test tube stirrer for 15–20 s. The chromophore-containing toluene was pipetted out of the aqueous phase into a glass cuvette and warmed to room temperature, and the absorbance was read at 520 nm using toluene as a blank. Proline standards at different dilutions were used for the calibration curve, and the proline concentration was determined from this standard curve. Phenols were determined according to Hertog et al., with slight modifications. Briefly, freeze-dried material was mixed with 10 ml 99.8% methanol and vortexed for 30 s. Thereafter, the mixture was shaken overnight at room temperature to extract the free phenols. Subsequently, the mixture was centrifuged, and the supernatant was filtered through Whatman no.
1 filter paper, and the sample was again rinsed with 10 ml of solvent until color was no longer released. Acid hydrolysis was also used on the remaining plant residue to efficiently release cell wall-bound phenols. Briefly, a 10 ml portion of acidified 60% aqueous methanol was added to each sample and placed in an oven at 90 °C for exactly 90 min. Tubes were allowed to cool, and supernatants were filtered through a 0.45 μm filter, ready for analysis. The phenol concentration was determined spectrophotometrically with Folin-Ciocalteu reagent at 750 nm using gallic acid monohydrate as standard, and the total phenolic concentration was expressed as 'Gallic Acid Equivalents'. Total soluble proteins were extracted according to Kanellis and Kalaitzis from 1 g DW of frozen plant tissue. The extract was allowed to stand on ice for 15 min and was centrifuged at 20,000g for 20 min at 4 °C, and the supernatant was used for enzyme assays after being passed through Miracloth® (quick filtration material for gelatinous grindates). The Bradford Microassay was used to determine the protein content of the samples. Bradford dye reagent was prepared by diluting the dye concentrate with distilled water 1:4. The dye was added to test tubes containing 20 μl sample extract, mixed, and incubated at room temperature for 5 min. Samples were then read spectrophotometrically at 595 nm and the protein concentration was determined by comparing the results with a standard curve constructed using bovine serum albumin. The alpha-amylase activity in dry seed extract was assayed by quantifying the reducing sugars liberated from soluble starch using HPLC-RID, as described by Tesfay et al. Seed samples were frozen with liquid nitrogen and ground to a fine powder for further analysis. Samples were then homogenized with 2 ml ice-cold buffer of 50 mM Tris–HCl containing 1 mM EDTA. The homogenate was centrifuged at 30,000g for 45 min and the supernatant was heated with 3 mM CaCl2 at 70 °C for 15 min to inactivate β-amylase, de-branching enzyme, and α-glucosidase. One unit of enzyme activity was defined as the amount of enzyme required to release 1 μmol of glucose from soluble starch per minute under the assay conditions. Determination of PAL activity was done according to a modified version of the procedure used by Schopfer and Mohr. Samples of plant tissue were homogenized at 4 °C in a glass homogenizer with 5 ml of 0.1 M sodium borate buffer, pH 8.8, containing 2 mM potassium meta-bisulfite. Following centrifugation at 20,000g for 15 min, a 4.5 ml aliquot of the supernatant solution was passed through an 8 × 230 mm Sephadex G-25 column, previously equilibrated with the same borate buffer, to remove low molecular weight substances. Before application of the supernatant solution, liquid in the void volume was removed from the column by suction. PAL activity was assayed using 2 ml of the sodium borate buffer, 0.5 ml of the enzyme extract, and 0.5 ml of 0.1 μM phenylalanine in 0.1 M borate buffer. The increase in absorption at 290 nm was measured after 2 h of incubation at 37 °C. A method originally described by Beers and Sizer was used with slight modifications to determine CAT activity. The reaction solution contained 0.05 M potassium phosphate, 0.059 M hydrogen peroxide, 0.1 ml enzyme extract, and 1.9 ml distilled water. To start the reaction, the mixture, in the absence of enzyme extract, was incubated for 4 to 5 min to achieve temperature equilibration and to establish a blank rate. To this mixture, 0.1 ml diluted enzyme extract was added, and the disappearance of H2O2 was followed
spectrophotometrically every 20 s for 3 min via the decrease in absorbance at 240 nm. The change in absorbance over the initial linear portion of the curve was calculated. One unit of CAT activity was defined as the amount that decomposes one μmol of H2O2, and enzyme activity was reported as units per mg protein. SOD activity was determined by measuring the ability of the enzyme to inhibit the photochemical reduction of nitroblue tetrazolium chloride (NBT), as described by Giannopolitis and Ries. The reaction solution contained 1.5 mM NBT, 0.12 mM riboflavin, 13 mM methionine, 0.1 M EDTA, and 67 mM phosphate buffer, and contained 10 to 100 μl enzyme extract. Riboflavin was added last, and the tubes were shaken and placed under fluorescent lighting with an intensity of 8.42 μmol m−2 s−1. Blanks and controls were run without illumination and with illumination but without added enzyme, respectively. The absorbances of the illuminated and non-illuminated solutions were determined spectrophotometrically at 560 nm. One unit of SOD activity was defined as the amount of enzyme inhibiting NBT photo-reduction by 50%, and the results were expressed as units of SOD activity. P5CS activity was assayed according to the method of Vogel and Kopac, modified as follows. Briefly, a 0.7 ml reaction medium containing 50 mM Tris, 2 mM MgCl2, 10 mM ATP, 1.0 mM NADH, 50 mM glutamic acid, and 0.1 ml enzyme extract was incubated at 37 °C for 30 min. The reaction was then stopped with 0.3 ml of 10% trichloroacetic acid. The color reaction was developed by incubating with 0.1 ml of 0.5% o-aminobenzaldehyde for 1 h. After centrifugation at 12,000g for 10 min, the clear supernatant fraction was taken to measure the absorbance at 440 nm. Enzyme activity was calculated using an extinction coefficient of 2.68. The data collected were analyzed using GenStat 14.1 statistical software. Standard error values were calculated where a significant standard deviation was found at P < 0.05 between individual values. The absorption of water by the seed activates metabolic processes, mobilizing storage compounds that subsequently lead to expansion of the embryo and penetration of the radicle through the surrounding tissues. Respiration to supply metabolic energy for these processes is activated immediately following imbibition. Temperature had a significant effect on the speed of germination. The 30/20 °C regime hastened radicle emergence, with seeds starting to germinate within 48 h, followed by 25/15 °C between 48 h and 72 h, and 20/10 °C after 72 h. In this manner, temperature may have regulated some of the growth metabolites, such as carbohydrates, polyols, and enzyme proteins. Early radicle emergence at 30/20 °C might be associated with rapid hydrolysis and mobilization of seed reserves through catalytic enzymatic activities, for example α-amylase catalyzing the breakdown of the storage compound starch into glucose units. Conversely, seeds that are vulnerable to low temperatures during the early phase of imbibition show a decrease in percent germination and are anticipated to encounter poor seedling growth and reduced plant productivity. Similarly, a positive correlation between germination of wheat seed and α-amylase activity at various temperatures was reported by Sultana et al. The current findings are in agreement with Muhl, whereby the optimum germination rate was observed under 30/20 °C day/night temperatures. This temperature might regulate the timely accumulation of substrates for respiration as the seed begins to become metabolically active in the seed
germination process. Seeds germinated under the 30/20 °C regime showed a quick, sharp decline in the concentration of the transport sugar sucrose during the germination process. During this process, seed sugars such as sucrose, glucose, and fructose showed significant differences in concentration under these temperature regimes, which possibly influenced germination. At 30/20 °C, sucrose concentrations peaked within 24 h, and the same sugar then showed a sharp decline up to 120 h. Furthermore, glucose and fructose concentrations also increased up to 120 h and afterwards started to decrease slightly as the seeds approached radicle emergence. The two temperature regimes 25/15 °C and 20/10 °C produced a sugar pattern similar to that observed under the former regime; under all regimes, sucrose was found to be the dominant sugar. At 25/15 °C, seed glucose and fructose slowly decreased toward the end, while at 20/10 °C both energy sugars increased. The seeds recorded the maximum sucrose concentration at 72 h for 25/15 °C and at 96 h for 20/10 °C. Starch and glucose concentrations were negatively correlated. Seemingly, the high metabolic rate during seed germination can be associated with radicle emergence; the seed requires more energy and mobilizes stored biochemical compounds to supply substrates for respiration and initiate germination. Sharma et al. reported similar results for soybean carbohydrate composition during seed germination, whereby the starch content decreased significantly, coinciding with an increase in total soluble sugar. The energy is provided via catabolism of storage compounds; starch is broken down into smaller units, producing sucrose, glucose, and fructose, which are then available to the plant for growth and development. During the second phase of seed germination, water absorption and respiration are ongoing processes; simultaneously, starch, lipids, and proteins in the endosperm are hydrolyzed to sugars, fatty acids, and amino acids, simple compounds that are soluble and mobile. Subsequently, these substances are mobilized to the growing points of the embryonic axis and are used in growth processes, which allow the growth of the radicle of the embryo. The 30/20 °C regime increased amylase activity earlier than 25/15 °C and 20/10 °C. At 30/20 °C, the seed enzyme activity showed a sharp increase immediately after 24 h of seed imbibition. This could be associated with the catalytic effect of amylase: starch, as a storage compound, is converted into simple sugars that can readily be used as an energy source by the plant cell. Rahman et al. also reported that amylase activity increased tremendously, from 200% to 220%, at 24 h of germination and decreased gradually from 48 to 96 h of germination. Temperature significantly increased seed proline concentration. The results showed that proline started to accumulate at 72 h and continued toward 120 h. Seed proline also accumulated over the same period at 25/15 °C and 20/10 °C. Proline accumulation in the plant cell could have been stimulated in response to the growing conditions. In some instances, its accumulation is linked to the mechanism by which the plant cell adjusts inter- and intracellular osmotic gradients arising from high metabolic activity, which possibly signals a switch to the next developmental phase, for example radicle emergence. Hare et al. also reported that seed germination in Arabidopsis thaliana was enhanced by exogenously applied proline. Kravic et al.
reported the role of proline in protein synthesis and antioxidant activity during germination. In the maize kernel, proline is one of the most abundant amino acids, essential for further growth and plant development. It is predominantly found in its bound form in the storage protein zein and is released during germination, thus providing the amino nitrogen and energy needed for protein synthesis. The investigated temperatures significantly affected the accumulation of phenols in the seed during germination. The 30/20 °C temperature had a significant effect on phenol accumulation, which peaked and stimulated early radicle emergence. Seed phenols also accumulated simultaneously at 25/15 °C and 20/10 °C. Temperature is one of the abiotic factors that can regulate the biosynthesis of phenols. The increase in these compounds could also be associated with a plant mechanism of adaptation to temperature regimes, resulting in synergistic biochemical compositions. Similar results were reported by Shetty et al. for the increase in faba bean phenols during seed priming, where their production occurred at almost the same time during the germination process. The same authors also suggested that mung bean seed, beyond considerations of seed quality, can be allowed to germinate in order to influence phenolic compounds and changes in seed bitterness and astringency. The plant's phenolic production influences bitterness and astringency. The germination process could offer a good strategy to improve the phenolic content of quinoa seeds and thereby enhance their antioxidant activity. Furthermore, Weidner et al. reported the effect of temperature on germinating seeds of Vitis californica, which contain an elevated total amount of phenolics and various phenolic compositional profiles. Temperature had a significant effect on PAL activity. The 30/20 °C temperature increased the enzyme activity, which reached its highest level at 144 h. The increase in enzyme activity in response to the growth temperature regimes followed the sequence of ascending temperature levels, with maximum activity at 30/20 °C, followed by 25/15 °C and 20/10 °C, respectively, and most likely contributed to the accumulation of plant phenols. PAL activity, which is highly sensitive to environmental conditions such as high temperature and UV, plays a major role in controlling the flux into total phenolics. The main function of phenols is to maintain a stable concentration of free radicals by producing and scavenging them, and their physiological function may be expressed through the regulation of cell redox potential. Due to the high cell metabolism during seed germination, the seed is prone to a high rate of respiration, leading to the accumulation of oxidants, either radicals or reactive oxygen species (ROS). The seed therefore needs to adjust to extreme conditions and neutralize the toxic properties of these radicals and ROS in order to maintain further plant development. Plant phenols and their biosynthesis are controlled by enzymes such as PAL; phenols are ubiquitous in the plant system, including the seed, and their production could potentially serve multifunctional roles. Furthermore, Hura et al.
also reported a correlation between PAL activity and phenolic compounds in leaves of hybrid maize under drought stress and considered the accumulation of phenolic compounds an indication of an activated defense reaction in the drought-resistant maize genotype. Temperature had a significant effect on pyrroline-5-carboxylate synthase (P5CS) activity. In contrast to the optimum temperature for PAL activity, the 20/10 °C temperature gave the highest P5CS activity, followed by 30/20 °C and 25/15 °C. Moreover, the enzyme activity increased during the early hours of seed germination and then declined toward the end of seed radicle emergence. Seeds germinated under 20/10 °C showed increased P5CS activity; this high enzyme activity suggests that the temperature regime represents an unfavorable condition for optimum growth, resulting in the regulation of defense metabolites to cope with the stress condition so that seed germination can continue. However, slower morphological progress could have an impact on subsequent seedling development. It is well known that higher plants accumulate free proline in response to a number of abiotic stresses such as drought, salinity, and freezing. The accumulation of proline under stressed environments can result from enhanced biosynthesis and/or reduced degradation of proline. In plants, biosynthesis of proline is catalyzed by P5C synthase and P5C reductase. The manipulation of these P5CS genes has demonstrated that their overexpression increases proline production and confers salt tolerance in transgenic plants, including wheat. Temperature had a significant effect on CAT activity. At 30/20 °C, CAT activity increased during seed imbibition, exhibited a continuously increasing trend up to 144 h, and declined afterward. The other two temperatures displayed CAT activity in an almost straight-line pattern. The enzyme activity could have increased in response to the high accumulation of H2O2 resulting from the high seed metabolic rate. Seed germination could therefore be affected by the cumulative effect of low-molecular-weight antioxidants produced during seed imbibition. In oily seeds, CAT is required and contributes greatly to the early events of seedling growth because it alleviates the H2O2 produced during β-oxidation of the fatty acids. In the present study, CAT activity was also examined and responded to the temperature regimes. Increased CAT activity could be an indication of elevated cellular ROS, since the amount of CAT present reflects the level of oxidants at the cellular level. Temperature had a significant effect on seed SOD activity. At 30/20 °C, SOD activity was the highest and displayed an increasing trend toward seed germination; the activity increased up to 120 h as the seed experienced a high metabolic rise. By contrast, the other two temperatures, 25/15 °C at 144 h and 20/10 °C at 168 h, recorded less activity than the former temperature, although both displayed an increasing trend in SOD activity. SOD mainly dismutates superoxide into H2O2; a higher production of this ROS might be associated with increased SOD activity. This also reflects the close link between SOD and CAT activities: CAT activity depends on the H2O2 concentration, which is the product of SOD. The control of steady-state ROS levels by SOD is an important protective mechanism against cellular oxidative damage, since O2− acts as a precursor of more cytotoxic or highly reactive ROS. Early reports illustrate that increased SOD activities and cellular ROS levels were involved in the life of many
plants, including developmental processes such as seed germination. Enhanced SOD activity can be triggered by increased production of ROS, or it might be a protective measure adopted by seeds against oxidative damage. Our findings, shown in Fig. 5, were also in line with the reports of Dučić et al., indicative of the participation of SOD in the defense mechanism during germination and early seedling development. SOD has been established to work in collaboration with POD and CAT, which act in tandem to remove O2− and H2O2, respectively. Moreover, Rogozhin et al. reported that the changes in SOD activity in degrading endosperms and developing cotyledons were correlated with those of POD and CAT activities. The FRAP results clearly showed significant differences in total antioxidant capacity for moringa seeds germinated under the different temperature regimes. The 30/20 °C regime gave the highest antioxidant capacity of the three, followed by 25/15 °C and 20/10 °C, respectively. This experiment reported the dominance of different phytochemicals, specifically antioxidants, in M. oleifera, including overall energy reserves and total antioxidant capacity, during seed germination under 30/20 °C. The total antioxidant capacity of these seeds is determined by the various individual antioxidants investigated; their high production increases this capacity. In conclusion, temperature is reported to have an effect on the plant metabolites required for the seed germination process. M. oleifera adapts to different temperatures; its accumulation of biochemical compounds favors its adaptive strategy under these conditions. This research reports on seed germination of M. oleifera with regard to the seed antioxidant enzymes and the mobilization of macromolecules under varying temperature regimes, of which the 30/20 °C regime was found to be the optimum for most of the antioxidant metabolites, which eventually impacted the seed germination process. At the onset of moringa seed germination, mainly energy storage reserves were regulated by the optimum temperature to be used as an energy source to sustain the seeds' high metabolic rate. During the seed metabolic rise, cellular biochemical processes are accompanied by the accumulation of antioxidants to maintain cellular redox balance, which enhances further plant development. Freeze-dried material was mixed with 10 ml 80% ethanol and homogenized for 1 min. Thereafter, the mixture was incubated in an 80 °C water bath for 60 min to extract the soluble sugars. Subsequently, the mixture was kept at 4 °C overnight. After centrifugation at 12,000g for 15 min at 4 °C, the supernatant was filtered through glass wool and taken to dryness in a vacuum concentrator. Dried samples were resuspended in 2 ml ultra-pure water and filtered through a 0.45 μm nylon filter, and the sugars were analyzed according to Liu et al., using an isocratic HPLC system equipped with a refractive index detector on a Phenomenex® column. The concentration of individual sugars was determined by comparison with authentic sugar standards.
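As a worked illustration of the germination metrics used in this study, the short sketch below computes mean germination time (MGT = Σ(n × D) / Σn) and final germination percentage from daily germination counts. The counts shown are invented for illustration only and do not correspond to the study's data.

```python
# Minimal sketch: mean germination time (MGT) and germination percentage
# from daily germination counts. The counts below are hypothetical, not
# data from the study.
def mean_germination_time(daily_counts: dict[int, int]) -> float:
    """MGT = sum(n * D) / sum(n), where n seeds germinated on day D."""
    total = sum(daily_counts.values())
    if total == 0:
        raise ValueError("no seeds germinated")
    return sum(day * n for day, n in daily_counts.items()) / total

def germination_percentage(daily_counts: dict[int, int], sown: int) -> float:
    """Percentage of sown seeds that germinated over the whole test."""
    return 100.0 * sum(daily_counts.values()) / sown

if __name__ == "__main__":
    # Hypothetical counts for 100 seeds at a 30/20 degC regime:
    # {day of count: number of newly germinated seeds on that day}
    counts = {2: 40, 3: 30, 4: 15, 5: 5}
    print(f"MGT = {mean_germination_time(counts):.2f} days")   # 2.83 days
    print(f"Germination = {germination_percentage(counts, 100):.0f}%")  # 90%
```

Under this formula, a regime that shifts germination to earlier days (as reported here for 30/20 °C) lowers the MGT even when the final germination percentage is unchanged.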
Temperature is one of the climatic factors that regulate seed biochemical compounds and plant physiological responses, mainly biosynthesis of carbohydrates and phytochemical compounds. This study investigated the effect of temperature on moringa seed phytochemicals’ compositional changes and their utilization during seed germination. Moringa seeds were subjected to three varying temperature regimes (30/20 °C, 25/15 °C, and 20/10 °C) in germination chambers. Subsequently, the seeds were destructively sampled every 24 h interval until radicle emergence and then freeze dried for analysis. Seed performance and spectrophotometric determination of non-enzymatic and enzymatic antioxidants were carried out, while sugars were analyzed using HPLC-RID. Temperature had significant effect on speed of seed germination. Particularly, 30/20 °C accelerated seed radicle emergence with germination occurring within 48 h. Subsequently, germination was observed between 48 h and 72 h at 25/15 °C and after 72 h at 20/10 °C. Similarly, temperature especially 30/20 °C also significantly influenced the biosynthesis and accumulation of biochemical compounds in the seeds. Overall, temperature treatments of moringa seed resulted in significant differences in the rate of germination and biochemical changes, which are associated with various antioxidants and their mobilization.
31,625
Using visual representations for the searching and browsing of large, complex, multimedia data sets
The quantity of data generated and stored globally is increasing at a phenomenal rate."In 2013 it was said that 90% of the world's data were generated in the past two years.The digitization of business, global industry partnerships and the increasing presence of the Internet in our lives have all contributed to datasets so large that they have become highly challenging to manage effectively.Companies that operate from a digital platform, for example internet retailers and social networks, face great challenges in capturing, storing, analyzing and protecting the huge volumes of data their businesses generate.Even ‘traditional’ industries such as engineering and construction are facing challenges.Large, global, collaborative projects also generate huge volumes of data, from design documentation to supply chain management to communication records.The speed and accuracy at which these large datasets can be effectively mined for information that is relevant and valuable can have a significant effect on company performance.Therefore there is a need for systems which enable the effective management of this ‘big data’.This paper presents the preliminary findings from an evaluation of a data visualization system designed to enable faster and more accurate searching and understanding of large datasets.The first section introduces the concept of data visualization systems and presents the aim and hypothesis of the study.The second introduces the SIZL visualization system and defines the research methodology used to evaluate its effectiveness.The third section presents the findings from the quantitative and qualitative data gathered during evaluation, and the fourth section discusses the significance of these results as well as the limitations of the research at this stage.The paper concludes with details of further work required to refine the research and provide further insights into the benefits of the data visualization system.The rate at which data can be collected and stored is out-growing the rate at which it can be analyzed.As the size of datasets grows exponentially, there is increasing risk that much of the valuable and relevant information stored is being lost due to ineffective systems for data exploration and visualization.Traditional, 2-dimensional methods of data visualization include charts, graphs and plots.These visual ways of displaying data have been designed to communicate information in a way that humans can more easily understand and analyze.Chen and Yu conducted a meta-analysis of information visualization research focused on users, tasks and tools.The research revealed that given a consistent level of cognitive abilities, individuals showed a preference for simple visual–spatial interfaces.In other words, processing visual information is more intuitive to humans than processing other types of information such as text or numbers.Over the years, methods of data visualization and interactive graphics have become increasingly sophisticated.This progress has also resulted in an increase in the use of 3D visual displays to present greater complexity in datasets and enable a more interactive, intuitive data searching experience.Some well-known examples of data visualization include Google Earth, Google Images and ChronoZoom, which allow users to search for information in a visual, interactive and multi-dimensional environment.There has been much debate in the literature over the merits of 3D visualization systems—do they genuinely improve the effectiveness of information retrieval 
and analysis?,Earlier studies in particular promoted 3D visualization as more intuitive.However, later studies have questioned this assertion.Cockburn evaluated data storage and retrieval tasks in 2D and 3D visualizations.The study concluded that whilst 3D systems emulate a more ‘natural’ environment, their benefits are task-specific.Kosara, Hauser, and Gresh also state that 3D visualizations can have detrimental effects on users such as increased workload, occlusion and disorientation.Schneiderman highlights that 3D visualizations can simplify tasks and improve interactions only if properly implemented.Clearly 3D is not without merits, but its application must be carefully considered to ensure it is truly providing benefits to the desired task.Users can process visualizations faster than text, and inexperienced users can navigate 3D interfaces more intuitively than 2D interfaces.However, several issues affect 3D visualization, such as context, interpretation, cognitive and dimensional overload.The fine balance between beneficial and gratuitous use of 3D in data visualization has led several researchers to recommend the use of hybrid or 2.5D interfaces.Such environments can provide users with the cognitive/spatial advantages of 3D whilst retaining the refined interactions of 2D, therefore reducing the chance of users becoming ‘lost’ in the system.Other studies have explored in more detail the preferred functionalities for effective data visualization systems.Bergman, Beyth-Marom, Nachmias, Gradovitch, and Whittaker found that users show a preference for navigation over searching when locating files that have a set structure, for example folders or e-mails, and argue that navigation reduces cognitive workload, because individuals are psychologically programmed from childhood to store and retrieve objects from locations."Whereas searching relies on an individual's ability to associate attributes to an object, for example the file name of a document.Exploring navigation further, Hornbæk, Bederson and Plaisant studied the use of overviews in user interfaces.Participants showed a preference for a navigation overview which allowed them to keep track of their actions, however the researchers found that this overview slowed down performance, possibly due to increased workload.They propose the implementation of a ‘zoomable’ interface to overcome these issues.Over the years there have been continuing advances in low cost, high performance 2D and 3D display and manipulation technologies, as well as ever-increasing computation power.At the same time, the huge increase in data generated by companies, projects and even individuals has led to great challenges in visualizing and searching for information.This project emerged from the idea that exploiting the human “cog” within these systems provides an opportunity to redress the balance between high volume information/data storage and effective navigation.Thus finding information more easily.Therefore, the overall aim of this research was to investigate the feasibility of using visual representations for the searching and browsing of large, complex, multimedia data sets.Drawing upon prior research in this field, the following hypothesis was tested:Human beings find the recall and recognition of 2D and 3D shapes and environments so intuitive and effortless that any system for the effective management and use of data should make use of this fact.In addition to this hypothesis, a number of more specific questions were raised during the early stages of the 
research, including: Can a new system allow an "at a glance" pictorial summary of its content? Can the information interface allow users to spot relationships in data more easily by "illustrating" the contents of files through icon representations that reflect the context of information? Can an advanced visual interface impact on a user's ability to quickly and accurately find individual items and identify relationships within subsets of the data? A system was developed that would enable the researchers to answer these questions and effectively test whether 2.5D environments can benefit effective data management. The SIZL system was created to evaluate user interaction and experience with data in 2.5D environments, and enable the researchers to evaluate the effectiveness of this method. This software prototype combines a zooming user interface and a timeline—a zooming interface to a visual information landscape—and was designed with the capability to extract data from numerous document types such as word documents, spreadsheets, PDFs and image files. The software has a multi-search functionality, allowing users to search within the dataset for multiple keywords or phrases that are highlighted simultaneously using different colours. Captured data can then be moved to the 'lightbox' area to be compared and contrasted, enabling the user to identify document relationships. The SIZL system process is summarized in Fig. 1. The system is database-driven and facilitates the creation of dynamic user interfaces in response to user inputs. It was developed in the .NET environment using C#, connecting to a MySQL database and using Sphinx searching technology to index and search the system content. Based upon findings from the literature regarding the advantages and disadvantages of both 2D and 3D visualizations, SIZL was designed as a 2.5D environment. The user can interact with the system through direct searching, for example key words or document browsing. Relevant documents are extracted by the system and used to generate objects which are then presented to the user in the 'timeline' section of the Zooming User Interface (ZUI)—see Fig. 2. Objects are presented as 3D blocks with differing depths, providing the user with an 'at a glance' pictorial summary of the content of the document each model represents. The user can focus in detail on the objects they deem relevant by dropping them into the 'lightbox' section of the ZUI. The user can remove interface options that are not considered to be relevant and replace them with objects that relate to the direction the user is taking within the application, allowing them to focus on a limited number of chosen documents. The system's functionality is summarized in a YouTube video (Video 1) which accompanies this paper.
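The paper does not include the SIZL source, but the multi-search mechanism described above can be sketched in C#. Sphinx exposes its index over the MySQL wire protocol (SphinxQL), so the same MySQL connector used for the document database can also query the search index. Everything specific below (the index name sizl_docs, the connection string, the colour palette and the DocumentHit type) is an assumption made for illustration, not a detail taken from SIZL.

```csharp
// Minimal sketch, not the SIZL implementation: each search term gets its own
// highlight colour, and matching documents are fetched over SphinxQL.
using System;
using System.Collections.Generic;
using MySql.Data.MySqlClient;

public record DocumentHit(long Id, string Title, string Colour);

public static class MultiSearch
{
    static readonly string[] Palette = { "Red", "Green", "Blue", "Orange", "Purple" };

    public static List<DocumentHit> Run(IList<string> terms, string sphinxConnStr)
    {
        var hits = new List<DocumentHit>();
        using var conn = new MySqlConnection(sphinxConnStr);   // e.g. "Server=127.0.0.1;Port=9306"
        conn.Open();

        for (int i = 0; i < terms.Count; i++)
        {
            string colour = Palette[i % Palette.Length];        // one colour per keyword, as in the SIZL UI
            using var cmd = new MySqlCommand(
                "SELECT id, title FROM sizl_docs WHERE MATCH(@q) LIMIT 100", conn);
            cmd.Parameters.AddWithValue("@q", terms[i]);
            using var reader = cmd.ExecuteReader();
            while (reader.Read())
                hits.Add(new DocumentHit(reader.GetInt64(0), reader.GetString(1), colour));
        }
        return hits;    // the ZUI would render one timeline block per hit, tinted by its colour
    }
}
```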
The SIZL system was designed to evaluate four key elements:
Timeline: provides instinctive chronological flexibility. Time is a form of metadata applicable to almost all information. The timeline class consists of multiple arrays of time-scales—days, hours and minutes. The user has the ability to set the upper and lower limits of the timeline, which will in turn limit the availability of the documents to those that fall within the set limits. The user can also expand or contract the timeline, allowing them to manipulate the volume of displayed documents.
Zoom: emulates time and facilitates user-controlled data convergence. Zooming provides an infinite landscape. The zooming functionality has been created by fixing a camera to the 2D x-plane in the 3D environment. This allows users to zoom in and out along the Z-axis, which changes the distance of the camera plane in relation to the displayed objects. This allows the user to control the number of documents currently on display by the system. Furthermore, it allows the user to focus on clusters of documents that may be of particular interest.
Human search is powerful because we notice patterns, oddities and depth/distance. The database contains a vast range of data which, with the multi-search functionality, is used by the system to create arrays of objects which have underlying connections. These relationships are used to generate the default ZUI. The goal of exploiting the underlying relationships is to allow the user to quickly retrieve information based on whatever knowledge they have relating to the search, for example knowing the name of a person or place but not knowing the title of a document.
Interaction: concepts to match input/output devices. The ZUI is based around an interactive timeline which can be customized by the user. The ZUI allows the user to zoom in and out of documents; manipulate document positions and scales; retrieve key information relating to a document; and open original documents.
Section 2.3 discusses how the SIZL system was used to test the research hypothesis. It should be noted that the SIZL system is a prototype under development. It was designed to support the study of how advanced visual interfaces impact upon a user's ability to find individual items and identify patterns within subsets of data, and therefore it is the interactions enabled by SIZL that were being evaluated, not the usability of the software itself. An experimental methodology was adopted so that the SIZL system could be used to evaluate the use of human cognition in effective data management. The methodology progressed in four phases, including experiments and documentation, demonstration and dissemination. Very few previous studies of visualization methods have included adequate user evaluation, meaning it is difficult to make conclusions regarding their effectiveness and applicability. The use of an experimental methodology to conduct user evaluation is a significant contribution of this research. Perhaps one of the reasons behind this lack of user evaluation is the considerable number of challenges the task poses, some of which are discussed in the literature. One of the main complications in this kind of study is the use of custom-developed software. The software itself is not tested as part of the research; however, inevitably bugs and unfamiliarity with the system can have an impact upon participant performance and experience. Similarly, the unfamiliarity of a new system can create a bias for any existing systems that are included in the experiment. These issues were observed in the SIZL evaluation; however, their effects were alleviated through the use of two pilot experiments and the presence of the SIZL software developer to assist participants with any software-related problems during the tasks.
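Returning briefly to the zoom and timeline elements described at the start of this section, the following C# fragment is a minimal sketch of the idea: the camera moves only along the Z-axis, and the visible set of document blocks is determined jointly by the user-set timeline limits and the camera distance. The Document and ZoomTimelineView types, the clamping bounds and the visibility heuristic are assumptions made for the example rather than details of the SIZL code.

```csharp
// Illustrative sketch only; types and numbers are invented, not taken from SIZL.
using System;
using System.Collections.Generic;
using System.Linq;

public class Document
{
    public string Title = "";
    public DateTime Timestamp;   // time metadata places the block on the timeline
    public float Depth;          // block depth gives the 'at a glance' size cue
}

public class ZoomTimelineView
{
    public DateTime LowerLimit { get; set; }              // user-adjustable timeline bounds
    public DateTime UpperLimit { get; set; }
    public float CameraZ { get; private set; } = 10f;     // camera distance from the object plane

    // Zooming moves the camera along the Z-axis only; a larger distance brings
    // more of the timeline, and therefore more documents, into view.
    public void Zoom(float wheelDelta) => CameraZ = Math.Clamp(CameraZ - wheelDelta, 1f, 100f);

    public IEnumerable<Document> VisibleDocuments(IEnumerable<Document> all)
    {
        // Timeline limits cut the candidate set; camera distance caps how many blocks are drawn.
        var inRange = all.Where(d => d.Timestamp >= LowerLimit && d.Timestamp <= UpperLimit)
                         .OrderBy(d => d.Timestamp);
        int budget = (int)(CameraZ * 10);                  // crude proxy: further away, more blocks drawn
        return inRange.Take(budget);
    }
}
```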
Another criticism of visualization evaluations is the use of students, as they will not have the necessary experience and expertise to carry out the evaluation tasks asked of them, thus affecting their performance and quality of feedback. Eighteen design engineering students were used in this study. However, rather than tackle unfamiliar tasks, the students in this study were asked to use SIZL to answer questions about a Hollywood spy movie. The decision to use a popular culture reference meant that students were more engaged with the task and able to digest instructions more easily, regardless of whether they had seen the movie or not. Furthermore, there were advantages to using a student demographic: this age group is typically very familiar with visualization systems, in particular games technology, and as such they could use and learn the new software relatively quickly. According to the literature, a good evaluation of a visualization system will provide training in the new system, use appropriate tasks for testing that provide meaningful results for the kind of tasks the new system is designed for, and use appropriate measurements. Section 2.3 will discuss the evaluation process that was carried out for this research. In order to evaluate the effectiveness of exploiting the human cog in the system for effective data management, experiments were conducted with the SIZL system. Microsoft's 'File Manager' system—the predominant data management system used in industry today—was used so a comparison could be made between the speed and accuracy of human interaction against a baseline. The fictional story from the 2007 movie 'The Bourne Ultimatum' was used as a case study for the evaluation. Firstly, SIZL and File Manager were populated with identical sets of documents relating to The Bourne Ultimatum story, including newspaper articles, receipts and airline passenger manifests. Six questions were set relating to the movie—two 'easy', two 'medium' and two 'difficult'. Participants were asked three questions for SIZL and three questions for File Manager. Which question was asked of which system was alternated between tests. The difficult tasks were specifically designed to test users' ability to retrieve file relationships. Questions included: Easy: e.g. What weapon is used by the assassin who was not working for the CIA?
Medium: e.g. Which of his aliases did Bourne use to travel from Moscow to Paris? Difficult: e.g. Who gave the order to assassinate Simon Ross? Give your evidence. Following two pilot tests, 16 participants were invited to take part in the study. Participants were university students, some of whom had seen the movie and some who had not. In a short survey conducted before the test, all participants stated that they were confident with their IT skills. All spent at least an hour a day on a computer or online, with 7 participants stating that they spent over seven hours a day. This survey indicated that the participants were in general highly familiar with visualization systems. Firstly, participants were given a basic File Manager tutorial and a SIZL tutorial to familiarize themselves with the software. Next, participants each spent 20 min answering three questions on File Manager and 20 min answering three questions on SIZL. They answered an easy, medium and difficult question for each system. The number of correct documents found and the time it took them to answer questions were recorded. Participants were also encouraged to 'think aloud' and their thoughts were recorded throughout. A facilitator was always available to assist with use of the software. Participants were then asked for immediate feedback through a semi-structured questionnaire. They were asked about the ease of finding information and determining relationships between documents. Some were later invited back for a focus group where they were presented with the overall results. They were then asked to reflect on these results and discuss their views on the benefits and drawbacks of both File Manager and SIZL. For each experiment conducted, both quantitative and qualitative data were gathered: the measurement of task accuracy and time; the collection of user experiences from facilitator observations and the 'think aloud' exercise; and the results of the questionnaire participants were asked to complete at the end of their test. This allowed the research to explore both the effectiveness of the visualization methods utilized in SIZL and users' preferences for the system. This section presents the findings from the 16 experiments and the focus group that was conducted at a later date. Recordings of the time and accuracy to complete a set of tasks, the qualitative data from observations and the participant survey were analyzed to identify key findings. The focus group was used to validate these findings. During testing, facilitators recorded both the times in which participants answered the three questions for SIZL and the three questions for File Manager, and the accuracy of their answers.
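As a rough illustration of how the recorded times and accuracy scores could be tabulated before plotting, the short sketch below groups hypothetical trial records by system and question difficulty and reports the means. The TrialRecord type and its fields are invented for the example and are not taken from the study's materials.

```csharp
// Illustrative only: hypothetical records, not the study's actual data handling.
using System;
using System.Collections.Generic;
using System.Linq;

public record TrialRecord(string Tool, string Difficulty, double Seconds, int CorrectDocs);

public static class ResultsSummary
{
    public static void Print(IEnumerable<TrialRecord> trials)
    {
        var groups = trials
            .GroupBy(t => (t.Tool, t.Difficulty))
            .OrderBy(g => g.Key.Tool).ThenBy(g => g.Key.Difficulty);

        foreach (var g in groups)
            // Mean completion time and mean number of correct documents per condition.
            Console.WriteLine($"{g.Key.Tool,-12} {g.Key.Difficulty,-9} " +
                              $"mean time {g.Average(t => t.Seconds):F0}s  " +
                              $"mean correct {g.Average(t => t.CorrectDocs):F1}");
    }
}
```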
The results are displayed in Fig. 5. By plotting time and accuracy together, it can be seen that many participants were able to achieve the same accuracy in SIZL as File Manager, in significantly less time. This relationship was particularly clear for the difficult questions, which were specifically developed to evaluate participants' ability to identify relationships between documents. No correlation was found between participants' performance in File Manager and SIZL. During testing, facilitators recorded both their observations of participant behaviour and any comments made by participants during each session. These notes were then analyzed using NVivo software to identify the most beneficial and popular features of the SIZL system, as well as the most common issues and criticisms. Notes were coded when a particular feature of the SIZL system was recorded as being used to identify the correct answer to a question, or was commented upon positively by a participant. Notes were also coded when participants expressed difficulty or dislike for a particular aspect of the system. Firstly, considering the most useful features of the SIZL system, by far the most commonly used feature was the multi-search functionality. Participants normally began their search for data by entering a number of search terms into the system, and would often use trial and error to retrieve the correct documents. The second feature most commonly referred to in the notes was the 'lightbox', which participants used to examine promising documents in greater detail. Based on the testing notes, it would appear that the zoom and timeline functions were used less frequently in determining the correct answers to the questions. Focusing on participant comments, again the multi-search functionality received much praise. For example: 'The multi search was great, it was very useful for seeing documents.' The overall visualization method of the SIZL system also scored highly, with participants expressing a preference for the visual overview of the dataset that SIZL provides, and commenting on how much easier it was to visualize relationships between documents. The zoom and timeline features also received some positive commentary. 'SIZL provided a good way to visualize the overall dataset.' 'I found SIZL very effective in highlighting relationships - in File Manager I felt I had to hold a lot of info on relationships between documents in my head, and then forget. Visually seeing the relationships made it much easier to narrow down the docs to find what I was looking for.' Overall, comments on the SIZL system were highly positive. As predicted, the vast majority of negative comments concerned the SIZL software design and on-screen layout of information; aspects not being tested in this study. In particular, some participants expressed difficulty in identifying highlighted documents and difficulty moving documents around the screen with the keyboard. These problems could be alleviated with improvements to the software prototype, and are not directly linked to the developed visualization system. Although the design of the SIZL software was not being tested in this study, it inevitably had an impact upon participants' experiences and ability to retrieve the correct answers to the questions. The second most common negative comment relating to SIZL was unfamiliarity with the system. This is a common problem with the testing of any new visualization system, and was in part alleviated by providing each participant with a SIZL tutorial prior to testing and access to the system's developer during the experiments.
Another problem situation that occurred on several occasions was participants getting 'lost' in the system—going off on tangents and struggling to get back on the right track. As well as providing an insight into the most beneficial aspects of the SIZL system, the observation data also demonstrated the 'human cog' at work when participants were using SIZL. The facilitators regularly recorded use of reasoning and 'real world thinking' when participants were navigating the system for answers, and participants were often observed taking notes while problem-solving. One participant was also observed searching for words that were not included in the question, illustrating how the human mind plays a significant role in the search for relevant information when using the SIZL system. In summary, the key findings from participant observation were: the multi-search function was the most useful and most popular feature of the SIZL system; participants responded positively to the visualization provided by SIZL; and the SIZL system utilizes the 'human cog' in the system, with participants using the system to navigate their own problem-solving processes. Participants were also asked to complete a survey at the end of the test. This Likert scale survey asked participants about their experience of using SIZL and their views on its effectiveness as a visualization system. Survey results are presented, in descending order of positive response, in Fig. 7. Corresponding to the findings from participant observation, the question relating to the SIZL system's multi-search functionality received the most positive response. Participants also largely responded positively to the usability of the SIZL functions and visual layout of the system. Although unfamiliarity was a common complaint during participant testing, the survey results show that the majority of participants felt that the SIZL system was easy to learn, suggesting this problem would quickly dissipate with experience. Most of the 'negative' responses to the survey were in response to negatively keyed questions. However, a significant number of participants responded negatively to the statement 'I found I could quickly recover when errors occur in this system'. This could be linked to problems with the software design, as discussed in Section 3.2, but could also be linked to the issue of several participants getting 'lost' in their search for relevant documents. In summary, the key findings from the participant survey were: participants responded highly positively to the multi-search function in particular; the majority of participants expressed a preference for the visual layout of the SIZL system; and some participants struggled to recover when errors occurred in the system. Once all data had been gathered and analyzed, participants were invited to attend a focus group to discuss the results and reflect on their experience of the SIZL system. During the focus group, the participants were presented with the key findings from the research and asked to discuss a variety of questions and topics, including: Did you have a preference for either system? For each system, give the benefits and drawbacks. In terms of looking for information in large datasets, please comment on SIZL's multi-search/timeline/zoom features.
"Please comment on SIZL's ability to support the visualization of inter-document relationships.This process contributed to the validation of the research results, by reaffirming the key findings from research and providing further insight into the reasons behind these results.The key outcomes from the focus group were:In correlation with the qualitative data findings, the multi-search function was the most popular feature of the SIZL system.The timeline and zoom features, on the other hand, were not used as readily with some participants unaware of these features.Also mirroring the key findings from the tests, errors in the SIZL software were highlighted as a key issue.Although the usability of the software design was not tested as part of the research, it inevitably had an impact upon the results.For example, it is possible that the lack of interest in the timeline and zoom features could be attributed to the design of the SIZL software.This study set out to test the hypothesis that human beings find the recall and recognition of 2D and 3D shapes and environments so intuitive and effortless that any system for the effective management and use of data should make use of this fact.This was done through the evaluation of a visualization system, SIZL, which enables users to discover relationships and patterns within large datasets.A small, preliminary evaluation was conducted, and findings suggest that the SIZL system, using a zooming timeline and multi-search visualization methods, does indeed enable users to achieve similar accuracy in less time when compared to traditional text-based searching.Also, participants expressed a preference for SIZL when asked to identify relationships between documents.These findings would suggest that people do find certain data mining tasks more intuitive when searching within a 2.5D environment.The findings from the qualitative data suggest that the multi-search feature in particular has high potential to improve users’ data mining experience.The findings also highlighted some key areas for improvement—better usability of the zooming timeline functionality and an improved navigation experience for users who make wrong decisions and get ‘lost’ in the system.Improvements to the usability of the SIZL software would also enable more accurate testing of the visualization method.One of the key contributions of this study was the methodology—the use of user evaluation experiments to explore the effectiveness of a new data visualization system.It has been established that there is a lack of evidence of user evaluation in the visualization literature, so it is anticipated that this method may be refined and replicated to build upon the preliminary findings of this study, and may also act as a case study for future evaluations of visualization systems.This paper has presented the results from the preliminary evaluation of a new data visualization system.During this phase the testing process was improved through the use of two pilot tests, however there were still a number of limitations to the research.Some can be attributed to the early stage of development of this system, others are common challenges faced when attempting to evaluate any visualization system.Early evaluation of the system involved only 16 participants.Whilst this is considered an acceptable number for the qualitative elements of the methodology, a sample size of 50–100 participants is more appropriate for identifying statistically significant findings.Therefore, although the statistical analysis of 
the 16 experiments provided interesting and significant insights into the merits of the SIZL system, further evaluation with a larger number of participants is required to provide more conclusive results.Within the constraints of the project, the SIZL software was designed to be easy to use and accessible, with a view to ensuring the SIZL interface had as little negative impact upon participants’ experience as possible."This is because it was the visualization method that was being evaluated, not the system's interface.However, it was very difficult to completely remove the influence of software design from the evaluation.Qualitative data collected suggested that unfamiliarity with the SIZL software had an impact on participants’ ability to complete tasks and fully experience the visualization system—for example, some participants reported not noticing the timeline functionality, or finding the zoom function difficult to use, even having reacted positively to the concept.Such feedback will enable the SIZL software to be improved for future evaluation.As global partnerships and collaborative workspaces, not to mention the ‘big data’ phenomenon of increasingly digitized businesses, lead to increasingly large and complex datasets, traditional text-based search systems are inadequate for effective information management.The functionality provided by the SIZL system has the potential to improve the data mining experience in a wide range of industries with a need to manage large numbers of documents, understand the timing of these documents and the relationships between them.There are a great variety of industry sectors that could benefit from an improved method of mining relevant data.Three key sectors identified for this research were Defence, IT and Manufacturing."The UK's defence export market exceeded £7bn for the first time in 2009 and today accounts for nearly 20% of the global market.Similarly manufacturing is an important part of the UK economy.It accounts for 12.8% of UK gross domestic product and 55% of total exports.The software industry although smaller by comparison is pivotal to the products of many other enterprises.The proposed SIZL system could support business needs of all three sectors.This is particularly relevant in the defence industries where security is paramount, and information has to be stored and retrieved—in near real time—safely from increasingly diverse and distributed sources.One of the key barriers to implementing a new system in any workplace is ‘familiarity bias’: even if the new system eventually provides an improved experience, the effort and time required to become familiarized with the new system means that people tend to express a preference for the old, familiar system.This was also an issue during the evaluation of SIZL, and, although the software design was not being tested as part of the study, it inevitably had an effect upon participants’ experience.The problem of familiarity bias could be alleviated by combining the implementation of the new visualization with a high standard of user-friendly interface design.This paper has presented the findings from the early phase of a study of exploiting the human ‘cog’ within a data visualization system; testing the hypothesis that human beings find the recall and recognition of 2D and 3D shapes and environments so intuitive and effortless that any system for the effective management and use of data should make use of this fact.The paper also presented the experimental methodology that was used to evaluate the 
functionalities of the SIZL system: data visualization and searching software that was developed specifically for this study.Although further evaluation is required to provide conclusive results, this first phase has indicated that the SIZL system does help users search for and identify relationships between documents in large datasets, when compared to a traditional text-based system.The key findings from this preliminary study can be summarized as follows:Participants were able to achieve the same accuracy using SIZL, in less time, when compared to results for File Manager.The multi-search functionality was the most commonly used and most popular feature of the SIZL system during testing.Participants stated a preference for the overall visualization of the SIZL system.The zoom and timeline functionalities were not used as often as expected.Some participants experienced problems of getting ‘lost’ in the SIZL system.The next stage in this research is a refinement of the evaluation process and further testing with improved software and a larger pool of participants to provide in-depth and reliable insight into the benefits of using a data visualization system based on the SIZL model, as well as a greater understanding of the potential applications for a system that utilizes the human mind to provide a richer user experience.
Recent years have seen a huge increase in the digital data generated by companies, projects and individuals. This has led to significant challenges in visualizing and using large, diverse collections of digital information. Indeed the speed and accuracy with which these large datasets can be effectively mined for information that is relevant and valuable can have a significant effect on company performance. This research investigates the feasibility of using visual representations for the searching and browsing of large, complex, multimedia data sets. This paper introduces the SIZL (Searching for Information in a Zoom Landscape) system, which was developed to enable the authors to effectively test whether a 2.5D graphical representation of a multimedia data landscape produces quantifiable improvements in a user's ability to assess its contents. The usability of this visualization system was analyzed using experiments and a combination of quantitative and qualitative data collection methods. The paper presents these results and discusses potential industrial applications as well as future work that will improve the SIZL data visualization method.
31,626
Preventing childhood scalds within the home: Overview of systematic reviews and a systematic review of primary studies
Children are at particular risk of thermal injuries.Globally, thermal injuries are the 11th leading cause of death between the ages of 1 and 9 years and the fifth most common cause of non-fatal childhood injuries .The majority of thermal injuries in the under-fives are scalds .They are important as they can result in long term disability, have lasting psychological consequences and place a large burden on health care resources, with an estimated 19 million disability-adjusted life years lost each year .The treatment of scalds is resource intensive.In the USA between 2003 and 2012, the average cost per hospital stay for scald injuries in the under-fives was between $40,000 and $50,000 .The total cost of treating hot water tap scald injuries to children and adults in England and Wales in 2009 was estimated at £61 million .Most scalds in the under-fives occur at home .They are most commonly caused by hot liquids from cups or mugs, baths and kettles .Bath water scalds are more likely to involve a greater body surface area especially in infants and toddlers and are more likely to undergo admission to hospital, transfer to specialist hospital or burns unit .There are a number of systematic reviews that have synthesised the evidence on scald prevention interventions.However, most of them reviewed interventions to prevent a range of childhood injuries including scalds, some do not report conclusions specific to scald prevention and the remainder report conflicting conclusions .One review focussing on interventions specific to reducing thermal injuries in children concluded that there was a paucity of research studies to form an evidence base on the effectiveness of community-based thermal injury prevention programmes.A meta-analysis for which the searches were undertaken in 2009 found home safety education, including the provision of safety equipment, was effective in increasing the proportion of families with a safe hot tap water temperature, but there was a lack of evidence that home safety interventions reduced thermal injury rates or helped families keep hot drinks out of the reach of children .There is therefore a need to consolidate evidence across existing reviews and update the evidence with more recently published studies to inform policy, practice, and the design and implementation of scald prevention.Overviews that synthesise all available evidence on a topic are more accessible to decision makers than multiple systematic reviews and can avoid uncertainty created by conflicting conclusions from different reviews, which may vary in scope and quality .Overviews are useful where, as is the case for programmes to prevent scalds, there are multiple interventions for the same condition or problem reported in separate systematic reviews .This paper presents the findings from an overview of reviews of childhood scald prevention interventions and a systematic review of primary studies to enable the most up-to-date information on scalds prevention interventions to be evaluated.We searched Cochrane Central Register of Controlled Trials, Cochrane database of systematic reviews, MEDLINE, Embase, CINAHL, ASSIA, PsycINFO and Web of Science from inception to October 2012.We also hand-searched the journal Injury Prevention, abstracts of World Conferences on Injury Prevention and Control, reference lists of included reviews and primary studies, and a range of websites and trial registers for potentially relevant studies.No language limitation was applied.We included systematic reviews, meta-analyses, 
randomised controlled trials, non-randomised controlled trials, controlled before-after studies and controlled observational studies targeting children aged 0–19 and their families to prevent unintentional scalds.The outcomes of interest were unintentional scalds, hot tap water temperature, use of thermometers to test water temperature, lowering boiler thermostat settings, use of devices to limit hot tap water temperature, keeping hot drinks and food out of reach, and kitchen and cooking practices.Potential eligible primary studies were identified from included systematic reviews by scanning references and further eligible primary studies were identified from additional literature searches of electronic databases and other sources.Titles and abstracts of studies were screened for inclusion by two reviewers.Where there was uncertainty about inclusion from the title or abstract the full text paper was obtained.Disagreements between reviewers were resolved by consensus-forming discussions and referral to a third reviewer if necessary.We assessed the risk of bias in included systematic reviews and meta-analyses using the Overview Quality Assessment Questionnaire .The risk of bias of randomised controlled trials, non-randomised controlled trials and controlled before-after studies was assessed with respect to random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting and other bias.The risk of bias in cohort and case-control studies was assessed using the Newcastle–Ottawa scale .Data on study design, characteristics of participants, intervention, and outcomes were extracted using separate standardised data extraction forms for reviews and primary studies.Quality assessment and data extraction were conducted by two independent reviewers, with disagreements being resolved by consensus forming discussions and referring to a third reviewer if necessary.In view of the clinical heterogeneity between studies in terms of design, population, intervention and outcomes, data were synthesised narratively by types of outcomes including outcomes related to safe hot water temperature, safe handling of hot food and drinks such as keeping hot drinks and food out of reach of children, kitchen and cooking safety practices such as using cooker guards or keeping children out of kitchen and other outcomes related to scalds that could not be classified specifically.Fig. 
1 shows the process of identification and selection of studies. Four meta-analyses, 10 systematic reviews and 39 primary studies were included in the overview. Of these primary studies, 34 were identified from published systematic reviews and meta-analyses and five were identified from the additional literature search. Tables of excluded studies are available from the authors on request. Characteristics of included reviews are shown in Table 2. One review focused on community-based programmes to prevent scalds, while the remainder covered a range of injury mechanisms including but not specific to scalds. Only one review drew conclusions specific to scalds prevention interventions. Two meta-analyses combined effect sizes from studies reporting safe hot tap water temperature and one combined effect sizes from studies reporting keeping hot food and drinks out of reach. Four systematic reviews narratively synthesised the evidence on the effect of interventions on scald injuries and three on safe hot water temperature. Seven systematic reviews reviewed the effectiveness of interventions on prevention of child injuries including burns and scalds, but did not make conclusions specific to scalds prevention. The 39 eligible primary studies included 26 RCTs, 3 NRCTs, 7 CBAs, 2 cohort studies and 1 case-control study. The characteristics of included primary studies are shown in Table 3. Most of the included studies employed multifaceted interventions including home safety inspections, education or counselling, provision of educational materials and safety devices. Included studies less commonly reported multifaceted home visiting programmes aimed at improving a range of child and maternal health outcomes, community multimedia campaigns, and scald prevention education delivered through lectures or workshops, in clinical consultations, via specially designed computer programmes or other online educational material. Assessment of risk of bias is shown in Table 2 for reviews and Table 3 for primary studies. For reviews, OQAQ scores ranged from 1 to 7. For primary studies, 12 of the 26 RCTs had adequate allocation concealment, 10 had blinded outcome assessment and 14 followed up at least 80% of participants in each group. Of the nine NRCTs and CBAs, none had blinded outcome assessment, two followed up at least 80% of participants in each group and two had a balanced distribution of confounders between treatment groups. Findings from included reviews are shown in Table 2 and from primary studies in Table 3. Six reviews reported interventions to prevent scalds from two primary studies. No meta-analyses reported the effect of interventions on the incidence of scalds. The first study, an RCT, reported significantly fewer self-reported scald injuries two years after a school-based education programme in the intervention group than the control group. The second study, a CBA, found a reduction in the number of scalds, particularly scalds from hot tap water and from hot cooking liquids being pulled from cooker tops, in the intervention areas over a 12-year period, but did not present similar data for the control area or the statistical significance of these findings. Fourteen reviews reported the effect of interventions on safe hot tap water temperature from 26 primary studies, and three further primary studies reporting safe hot tap water temperature were identified from the additional literature search. Two meta-analyses combined effect sizes for having a safe hot tap water temperature, and both found a significant effect favouring the intervention group, with pooled odds ratios of 2.32 and 1.41.
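For readers unfamiliar with how such pooled estimates are formed, the conventional inverse-variance (fixed-effect) pooling of study-level odds ratios is shown below. The cited meta-analyses are not described here in enough detail to say which weighting scheme they used, so this is given only as the standard textbook form.

```latex
% Standard inverse-variance pooling of k study-level odds ratios (fixed-effect form);
% shown for illustration only, since the weighting used by the cited meta-analyses may differ.
\[
  \widehat{\mathrm{OR}}_{\text{pooled}}
    = \exp\!\left( \frac{\sum_{i=1}^{k} w_i \,\ln \mathrm{OR}_i}{\sum_{i=1}^{k} w_i} \right),
  \qquad
  w_i = \frac{1}{\operatorname{Var}\!\left(\ln \mathrm{OR}_i\right)}
      = \left( \tfrac{1}{a_i} + \tfrac{1}{b_i} + \tfrac{1}{c_i} + \tfrac{1}{d_i} \right)^{-1}
\]
```

Here a_i, b_i, c_i and d_i denote the cell counts of the i-th study's two-by-two table of intervention/control against safe/unsafe hot tap water temperature.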
Three systematic reviews concluded there was a positive effect of interventions on safe hot water temperature from a narrative synthesis of the evidence. Eighteen of the 29 studies clearly defined safe hot tap water temperature: less than or equal to 46 °C, less than 49 °C, less than or equal to 52 °C, less than or equal to 54 °C, or less than or equal to 60 °C. Eleven studies did not define safe hot tap water temperature. Eleven studies reported significant effects favouring the intervention group for one or more outcomes related to safe hot tap water temperature, including families having a safe hot water temperature, checking hot water temperature, and using engineering equipment to control hot water temperature. These included nine RCTs, one CBA and one cohort study. Six studies reported significantly more families in the intervention than control group had a safe hot tap water temperature. Five studies reported significantly more families in the intervention than control group checked or tested their hot tap water temperature, including one RCT specifying the use of water temperature cards and another the use of thermometers. A cohort study found significantly more families exposed to the intervention lowered their hot water temperature than those not exposed to the intervention. One RCT found significantly more families in the intervention than control group used spout covers for bath taps. However, one CBA evaluating home safety checks, education and provision of bath water thermometers found significantly fewer families in the intervention group had a hot tap water temperature less than or equal to 52 °C than in the control group. Most primary studies reporting significant effects on outcomes related to safe hot tap water temperature employed multifaceted interventions. Three RCTs and one CBA provided safety education, a home safety assessment and safety equipment. Two RCTs provided safety education and thermometers for checking water temperature. One RCT provided education and thermostatic mixing valves fitted by qualified plumbers. Two RCTs delivered educational lectures. One RCT compared education plus supplying thermometers to supplying thermometers alone. One cohort study compared families exposed to a multi-media scald prevention campaign with unexposed families. Eighteen primary studies did not find a significant effect of interventions on outcomes related to safe hot tap water temperature, including families having a safe hot water temperature, checking hot water temperature and using engineering equipment to control hot water temperature. These included 11 RCTs, two NRCTs, three CBAs, one cohort study and one case-control study. These studies evaluated integrated or individual interventions including home visits, home safety checks, counselling, safety education and offering safety devices. Three systematic reviews and one meta-analysis looked into the effect of interventions on safe handling of hot drinks and food from seven primary studies. Two more primary studies were identified through the additional literature search. The meta-analysis estimated the pooled odds ratio for the effect of home safety education on keeping hot food and drinks out of reach; it failed to find a significant effect of the intervention. Of the nine studies, one RCT evaluated the effectiveness of education plus home safety assessments. It found that significantly more families in the intervention group tested the temperature of food prepared in a
microwave oven than the control families.The remaining eight studies evaluating a range of interventions, including home safety education, tailored safety advice, home safety assessments, provision of discounted or free home safety equipment and exposure to Safe Kids Week champion, found no significant differences between the intervention and control groups.These included three RCTs , three NRCTs and one CBA and one cohort study .Nine reviews reported the effectiveness of interventions on kitchen and cooking safety practices from 6 primary studies .No meta-analyses reported pooled odds ratios related to kitchen and cooking practices.Two primary studies investigating interventions on kitchen and cooking safety practices were identified through additional literature search .Two of the eight primary studies found significant effect of interventions.One RCT evaluating home safety education and home safety assessments reported that families in the intervention group were significantly more likely to have “childproofed” electrical heating devices in the kitchen .One NRCT evaluating home safety education, home safety assessments and burn and scald prevention workshops found that the intervention group were significantly more likely than the control group to have a “child-protected” cooker, and to have removed objects that a child could use to climb on to reach the sink .However, the other six studies reporting on a variety of interventions including home safety education, home safety assessments, media campaigns, and free home safety equipment did not find any significant differences between the intervention and control groups in promoting kitchen and cooking safety practices.One RCT evaluating the effectiveness of a school-based injury prevention programme found no significant differences between the practices of children in the intervention and control groups when cooking without an adult present.Another RCT evaluating home safety education, home safety assessments and discount vouchers for safety equipment found no significant effect on keeping heating devices out of reach of children or for the use of stove guards.An RCT assessing the effectiveness of an emergency department based home safety intervention found no significant effect on cooking on the back burners of cookers or turning pan handles towards the back of the cooker.An NRCT evaluating providing tailored home safety education found no significant effect on keeping children away from the cooker or oven or on turning pan handles away from the edge of the cooker.One cohort study evaluating Safe Kids Week 2001 found no significant differences between families who had been exposed to a media campaign on scald and burn prevention and controls for kitchen and cooking safety practices including cooking on the back burners of the cooker, keeping children out of the kitchen when cooking, turning pot handles to the back of the cooker and removing dangling cords of heating devices.A case-control study investigating hazards in the homes of children who had presented with injuries from falls, burns, scalds, ingestions or choking found that no significant differences between cases and controls for having a cooker guard or not having dangling cords of heating devices."Eight reviews reported other scald-related outcomes such as burn safety scores which comprised a range of burn prevention behaviours such as pot handles left facing the edge of stove, not drinking tea/coffee or eating hot food when a child is on someone's lap, putting cool water in 
first when running a bath, or in some studies, undefined scald-related safety practices and undefined use of safety devices.No meta-analyses reported pooled odds ratios for any other scald-related outcomes.Four primary studies reported other scald-related outcomes.Two RCTs found significant effects on intervention groups from home safety education, home safety assessments and free home safety equipment on the burn safety scores than the control groups .One RCT found significantly more families in the intervention group made their homes safer after a television campaign, home safety advice, a home safety assessment check and advice on welfare benefits available to purchase safety equipment and local availability of equipment .One CBA found no significant effect of a multi-faceted campaign aimed at reducing the occurrence of scalds in children aged 0–4 years on scald prevention behaviours .This overview synthesised the largest number of primary studies evaluating child scald prevention interventions to date.Eligible studies were identified from comprehensive searches of published reviews, electronic databases, conference abstracts and other sources minimising the potential for publication and reporting bias.Rigorous procedures were used for study selection, quality assessment and data extraction.Our overview incorporated evidence from a spectrum of study designs including RCTs, NRCTs, CBAs, cohort studies and a case-control study to ensure maximum ascertainment of evidence in the field.There was little evidence of the effect of scald prevention interventions on the incidence of scalds.We were able to find only two studies reporting scald occurrence, one of which reported a significant reduction in the incidence of scalds following a primary school-based injury prevention programme targeting school children and parents .The second reported a reduction in the incidence of scalds following a community burn prevention programme comprising home safety education, home safety assessments, the promotion and installation of cooker guards and lowering tap water thermostat settings .However, the statistical significance of the reduction in scalds was not reported.There was more evidence that home safety interventions are effective in promoting safe hot tap water temperature with two meta-analyses and 11 primary studies reporting significant effects favouring the intervention group.Most studies with significant effects provided home safety education, home safety assessments and discounted or free safety equipment including thermometers and thermostatic mixing valves.We did not find any consistent evidence that home safety interventions were effective in promoting the safe handling of hot food or drinks, or kitchen and cooking safety practices, but the number of studies reporting these outcomes was small.In addition, there was wide variation and a lack of standardisation in the tools used to measure these outcomes, which hampered evidence synthesis in general and meta-analysis in particular.There are several limitations of the review.First, there was considerable heterogeneity in the content of interventions of included studies and most studies used multifaceted interventions, hence it was not possible to attribute treatment effects to specific components of interventions.Care needs to be taken in interpreting the effects of interventions on hot tap water temperature due to the varying definitions of a “safe” temperature used by different studies and some studies not providing the definition they used.In 
addition, the temperature defined as “safe” has reduced over time, with more recent studies using a lower temperature than older studies.Consequently it is possible that the interventions in our review may not reduce hot tap water temperatures to levels that would now be considered sufficient to substantially reduce the risk of scalds.There was also considerable variation in study populations across included studies, making it difficult to ascertain if interventions would benefit specific groups of children or families to a greater degree.The vast majority of included studies were undertaken in high income countries, limiting the generalizability of our findings to low and middle income countries.The risk of bias varied across studies, but up to half of the RCTs had adequate allocation concealment, blinding of outcome assessment and follow up of at least 80% of participants in each group.For the NRCTs and CBAs, none had blinded outcome assessment, and only one in five had follow up of at least 80% of participants in each group or balance of confounding factors between groups.The new evidence we found was consistent with the findings from the two published meta-analyses and from the published narrative systematic reviews which found home safety interventions were effective in promoting a safe hot tap water temperature.Our findings were also consistent with the previous meta-analysis and many systematic reviews that failed to find evidence that home safety interventions improved other scald prevention practices or reduced the incidence of scalds.Our finding that most studies which were effective in promoting a safe hot tap water temperature included home safety education, home safety assessments and free or discounted safety equipment differed from that of the review by Pearson and colleagues .This review focussed on home safety assessments, with or without the provision of safety equipment.Since publication of that review, two new studies have demonstrated significant effects favouring the intervention group , both of which provided free home safety equipment.In addition, our review included a wider range of interventions and these differences may partly account for the apparent inconsistency in our findings.Although this review focussed on interventions that could be delivered in health and social care settings, other engineering or legislative approaches may be beneficial in reducing scalds.A recent trial evaluating thermostatic control of social housing estate boiler houses with daily sterilisation demonstrated significant reductions in hot tap water temperature .Legislative changes such as those requiring new boiler thermostats to be set at lower temperatures or requiring thermostatic mixing valves in domestic settings are likely to be cost-effective.An economic analysis of one of the trials included in this overview found home safety education plus fitting of thermostatic mixing valves as part of bathroom refurbishment of social housing stock saved £1.41 for every £1 spent .A recent Canadian study evaluating legislation to lower thermostat settings on domestic hot water heaters accompanied by yearly educational information provided to utility company customers estimated cost savings of C$531 per scald averted .It is therefore important that scald prevention strategies encompass other engineering and legislative approaches as well as educational ones.The paucity of evidence we found highlights the need for research to investigate the effect of interventions on reducing the incidence of 
childhood scalds in the home, the safe handling of food and drinks, and safe kitchen and cooking practices.Researchers should use existing validated tools to measure these outcomes wherever possible to facilitate evidence synthesis and meta-analysis.In terms of helping households to have a “safe” hot tap water temperature, further analyses are required to disentangle the effects of providing home safety education, thermometers, home safety assessments and thermostatic mixing valves.Network meta-analysis has previously been used to good effect in synthesising the evidence for smoke alarms and is likely to be helpful in this situation.Providers of child health and social care should provide education to reduce tap water scalds, along with thermometers or thermostatic mixing valves.Public health policy-makers and practitioners should develop and implement scald prevention strategies that encompass legislative, engineering and educational approaches to reduce scalds risk.
Objective To synthesise and evaluate the evidence of the effectiveness of interventions to prevent scalds in children. Methods An overview of systematic reviews (SR) and an SR of primary studies were performed evaluating interventions to prevent scalds in children. A comprehensive literature search was conducted covering various resources up to October 2012. Experimental and controlled observational studies reporting scald injuries, safety practices and safety equipment use were included. Results Fourteen systematic reviews and 39 primary studies were included. There is little evidence that interventions are effective in reducing the incidence of scalds in children. More evidence was found that interventions are effective in promoting safe hot tap water temperature, especially when home safety education, home safety checks and discounted or free safety equipment including thermometers and thermostatic mixing valves were provided. No consistent evidence was found for the effectiveness of interventions on the safe handling of hot food or drinks or on improving kitchen safety practices. Conclusion Education and home safety checks, along with thermometers or thermostatic mixing valves, should be promoted to reduce tap water scalds. Further research is needed to evaluate the effectiveness of interventions on scald injuries and to disentangle the effects of multifaceted interventions on scald injuries and safety practices.
31,627
High-temperature deformation mechanisms in a polycrystalline nickel-base superalloy studied by neutron diffraction and electron microscopy
Nickel-base superalloys are a structural material for applications that demand high strength at elevated temperatures as well as hot corrosion resistance .The fundamental basis for their high-temperature strength is that they contain a significant volume fraction of γ′ precipitates, whose ordered L12 structure provides precipitation strengthening, particularly at high temperature.One of the main drivers for developing advanced nickel-base superalloys has been the desire to increase the turbine entry temperature, since the performance and efficiency of the engine is greatly improved if the TET can be raised .For high-pressure compressor and turbine discs, where polycrystalline nickel-base superalloys are applied because of the required balance of high-temperature strength/creep resistance and fatigue properties, an important development has been the rising volume fraction of γ′.As a consequence, conventional γ′ strengthened polycrystalline nickel-base superalloys, such as Waspaloy, are now being replaced in the most demanding parts of an aero engine by more advanced alloys with ∼50 vol.% γ′ .The current understanding and theories of precipitation strengthening in polycrystalline alloys were developed for materials with low precipitate volume fractions, where negligible interaction between the precipitates is expected.In the case of large γ′ volume fractions, the precipitates will constrain the γ matrix, which needs to be considered.In addition, a great deal of work has been performed on understanding the deformation mechanisms in single crystal nickel-based superalloys, which generally contain very high γ′ volume fractions.In contrast, studies on deformation mechanisms in polycrystalline nickel-base superalloys with a balanced volume fraction of γ and γ′ are rare .However, in order to optimize their microstructure for best performance and for providing guidance when developing new alloys, it is imperative to improve the micromechanical understanding of the interaction between γ and γ′ for different γ′ particle sizes during mechanical loading and at temperature.Nickel-base superalloys for disc application in aero engines generally possess a complex γ′ size distribution within the face-centred cubic γ matrix.Depending on whether the material was heat treated above or below the γ′ solvus, either a bimodal or a trimodal γ′ size distribution is observed .In the latter case, non-coherent primary γ′ pins the grain boundaries during the sub-solvus solution heat treatment and effectively reduces the level of intragranular γ′.In the case of intragranular γ′, a cube–cube crystallographic relationship exists with the γ matrix and, because of their similar lattice parameter, the interface is coherent.Typically, the diameter of primary γ′ precipitates is in the range of 1–3 μm, whereas it is between 50 and 500 nm for secondary γ′ and 5–30 nm for tertiary γ′ .A number of potential deformation mechanisms in γ′ strengthened nickel-base superalloys have been identified, including weakly and strongly coupled dislocations cutting γ′ , cross-slip of superdislocations in γ′ forming a Kear–Wilsdorf lock and dissociation of dislocations into partials , as well as Orowan looping of dislocations , dislocation climb and microtwinning .The activity of the many possible deformation mechanisms in the material during deformation is often closely related to the size distribution of the γ′ precipitates.For instance, if a single dislocation were to move through an ordered γ′ precipitate, it would leave an anti-phase 
boundary behind, increasing the energy of the crystal.Consequently, dislocations that cut through γ′ tend to move in weakly or strongly coupled pairs with the second dislocation cancelling the APB .The difference between weakly and strongly coupled dislocations is related to the distance between the two dislocations and whether that distance is larger or smaller than the width of a precipitate .Furthermore, there are a number of ways in which the dislocations can dissociate into partials, which can be favourable because the dislocation elastic energy is proportional to b2, where b is the Burgers vector.The bulk of deformation studies have been obtained by post-mortem analysis using, for example, transmission electron microscopy.Such studies alone often make it difficult to pinpoint the onset of a certain deformation mechanism during plastic deformation and the relative importance of the different mechanisms, especially in polycrystalline materials.For this reason, in situ studies have become more common, using, for example, highly penetrating neutron or high-energy synchrotron X-ray diffraction to measure the evolution of intergranular strains during plastic deformation .The elastic lattice strain evolution recorded for a particular material during mechanical loading can be understood as a fingerprint of the dominant deformation mechanisms.In order to use such fingerprints for identifying deformation modes, crystal plasticity modelling is required.The most commonly used plasticity model for such an analysis is the elasto-plastic self-consistent model, which uses the Eshelby–Hill formulation .For the interpretation of two phase materials, such as γ′ strengthened nickel-base superalloys, a two-site EPSC model was developed by Daymond et al. .This adaptation of the model uses two inclusions inside an infinite medium.The two inclusions have, for the case of nickel-base superalloys, a cube–cube orientation relationship, and the volume fraction is determined by the relative sizes of the inclusions.Daymond et al. used in situ neutron diffraction to study deformation mechanisms in Udimet 720LI, with a trimodal γ′ size distribution, at various temperatures between 20 °C and 750 °C.Using the two-site EPSC modelling approach, it was demonstrated that a change in deformation mechanism occurred with increasing test temperature.In order to obtain a good fit between predicted and measured intergranular strain evolution, as well as predicting the measured flow curve accurately, the addition of cube slip was needed above 400 °C .The deformation mechanisms in precipitation strengthened materials are known to be strongly dependent on the precipitate size.However, the microstructures that Daymond et al. 
tested were trimodal, and therefore the responses of the γ′ diffraction peaks consisted of diffraction signal from three different sizes of precipitate, which are expected to behave differently. For the present work, and to circumvent this issue, material with three model microstructures with unimodal γ′ size distributions was produced. These were deformed at 750 °C, using neutron diffraction to record, in situ, the elastic lattice strain evolution. These data were then used in conjunction with EPSC modelling to study the effect of γ′ size on the deformation mechanisms of this superalloy. Note that the same methodology was applied previously to study deformation mechanisms at room temperature. The temperature region of 750 °C is of particular interest, because it is considered to be near the maximum temperature that an advanced nickel-base superalloy for disc application can sustain for an extended period of time. The material studied was RR1000, a nickel-base superalloy developed by Rolls-Royce plc and used in disc components in the high-pressure compressor and turbine of aero engines; see Table 1 for the composition. Compared with more conventional nickel-base superalloys such as Waspaloy and Inconel 718, RR1000 has a higher volume fraction of γ′. As mentioned above, disc alloys exhibit a complex bimodal or trimodal γ′ size distribution. This is related to the continuous nucleation and growth of γ′ when the material is cooled from the solution heat treatment that is carried out either slightly below or above the γ′-solvus. For this reason, developing an improved understanding of the effect of γ′ particle size on deformation mechanisms in such alloys has been hampered. To overcome this problem, model microstructures with unimodal γ′ size distributions have been developed that exhibit γ′ precipitate sizes of 90, 130 and 230 nm. This was achieved by first heat treating the material above the γ′-solvus for 1 h at 1180 °C to dissolve all γ′, followed by oil quenching, which produced a very fine bimodal γ′ distribution with sizes of ∼60–70 and 10 nm. A second heat treatment at 800, 925 or 1050 °C was then applied to allow growth of the precipitates to 90, 130 or 230 nm, respectively. In each case, the material was cooled very slowly in order to allow the precipitates present during the second heat treatment to grow without nucleating new γ′. In addition, such slow cooling rates should ensure that the chemistry of γ and γ′ stays comparable for the three microstructures. Previous work had demonstrated effects of cooling rates on lattice mismatch and γ′ chemistry because of differing diffusion rates of the various γ′ stabilizers through the γ matrix. Here, X-ray diffraction on electrochemically extracted γ′ was used to confirm that, despite their different size, the chemistry of the γ′ particles was similar. These experiments revealed variations of less than 7.5 × 10−4 Å in the γ′ lattice parameter for the three different particle sizes. The three different model microstructures were deformed in uniaxial tension while monitored in situ, using neutron diffraction on the ENGIN-X beam line at the UK neutron spallation source ISIS. It is important to note that the intensity of the γ′ superlattice reflections is significantly stronger when using neutron compared with X-ray diffraction. In the diffraction spectrum of a disordered fcc structure, the superlattice-type reflections are extinct because the interleaving half planes of atoms scatter out of phase, resulting in destructive interference. However, with an L12 structure, the atoms in the adjacent planes have different
scattering lengths, and therefore these superlattice reflections are not completely extinguished. The X-ray scattering lengths of nickel and titanium are relatively similar and determined by the atomic number. However, neutron scattering is a nuclear interaction, and neighbouring elements in the periodic table can have substantially different scattering characteristics. Most importantly for the present case, titanium displays a negative neutron scattering length, while nickel and the other alloying elements have positive neutron scattering lengths. Since titanium partitions to γ′, the tendency for extinction of the superlattice reflections is relatively weak when using neutron diffraction. Despite this, a relatively long counting time of ∼20 min is still required on ENGIN-X to obtain sufficient signal-to-noise ratios for the various superlattice reflections. The tensile samples were of cylindrical shape with a gauge length of 50 mm and 6 mm diameter. The high-temperature loading experiments were carried out on an INSTRON 100 kN tensile rig. Macroscopic strain was monitored on the samples using a dynamic high-temperature extensometer clip gauge, while the diffraction spectra of the loading and transverse directions were recorded using the two detector banks, as indicated in Fig. 2. The experimental setup is explained in more detail elsewhere. The diffracting gauge volume was defined using slits that were 4 mm high and 8 mm wide on the incident side and 4 mm collimators on the diffracting side. The tensile samples were heated to 750 °C using an optical furnace, which uses a thermocouple spot-welded to the sample to control the temperature. The two-detector-bank setup at ENGIN-X enables the simultaneous measurement of the elastic lattice strains in the loading and transverse directions. Because of the comparatively long counting times needed to acquire good enough data to enable the deconvolution of the γ and γ′ spectra, a continuous strain rate could not be applied during tensile loading. Therefore, the samples were loaded in steps and held at each load for 20 min, while the diffraction spectra were recorded. The experiment was carried out in load control to avoid any stress relaxation during data acquisition. Consequently, some creep strain was measured during the experiment, which increased with increasing load. The amount of strain accumulated during each holding period can be deduced by comparing the strain between each data point in the stress–strain curves in Fig.
3. The frequency of measurement points was increased by decreasing the load steps around the yield point, which is the area of most interest. The behaviour of different grain families was investigated using single peak analysis. Owing to the similar lattice parameters of γ and γ′, the only distinguishable difference in the diffraction spectra comes from the ordered nature of the L12 structure. Hence, the γ′ phase has additional reflections compared with the γ phase, which does not have any peaks that can be fitted independently. The individual phase responses can only be "deconvoluted" from the overlapping fundamental reflections using the information of the corresponding γ′ superlattice peaks. Exact knowledge of the superlattice reflections makes it possible to fix the γ′ peak position and width when using a double-peak fitting routine to isolate the corresponding responses of the γ matrix. In recent years, this modelling approach has been used extensively on a range of quasi-single-phase engineering alloys to identify possible slip modes. A central aspect of the elastic lattice strain data recorded by neutron diffraction is that they provide additional information that can be applied to constrain the choice of parameters used in the plasticity model. In other words, the model input parameters are chosen not only to fit a stress–strain curve, but also to reproduce the elastic lattice strain evolution measured in the longitudinal and transverse directions by neutron diffraction. The fitting parameters used in this case are the stiffness values for the elastic region, the slip systems, and the coefficients of the Voce hardening law for the plastic behaviour. It is important, particularly in the case of a dual-phase material, that a large enough number of grains is modelled to gain suitable statistics for the lattice strain of individual grain families. In the present case, it was sufficient to use 1000 grains in the calculations along the loading direction, while capturing the grain-family responses in the transverse direction required between ∼5000 and ∼50,000 grains, depending on the reflection. Scanning TEM was employed to image the deformation structure after failure. Foils were cut from the tensile specimens parallel to the loading direction and ground down before 3 mm discs were punched out of the material. The discs were thinned further to a thickness of 100–150 μm and then electropolished using 8% perchloric acid in acetic acid and a twin-jet Tenupol at 10 °C and 40 V. STEM imaging was performed on an FEI Tecnai F20 XT with a field emission gun at 200 kV at Ohio State University, Columbus, OH, USA. Fig. 1 presents FEG-SEM images of the model microstructures. For the medium and coarse γ′ microstructures, a unimodal γ′ size distribution was successfully generated. The average particle size, including the standard deviation determined using the linear intercept method, and the γ′ volume fraction are given in Table 2. The 800 °C heat treatment did not result in a strictly unimodal γ′ size distribution, as the very fine γ′ formed during oil quenching did not dissolve at this temperature. In this case, only the slightly coarser γ′ was considered in the quantitative analysis, while the very fine γ′ was estimated to be ∼20 nm. When determining the best parameters for the EPSC model, agreement is sought with both the bulk stress–strain curve and the elastic lattice strain curves. Fig.
3 shows the measured bulk stress–strain curves for each microstructure tested at 750 °C compared with the EPSC predictions.The yield stresses for the fine, medium and coarse γ′ microstructures were 550, 450 and 350 MPa, and the strain to failure was 2.5, 2.9 and 3.1%, respectively.Hence, as the γ′ particle size is increased, the yield stress reduces, but the ductility increases slightly.The reason for the low ductility at 750 °C is currently not known.It is possible that the long holding periods at high stress levels, required for the accurate measurements of the diffraction spectrum, lead to creep cavitation, or that the unusual heat treatment procedure for generating the model microstructures resulted in either defects from the oil quench or the formation of the brittle sigma phase.In addition, with reduced γ′ particle size, both the initial and final hardening rates decrease.Fig. 3 also shows that the EPSC model is capable of closely fitting the measured stress–strain curves of the three microstructures by adjusting the parameters used in the Voce hardening law.It is important to note that a simple fit to a stress–strain curve will not unambiguously identify the most physically meaningful parameters used in the EPSC model.Therefore, the stress–strain curves were fitted alongside the elastic lattice strains measured by neutron diffraction in the loading and transverse direction.Figs. 4–6 show the measured and best fitted elastic lattice strain evolution of the and γ grain and γ′ precipitate families for the three microstructures.In the case of the fine γ′ microstructure, the elastic lattice strain responses of the γ and the γ′ phase are almost identical throughout the deformation process implying a joint deformation of γ and γ′.In contrast, the elastic lattice strain responses of γ and γ′ deviate in the plastic regime for the medium and coarse γ′ microstructures.In the case of the coarse γ′ microstructure, the deviation occurs close to the yield point of the material, while in the case of the medium γ′ microstructure, more plasticity is required before load partitioning between γ and γ′ takes place.After the point of deviation, the γ′ phase takes up more elastic lattice strain than γ in both the medium and coarse γ′ microstructure.This behaviour was also observed for the coarse γ′ microstructure at 20 °C and 500 °C .As the γ′ phase takes up more elastic lattice strain, the γ phase takes up more plastic strain, indicating a load transfer from γ to γ′.Generally, reasonably good agreement between the recorded and predicted elastic lattice strains was obtained in the loading direction, whereas in the transverse direction this was less the case, as previously reported in other materials, and discussed in Ref. 
.Most importantly, it was possible to predict the different levels of load transfer between γ and γ′ for the different types of microstructures by adjusting the phase-specific critical shear stresses and hardening rates.The STEM images of the deformed microstructures after failure show stacking faults in all three microstructures.In the fine γ′ microstructure, these stacking faults tend to extend through both the γ and γ′ phases equally, whereas in the medium and coarse γ′, the stacking faults are exclusively limited to the γ′ precipitates, highlighting a difference in the respective deformation behaviour.Dislocations are found to pile up around the precipitates in the medium and even more so in the coarse γ′ microstructure, but not in the fine γ′.No rafting, coalescence or coarsening of particles was observed in the microstructures after failure.The in situ loading experiments at 750 °C show that the yield strength of the material increases as the γ′ particle size decreases in the range of precipitate sizes studied in this work.Using the additional information recorded during the in situ experiment and the information obtained from the EPSC model, it is now possible to obtain a better micromechanical understanding of the early stage of deformation.It is clear that, with increasing particle size, peak broadening is greater for a given strain.The FWHM results presented in Fig. 9 therefore suggest increased dislocation interaction and retained stored energy with increasing γ′ particle size during plastic deformation, which is again characteristic of a higher hardening rate.A higher dislocation density in the γ matrix is also observed qualitatively in the STEM images of the medium and coarse γ′ microstructure.The STEM images of the deformed fine γ′ microstructure show stacking faults extending through both phases, which means that the same slip system is active in γ and γ′.There is little evidence for accumulation of dislocation content in the matrix around the precipitates.Therefore, the γ and γ′ phases appear to be deforming jointly, a hypothesis that is supported by the neutron diffraction and ESPC modelling results.In the elastic lattice strain response of the fine γ′ material, hardly any load transfer is observed throughout the course of plastic deformation, implying that γ and γ′ always take up a similar amount of plastic strain, as is seen in the phase-specific plastic strain results of the EPSC model.A further indication of the joint deformation in the fine microstructure is that the two phases display almost identical hardening behaviour according to the prediction of the EPSC model.The joint deformation of the two phases in the fine microstructure might be related to the presence of very fine γ′ precipitates in addition to the 90 nm size precipitates, Fig. 
1a.These particles might impede the independent deformation of the matrix phase, which was observed in the other two microstructures.The joint deformation of the two phases also appears to result in the most efficient hardening of the γ matrix by the precipitate phase, as the phase-specific stress of the γ phase as well as the yield strength are highest for this microstructure.Furthermore, recent phase field modelling results indicate that matrix dislocations will tend to dissociate and decorrelate in microstructures with finer γ′ precipitates and narrower channels, owing to the relative forces acting on the leading and trailing Shockley partials .The decorrelation of matrix dislocations forms intrinsic stacking faults in the matrix, and is furthermore considered to be a necessary precursor to shearing of the γ′ precipitates by superlattice stacking faults and microtwins.In the medium and coarse γ′ microstructures, the stacking faults are confined to the precipitates and not seen in the matrix, which highlights a change in deformation behaviour from the fine γ′.Similar deformation behaviour is found after creep in single-crystal nickel-base superalloys , where < 112 > -type dislocations cut through the γ′ precipitate.It is not surprising that the deformation structure observed here resembles creep deformation, considering the holding periods and stepwise loading pattern applied during the neutron diffraction experiment.Since the stacking faults are only found in the γ′ phase, a type of dislocation cuts through the precipitate, different from that travelling through the γ matrix.At the particle interface, the dislocations of the matrix have to combine in order to create a dislocation that can cut through the precipitate.The resultant barrier to dislocation movement is also evident from the accumulation and pile-up of dislocations in the γ channels, which is observed in the STEM images of the medium and especially the coarse γ′ microstructure.A change in deformation mechanism from the fine to the medium and coarse γ′ can also be deduced from the elastic lattice strain data.Unlike in the fine γ′, there is a difference in the elastic lattice strain response of γ and γ′ in the case of the medium and coarse microstructures.A load transfer from γ to γ′ is observed.With increasing γ′ size, this load transfer starts at lower plastic strains and becomes more pronounced.The difference in the elastic lattice strain responses shows that γ and γ′ no longer deform together, which correlates well with the STEM results.As the dislocations have to wait at the precipitate interfaces for a suitable dislocation for reaction, a higher plastic strain is achieved in the γ matrix than in the precipitate.The difference in plastic strain is also seen in the results of the EPSC model where, for the medium and coarse γ′ microstructure, the phase-specific plastic strain of the γ phase is higher than for γ′.An important observation of the phase-specific hardening curves is the apparent softening of the γ matrix with increasing γ′ particle size.The greater interparticle spacing in the coarse γ′ microstructure allows the matrix to deform on its own, while in the fine γ′, the small inter-particle spacing as well as the tertiary γ′ present only in this microstructure forces the matrix to deform jointly with the precipitates, leading to a higher yield strength.The differences observed between the three microstructures might be linked to the occurrence of cross-slip in the material.The alloy studied here was developed to have 
a low stacking fault energy in order to promote planar slip, which is beneficial for low crack growth rates .This desired behaviour seems to occur in the fine γ′ microstructure with stacking faults extending through both γ and γ′, hence encouraging planar slip patterns in both phases.In the other two microstructures, where stacking faults are found only in the precipitates, but not in the matrix, cross slip might be activated in the matrix but inhibited in the precipitates because of these stacking faults.This might further contribute to the difference in the elastic lattice strain of the two phases observed in the medium and coarse microstructures.Without the activation of cross-slip in the precipitates, their deformation might be mainly elastic at this point during the plastic regime, explaining the larger elastic lattice strains compared with the γ phase.At this stage it is interesting to compare the present observations with previous studies of exactly the same alloy and microstructures, but loaded at 20 °C and 500 °C .Remarkably, the behaviour seen in the elastic lattice strain response at these lower temperatures was essentially the same as observed here at 750 °C.It showed the same tendency for increased load to transfer from γ to γ′ with increasing γ′ size, with no load transfer in the fine γ′ microstructure and observation of such transfer in the medium and coarse γ′ microstructures.It was also more pronounced and started “earlier” in the coarse γ′ microstructure compared with the medium γ′ microstructure, i.e. at a lower plastic strain.However, what was indeed different during the high-temperature experiments was that less plastic strain was required for the onset of load transfer, and that the degree of load transfer was much greater compared with the room temperature experiments.Although the general behaviour is the same, the amount of load transfer, when it occurs, is more pronounced at higher temperatures.This indicates that the strength of the γ′ phase in the material decreases more slowly than that of the γ phase as a function of increasing temperature.Furthermore, the contribution of the γ′ phase to the overall strength of the material changes differently for different particle sizes as the temperature is increased.At room temperature, a large percentage of the plastic regime of the medium microstructure was spent in joint deformation of γ and γ′, a behaviour that is closer to the behaviour of the fine precipitates, whereas at 750 °C this percentage is much lower.The behaviour at 750 °C is therefore more similar to that shown by the coarse particles.In other words, when the temperature is raised, the particles need to be smaller to ensure joint deformation.In RR1000 tested at 20 °C and 500 °C, coupled dislocations were observed at least in the fine γ′ microstructure, and the occurrence of load transfer was attributed to the onset of dislocation bowing in addition to particle shearing, as both processes were found in the microstructures showing load transfer .In contrast, here single dislocations appear to operate in all three microstructures, as the abundance of stacking faults indicates, and load transfer seems to be associated with the presence of stacking faults that are limited to the precipitates, as demonstrated by the STEM analysis.Another interesting comparison can be made with previous work carried out on UDIMET 720LI with a conventional trimodal γ′ distribution .In that particular work, load partitioning between the phases was observed from the outset when the 
material was tested at 750 °C, but not at room temperature.So, while at room temperature the load partitioning observed for the UDIMET trimodal γ′ size distribution resembles the behaviour of the fine γ′ microstructure in RR1000 , in contrast at high temperatures, the trimodal UDIMET microstructure displays similar behaviour to the coarse γ′ microstructure of RR1000.Whereas in the work of Daymond et al. a change in deformation mode from octahedral to cube slip was required to model the diffraction elastic strains at high temperatures, the modelling results here and in Ref. suggest that the slip modes in γ and γ′ at 750 °C and 20 °C are the same.In particular, there seems to be no need to invoke cube slip.The biggest discrepancies between the predicted and measured elastic lattice strains occur at higher plastic strains, where the particles behave harder than modelled, for all microstructures.This effect is stronger for the 1 0 0 reflection, and therefore it is suggested that this has to do with the onset of multiple slip.This discrepancy was also observed at 20 °C and 500 °C, where the deviations were even larger.The findings presented here show that the fine γ′ microstructure displays the deformation structures desired for good fatigue strength.While it does have the highest yield strength, hence taking best advantage of the increase in strength of the γ′ phase, the fine microstructure is not very ductile at high temperatures.This might lead to early failure if, for example, the yield strength is locally exceeded as a result of stress concentration around carbides or other particles.In that event, the higher ductility of the material with coarse γ′ particles would be preferable.In conclusion, a comparison of the elastic lattice strain data with the electron microscopy results shows excellent agreement between the two methods.The most striking difference in the elastic lattice strain results when the γ′ size increases is the increasing load transfer between the two phases.In the STEM results, this is manifested in the difference in dislocation density between γ and γ′.In the STEM images of the fine γ′ microstructure, it is hard to distinguish between the two phases, just as no difference is seen in the elastic lattice strain response.In the coarse γ′ microstructure, the precipitates can easily be distinguished from the matrix, as dislocations pile up around them, which correlates well with the difference in the elastic lattice strain response that is found.The medium γ′ microstructure displays a behaviour that is in-between that of the fine and the coarse microstructures regarding both the STEM images and the elastic lattice strain response.In situ loading experiments using neutron diffraction were carried out at 750 °C on model microstructures with a unimodal γ′ size distribution based on the polycrystalline nickel-base superalloy RR1000.The results were fitted using an EPSC model to identify possible deformation modes.The dislocation structure of the failed specimens was revealed by STEM analysis.The main findings can be summarized as follows.The elastic lattice strain data gained from the neutron diffraction experiment as well as the EPSC model’s predictions indicate the γ and γ′ deform jointly in the fine γ′ microstructure, but not in the medium and coarse γ′.A load transfer between γ and γ′ is observed in the elastic lattice strain data for the medium and coarse γ′ microstructures, indicating a different deformation mechanism from that in the fine microstructure.This change in 
deformation behaviour is supported by the results of the STEM analysis, which shows that the same slip system involving single dislocations is active in both phases in the fine γ′ with continuous stacking faults extending through both phases.In contrast, in the medium and coarse γ′ microstructure, stacking faults are restricted to the γ′ phase, and the matrix dislocations do not penetrate the precipitates in a similar fashion as observed for the fine γ′ microstructure.By undertaking comparisons with previous studies of the same material and microstructure, but tested at room temperature, only very subtle differences could be detected in terms of deformation mechanism, with the critical particle size for load transfer from γ to γ′ decreasing with increasing test temperature.While evidence of cube slip in γ′ was anticipated when testing the material at 750 °C, neither the elastic lattice strain data in combination with plasticity modelling nor the STEM analysis provided any evidence of it.
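The statement in the experimental section that the γ′ superlattice reflections are far more visible with neutrons than with X-rays follows directly from the L12 structure factor and the negative coherent neutron scattering length of titanium. The sketch below illustrates the argument; it is not taken from the original work, the corner-site occupancy is idealised to pure Ni3Al or Ni3Ti, and the scattering amplitudes are approximate nominal values, so the ratios are indicative only.

```python
import numpy as np

# Approximate scattering amplitudes (illustrative assumptions):
# neutron coherent scattering lengths in fm; X-ray forward-scattering
# factors approximated by the atomic number Z.
B_NEUTRON = {"Ni": 10.3, "Al": 3.45, "Ti": -3.44}
F_XRAY = {"Ni": 28.0, "Al": 13.0, "Ti": 22.0}

# Idealised L1_2 gamma-prime cell: one "corner" species at (0,0,0),
# Ni on the three face centres.
SITES = {"corner": [(0.0, 0.0, 0.0)],
         "Ni": [(0.5, 0.5, 0.0), (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)]}

def structure_factor(hkl, corner_species, amplitudes):
    """Sum f_j * exp(2*pi*i*(h*x + k*y + l*z)) over the four atoms."""
    h, k, l = hkl
    F = 0.0 + 0.0j
    for species, positions in SITES.items():
        f = amplitudes[corner_species if species == "corner" else "Ni"]
        for (x, y, z) in positions:
            F += f * np.exp(2j * np.pi * (h * x + k * y + l * z))
    return F

for corner in ("Al", "Ti"):
    for probe, amps in (("neutron", B_NEUTRON), ("X-ray", F_XRAY)):
        F_super = structure_factor((1, 0, 0), corner, amps)  # superlattice reflection
        F_fund = structure_factor((2, 0, 0), corner, amps)   # fundamental reflection
        ratio = abs(F_super) ** 2 / abs(F_fund) ** 2
        print(f"Ni3{corner}, {probe}: |F_super|^2 / |F_fund|^2 = {ratio:.3f}")
```

With titanium on the corner site, the superlattice-to-fundamental intensity ratio comes out roughly two orders of magnitude larger for neutrons than for X-rays, which is consistent with the observation above that the γ′ superlattice peaks remain measurable with neutron diffraction despite the long counting times required.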
To study the effect of γ′ precipitate size on the deformation behaviour of a polycrystalline nickel-based superalloy, model microstructures with a unimodal γ′ size distribution were developed and subjected to loading experiments at 750 °C. Neutron diffraction measurements were carried out during loading to record the elastic lattice strain response of the γ and γ′ phase. A two-site elasto-plastic self-consistent model (EPSC) assisted in the interpretation of the elastic lattice strain response. In addition, the microstructures of the deformed specimens were analysed by (scanning) transmission electron microscopy (STEM). Excellent agreement was found between the EPSC and STEM results regarding a joint deformation of the γ and γ′ phase in the fine γ′ microstructures and for low plastic strains in the medium γ′ microstructures. With increasing γ′ size and increasing degree of plastic deformation, both experimental methodologies revealed a tendency of the two phases to deform independently.
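For orientation, two quantities referred to throughout the study can be written explicitly. The elastic lattice strain of a given {hkl} grain family is obtained from the fitted peak position relative to its stress-free value, and EPSC implementations commonly parameterise slip-system hardening with an extended Voce law. The forms below are the standard textbook expressions, given here only as a reading aid; the exact parameterisation used in the original work is not reproduced.

\varepsilon_{hkl} = \frac{d_{hkl} - d^{0}_{hkl}}{d^{0}_{hkl}}

\hat{\tau}^{s} = \tau^{s}_{0} + \left(\tau^{s}_{1} + \theta^{s}_{1}\,\Gamma\right)\left[1 - \exp\!\left(-\frac{\theta^{s}_{0}\,\Gamma}{\tau^{s}_{1}}\right)\right]

Here Γ is the accumulated shear strain on all slip systems in a grain, τ0 is the initial critical resolved shear stress of slip system s, θ0 is the initial hardening rate, and τ1 and θ1 describe the asymptotic hardening behaviour; fitting these coefficients against both the macroscopic flow curve and the measured lattice strains is what constrains the choice of active deformation modes.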
31,628
Core-6 fucose and the oligomerization of the 1918 pandemic influenza viral neuraminidase
The 1918 influenza, or “Spanish Flu” virus, caused the most devastating pandemic in recorded human history.Its estimated infection rate lies between 25% and 30% of the 1918 global population with approximately 50–100 million deaths worldwide .Currently, influenza is an ongoing threat to human society , having caused the recent pandemics of the 2005 Hong Kong avian flu and 2009 swine flu .Research on the 1918 influenza virus may provide key insights for combating the emerging strains of influenza virus.For this reason, the 1918 influenza virus was resurrected from the body of an Alaskan Inuit woman in 2005 .Significant effort has been made to decipher the mechanism of the lethality of this virus since then.Influenza virus has two surface antigens, hemagglutinin and neuraminidase .HA initially recognizes and binds to the terminal sialic acid residue on a host cell surface to mediate viral infection and infusion .NA subsequently hydrolyzes host sialic acid residues to allow release of the progeny virus and spread of the infection .NA is a box-shaped tetrameric glycoprotein with a large and globular head, a stalk region, and a small hydrophobic region that anchors the antigen into the viral lipid membrane .The stalk region of the NA from the 1918 influenza virus contains approximately 50 amino acid residues and is responsible for the oligomerization of the protein .The active site of NA is highly conserved throughout all influenza viruses , and this has allowed the development of two enzyme-inhibitor based anti-flu drugs: Tamiflu and Relenza .Despite this success, our understanding of NA regulation remains limited, particularly regarding the biological functions of NA glycosylation.Previously, the expression of the 1918 influenza viral NA resulted in isolation of both active tetramer and inactive dimer/monomer forms .The inactive form was unable to oligomerize to the active form, suggesting intrinsic molecular differences among the two forms.Subsequently, the N-glycans on the two forms were found to be different.Herein, we further characterize the N-glycans from the two forms by mass spectrometry and in vitro glycosylation study.Mass spectrometry analysis revealed a striking difference between the levels of core-6 fucosylation, where the overall level of core-6-linked fucosylation was significantly lower in the active form accompanied by almost complete absence of core-6-fucose within its stalk region.This discovery was further supported by the data obtained from in vitro incorporation of azido-fucose that was detected via click chemistry and radio-labeled fucose at core-6 position using recombinant fucosyltransferase 8.Previously, the pandemic 1918 H1N1 influenza viral NA was expressed and purified in an active form and an inactive form .The expressed sequence contains seven potential N-glycosylation sites and five of them reside in a single tryptic peptide within its stalk region.It was found that the glycosylation was intrinsically different between the two forms.To further investigate this difference, the two forms were first separated on SDS gel after reduction and alkylation.It was clear that the two forms had similar mobility around 53 kDa, while the theoretical mass is 48.3 kDa, again suggesting that the difference is structural.To further investigate the difference on glycosylation, the N-glycans of the two forms were released and analyzed with MALDI-MS.The mass spectra showed simple pauci- and high-mannose type glycans.In the spectrum of the tetramer, the most dominant peak was from 
a pentasaccharide Man3GlcNAc2 , and the second dominant peak at much less abundance was from a fucosylated glycan Man3GlcNAc2Fuc1.Surprisingly, in the spectrum of the dimer, the fucosylated glycan became the most dominant peak, and the non-fucosylated glycan became much less abundant.Since all detected glycans were neutral and similar in size, the relative abundance of individual composition within each sample should be proportional to its peak intensity in each MALDI-MS spectrum .Based on this assumption, the non-fucosylated compositions were calculated to account for 81.89% of the total glycans in the tetramer, and 26.40% of the total glycans in the dimer.Since the stalk region that is highly enriched with N-glycosylation sites is involved in the oligomerization of the protein and possibly protecting the protein from host cell protease attack , different glycosylation on this region is likely to affect these functionalities.For this reasoning, the tryptic peptides of the stalk region of the two forms were further isolated and analyzed with MALDI-MS.Even more surprisingly, the glycans of the stalk region of the active tetramer were found to be almost completely devoid of fucosylation, while those of the inactive form were largely fucosylated.To locate the position of the fucosylation, the permethylated N-glycans from both forms were subjected to ESI-MSn fragmentation .When the fucosylated composition from the tetramer was analyzed, the fucose residue was found to be linked to the reducing end GlcNAc of the oligosaccharide Man3GlcNAc2Fuc1.Given that the N-glycans were enzymatically released by PNGase F that lacks activity on core-3-linked fucosylated N-glycans , the fucose residue must be at the 6-O position of the reducing end GlcNAc residue.To this end, the most abundant composition of the tetramer was confirmed to be a typical core pentasaccharide.When equivalent compositions from the dimer were analyzed, identical structures were obtained.Overall, MS analysis clearly indicated that the level of core-6-linked fucosylation was the major difference between the N-glycans of the two forms of NA.Core-6 fucosylation of the reducing-end GlcNAc on N-glycans is introduced by FUT8 in humans .To confirm the results of mass spectrometry analysis, the levels of core-6 fucosylation were further probed with azido fucose using recombinant human FUT8.The incorporated azido fucose was then conjugated to a biotin residue via a click chemistry reaction , and detected with streptavidin-conjugated horse radish peroxidase .As shown in Fig. 3A, same amount of the tetramer accommodated significantly more azido fucose than the dimer.When the core-6 fucosylation was probed with 3H-fucose using FUT8, again, the active tetramer incorporated significantly more 3H-fucose compared to the dimer.Considering that the two forms of NA had similar amount of total glycans, the results in Fig. 3 again suggest that the active tetramer had significantly more non-fucosylated glycans than the inactive dimer.Given that the active NA had significant lower level of core-fucosylation than that of the inactive form, it is imperative to investigate whether in vitro fucosylation will inactive the active form.To answer this question, the active tetramer was fucosylated using FUT8 for different time lengths before its activity was measured.Surprisingly, the enzymatic activity remained constant along the course.Previously, 1918 influenza viral NA was expressed as two inconvertible forms, i.e. 
active and inactive forms, suggesting an intrinsic difference between the two forms.In this report, using mass spectrometry and in vitro fucosylation, we demonstrated that the inactive form is highly core-fucosylated, whereas the active form is mainly devoid of core fucose."However, in vitro core-fucosylation of the active form didn't abolish its activity, suggesting that there is no direct link between core-fucosylation and enzymatic activity.Core-fucose is known to be involved in several biological events.One well-known example is antibody-dependent cell-mediated cytotoxicity, where core-fucose inhibits carbohydrate–carbohydrate interactions between the antibody and its receptor FcγRIIIa .Antibodies lacking core fucosylation show a large increase in affinity for FcγRIIIa, leading to an improved receptor-mediated effector function .Core fucosylation of N-glycans is also required for the binding of EGF to its receptor, suggesting core fucosylation is critical to EGF mediated intracellular signaling .In the case of 1918 influenza viral NA, considering that the stalk region is heavily glycosylated and play roles in the process of oligomerization, it is likely that the core-fucosylation in this region weakened the carbohydrate–carbohydrate interaction so that it prevented the tetramerization but allowed the formation of dimer, and explains the earlier observation that the monomer and dimer were convertible but not the tetramer .Fucosylation is ubiquitous and age related .For example, older people has higher level of fucosylated haptoglobin compared to younger people .In a mouse model, it has been reported that the expression of Fut8 is up-regulated with age .If core-fucosylation indeed inhibited the oligomerization and the activation of the neuraminidase of the 1918 pandemic influenza virus, people with higher level of FUT8 activity would have been advantageous for survival during the pandemic.Therefore, together with the age specific expression of core-fucose, the current finding may explain the mystery of the 1918 pandemic influenza, i.e. 
it caused a relatively lower death rate among the older population, while the younger population experienced a higher death rate .Accordingly, if core-fucosylation is a general mechanism for inhibiting the oligomerization of influenza viral neuraminidases, it will be a good target for influenza drug development.Ammonium bicarbonate, dithiothreitol, iodoacetamide, trifluoroacetic acid, sodium hydroxide, dimethyl sulfoxide, iodomethane, sodium borohydride, 2,5-dihydroxybenzoic acid, urea, cellulose, trypsin, HPLC grade ethanol, 1-butanol, and GDP-fucose were from Sigma–Aldrich.Sep-Pak C18 SPE cartridge was from Waters.PNGase F was from New England Biolabs.The NuPAGE series of LDS sample buffer, MOPS SDS running buffer, SimplyBlue SafeStain and 4–12% Bis-Tris Gels were from Life Technologies.10,000 MWCO centrifugal devices were obtained from Millipore.NA was expressed with N-terminal His tag in sf21 insect cells and purified using nickel-histidine affinity chromatography followed by gel filtration as previously described .Recombinant FUT8, FUT11, GDP-azido-fucose, biotinylated alkyne, and streptavidin-HRP were from R&D Systems.The active tetramer and inactive dimer of the purified 1918 H1N1 NA were first reconstituted in 8 M urea/0.4 M ammonium bicarbonate.Both samples were then reduced by 25 mM DTT, and alkylated by 40 mM IAA in a dark environment.An aliquot of 2 μg of each form was mixed with LDS sample buffer and loaded onto a 4–12% Bis-Tris gel."SDS-PAGE was performed using MOPS SDS buffer and protein was stained with SimplyBlue SafeStain according to the vendor's protocol.The aliquots of both reduced and alkylated forms of NA were incubated with trypsin overnight at a weight ratio in a 37 °C oven.The proteolytic digestion was stopped by the addition of TFA into the samples in an ice bath.Large polypeptide-containing stalk regions from both samples were enriched via YM-10 centrifugal filters.Briefly, the mixture of crude tryptic peptides was re-dissolved in 500 μL of HPLC water and transferred into the sample chamber of a centrifugal filter.Small peptides were removed by repeated centrifugation, and the retained fraction in the sample chamber was collected as the stalk region tryptic peptide as described in the Results section.Peptides were further desalted by passage through C18 SPE .The desalted peptides, either from tryptic digestion of 150 μg of an NA sample or its equivalent amount of purified stalk region, were re-dissolved in 5 μL of 500 mM sodium phosphate and 45 μL of HPLC water.Each sample was then digested with 2 μL PNGase F at 37 °C for 1 h.The released N-glycans were separated from peptides on a second C18 SPE using 5% ACN/0.1 TFA as elution buffer.When required, N-glycans were further reduced to the corresponding alditols by the addition of 200 μL of NaBH4.The reduced N-glycans were purified using hand-packed cellulose cartridges and permethylated for MSn analysis.MALDI-MS was carried out on a MALDI-TOF instrument, with a nitrogen laser at 337 nm.External calibration was performed using the ProteoMass Peptide MALDI-MS calibration kit.The matrix solution was prepared by dissolving 10 mg of DHB in a volume of 1 mL of 50% acetonitrile containing 2 mM sodium acetate.N-Glycans were directly spotted onto a stainless steel plate and mixed with an equal volume of matrix solution.The MALDI-MS spectra were acquired in the positive reflectron mode from 600 to 5000 Da.The laser energy was manually adjusted to obtain best signals.A minimum of 1500 scan were averaged for each spectra 
using the Spectrum Contents within the application Launchpad.ESI-MS and ESI-MSn of permethylated N-glycans were carried out on a Thermo linear ion trap instrument LTQ equipped with direct chip-based infusion.The ESI source voltage was set at 2.0–2.5 kV, and the capillary temperature at 230 °C.MS spectra were acquired from m/z 500–1800 with a minimum of 30 scan.MS/MS and MSn spectra were performed using a normalized collision energy at 35%, activation Q at 0.25, and activation time of 30 ms. All scans of one spectrum were accumulated by Xcalibur 2.0.The activity assay previously described was followed.The NA enzyme was first diluted with an assay buffer to a concentration of 1 ng/μL.To start the reaction, 50 μL of the diluted enzyme was mixed with 50 μL of 400 μM substrate 2′--α-D-N-acetylneuraminic acid in a 96-well fluorescent plate.The reaction kinetics was monitored by fluorescence at excitation of 365 nm and emission of 445 nm in a SpectraMax Multi-Mode Microplate Reader.Enzymatic fucosylation was carried out by combining 1 μg of substrate with either 2 μL of GDP-3H-fucose or 5 nmol of GDP-fucose, and 0.5 μg of recombinant human FUT8 in 20 μL of 10 mM Tris and 10 mM MnCl2.The reaction was then incubated at room temperature for required length of time.Following 3H-fucose incorporation, 8 μL from each reaction was spotted on a type GF/C glass fiber.Ethanol was dropped to these spots to denature the protein.The filters were dried and then thoroughly washed in 200 mL of water in a shaker for 5 min.The filters were finally counted with an LS 6500 scintillation counter in Ready Protein™ cocktail.The general procedure of glycoprotein labeling with click chemistry was followed to probe core-6 fucosylation with recombinant human FUT8.The dimer NA at 95 ng/μL or the tetramer NA at 70 ng/μL was incubated with 30 ng/μL of FUT8, or 40 ng/μL of FUT11, plus 0.08 mM GDP-azido-fucose in 25 mM Tris, 150 mM NaCl, and 4 mM MnCl2 at 37 °C for 90 min.To initiate the click chemistry reaction, 100 μM biotinylated alkyne, 100 μM CuCl2, and 2 mM ascorbic acid were added to the mixtures, and followed by incubation at room temperature for 1 hour.One μg each of fucosylated NA samples was loaded per well onto a 12% SDS-PAGE gel containing 2,2,2-Trichloroethanol.The gel was run at 50 mA, imaged via the method of Ladner, C.L. et al. , and further electro-transferred onto nitrocellulose membrane at 25 V for 30 min.The blot was then blocked with 10% milk, washed thoroughly with TBS buffer, incubated with 75 ng/mL streptavidin-HRP in 25 mM Tris, 150 mM NaCl, pH 7.5 for 30 min, washed three times in TBS for a total of 30 min, incubated with ECL chemiluminescent substrate briefly, and then exposed to an X-ray film for 20 s.
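The relative-abundance figures quoted in the Results (approximately 82% non-fucosylated glycans in the tetramer versus 26% in the dimer) follow from treating the MALDI-MS peak intensities of these neutral, similarly sized glycans as proportional to their molar abundance. The snippet below is a minimal sketch of that calculation; the composition list is illustrative and the intensity values are hypothetical placeholders, not the measured data.

```python
# Hypothetical MALDI-MS peak intensities (arbitrary units) for glycan
# compositions of the kind discussed in the text; the real values come
# from the acquired spectra and are not reproduced here.
peak_intensities = {
    "tetramer": {"Man3GlcNAc2": 100.0, "Man3GlcNAc2Fuc1": 15.0, "Man5GlcNAc2": 7.0},
    "dimer":    {"Man3GlcNAc2": 20.0,  "Man3GlcNAc2Fuc1": 70.0, "Man5GlcNAc2": 6.0},
}

def fucosylation_summary(intensities):
    """Relative abundance of core-fucosylated vs non-fucosylated glycans,
    assuming peak intensity is proportional to molar abundance for
    neutral glycans of similar size."""
    total = sum(intensities.values())
    fucosylated = sum(v for name, v in intensities.items() if "Fuc" in name)
    return {
        "fucosylated_%": 100.0 * fucosylated / total,
        "non_fucosylated_%": 100.0 * (total - fucosylated) / total,
    }

for form, peaks in peak_intensities.items():
    print(form, fucosylation_summary(peaks))
```

For neutral glycans of comparable mass, this intensity-proportionality assumption is the same one invoked in the text to arrive at the 81.89% (tetramer) and 26.40% (dimer) non-fucosylated fractions.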
The 1918 H1N1 influenza virus was responsible for one of the most deadly pandemics in human history. Yet to date, the structure component responsible for its virulence is still a mystery. In order to search for such a component, the neuraminidase (NA) antigen of the virus was expressed, which led to the discovery of an active form (tetramer) and an inactive form (dimer and monomer) of the protein due to different glycosylation. In this report, the N-glycans from both forms were released and characterized by mass spectrometry. It was found that the glycans from the active form had 26% core-6 fucosylated, while the glycans from the inactive form had 82% core-6 fucosylated. Even more surprisingly, the stalk region of the active form was almost completely devoid of core-6-linked fucose. These findings were further supported by the results obtained from in vitro incorporation of azido fucose and 3H-labeled fucose using core-6 fucosyltransferase, FUT8. In addition, the incorporation of fucose did not change the enzymatic activity of the active form, implying that core-6 fucose is not directly involved in the enzymatic activity. It is postulated that core-6 fucose prohibits the oligomerization and subsequent activation of the enzyme.