Unnamed: 0 | Clean_Title | Clean_Text | Clean_Summary |
---|---|---|---|
500 | Seed storage longevity of Hosta sieboldiana (Asparagaceae) | Hosta sieboldiana,Engler is a popular ornamental plant in Japan.This beautiful perennial herbaceous plant is endemic to southwestern and central Japan, where it is widely distributed.Because it is such an important ornamental, there is interest in developing seed production technologies to facilitate the commercial availability of this plant.Seed biology information is available for 264 species in the Asparagaceae, and all are reported to have orthodox seeds.But desiccation sensitivity has been shown in the Asparagales, e.g. Ophiopogon japonicas.H. sieboldiana seeds were reported to be orthodox as they survived liquid nitrogen storage at moisture contents of 10% to 12%.H. sieboldiana usually grows along streams or rivers in valleys but also can be found in mesic open meadows and forests.Like Salix species, which also occur in similar habitats and have short-lived seeds, seeds of H. sieboldiana may also be predicted to have a short life span.Indeed, in our pre-experiment, dry seeds of this species died after 15 months at room temperatures.Optimal conditions are crucial to prolong the seed longevity of H. sieboldiana during storage.However, seed storage longevity of this species has not been investigated.In this study, we carried out a four-year seed storage study of H. sieboldiana seeds from two locations in the central district of Japan to investigate the effects of seed maturity status, seed storage moisture content and temperature on the seed storage longevity of the species.Seeds of H. sieboldiana were collected from two locations on 18 September 2005 from central Japan: Asagiri Height, Fujinomiya, Shizuoka and Ashikawa, Yamanashi.At the time of collection, seeds from seed lot Ashikawa were at the point of natural dispersal.However, seeds from seed lot Asagiri Height were immature.They were collected two weeks prior to the time of natural seed dispersal.Three MC levels were targeted for storage: ~ 65%; ~ 10% and ~ 5%.For each seed lot, seeds were separated into three groups.One group of seeds was sealed in polyethylene bags immediately after seed collection and then stored at 5 °C and − 20 °C.These seeds had a moisture content of around 64% to 66%.Seeds of the other two groups were dried for nine to 18 days using silica gel to adjust their moisture content.After drying, seed moisture contents of seed lot YA and AS reached 9.6% and 4.5%; and 7.7% and 5%, respectively.These seeds were then sealed in polyethylene bags and stored at both 5 °C and − 20 °C.Seed storage experiments were conducted for more than four years.The preliminary seed germination experiments indicated that, although germination occurred over a wide temperature range, germination was most rapid at 25 °C and 30 °C; therefore, 25 °C was used in all the experiments.To test the initial seed viability of the seed lots, on the commencement of the storage experiments, three replicates of 50 fresh seeds each of each seed lot were placed on top of two sheets of filter paper soaked in water in a 9-cm diameter petri dish and incubated at 25 °C.During the roughly four-year storage, germination tests were conducted every one to three months, and up to 50 germination tests for each treatment were done.For each germination test, three replicates of 50 seeds were used.In addition, seed moisture content was assessed gravimetrically on 180 seeds in total per target moisture content by drying seeds at 105 °C for 16 h. 
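A minimal sketch of the gravimetric moisture-content determination described above, with MC expressed on a fresh-weight basis (as stated in the following paragraph). The function name and the example masses are illustrative assumptions, not values from the study.

```python
# Minimal sketch (not from the paper): moisture content on a fresh-weight basis,
# as determined gravimetrically by drying seeds at 105 degrees C for 16 h.
# The function name and example masses are illustrative assumptions.

def moisture_content_fresh_basis(fresh_mass_g: float, dry_mass_g: float) -> float:
    """Return seed moisture content (%) expressed on a fresh-weight basis."""
    if fresh_mass_g <= 0 or dry_mass_g > fresh_mass_g:
        raise ValueError("fresh mass must be positive and at least the dry mass")
    return 100.0 * (fresh_mass_g - dry_mass_g) / fresh_mass_g

# Example: a seed sample weighing 1.25 g fresh and 0.44 g after oven drying
print(round(moisture_content_fresh_basis(1.25, 0.44), 1))  # -> 64.8, i.e. the ~65% MC level
```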
MCs are expressed on a fresh-weight basis. Logistic regression was used to model the relationship between seed germination percentage and three predictors: storage time, storage temperature and seed moisture content. The three predictors and all their interactions were included in the full model; the contribution of each individual predictor/interaction was then assessed by the likelihood ratio test. The selected model consisted of the predictors and interactions that significantly decreased the deviance. The goodness-of-fit of the selected model was evaluated by the likelihood ratio test, which compared the deviance of the selected model to the deviance of the null model. For the mature seeds, germination of seeds stored at − 20 °C and 4.5% MC was measured much more frequently during the first two years than in the third and fourth years, so each measurement was weighted to adjust for this imbalance. The selected model successfully captured the linear relationship between the log-odds of seed germination and storage time under the various storage conditions. It achieved a significance of p < 0.0001 in the goodness-of-fit test. For immature seeds, the relationship between germination percentage and storage time was not monotonic, so a linear model was not appropriate for fitting the relationship between the log-odds of germination and storage time. Hence, a transformed term x^(1/2) was added to the model to achieve a better fit. The selected model then achieved a significance of p < 0.0001. The effects of the predictors were represented by the regression coefficients. The mean seed germination percentages at various storage times and conditions were estimated using the selected models. The time for germination to decrease by 50% was also estimated. All the analyses mentioned above were performed using the R statistical package. Mature seeds from YA had an initial germination of 82%; the immature seeds from AS had an initial germination of only 18%. During storage, the viability of seeds from YA decreased significantly in all the treatments, except at 4.5% MC and − 20 °C. Seeds with a high MC of 65.7% died rapidly at both 5 °C and − 20 °C. Seeds at 9.6% MC stored at − 20 °C lost viability much faster than seeds with 9.6% MC and 4.5% MC stored at 5 °C. Because of the immaturity of the seeds, the selected model for AS seeds was not a monotonic function of the storage duration. During storage, the viability of seeds appeared to increase during the first two to 19 months, depending on temperature and MC, and then started to decrease. This relationship was modeled by the function y = β0 + β1·x + β2·x^(1/2), where y was the logit-transformed germination percentage and x was the number of storage months. Following the same interpretation as the slope of a linear function, the first-order derivative β1 + β2/(2·x^(1/2)) described the rate of viability change during seed storage. At both 5 °C and − 20 °C, the viability of seeds stored at 7.7% MC increased and decreased faster than that of seeds stored at 5% MC. At the same MC level of 5% or 7.7%, viability increased at similar rates at the two temperatures but declined much faster at − 20 °C than at 5 °C. Seeds with 63.8% MC stored at 5 °C quickly reached a germination percentage of 56% and then quickly lost all viability. Seeds with 63.8% MC stored at − 20 °C lost all viability before the second germination test. The time required to reduce germination to half of the maximum germination (P50) ranged from 0.5 to 221.9 months for YA and from 2.3 to 56.7 months 
for AS.Under the same storage conditions, P50 for seeds from AS was lower than that for YA seeds.Seeds of H. sieboldiana were able to survive low moisture contents of about 5% and low temperature of − 20 °C, suggesting that these seeds are not recalcitrant.However, these seeds are not typical orthodox.Reductions in seed moisture content and storage temperature within broad limits are known to enhance longevity in orthodox seeds.For successful long-term storage, orthodox seeds require low moisture content and low temperature.This study showed that, for both the mature and immature seeds, decreasing moisture content increased seed longevity of H. sieboldiana, but the responses of the seeds to temperatures did not occur in the same predictable way as for orthodox seeds, and seeds of H. sieboldiana lost viability quickly under all the storage conditions.With a high MC of ~ 65%, seeds of H. sieboldiana lost all viabiliy within five months.It is not surprising because these seeds would have been killed by ice crystal formation at subzero temperatures or rapid viability loss/aging at temperatures above zero.At the MC levels below the unfrozen water contents, however, the calculated P50s of the mature seeds stored at 5 °C and − 20 °C were only 48.4–221.9 months.Long-term storage studies provide direct evidence of changes in seed viability with storage time.Because these types of studies are time-consuming, relative data are usually rare.Lack of data means that we are not able to compare our results with storage data for congener species or species from the same family.However, in a survey of 276 species across 18 families, under storage conditions of 4% to 8% MC and − 18 °C, only 25% of the species had predicted half-lives of less than 50 years, and the median P50 was 54 years, suggesting that seeds of H. sieboldiana are comparatively short-lived.Unlike typical orthodox seeds, the dry seeds of H. sieboldiana lost viability more quickly at − 20 °C than at 5 °C.Similar responses of seeds to subzero and cool temperatures were also reported for the intermediate seeds of coffee.However, coffee seeds from many seed lots were more sensitive to desiccation, and seed viability was lost or declined largely after the seeds were desiccated to 5% MC.Bonner defined seeds that can be stored under the same conditions as typical orthodox seeds, but for shorter period to be suborthodox.This type includes Populus and Salix, both of which could survive 6% to 10% MCs but lost viability rapidly at temperatures below − 20 °C or − 10 °C.Therefore, seed storage behavior of H. sieboldiana is similar to that of Populus and Salix, and may be classified as suborthodox.During storage, after-ripening occurred in the immature seeds of H. 
sieboldiana.Generally, the increase in germination was faster at higher MCs; at the same MC, the rates were not much different at 5 °C and − 20 °C.When ripening ended, the germination percentages decreased faster at higher MCs and lower temperature.The lower P50 of the immature seeds compared with the mature seeds suggests that maturity status had significant effects on the seed storage longevity of this species.Though storage allowed some continuation of maturation, the maturation process of these immature seeds may not be completed during storage, and their longevity was shorter than that of the initially mature seeds stored under the same storage conditions."It has been suggested that seed storage longevity is related to the climate of species' origin, and that seeds from cool and wet regions are more likely to be short-lived.Consistent with this prediction, our data showed a short seed lifespan for this wet-climate-originated species.Compared with species from similar habitats, H. sieboldiana seeds lost viability faster than short-lived Liliaceae species but more slowly than Salix species when stored in the same conditions.Standard seed banking protocols recommend that orthodox seeds should be dried to 3% to 7% moisture content and then stored at − 18 °C.Our data indicate that while short-term storage of H. sieboldiana seeds under standard seed bank conditions may be feasible, cryogenic storage might be a more efficient method for long-term conservation for these comparatively short-lived seeds.Indeed Stanwood has shown that H. sieboldiana seeds survive in liquid nitrogen. | Seed storage longevity of Hosta sieboldiana (Lodd.) Engler was studied. Mature and immature seeds of H. sieboldiana were stored immediately after collection at -. 20. °C and 5. °C with a moisture content of about 65%. In addition, the seeds were dried to either about 10% or 5% moisture content (MC) and then stored at either -. 20. °C or 5. °C. Seeds of H. sieboldiana were desiccation-tolerant but short-lived, with the P50 (the time required to reduce germination to half of the maximum germination) under storage conditions ranging from 0.5 to 221.9. months and 2.3 to 56.7. months for mature and immature seeds, respectively. Seed longevity of H. sieboldiana was increased by decreasing moisture content, but the responses of the seeds to temperatures were different from typical orthodox seeds. The dry seeds of H. sieboldiana (~. 5% to 10% MC) lost viability more quickly at -. 20. °C than at 5. °C. Storage allowed immature seeds to continue some maturation, but the longevity of these seeds was generally shorter than that of the mature seeds stored under the same conditions. |
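The immature-seed analysis in this row fits the log-odds of germination as y = β0 + β1·x + β2·x^(1/2) and reports P50 values. The sketch below, in Python rather than the R used by the authors, illustrates how such a fitted curve yields the peak germination time, the rate of viability change, and a P50 estimate; the coefficients are invented for illustration only.

```python
# Hypothetical sketch of the immature-seed viability model described above:
# logit(germination) = b0 + b1*x + b2*sqrt(x), where x is storage time in months.
# The coefficients below are made up for illustration; the paper's fitted values
# are not reproduced here.
import numpy as np
from scipy.special import expit
from scipy.optimize import brentq

b0, b1, b2 = -1.5, -0.15, 1.2   # illustrative coefficients only

def germination(x):
    """Predicted germination proportion after x months of storage."""
    return expit(b0 + b1 * x + b2 * np.sqrt(x))

def rate_of_change(x):
    """First-order derivative of the log-odds: b1 + b2 / (2*sqrt(x))."""
    return b1 + b2 / (2.0 * np.sqrt(x))

# Viability peaks where the derivative of the log-odds is zero.
x_peak = (-b2 / (2.0 * b1)) ** 2
p_max = germination(x_peak)

# P50: storage time (after the peak) at which germination falls to half of p_max.
p50 = brentq(lambda x: germination(x) - 0.5 * p_max, x_peak, 300.0)

print(f"peak at {x_peak:.1f} months, max germination {100 * p_max:.0f}%, P50 = {p50:.1f} months")
```

With these made-up coefficients the curve peaks at 16 months and the P50 lands near 51 months, i.e. inside the 2.3 to 56.7 month range the row reports for the immature AS seeds.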
501 | Estimated glomerular filtration rate predicts incident stroke among Ghanaians with diabetes and hypertension | The incidence, prevalence and mortality secondary to stroke in Low-and-Middle Income Countries (LMICs) in sub-Saharan Africa have risen rapidly in recent decades. These recent secular trends contrast sharply with the scenario in High-Income Countries, where the burden of stroke is receding due to improved control of vascular risk factors. Stroke among Africans has a predilection to be hemorrhagic, affects a younger population and is associated with a myriad of post-stroke comorbidities including persisting functional limitations, depression, cognitive impairment, and stigma, in settings with limited access to neurologists and rehabilitation services. Because stroke care is severely challenged in LMICs, the most sustainable long-term approach to mitigating the enormous burden imposed by stroke in LMICs is to identify and characterize the risk factors associated with stroke occurrence for primary prevention at the population level. The association between chronic kidney disease (CKD) and stroke occurrence among Africans has not been established, although some studies have demonstrated the existence of such associations in other populations. Pathophysiologically, both the brain and kidneys are susceptible to vascular injury because they rely on similar microvasculature that allows for continuous high-volume perfusion. In addition, stroke and renal disease commonly share traditional vascular risk factors such as hypertension, diabetes, obesity and dyslipidemia. There are suggestions from the literature of regional and racial differences in the effect of CKD on incident stroke, with a greater burden among Asians. There are no published studies among indigenous Africans that have specifically assessed the association between CKD and stroke. The highest frequencies of renal-related variants of apolipoprotein L1 (APOL1) associated with non-diabetic kidney disease are reported among West Africans. Given the recently reported association between APOL1 variants and small vessel disease strokes among West Africans, there is justifiable scientific incentive to investigate the associations between CKD and stroke risk among indigenous Africans further. We therefore sought to narrow the existing knowledge gaps by prospectively exploring the association between chronic kidney disease and stroke incidence among Ghanaians with hypertension and/or diabetes mellitus. Participants were recruited as part of a pragmatic clinical trial aimed at improving access to medicines for the control of hypertension and diabetes by offering medications at differential pricing. The Ghana Access and Affordability Program pilot study is a prospective cohort study involving adults with hypertension, hypertension with diabetes mellitus, and diabetes mellitus at five public hospitals in Ghana. The study sites included the Agogo Presbyterian Hospital, Atua Government Hospital, Komfo Anokye Teaching Hospital, Kings Medical Center, and the Tamale Teaching Hospital. Ethical approval was obtained from all study sites. The study protocol has been published elsewhere, but a brief synopsis is provided below. Informed consent was obtained from all study participants prior to enrollment into the study. Trained research assistants followed Standard Operating Procedures across sites to collect demographic information including age, gender, educational attainment and employment status; lifestyle behaviors such as alcohol use, cigarette smoking, level of physical 
activities, frequency and daily quantities of fruits and vegetable consumption as well as table added salt were assessed through interviews and responses collected on a questionnaire.A detailed medical history including duration of hypertension or diabetes diagnosis and current medications lists taken were obtained.Anthropometric evaluations including measurement of weight, height and waist circumference were performed by Study nurses.Body mass index of each participant was then derived by dividing the weight in kilograms by the square of the height in meters .An International Organization for Standardization-certified and quality-assured laboratory was contracted to run all biochemical panels which included serum creatinine, lipid profile and hemoglobin A1C for subjects with diabetes.Samples were transported to the laboratory by trained phlebotomists on the same day of collection often within 4 h or where not feasible, samples were stored in a freezer before transported to the laboratory the next day.Stroke diagnosis was based on the World Health Organization definition , if participant had ever experienced sudden onset of weakness or sensory loss on one side of the body, sudden loss of vision, or sudden loss of speech.These questions were derived from the 8-item questionnaire for verifying stroke free status which has been validated locally .QVSFS was used as neuro-imaging facilities were not available at any of the study sites at the time of the study.Study participants visited every 2 months for 18 months to assess control of hypertension and diabetes mellitus and to assess for vascular complications including stroke.Stroke diagnosis was adjudicated by FSS,.Prevalent stroke was defined as stroke diagnosis at time of enrollment into the study while incident stroke was defined as stroke diagnosed during follow-up of study participants.Incident stroke was identified prospectively among participants without stroke at baseline.Renal impairment was defined using estimated glomerular filtration rate calculated from baseline serum creatinine measurement using the Chronic Kidney Disease Epidemiology Collaboration formula .Individuals were classified as physically active if they were regularly involved in moderate exercise or strenuous exercise for 4 h or more per week.Alcohol use was categorized into current users or never/former drinker while alcohol intake was categorized as low drinkers and high drinker .Smoking status was defined as current smoker or never or former smoker .Vegetable and fruit intake was assessed based on number of daily servings per week."Baseline characteristics of patients with hypertension, diabetes and dual diagnosis of diabetes and hypertension were compared using Analysis of variance for 3-group comparisons or by using the Student's t-test for 2-group comparisons. 
"Proportions were compared using the Chi-squared tests or Fisher's exact test for proportions with subgroupings <5.At baseline, factors associated with prevalent stroke were assessed using a logistic regression model.Crude incidence rates of stroke according to eGFR were calculated and expressed as events/1000-person years of follow-up and 95%CI calculated using the Mid-P exact test.A multivariate Cox Hazards Proportion regression analysis was fitted to identify factors independently associated with the risk of stroke with the inclusion of eGFR as a categorical variable in the model.Other independent variables evaluated included age, gender, location of residence, employment status, previous cigarette smoking, current alcohol use, physical activity, table added salt, fruit and vegetable intake, level of healthcare institution, and central obesity.Variable selection was based on clinical and empirical significance of covariates in the model.Patients were censored either the date of stroke, at the last visit for those who died, were lost-to-follow up, and at July 31, 2017 for the remainder.In all analyses, two-tailed p-values < .05 were considered statistically significant.Secondary analyses included assessing association between eGFR of <60 ml/min and stroke risk among participants with hypertension only, hypertension and diabetes mellitus and or diabetes only.Model diagnostics and fit were assessed using residual plots analysis and visual inspection for collinearity of variables in the Cox models.Statistical analysis was performed using SPSS and GraphPad Prism version 7.We enrolled 3296 study participants comprising of 1867 with hypertension, 1006 with both diabetes and hypertension and 422 with diabetes mellitus.The demographics, lifestyle and clinical characteristics are shown in Table 1.Among the entire cohort, mean estimated glomerular filtration rate was 76.6 ± 16.2 ml/min, being lowest among diabetics with hypertension of 74.4 ± 18.4 ml/min followed by 76.6 ± 15.4 ml/min among those with hypertension and 82.5 ± 12.2 ml/min among diabetics, p < .0001.Correspondingly, 19.2% of diabetic hypertensive participants, 14.8% of participants with hypertension and 6.5% of diabetics had eGFR <60 ml/min, p < .0001.There were 45 incident strokes during follow-up among 2631 participants who did not have baseline history of stroke and had baseline eGFR measurements.Participants with incident strokes are compared with those without stroke in Table 2.The mean duration of follow-up per participant was 14.6 ± 5.6 months with 1548 completing month 18 visit contributing to 3200 person-years of follow-up.Stroke incidence rates increased with decreasing eGFR categories of 89, 60–88, 30–59 and <29 ml/min corresponding to stroke incidence rates of 7.58, 14.45, 29.43 and 66.23/1000 person-years respectively.In a model specifying eGFR as categories with >89 ml/min as referent, eGFR level of 60-89 ml/min had adjusted hazards ratio of 1.42, 30–59 ml/min had aHR of 1.88 and eGFR < 29 ml/min had aHR of 1.52.The only other factor that remained significantly associated with incident stroke was previous history of cigarette smoking with aHR of 2.70.Use of an ACE-inhibitor was associated with unadjusted HR of 1.05, p = .88, and use of ARB was associated with unadjusted HR of 1.36, p = .33 for incident strokes and were therefore not included in final adjusted models.Unadjusted HR for stroke occurrence among those with hypertension only was 4.87, p < .0004 for an eGFR <60 ml/min and that for those with diabetes was 1.83, 
p = .20.The association between eGFR and stroke risk remained significant among participants with hypertension only, adjusted HR of 3.69, p = .0047 for those with eGFR<60 ml/min.However among those with diabetes and hypertension, the adjusted HR was 1.32, p = .61 and among those with diabetes only, adjusted HR was 4.00, p = .28.Those with any diabetes, adjusted HR for stroke occurrence was 1.50, p = .42 for an eGFR<60 ml/min.In this prospective hospital based cohort of Ghanaians with hypertension and diabetes mellitus, we found CKD assessed using estimated glomerular filtration rate to be independently associated with stroke occurrence.At baseline, we observed a high frequency of renal impairment among the cohort with approximately 15% having eGFR <60 ml/min.Incident stroke risk increased with declining renal function among a clinically stroke-free cohort at enrollment who were followed up prospectively.Overall, stroke risk increased by 42% among participants with eGFR of 60 to 89 ml/min, by 88% among those with eGFR between 30 and 59 ml/min and by 52% among those with eGFR<30 ml/min compared with participants with normal eGFR.Among patients with hypertension, an eGFR<60 ml/min was significantly associated with an adjusted HR of 3.69 for stroke occurrence while that among diabetic patients did not reach statistical significance.This to the best of our knowledge is the first study in sub-Saharan Africa to prospectively assess the predictive associations between renal dysfunction and stroke risk among individuals on treatment for hypertension and/or diabetes.A meta-analysis performed nearly a decade ago involving a prospective cohort of 285,000 participants from 33 studies with 7900 strokes demonstrated a 43% increased risk for stroke occurrence for eGFR <60 ml/min compared with those with normal baseline eGFR .In that seminal meta-analysis, general or hypertensive patients with eGFR < 60 ml/min had a 58% increased risk of stroke compared with those with normal baseline eGFR and Asians had higher risk of stroke from CKD than non-Asians but no African cohorts were included in that study .In the present study, Ghanaian hypertensive patients with eGFR < 60 ml/min had a 269% higher risk of stroke, p = .0047 but no significant association were observed among diabetics.The toxic bidirectional relationship between CKD and cerebrovascular diseases, known as cerebro-renal interaction has gathered scientific momentum over the past decade.CKD is independently associated with both ischemic and hemorrhagic stroke types as well as with recurrent strokes .CKD is associated with cerebral microbleeds, which are clinically silent, discrete punctate hypodense lesions 5–10 mm in size evidenced on gradient-recall echo T2*-weighted MRI .Cerebral microbleeds are important harbingers of intracerebral hemorrhage which are commoner among indigenous Africans and African Americans than European Americans .Interestingly, declining renal function is closely associated with a surge in vascular inflammation, oxidative stress and anemia which may potentiate the occurrence of cardiovascular events via generalized endothelial dysfunction and systematic vascular remodeling .The salience of renal impairment and increased stroke risk particularly among individuals of African ancestry may have strong genetic underpinnings based on more recent evidence .The SIREN investigators involved in the largest study on stroke in Africa to date, were the first to demonstrate an association between APOL1 variant rs73885319 and small vessel 
disease strokes among West Africans .Among African Americans, the APOL1 gene has been propounded as a risk locus for chronic kidney disease and West Africans, among whom small vessel disease strokes are the commonest stroke phenotype, have the highest frequencies of APOL1-associated kidney variants on the globe with the Akans of Ghana reporting 43.6% and that among the Yorubas of Nigeria being 34.2%.However, it is worth emphasizing the conflicting reports of associations between APOL1 renal related variants and occurrence of strokes from large prospective studies."While the Jackson Heart Study , Women's Health Initiative and Cardiovascular Health Study found evidence linking APOL1 variants to adverse CVD events such as strokes, the Systolic Blood Pressure Intervention Trial , the Atherosclerosis Risk in Communities , and African American Study of Kidney Disease and Hypertension in contrast found no such evidence highlighting the need for further studies to resolve this conundrum.Our findings have important clinical and public health implications in SSA which now bears a rapidly growing burden of hypertension, diabetes, CKD and stroke globally.Strategic investments in primary prevention of stroke are of paramount importance, given the dire limitations in funding for health infrastructure and availability of skilled personnel to managed CVD in Africa.Regular monitoring of renal function of patients with hypertension and/or diabetes mellitus may be a cost effective strategy in SSA given the dose-response association observed between stages of CKD and risk of stroke occurrence in the present study.This will help identify individuals at high risk of stroke for intensification of control of blood pressure or blood glucose, and reduction of proteinuria to avert the progression of kidney disease and occurrence of CVDs.Health systems should be strengthened to support the use of evidence-based interventions such as Renin-Angiotensin-Aldosterone-System modulators and statins in SSA.A limited sub-analysis performed in the present cohort did not find evidence to support moderation of stroke risk from the use of either ACE-inhibitors or Angiotensin Receptor blockers.This result should however be interpreted with caution given that the indications for use of ACEI or ARBs among participants in the present study were unknown.A systematic review of ischemic stroke survivors on either ACE inhibitors or ARBs showed a modest reduction in risk for recurrent strokes as a secondary prevention strategy, thus a larger sample size may be required to show benefit or otherwise of ACE-I and ARBs in primary prevention of stroke in SSA.Importantly, as the weight of evidence supporting the plausibility of CKD as a risk factor for stroke and indeed other CVDs becomes substantial, the question of whether eGFR is simply a risk marker or is etiologically linked to stroke occurrence will require further interrogation.For instance, among Ghanaians in the present cohort, eGFR was independently associated with poor glycemic and blood pressure control , and poor BP control is potently associated with stroke occurrence .However, an interesting nationwide cohort in Taiwan found evidence to suggest that CKD is a causal risk factor for stroke beyond traditional cardiovascular risk factors .The prospective design of our study is an important strength because it minimizes confounding from recall bias, selection bias and reverse causality associations.Furthermore, participants of the study were recruited from hospitals at primary, 
secondary and tertiary cadres of healthcare distributed across Ghana, which enhances the generalizability of our study findings.However, the lack of confirmation of stroke with neuro-imaging, which is considered as the gold standard is a limitation worth noting.We relied on clinical assessments by study physicians to confirm stroke diagnosis which may be subject to diagnostic misclassification.Unavailability of CT scans at most of the study sites precluded us from radiologic confirmation of clinically suspected strokes.There is a possibility that we missed some severe or fatal stroke cases who did not report to clinic for follow-up since strokes were assessed only among participants who reported for follow-up visits.However, when participants missed clinic visits we followed up with telephone calls to ascertain reasons for default.Well known vascular risk factors such as atrial fibrillation and dyslipidemia were not systematically assessed because electrocardiography were not undertaken for all participants and only a fourth of study population was supported by the study to cover lipid panel costs.In spite of these limitations, we believe our study contributes to literature by providing evidence from West Africa to strengthen the reported associations between chronic kidney disease and stroke occurrence on the globe.More cohort studies of such nature are needed to assess the impact of renal disease on strokes occurrence in low-and-middle income countries in SSA.In conclusion, chronic kidney disease is dose-dependently associated with occurrence of incident strokes among Ghanaians with hypertension and diabetes mellitus.Further studies are required to explore interventions that could attenuate the risk of stroke attributable to renal disease among patients with hypertension in SSA.FSS, LMM, JPR, DA and DO-A designed the study and planned analyses, and FSS wrote the first draft of the report.FSS performed statistical analyses.All authors contributed to the collection of data, discussions and interpretation of the data, and to writing of the manuscript.FSS had full access to the data.All authors reviewed and approved drafts of the report.The authors do not have any competing interests.Funding for this study was provided by MSD, Novartis, Pfizer, Sanofi and the Bill and Melinda Gates Foundation through the New Venture Fund.The NVF is a not-for-profit organization exempt as a public charity under section 501 of the United States Internal Revenue Code of 1986, and assumes financial management of the study as a fiduciary agent and primary contractor for the Funders."Consistent with anti-trust laws that govern industry interactions, each Participant Company independently and voluntarily will continue to develop its own marketing and pricing strategies reflecting, among other factors, the Company's product portfolios and the patients it serves.For the avoidance of doubt, the Participant Companies committed not to: discuss any price or marketing strategy that may involve any Project-related product; or make any decision with respect to the presence, absence or withdrawal of any Participant Company in or from any therapeutic area; or discuss the launching, maintaining or withdrawing of any product in any market whatsoever.Each Participant Company is solely responsible for its own compliance with applicable anti-trust laws.The Funders were kept apprised of progress in developing and implementing the study program in Ghana but had no role in study design, data collection, data analysis or in study 
report writing.FSS was supported by National Institute of Health-National Institute of Neurological Disorders & Stroke; R21 NS094033. | Background: Sub-Saharan Africa is currently experiencing a high burden of both chronic kidney disease (CKD) and stroke as a result of a rapid rise in shared common vascular risk factors such as hypertension and diabetes mellitus. However, no previous study has prospectively explored independent associations between CKD and incident stroke occurrence among indigenous Africans. This study sought to fill this knowledge gap. Methods: A prospective cohort study involving Ghanaians adults with hypertension or type II diabetes mellitus from 5 public hospitals. Patients were followed every 2 months in clinic for 18 months and assessed clinically for first ever stroke by physicians. Serum creatinine derived estimated glomerular filtration rates (eGFR) were determined at baseline for 2631 (81.7%) out of 3296 participants. We assessed associations between eGFR and incident stroke using a multivariate Cox Proportional Hazards regression model. Results: Stroke incidence rates (95% CI) increased with decreasing eGFR categories of 89, 60–88, 30–59 and <29 ml/min corresponding to incidence rates of 7.58 (3.58–13.51), 14.45 (9.07–21.92), 29.43 (15.95–50.04) and 66.23 (16.85–180.20)/1000 person-years respectively. Adjusted hazard ratios (95%CI) for stroke occurrence according to eGFR were 1.42 (0.63–3.21) for eGFR of 60-89 ml/min, 1.88 (1.17–3.02) for 30-59 ml/min and 1.52 (0.93–2.43) for <30 ml/min compared with eGFR of >89 ml/min. Adjusted HR for stroke occurrence among patients with hypertension with eGFR<60 ml/min was 3.69 (1.49–9.13), p = .0047 and among those with diabetes was 1.50 (0.56–3.98), p = .42. Conclusion: CKD is dose-dependently associated with occurrence of incident strokes among Ghanaians with hypertension and diabetes mellitus. Further studies are warranted to explore interventions that could attenuate the risk of stroke attributable to renal disease among patients with hypertension in SSA. |
502 | Diagnosis of Microvascular Angina Using Cardiac Magnetic Resonance | Fifty patients with angina and suspected or known CAD referred for outpatient diagnostic coronary angiography were recruited for study.Patients underwent CMR scans at 2 commonly used clinical field-strengths: 1.5-T or 3-T.We also recruited 20 age-matched healthy control subjects to undergo CMR at 1.5-T and 3-T using the same CMR scanners and protocol as the study patients.Perfusion measures were comparable between the 1.5-T and 3-T scans in control subjects and patients.All study procedures were approved by a local ethics committee, and all subjects provided written informed consent.All subjects abstained from caffeine for 24 h before the CMR.The CMR was performed by using established techniques, including cine, adenosine stress and rest perfusion, and late-gadolinium enhancement imaging, as previously described.All subjects had good hemodynamic stress response.In addition, 60% of patients and all control subjects had a significant drop in systolic blood pressure during stress.Within 7 days post-CMR, all 50 patients underwent invasive coronary angiography.Quantitative coronary angiography was also performed offline by using Medcon QCA software, as previously described.FFR and IMR were measured, as described elsewhere, by operators blinded to the research CMR.IMR was corrected by using the Yong formula to account for any effects of collateral circulation.In total, 28 patients had significant epicardial CAD.In these patients, 86% of epicardial lesions were functionally obstructive.The remaining 22 patients all had angiographic NOCAD, where 100% of coronary arteries were FFR-negative, and IMR was assessed in all 66 vessels.All CMR images were analyzed by using the commercially available cmr42 software.Myocardial perfusion images were analyzed as previously described, blinded to clinical information, other CMR, and invasive coronary data.MPRI was derived semi-quantitatively as the stress/rest ratio of myocardial signal intensity upslopes, normalized to the arterial input function.Absolute quantification of MBF was performed by using model-independent Fermi deconvolution of myocardial and arterial input signal intensity curves, as previously described.To enable correlation between perfusion measures and invasive coronary measurements on a per-vessel basis, myocardial perfusion images were segmented and allocated to each coronary artery territory according to the American Heart Association’s 16-segment model, accounting for coronary artery dominance.Segmental MPRI and MBF values were then averaged for each coronary artery territory and matched to FFR and IMR data for further analysis.Left ventricular function and LGE imaging were analyzed as previously described.An optimal MPRI threshold for symptomatic inducible ischemia on stress perfusion CMR was first derived using the 28 patients with angina and obstructive epicardial CAD and the 20 normal control subjects.A receiver-operating characteristic curve was used with myocardium downstream of FFR ≤0.8 vessels as true-positives for ischemia and normal controls as true-negatives.This MPRI threshold was then applied in 22 patients with 3-vessel NOCAD to determine its diagnostic performance for detecting significantly impaired perfusion due to CMD with high IMR.All continuous variables were normally distributed, as checked by using the Kolmogorov-Smirnov test, and are expressed as mean ± SD.Each patient with 3-vessel NOCAD contributed 3 IMR values, and the intraclass 
correlation coefficient was calculated to determine the need to adjust the data for clustering.The intraclass correlation coefficient was very low for IMR, indicating that the values were not strongly related within the same patient; the relations between IMR and CMR data were analyzed on a per-vessel basis.Due to the highly statistically significant comparisons observed throughout the study, we used a conservative approach to compensate for potential multiple comparisons and any remaining within-individual correlations of IMR data by reducing the threshold p value from the conventional 0.05 to 0.01.This approach likely overcompensates for the worst-case scenario of 3 fully dependent variables within the same individual.For all analyses, p values <0.01 were considered statistically significant.Comparisons between 2 separate data groups were performed by using unpaired Student’s t-test.Comparisons between ≥3 separate data groups were performed by using analysis of variance with a Bonferroni post hoc method.Categorical data were compared by using the Fisher exact test.Correlations were assessed by using the Spearman’s rank correlation coefficients.For ROC analysis, area under the curve with 95% confidence intervals are reported, as well as sensitivity, specificity, accuracy, positive predictive values, and negative predictive values where appropriate.All analyses were performed by using MedCalc version 12.7.8.Subject characteristics are summarized in Table 1.The 28 patients with obstructive epicardial CAD contributed a total of 30 FFR-positive coronary arteries to the analysis, which were also angiographically significant on QCA.The 22 patients with NOCAD contributed a total of 66 FFR-negative vessels, with minimal angiographic disease on QCA.Table 2 presents a summary of coronary physiology measures.In patients with NOCAD, IMR was not significantly affected by the Yong formula corrections, suggesting minimal influence from collateral circulations.IMR was not significantly correlated to FFR.Myocardial infarct scars were detected in 4 of 28 patients with obstructive CAD on CMR LGE imaging.Patients with NOCAD and control subjects had no scars on LGE.To present a true representation of myocardial perfusion in noninfarcted myocardium, segments with scars were excluded.This method did not lead to exclusion of any patients.As a reference, myocardium downstream of obstructive CAD had significantly lower MPRI than control subjects.Downstream of NOCAD vessels, MPRI correlated significantly with IMR and coronary flow reserve but not with FFR.CMD was defined as myocardium with IMR ≥25 U downstream of NOCAD vessels, as previously described.Myocardium with IMR <25 U had similar MPRI compared with normal control subjects.In contrast, myocardium with IMR ≥25 U had impaired MPRI, similar to myocardium downstream of obstructive CAD in patients with angina.An MPRI threshold of 1.4 was optimal for detecting inducible myocardial ischemia from obstructive CAD.Myocardium downstream of obstructive CAD served as true-positives; normal myocardium of control subjects served as true-negatives.This threshold for inducible ischemia was then applied to patients with 3-vessel NOCAD.The MPRI threshold of 1.4 also accurately detected inducible ischemia due to CMD, with a specificity of 95%, a sensitivity of 89%, and accuracy of 92%.An MPRI threshold of 1.6 yielded a high negative predictive value and sensitivity for ruling out significant inducible ischemia due to CMD.Downstream of nonobstructive FFR ≥0.8 coronary arteries 
in patients with obstructive CAD, the same MPRI threshold of 1.4 also accurately detected CMD.To further understand the impaired MPRI observed in NOCAD, absolute quantification of MBF was performed at rest and during stress.Resting MBF was similar between normal control subjects, myocardium downstream of epicardial CAD, myocardium downstream of NOCAD with IMR <25 U and myocardium downstream of NOCAD with IMR ≥25 U, p = 0.76.During adenosine stress, myocardium downstream of obstructive epicardial CAD had significantly lower stress MBF than normal control subjects.Downstream of NOCAD, myocardium with IMR ≥25 U had a similar degree of impairment in stress MBF as myocardium downstream of obstructive epicardial CAD.Interestingly, although myocardium with IMR <25 U had higher stress MBF than both myocardium with IMR ≥25 U and myocardium downstream of obstructive CAD, it was still significantly blunted compared with that of healthy age-matched control subjects.The quantitatively derived MPR showed a similar pattern as for semi-quantitatively derived MPRI.On ROC analysis, semi-quantitative MPRI, quantitative MPR, and stress MBF alone all had similar diagnostic performance for detecting impaired perfusion due to CMD.Figure 5 presents the assessment of a patient with microvascular angina using CMR MPRI.In patients with NOCAD and normal IMR but blunted stress MBF compared with control subjects, stress MBF was not significantly correlated to IMR.This impaired augmentation of stress MBF suggests possible mild or early CMD, insensitive to detection with the use of ratio-based measures.A stress MBF threshold of 2.3 ml/min/g distinguished this mild CMD from normal control subjects with 100% specificity and 100% positive predictive value,.The present study used adenosine stress CMR to objectively assess inducible ischemia due to CMD in patients with angina and NOCAD, as validated against the IMR.Impairments in MPR due to CMD were driven by blunted augmentation of hyperemic MBF and were comparable to ischemic myocardium downstream of FFR-positive obstructive CAD.An MPRI threshold of 1.4 accurately detected significant CMD-related hypoperfusion.Furthermore, a quantitatively derived stress MBF threshold of 2.3 ml/min/g can detect mild CMD.Integration of MPRI and MBF assessment into the clinical CMR workflow can provide a noninvasive approach for evaluating both epicardial and microvascular CAD in patients with angina, which deserves further validation in an all-comers population.In stress perfusion CMR, obstructive epicardial CAD leads to regional perfusion defects that can be visually distinguished from areas without perfusion defects.In the absence of obstructive epicardial CAD, CMD may also induce myocardial hypoperfusion, but this process rarely results in regional or global perfusion defects that can be assessed visually.Furthermore, qualitative assessment of hypoperfusion as a binary “yes/no” output cannot inform about the severity or the distribution of microvascular disease.Advances in CMR image post-processing methods enabled detailed examination of MPR and MBF, which are well validated for detecting obstructive CAD.However, because visual assessment of perfusion images is already accurate for detecting obstructive CAD in routine clinical workflow, these more sophisticated post-processing methods have largely remained in the realm of research.For detecting microvascular inducible ischemia, however, these more advanced methods are invaluable because visual assessment is not possible.In previous 
studies, microvascular ischemia has largely been a diagnosis of exclusion, rather than being objectively demonstrated, due to either the complete lack of validation against invasive reference standards for CMD or validation against invasive markers that are not specific for the microcirculation, such as coronary flow reserve or coronary reactivity testing.Moreover, nonobstructive coronary arteries in previous studies were defined according to angiographic appearances alone, which informs little about their physiological significance.This limitation of previous studies introduces disease heterogeneities, leading to conflicting results, which render the derivation of objective diagnostic thresholds highly challenging thus far.As a representative threshold for inducible ischemia, an MPRI cutoff based on myocardium downstream of obstructive epicardial CAD was established, defined using the clinically accepted FFR method.This threshold then accurately detected significantly impaired myocardial perfusion due to CMD in a separate group of patients with angiographically and physiologically nonobstructive coronary arteries, as referenced to invasive IMR.In this way, we adjudicated that myocardial perfusion deficits were related to an invasive marker of CMD, enabling the derivation of an objective threshold on CMR for diagnosing microvascular ischemia.The impaired MPR downstream of NOCAD with high IMR being similar to downstream FFR ≤0.8 CAD supports the presence of microvascular inducible ischemia that could account for angina symptoms.This perfusion reserve impairment was driven by blunted hyperemic MBF, with normal resting MBF, indicating a functional vasodilatory deficit, despite achieving good hemodynamic response to adenosine stress in all these patients.Overall, it would seem that when myocardial perfusion becomes significantly impaired, symptoms of angina would ensue, whether this outcome is due to obstructive epicardial CAD or coronary microvascular dysfunction.Intriguingly, myocardium without significant epicardial CAD or CMD still had blunted stress MBF compared with that of normal control subjects.Here, CMR detects changes in MBF and may be sensitive to early or mild changes in CMD.Furthermore, this possible mild CMD in patients with stable angina was insensitive to detection with the use of ratio-based measures such as MPRI or quantitative MPR, which remained indistinguishable from normal.The underlying mechanisms for this observation, including structural and/or functional abnormalities, deserve further investigation.Absolute quantification of MBF represents a strength of CMR in the comprehensive, noninvasive assessment of these patients.CMD is classically described as a global phenomenon across the myocardium.Although this description is consistent with our results in patients with NOCAD and 3-vessel high IMR who had globally impaired MPRI with little regional variations, approximately one-half of our patients with NOCAD had a combination of vessels with high IMR and vessels with normal IMR.This interesting finding revealed a coronary-specific distribution of microvascular dysfunction in some patients; in fact, as the number of vessels with high IMR increased in each patient, there was progressive worsening of the global MPRI.This outcome may explain the heterogeneities in myocardial perfusion seen in previous studies, in which patients with angiographic NOCAD but varying distribution of CMD may have been studied.Clinically, the knowledge of single-vessel versus multivessel CMD may 
offer better risk stratification and disease monitoring tailored to the individual patient.This concept deserves further investigation.Because stress perfusion CMR is excellent for ruling out obstructive epicardial CAD, integration of MPRI assessment may enable a dual evaluation of epicardial CAD and microvascular dysfunction in a single CMR scan.MPRI can be assessed by using commercially available software for direct clinical application, and stress MBF may enable the detection of subtle microvascular dysfunction, when the post-processing methods become available beyond experienced centers.The prognostic implications of these approaches and their roles in guiding clinical management of patients with angina are the subject of active research.There are 3 major clinical dilemmas surrounding patients with angina and NOCAD.First, objectively diagnosing microvascular angina is challenging due to the lack of noninvasive reference standard tests in clinical practice.Second, even when the clinical suspicion of microvascular angina is high, the clinician is hampered by a limited armamentarium of disease-modifying therapies for CMD.Patients are therefore either started empirically on antianginal medications or not treated at all.Third, there is currently a lack of objective methods for disease monitoring; hence, the true natural history of CMD progression in the clinical arena remains unclear.Objective diagnostic thresholds using CMR can offer patients with microvascular angina an objective explanation for their symptoms, which can improve psychological well-being.For clinicians, these markers may allow them to provide a more confident diagnosis for the patient, a firmer indication for commencing medical therapy, and potentiate the development and testing of novel therapies for microvascular ischemia.In addition to monitoring the changes in symptoms over time, which can be subjective, MPRI may provide an objective disease-monitoring tool in patients with angina and NOCAD.The prognostic values of MPRI and MPRI-guided therapy are important topics of active ongoing research.This study was conducted in a single tertiary-care center with a relatively small number of patients with NOCAD, although the 3-vessel assessment of coronary physiology with matching multiparametric CMR data is unique.Admittedly, there is currently no true “gold standard” marker for myocardial ischemia, and this fact remains a limitation of most similar studies.In this study, CMR was chosen due to its high spatial resolution for assessing myocardial perfusion, and IMR was used as a reproducible invasive marker for CMD.The combination of high IMR ≥25 U and impaired MPRI, similar to downstream of significant FFR-positive epicardial CAD, strongly supported the presence of microvascular inducible ischemia in patients with NOCAD.Although CMR perfusion imaging achieved high diagnostic performance for diagnosing microvascular ischemia in this study, a CMR-based comprehensive diagnostic pathway that enables pre-angiography clinical decision-making in patients with angina and NOCAD deserves further validation in a larger prospective study.Future studies using position emission tomography and advanced metabolic imaging, as well as other invasive methods, may be further informative for defining cellular myocardial ischemia in the context of CMD.In angina patients with NOCAD, CMR can objectively and noninvasively assess microvascular angina.A CMR-based combined diagnostic pathway for both epicardial and microvascular CAD deserves further 
clinical validation.COMPETENCY IN PATIENT CARE AND PROCEDURAL SKILLS: In patients with angina and nonobstructive coronary atherosclerosis, stress CMR can diagnose microvascular angina in viable myocardium using an MPRI threshold of 1.4, as validated against an elevated invasive IMR.The degree of impaired perfusion is similar to that in patients with angina due to obstructive CAD.TRANSLATIONAL OUTLOOK: Further research is required to determine the prognostic implications of MPRI thresholds to guide therapy and monitor patients for coronary microvascular dysfunction. | Background: In patients with angina and nonobstructive coronary artery disease (NOCAD), confirming symptoms due to coronary microvascular dysfunction (CMD) remains challenging. Cardiac magnetic resonance (CMR) assesses myocardial perfusion with high spatial resolution and is widely used for diagnosing obstructive coronary artery disease (CAD). Objectives: The goal of this study was to validate CMR for diagnosing microvascular angina in patients with NOCAD, compared with patients with obstructive CAD and correlated to the index of microcirculatory resistance (IMR) during invasive coronary angiography. Methods: Fifty patients with angina (65 ± 9 years of age) and 20 age-matched healthy control subjects underwent adenosine stress CMR (1.5- and 3-T) to assess left ventricular function, inducible ischemia (myocardial perfusion reserve index [MPRI]; myocardial blood flow [MBF]), and infarction (late gadolinium enhancement). During subsequent angiography within 7 days, 28 patients had obstructive CAD (fractional flow reserve [FFR] ≤0.8) and 22 patients had NOCAD (FFR >0.8) who underwent 3-vessel IMR measurements. Results: In patients with NOCAD, myocardium with IMR <25 U had normal MPRI (1.9 ± 0.4 vs. controls 2.0 ± 0.3; p = 0.49); myocardium with IMR ≥25 U had significantly impaired MPRI, similar to ischemic myocardium downstream of obstructive CAD (1.2 ± 0.3 vs. 1.2 ± 0.4; p = 0.61). An MPRI of 1.4 accurately detected impaired perfusion related to CMD (IMR ≥25 U; FFR >0.8) (area under the curve: 0.90; specificity: 95%; sensitivity: 89%; p < 0.001). Impaired MPRI in patients with NOCAD was driven by impaired augmentation of MBF during stress, with normal resting MBF. Myocardium with FFR >0.8 and normal IMR (<25 U) still had blunted stress MBF, suggesting mild CMD, which was distinguishable from control subjects by using a stress MBF threshold of 2.3 ml/min/g with 100% positive predictive value. Conclusions: In angina patients with NOCAD, CMR can objectively and noninvasively assess microvascular angina. A CMR-based combined diagnostic pathway for both epicardial and microvascular CAD deserves further clinical validation. |
503 | Development of NanoLuc bridging immunoassay for detection of anti-drug antibodies | The first antibody to be approved by FDA for therapeutic use was Muromonab, an anti CD3 mouse monoclonal antibody, for prevention of kidney transplant rejection.Being a mouse monoclonal antibody, it elicited a strong immune response in the patients including development of ADAs and resulted in severe side effects.Muromonab was followed by the approval of the first chimeric antibody in 1995, the humanized antibody Daclizumab in 1999, and finally, the first fully human antibody in 2003.With improvements in antibody structure, the rate of adverse immune responses dropped from > 70% for Muronomab to < 10% for fully human antibodies.Nevertheless, the presence of even low concentrations of ADAs can have a serious impact on the efficacy, pharmacokinetics profile, and safety of the drugs.ADAs can form immune complexes with the drug and cause rapid clearance and reduced bioavailability.Neutralizing ADAs, which are a subset of total ADA response, can reduce the drug efficacy by preventing the drug from binding to the target.In light of such significant impact of ADAs on patient health, regulatory bodies such as the FDA and EMA require strict monitoring of immunogenicity during drug development, clinical testing, and post launch.A variety of formats for ADA detection have been proposed including the bridging immunoassay, affinity capture elution assay, solid-phase extraction with acid dissociation, precipitation and acid dissociation assay, and antigen binding test.Among these, the bridging immunoassay is most frequently used and involves incubating samples containing ADAs with a ‘bridging mixture’, which is a mixture of drug labeled with a biotin for capture and the same drug labeled with a separate tag used for detection.ADAs make bridges with the two differently labeled drug molecules in solution and form bridging complexes.These complexes are subsequently captured on a streptavidin plate via the biotin tag, and are quantitated using the detection tag.Bridging immunoassays are however, susceptible to the presence of free drug in the sample, which will compete with the labeled drug for formation of bridging complexes and will reduce the signal or can even cause false negative results.Tolerance for free drug in such cases is improved by incorporating an acid-dissociation step to dissociate ADA-Drug immune complexes followed by neutralization in the presence of bridging mixture.During neutralization, some of the ADAs form bridge complexes and are captured on the streptavidin plates and detected.Bridging immunoassays are frequently performed on a traditional ELISA platform or on electrochemiluminescence platform developed by MSD.With the traditional ELISA format, high sensitivity is achieved by using a bridging mixture containing drug labeled with biotin along with Digoxigenin labeled drug for detection.After capture on the Streptavidin plate, bridge complexes are detected by incubation with an anti-Digoxigenin antibody labeled with HRP.This approach has two different incubation and washing steps and requires a secondary antibody labeled with HRP.Direct labeling of drugs with HRP has been tried to simplify the workflow but the resulting assays were less sensitive.Unlike ELISA, the ECL platform involves direct labeling of drug with electrochemiluminescent labels for detection, therefore, eliminating the secondary antibody incubation step, and simplifying the protocol.In addition, electrochemiluminescence signal 
results in a sensitive assay with at least a three-log-order dynamic range and better drug tolerance. However, the ECL platform requires specialized and proprietary equipment, so there is a desire for alternative assay formats that have the performance advantages and simple workflow of the ECL platform but can be performed on widely available ELISA platforms. In this paper, we describe the development of a bridging immunoassay based on a traditional ELISA format in which NanoLuc luciferase is used as the detection label. NanoLuc is a small monomeric enzyme that produces a bright, stable, glow-type light, 100-fold brighter than traditional luminescent reporters such as Firefly luciferase and Renilla luciferase. In addition, NanoLuc can be genetically fused with a variety of proteins, including antibodies, and as a result several novel biological applications have been developed, including bioluminescence resonance energy transfer for protein-protein interactions and highly sensitive ELISAs. We hypothesized that NanoLuc would bring unique capabilities to bridging immunoassays: drugs can be labeled recombinantly with NanoLuc, which may improve assay performance compared with chemically labeled drug-HRP conjugates; a drug labeled directly with NanoLuc that meets the desired assay specifications would eliminate the labeled secondary antibody and simplify the workflow; and the bright luminescence signal from NanoLuc would provide the high sensitivity and broad dynamic range seen in the luminescence-based MSD platform. Although recombinant labeling of drugs is a key advantage of NanoLuc, a novel method for chemical labeling of drugs with NanoLuc was also developed, and the assay performances of the two differently labeled drugs were similar. To test our hypothesis, we selected Trastuzumab and Cetuximab as model drugs and evaluated the NanoLuc bridging immunoassay for detection of anti-Trastuzumab antibodies and anti-Cetuximab antibodies. There were two reasons for selecting these model systems: first, sequences for Trastuzumab and Cetuximab are available in the public domain, which allowed us to make recombinant fusions of the drugs with NanoLuc and evaluate them as detection reagents; second, ATA and ACA were commercially available for use as positive controls to optimize the NanoLuc bridging immunoassays. With these two model systems, we demonstrate a sensitivity of 1.0 ng/mL and a broad dynamic range of four log orders for human serum samples spiked with ATAs and ACA. Furthermore, the assay was optimized for high drug tolerance and, at the FDA-recommended assay sensitivity of 100 ng/mL, can tolerate a > 500-fold excess of free drug. Trastuzumab and Cetuximab were labeled with amine-reactive long-chain biotin using the manufacturer's suggested protocol. Briefly, drugs were dialyzed into 100 mM bicarbonate buffer and reacted with a twenty-fold molar excess of amine-reactive biotin for 1 h. Unreacted biotin was removed with a Zeba desalting column, and the concentrations of the biotin-labeled drugs were calculated by measuring absorbance at 280 nm.
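As a rough illustration of the reagent quantities involved in this kind of labeling step, the sketch below estimates the mass of an amine-reactive biotin reagent needed for a twenty-fold molar excess over a given amount of antibody. The molecular weights and antibody quantity used here are generic placeholder values, not figures taken from the study.

```python
# Minimal sketch: reagent needed for an N-fold molar excess over an antibody.
# MW values and antibody amount below are illustrative assumptions only.

MW_IGG = 150_000          # g/mol, typical IgG such as Trastuzumab or Cetuximab
MW_BIOTIN_REAGENT = 556   # g/mol, generic NHS-LC-biotin-type reagent (assumed)

def reagent_mass_ug(antibody_mg: float, molar_excess: float = 20.0) -> float:
    """Mass of amine-reactive biotin reagent (ug) for the requested molar excess."""
    antibody_mol = (antibody_mg / 1000.0) / MW_IGG          # mg -> g -> mol
    reagent_mol = antibody_mol * molar_excess
    return reagent_mol * MW_BIOTIN_REAGENT * 1e6            # g -> ug

# Example: labeling 1 mg of antibody at a twenty-fold molar excess.
print(f"{reagent_mass_ug(1.0):.1f} ug of biotin reagent")
```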
Trastuzumab and Cetuximab fused with NanoLuc at the C-terminus of the heavy chain were custom ordered from Absolute Antibody. Fusions were made using publicly available Trastuzumab and Cetuximab sequences, with expression in HEK293. For purification, a HisTag was added after NanoLuc and purification was done using a Ni2+ column. Protein A or G columns typically used for antibody purification were not used because of the adverse effect of low-pH elution on NanoLuc activity. A non-denaturing SDS-PAGE gel was used to characterize the NanoLuc-conjugated drugs. In the rest of the paper, Trastuzumab and Cetuximab recombinantly labeled with NanoLuc are referred to as Trastuzumab-rNanoLuc and Cetuximab-rNanoLuc, respectively. Genetic fusion of NanoLuc with HaloTag was achieved by combining existing NanoLuc and HaloTag sequences separated by a short Gly-Ser-Ser-Gly linker. Purification was facilitated by transferring the fusion, using flanking SgfI and XbaI sites, into a modified pF1K Flexi vector which added an N-terminal 6His purification tag with the sequence Met-Lys-His-His-His-His-His-His-Ala-Ile-Ala. A glycerol stock of E. coli expressing the HisTag-NanoLuc-HaloTag fusion protein was used to inoculate 50 mL starter cultures, which were grown overnight at 37 °C in LB media containing 25 μg/mL kanamycin. Starter cultures were diluted 1:100 into 500 mL fresh LB media containing 25 μg/mL kanamycin, 0.12% glucose, and 0.2% rhamnose. Cultures were grown for 22–24 h at 25 °C. Cells were pelleted by centrifugation for 30 min at 4 °C and re-suspended in 50 mL PBS. 1 mL protease inhibitor cocktail, 0.5 mL RQ1 DNase, and 0.5 mL of 10 mg/mL lysozyme were added, and the cell suspension was incubated on ice with mild agitation for 1 h.
Cells were lysed by sonication at 15% power in 5 s intervals for 1.5 min and subsequently centrifuged at 10,000 rpm for 30 min at 4 °C. The supernatant was collected and the protein purified using HisTag columns, following the manufacturer's recommended protocol. Protein was eluted using 500 mM imidazole, dialyzed in PBS, characterized using an SDS-PAGE gel, and was > 95% pure. Trastuzumab and Cetuximab were labeled with amine-reactive HaloTag Succinimidyl Ester Ligand using a protocol similar to that used for biotin labeling. Drugs activated with HaloTag ligand were incubated for at least 2 h with a four-fold molar excess of HisTag-NanoLuc-HaloTag fusion protein to allow covalent attachment. A non-denaturing SDS-PAGE gel was used to characterize the NanoLuc-conjugated drugs. In the rest of the paper, Trastuzumab and Cetuximab chemically labeled with HisTag-NanoLuc-HaloTag are referred to as Trastuzumab-cNanoLuc and Cetuximab-cNanoLuc, respectively. Recombinant HER2 and EGFR proteins were diluted to 2.0 μg/mL in 100 mM bicarbonate buffer and 50 μL was added to the wells of white high-binding 96-well plates. The plates were incubated for 1–2 h at room temperature with mild agitation to allow the proteins to adsorb to the plate. The plates were subsequently blocked by 1 h incubation with Super Block. Labeled and unlabeled Trastuzumab or Cetuximab were serially diluted into Super Block and added to the HER2- or EGFR-coated plates, respectively. Plates were incubated for 1 h at RT with mild agitation. Plates were washed three times with PBS containing 0.05% Tween 20, then incubated for 1 h with anti-Human-IgG-HRP conjugate diluted 1:5000 in Super Block. Plates were washed three times with PBST, then once with PBS; Super Signal ELISA Pico Chemiluminescent HRP substrate was added and the plates were read on a Tecan Genios Pro plate reader. Affinity measurements for the three ATAs and one ACA were performed by Neoclone on the Octet QK using streptavidin biosensors. Biotinylated drugs (Trastuzumab or Cetuximab) were immobilized on the streptavidin biosensor by incubating sensors with 15 μg/mL of antibody for 20 min at 1000 rpm. Typical immobilization levels were 1.2 nm for Cetuximab and 2.8 nm for Trastuzumab. Affinities for ATA or ACA were determined by running association and dissociation at seven concentrations between 60 and 0.9 μg/mL. Sample dilutions were made in PBS, pH 7.4. Association was measured for 900 s and the dissociation phase for 3600 s. Data were fitted to a 1:1 interaction model using ForteBio data analysis software 6.4.1.2.
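Since the affinity constants quoted below come from fitting sensorgrams to this 1:1 interaction model, a minimal sketch of that model is given here for orientation; the rate constants, analyte concentration and maximum response are assumed, illustrative values and not the study's data.

```python
# Minimal sketch of the 1:1 (Langmuir) interaction model that biolayer
# interferometry sensorgrams are commonly fitted to.
import numpy as np

def association(t, ka, kd, conc, rmax):
    """Sensor response during the association phase."""
    kobs = ka * conc + kd
    return rmax * (ka * conc / kobs) * (1.0 - np.exp(-kobs * t))

def dissociation(t, kd, r0):
    """Sensor response during the dissociation phase, starting at response r0."""
    return r0 * np.exp(-kd * t)

# Illustrative kinetic constants for a sub-nanomolar binder (assumed values).
ka = 1.0e6        # 1/(M*s)
kd = 5.0e-4       # 1/s
conc = 100e-9     # 100 nM analyte
t_assoc = np.linspace(0, 900, 901)      # 900 s association, as in the text
t_dissoc = np.linspace(0, 3600, 3601)   # 3600 s dissociation, as in the text

r_assoc = association(t_assoc, ka, kd, conc, rmax=2.0)
r_dissoc = dissociation(t_dissoc, kd, r0=r_assoc[-1])

KD = kd / ka      # equilibrium dissociation constant from the fitted rates
print(f"KD = {KD:.2e} M")
```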
The ATAs and ACA used in these experiments were: ATA1; ATA2; ATA3; ACA1. Human serum samples spiked with ATA or ACA were added to a 96-well non-binding plate. The human serum used in these experiments was pooled human male serum. A stock solution of bridging mixture containing equal concentrations of biotin- and NanoLuc-labeled drug was prepared in phosphate buffered saline. For example, 5.0 μg/mL of bridging mixture contains 5.0 μg/mL each of biotin-labeled drug and NanoLuc-labeled drug. Bridging mixture was added to the samples and the plates were incubated for 1–2 h at room temperature to allow bridge complexes to form. Samples were subsequently transferred to a white 96-well high-capacity streptavidin plate and further incubated for 2 h to capture the bridge complexes. Wells were washed three times with PBST, followed by the addition of NanoGlo reagent to detect the captured bridge complexes. The luminescence signal was read on a Tecan GeniosPro instrument. Human serum samples spiked with ATA or ACA at 100 ng/mL and increasing concentrations of unlabeled Trastuzumab or Cetuximab, respectively, were incubated for 2 h at room temperature. This step allows the formation of immune complexes similar to those present in real samples, where free drug is present along with ADAs. Samples were subsequently acidified by mixing one volume of human serum sample with four volumes of 300 mM acetic acid and incubating for 1 h to dissociate the immune complexes formed in the previous step. In a separate 96-well non-binding plate, equal volumes of acidified sample, 1 M Tris neutralization buffer, and bridging mixture were added and incubated for 2 h to simultaneously neutralize the acidified samples and form bridge complexes with the labeled drug present in the bridging mixture. The mixture was subsequently transferred to high-capacity streptavidin plates and processed as described in the previous section. To calculate the lower limit of quantitation (LLOQ) and upper limit of quantitation (ULOQ) of the NanoLuc bridging immunoassay, plots were fitted to a 4-parameter logistic regression equation with a 1/Y2 weighting function. The original concentrations of the positive controls used in the assay were compared with the values obtained by interpolating from the fitted graphs. The LLOQ and ULOQ were the lowest and highest concentrations, respectively, at which back-fitted values were within 20% of the actual value and the %CVs of the replicate readings were < 20%. The limit of detection (LOD) was calculated as the concentration that gave a signal above the mean plus three standard deviations of the signal from a negative sample. In addition, the LOD was also used for the purpose of calculating drug tolerance. Drug tolerance is defined as the ratio of drug to ADA concentration at which the signal is still above the LOD. For this technology demonstration work, a single pooled human serum sample was used as the negative control, and the mean and standard deviation of analytical replicates were used for LOD and drug tolerance determination. However, for a validated immunogenicity assay, negative serum samples from 20 to 50 naïve human subjects, and the mean and standard deviation from these biological replicates, will be needed to establish the cut point for the purposes of drug tolerance.
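To make the curve-fitting and limit definitions above concrete, here is a minimal sketch of a 4PL fit with 1/Y2 weighting, back-calculation of standards for LLOQ/ULOQ screening, and an LOD defined as the negative-control mean plus three standard deviations. The standard concentrations, signals and negative-control readings are invented placeholder numbers, not data from this work.

```python
# Sketch of the data analysis described above: 4PL fit with 1/Y^2 weighting,
# back-calculation of standards (20% recovery criterion) and LOD = mean + 3 SD
# of a negative control. All numbers are placeholders, not study data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero analyte, d: upper plateau, c: inflection, b: slope
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_four_pl(y, a, b, c, d):
    # Back-calculate concentration from signal by inverting the 4PL.
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

conc = np.array([1.0, 10.0, 100.0, 1000.0, 10000.0])   # ng/mL standards (assumed)
true = (800.0, 1.0, 5000.0, 5.0e6)                     # assumed 4PL parameters
rng = np.random.default_rng(1)
rlu = four_pl(conc, *true) * rng.normal(1.0, 0.02, conc.size)

# sigma proportional to the signal gives the 1/Y^2 weighting used in the text.
popt, _ = curve_fit(four_pl, conc, rlu, p0=[1e3, 1.0, 1e3, 4e6], sigma=rlu,
                    bounds=([0.0, 0.1, 1.0, 1e5], [1e5, 5.0, 1e5, 1e8]),
                    maxfev=20000)

recovered = inverse_four_pl(rlu, *popt)
recovery_error = np.abs(recovered - conc) / conc * 100.0
in_range = conc[recovery_error <= 20.0]     # CV criterion omitted for brevity
print("LLOQ ~", in_range.min(), "ng/mL, ULOQ ~", in_range.max(), "ng/mL")

negative = np.array([980.0, 1010.0, 995.0, 1030.0, 970.0])  # negative-control RLU
lod_signal = negative.mean() + 3.0 * negative.std(ddof=1)
print("LOD signal threshold:", round(lod_signal, 1))
```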
The goal of this study was to evaluate the use of NanoLuc luciferase for ADA detection using the bridging immunoassay format shown in Fig. 1. Trastuzumab and Cetuximab were used as model drugs, and ADA immunoassays were optimized for detection of three different ATAs and one ACA. Briefly, the drugs were labeled with biotin and with NanoLuc and mixed in equal amounts to make a bridging mixture. Human serum samples containing ATA or ACA are mixed with the respective bridging mixture to form bridge complexes, which are captured on a white streptavidin plate, washed, and quantitated using the NanoLuc luminescent signal. Key steps during the study were: labeling and characterization of Trastuzumab and Cetuximab with biotin and NanoLuc; characterization of three different monoclonal ATAs and one monoclonal ACA used as positive controls for assay optimization; optimization of the bridging mixture to obtain maximum sensitivity and dynamic range; and determining the tolerance of the assay to the presence of free drug. Two critical components for a sensitive NanoLuc bridging immunoassay are the biotin-labeled drug for capturing bridge complexes and the NanoLuc-labeled drug for detection. Biotinylation of antibodies has been extensively reported, and the standard protocol was followed in this study for labeling the two drug molecules with biotin. For labeling of the drugs with NanoLuc, both recombinant and chemical approaches were used. For chemical labeling, a novel approach using HaloTag technology was utilized for covalent and oriented attachment of NanoLuc to the drugs. This is a two-step process in which the drug is first chemically activated with a small HaloTag ligand using well-established amine chemistry similar to that used for biotinylation. In the second step, purified His-NanoLuc-HaloTag fusion protein is added to the activated drug for oriented and covalent attachment. During optimization, various ratios of His-NanoLuc-HaloTag to drug were tested and a ratio of four gave the best results. Labeled drugs were characterized using a non-denaturing SDS-PAGE gel, where unlabeled Trastuzumab and Cetuximab give a single band around 150 kDa and, upon chemical labeling, multiple high-molecular-weight bands appear, corresponding to drug molecules carrying varying numbers of His-NanoLuc-HaloTag molecules. Most of the drug is converted into NanoLuc conjugates, indicating a high efficiency of the labeling method. A small amount of His-NanoLuc-HaloTag remains unreacted, based on which we estimated an average of three His-NanoLuc-HaloTag molecules per drug molecule and an average molecular weight of 300 kDa. Recombinant drug-NanoLuc fusions were also generated. Both expressed very well and could be easily purified at > 95% purity, as confirmed by non-denaturing SDS-PAGE. Moreover, a single band at higher molecular weight indicates a homogeneous population of two NanoLuc per drug molecule, highlighting a key benefit of the recombinant approach over the chemical conjugation approach. Functionality of the drug may be affected after labeling if the labels are close to the antigen binding site, or if the drug becomes structurally unstable due to the presence of the labels. Any such impact on Trastuzumab and Cetuximab after labeling with biotin or NanoLuc was investigated using an antigen-down ELISA method. HER2 or EGFR is coated on the plate, followed by the addition of increasing concentrations of Trastuzumab or Cetuximab, respectively. The amount of captured Trastuzumab or Cetuximab is detected using an HRP-labeled anti-Human antibody. For Trastuzumab, the binding affinities of the unlabeled and biotin-labeled drug were similar, but a decrease in the relative affinities of the NanoLuc-labeled antibodies was seen in the rightward shift of the dose-response curves.
For Cetuximab, the binding affinities of the unlabeled, biotin-labeled, and NanoLuc-labeled drug were very similar. An assumption in the assay is that the detection antibody will bind equally well to unlabeled drug and to drug labeled with the various tags. In a previous study we saw that Trastuzumab was more sensitive to labeling than Cetuximab, but this had minimal impact on downstream assays, so no further optimization was performed. In patients, the ADA response is polyclonal in nature, with varying amounts of antibodies of different specificities and affinities. As a result, it is not possible to have a reference material for developing quantitative ADA immunoassays. The FDA suggests the use of either monoclonal or polyclonal antibodies, developed in-house or available commercially, as positive controls for establishing assay sensitivity, dynamic range and other performance parameters. It is known that the performance of an ADA immunoassay will depend on the affinities and binding sites of the positive controls. Therefore, a variety of ADAs, three different ATAs and one ACA, were used as positive controls. In addition, the KD values of all four positive controls were measured on the Octet platform to correlate affinities with assay performance. ATA2 and ATA3 are anti-idiotype human antibodies which were generated from Fab fragments using the HuCAL platform. Affinities for both fragments are in the sub-nanomolar range, and the Fab fragment used for ATA2 is a stronger binder than that used for ATA3. The full-length ATA2 and ATA3 have higher affinities than the corresponding Fab fragments, by factors of 80- and 280-fold respectively, with a similar 20-fold difference between them. The higher affinity of the full-length IgG format is not surprising; in fact, fold increases of up to ~ 4000 have been reported. Unlike ATA2 and ATA3, ATA1 and ACA1 are mouse monoclonal antibodies generated by traditional approaches using F2 as the antigen. The binding sites for these antibodies are not known, and they also had lower affinities than the anti-idiotype antibodies. After characterizing the individual assay components, the next step was to optimize the assay workflow. Key steps in the assay, as depicted in Fig.
1, were: choosing the optimum amount of bridging mixture; the choice of streptavidin plate for the capture of bridge complexes; and the incubation time for the formation and capture of bridge complexes. A high-binding-capacity white streptavidin plate and 1–2 h of incubation time for formation and capture of bridge complexes were used in this study, based on earlier reports. The amount of bridging mixture was optimized to obtain a wide dynamic range, high sensitivity, and high drug tolerance. For the two model drugs, bridging mixtures containing labeled drugs at 1.0, 5.0 and 10.0 μg/mL were prepared. For this study, drugs labeled recombinantly with NanoLuc were used, and the molar ratio of biotin- to NanoLuc-labeled antibody was 1:0.8. ATA1 and ACA1 were spiked into undiluted normal human serum and mixed with the respective bridging mixture at a 1:1 ratio in 96-well non-binding plates. Samples were incubated for 1–2 h to allow formation of bridge complexes and then transferred to high-capacity streptavidin plates to capture the bridge complexes, followed by detection using the NanoLuc tag. A linear relationship between concentration and signal was observed for both ATA1 and ACA1 over a wide concentration range. Interestingly, for both ATA1 and ACA1 detection, a significant hook effect is observed with 1.0 μg/mL of bridging mixture; this happens because the excess analyte saturates the labeled drug and prevents bridge formation. The hook effect with 5.0 and 10 μg/mL of bridging mixture is not apparent because the highest concentration of ATA1 and ACA1 used in these experiments was 20 μg/mL, which is not high enough to trigger a decrease in signal. Signal-over-background plots of the same data reveal significant differences between the two systems. For Trastuzumab, the S/B ratio drops with increasing master mix concentration because the increase in non-specific background signal is larger than the increase in specific signal. For Cetuximab, the S/B ratio is not only significantly higher than for Trastuzumab but also shows limited influence of master mix composition, due to the relatively low background even at higher master mix concentrations. To calculate the various assay parameters, data for 5.0 μg/mL of bridging reagent were fitted to a four-parameter equation, and an LLOQ of < 1.0 ng/mL and a broad dynamic range of almost four log orders of magnitude were obtained. Similar results were obtained when the experiments were repeated using bridging mixtures containing drugs chemically labeled with NanoLuc. This is not surprising because the activities of drugs labeled with NanoLuc using the two different methods were similar when tested using the antigen-down ELISA. Although HaloTag-based labeling overcomes some of the problems of traditional chemical labeling, such as random attachment and inactivation of the reporter enzyme, it still has the limitation of yielding a heterogeneous mixture of labeled drugs and random placement of labels on the drug, which may result in batch-to-batch variations. Therefore, subsequent assays were performed using drugs labeled recombinantly with NanoLuc. In the previous experiment, similar sensitivities and dynamic ranges for both ATA1 and ACA1 were obtained because both were mouse monoclonal antibodies raised against F2 and had similar affinities. To understand whether antibodies with different specificities and affinities would impact assay performance, we focused on the three different anti-Trastuzumab antibodies and generated dose-response curves using 5.0 μg/mL of bridging mixture. Both ATA2 and ATA3 are anti-idiotype antibodies, but ATA2 is a stronger binder, which
likely results in its higher sensitivity compared with ATA3. On the other hand, ATA3 has a higher ULOQ of at least 20 μg/mL, compared to ~ 5.0 ng/mL for ATA2. Unfortunately, the relationship between the KD of the positive control and assay performance does not hold true when comparing ATA1 and ATA2, which have similar assay performances even though their KD values are significantly different. Our results seem to indicate that the sensitivity and dynamic range of the bridging immunoassay are the result of a complex interplay of affinity and binding site and cannot be predicted a priori. Previous reports have alluded to this limitation, and the FDA guidelines also acknowledge that positive standards are different from real samples and that values reported for real samples using a specific control are relative and not absolute. Assay sensitivities were so far calculated by spiking ATAs and ACA into drug-free human serum samples. However, biologic drugs are dosed at very high concentrations, have very long half-lives, and will be present in the samples collected for ADA analysis. Drug in the samples will interfere with the assay by complexing with ADAs, thereby reducing the assay sensitivity and even causing false negative results. Assay drug tolerance is improved by introducing an acid dissociation step to dissociate the immune complexes between drug and ADAs. Labeled drugs in a high-pH buffer are subsequently added to simultaneously neutralize the sample and compete with free drug to form bridging complexes. A key concern with the use of an acid dissociation step in the NanoLuc bridging immunoassay was possible loss of NanoLuc activity during the brief exposure to the low-pH solution at the neutralization step. To address this issue, the NanoLuc luminescence signal in an acidified serum sample was compared with serum that was never acidified; no significant difference was seen, indicating that brief exposure of NanoLuc to acidic pH is not a concern. The drug tolerances of the NanoLuc bridging immunoassay for the three ATAs at 100 ng/mL were obtained in the presence of excess Trastuzumab. A concentration of 100 ng/mL was selected, as this is the FDA-recommended sensitivity for ADA detection. Samples of pooled human serum spiked with 100 ng/mL of ATA and increasing amounts of Trastuzumab were incubated to form immune complexes. Samples were subsequently acidified, neutralized in the presence of 5.0 μg/mL of bridging mixture, and detected. Pooled serum containing no ATA but all other components was used as the negative control and used to determine drug tolerance as described in Section 2.8. As expected, the luminescence signal falls for all three ATAs with increasing amounts of free drug, as the bridging mixture competes with the free drug in solution to form bridge complexes. It is worth noting that the absolute RLU values in these experiments were lower than those observed in Fig. 5 because of sample dilutions due to acidification and neutralization.
In spite of the different absolute signals, the signal-to-background ratios for the three different ATAs were similar, and 100 ng/mL of all three ATAs could be detected in the presence of up to 50 μg/mL free drug, giving a drug tolerance of 500. The extent of drug tolerance in the bridging ELISA is driven by the competition between the amount of bridging mixture and the free drug present in the sample. Hence, it should be possible to improve the drug tolerance of the assay by increasing the amount of bridging mixture, which is exactly what we observed with ATA1. Drug tolerances of 1000, 500, and 250 were observed with 10.0, 5.0 and 1.0 μg/mL of bridging mixture. It is worth mentioning that drug tolerance cannot be improved indefinitely by increasing the amount of bridging mixture, because of the increase in signal from non-specific binding and the lower signal-over-background ratios, as is evident in Fig. 7 and seen before. Immunogenic responses to drugs can have a major impact on drug safety and efficacy; hence, various regulatory bodies require testing of ADA responses during the drug development process. The most common assay format for ADA testing is the bridging immunoassay with acid dissociation, and it has been implemented on a variety of technology platforms such as ELISA, Meso Scale Discovery, and GYROS, among others. The MSD platform has become an industry standard, possibly due to its high sensitivity, wide dynamic range and a simplified protocol involving a single washing step. However, a few reports have raised concerns about the dependence on proprietary reagents and instrumentation for assays that may be used over long periods. As a result, several improvements in ELISA-based methods have been proposed to achieve performance similar to that of the MSD platform, but the protocols still involve multiple incubation/washing steps and use secondary antibodies labeled with HRP for detection. Multiple incubation and washing steps make for longer assays, and the use of a secondary antibody introduces the need for additional quality-controlled reagents such as polyclonal antibodies and processes like HRP conjugation. Direct conjugation of drugs with HRP would simplify the method, but the approach has been reported to have insufficient assay sensitivity and is rarely used. Although specific reasons for that loss have not been described, a probable reason may be inefficient labeling of the drug or loss of drug activity after HRP labeling. Although it is possible to make genetic fusions of HRP and AP with proteins, these methods have not been widely used in immunoassays. In this study, the use of NanoLuc as a reporter in bridging immunoassays for detection of ADA was evaluated. We hypothesized that the extremely bright NanoLuc reporter would allow us to simplify and use an ELISA workflow while maintaining the sensitivity and drug tolerance required of ADA immunoassays. We were encouraged to pursue this approach by a recent publication on the use of NanoLuc for the development of a robust and highly sensitive assay for antibody screening. The study attributed the advantages of NanoLuc for immunoassays to several factors, including the ability to make genetic fusions of antibodies and scFv with a small NanoLuc tag; the extremely bright light intensity; and the long half-life of the signal, which allows stacking of plates for high-throughput studies. To evaluate the use of NanoLuc for ADA detection, anti-Trastuzumab and anti-Cetuximab monoclonal antibodies were selected as model systems.
For detection, drugs fused with NanoLuc either chemically or recombinantly were both tested. Recombinant drug-NanoLuc fusions could be easily obtained with high purity and homogeneous labeling of two NanoLuc per drug molecule. Recombinant fusions have the advantage of lot-to-lot reproducibility, which is an extremely useful attribute for validated assays. It should be noted that the drug molecule in the recombinant fusion might not be identical to the original drug due to differences in expression systems. Making genetic fusions of antibodies is becoming easier but still may not be possible for every researcher; therefore, a novel chemical method based on HaloTag was developed for labeling antibodies with NanoLuc. Traditional approaches, in which the enzyme is chemically modified before conjugation to the antibody, led to a significant decrease in NanoLuc activity, whereas the use of the HaloTag-NanoLuc fusion maintained the NanoLuc activity while allowing covalent attachment. Another advantage of our approach was high labeling efficiency, which eliminates the need for an additional size-exclusion or affinity chromatography step to remove unconjugated enzyme. In fact, due to the difficulties involved in separating unconjugated enzymes, many commercial antibody-enzyme conjugates are not purified and may cause high background in assays. A drawback of our method is the addition of a 32 kDa HaloTag protein, possibly resulting in steric inhibition in some cases, but the combined molecular weight of the HaloTag-NanoLuc fusion protein is in the range of HRP and much smaller than AP. Labeling with NanoLuc slightly reduces the activity of Trastuzumab, but no such change is seen with Cetuximab. Moreover, when combined with the fact that the loss in Trastuzumab activity was similar for both the recombinant and the chemical approach, it seems that the loss in activity may be due to structural perturbation rather than steric inhibition of antigen binding by NanoLuc, and is drug specific. However, it is worth noting that bridging immunoassays using Trastuzumab and Cetuximab labeled with NanoLuc met the specified assay requirements for sensitivity and dynamic range. Our results are in line with well-documented observations that labeling may affect antibody activity but can still result in reproducible downstream applications as long as the labeling protocol is optimized and stringently controlled. Systematic long-term shelf-life stability studies of NanoLuc-labeled drugs were not done in this study but, in general, antibody-NanoLuc conjugates have been reported to be stable and to maintain their activity during multiple freeze-thaw cycles. NanoLuc bridging immunoassays for detection of ATAs and ACA are sensitive, with LODs in the low nanogram/mL range, and compare favorably with other reported ADA assays. More importantly, high sensitivity was obtained using a standard ELISA format with a single washing step and without the use of expensive instrumentation. The assays also had a broad dynamic range of four log orders, which is important as ADA concentrations in serum may span a range of three log orders or more, even for fully human monoclonal antibodies. Surprisingly, not much attention has been paid to assay dynamic range in the literature, even though excess ADA in the sample will result in a hook effect and an underestimation of ADA. Having a broad dynamic range is a clear advantage of the NanoLuc bridging immunoassay and is probably due to the luminescence-based detection. Moreover, both the sensitivity and the dynamic range were tunable by optimizing the amount of bridging mixture used in the assay.
Even though the same assay format was used for detection of ATA and ACA, the S/B ratio for Cetuximab was significantly higher, primarily due to a lower background signal from non-specific binding. Some antibodies are inherently 'sticky', prone to aggregation, or have lower solubility or stability, and would result in unpredictable non-specific binding. Biophysical characterization of drugs is routinely done during drug 'developability' studies, and similar studies may be necessary during assay-specific optimization to maximize the S/B ratio. In our assays, equal amounts of NanoLuc- and biotin-labeled drugs were used, which translates into molar ratios of biotin- to NanoLuc-labeled drug of 1:0.8 and 1:0.5 for recombinant and chemically conjugated NanoLuc, respectively. During initial optimization, different molar ratios of labeled drugs were tested, which indicated the possibility of further changes in assay performance depending on the specific assay, as has been shown by others. Finally, use of a 100% serum sample in the assay will simplify the workflow, avoid unnecessary sample dilutions, and offer higher sensitivity. The immune response to a drug is polyclonal, unique to a patient, and dependent on the dosage and frequency of administration; therefore, a true positive control for optimization of assays to detect ADA is not available. Instead, either monoclonal antibodies or affinity-purified polyclonal antibodies are often used as positive controls and are accepted by regulatory agencies. The choice between a polyclonal and a monoclonal positive control is typically left to the assay developer, but recently the use of monoclonal antibodies has been proposed as preferable because they can be consistently produced and are better suited as a universal calibrator. The NanoLuc bridging immunoassay was tested with a small set of monoclonal antibodies against the two drugs, and although differences in assay performance were observed, all the antibodies could be easily detected at concentrations several fold lower than the 100 ng/mL cutoff recommended by the FDA. It can be argued that our assay did not account for low-affinity antibodies that may be present in a real polyclonal serum. The lack of an ideal positive control representative of a real patient sample is well understood in the field, and the FDA has detailed guidelines on generating and using positive controls when reporting results from real samples. Antibody drugs have long half-lives and are typically present in patient samples along with ADA, which results in the formation of immune complexes and possibly false negative results. One approach to address this problem is to collect samples several weeks after dosing, so that the drug has been cleared from the system or is present at an extremely low concentration. This approach ignores the possible impact of immunogenicity in the days and weeks immediately after the patient is exposed to the drug. Another approach to minimize drug interference is acid dissociation of the immune complexes followed by neutralization in the presence of bridging mixture. Typically, drug tolerances of 50–400 have been reported using the bridging immunoassay with acid dissociation, and recently a new method termed precipitation and acid dissociation (PandA) has been shown to detect 27–67 ng/mL of ADA in the presence of 250 μg/mL of free drug. The PandA method, however, is a multi-step protocol involving several centrifugation, washing, and incubation steps, including one overnight incubation. The NanoLuc bridging immunoassay was able to detect ADA at 100 ng/mL in the presence of 100 μg/mL of free drug with a simple workflow.
For some context, the mean peak concentrations of Trastuzumab and Cetuximab in serum are 123 μg/mL and 184 μg/mL, respectively, with half-lives of 5.89 and 4.75 days. Drug tolerance was, however, tunable and could be further optimized by choosing an appropriate bridging mixture. We believe that the high drug tolerance of the assay is a result of the bright luminescence signal of NanoLuc along with the low non-specific binding of the NanoLuc-labeled drugs. A bright luminescence signal is required for sensitive detection because, in the presence of a large excess of free drug, only a very small fraction of the ADA will form a bridge with labeled drug and be captured on the plate. A higher amount of labeled drug can increase the amount of bridge complexes and improve sensitivity, but it also increases the background signal, as discussed earlier. We did not elucidate the individual contributions of the bright signal and the low non-specific binding to the improved drug tolerance in this study and may investigate them in future studies. Although not done here, an interesting study to simplify the workflow would be to determine drug tolerance in the absence of acid dissociation, and instead leverage long incubation times and temperature to induce dissociation. Finally, for this technology demonstration study, technical replicates from a single pooled serum were used to determine drug tolerance, whereas biological replicates from a large pool of naïve human serum would have to be used for development of a validated assay. In conclusion, the feasibility of using NanoLuc bridging immunoassays for detection of ADA with high sensitivity, wide dynamic range, and high drug tolerance was demonstrated with Trastuzumab and Cetuximab as model drugs. Sensitivity and drug tolerance compared well with published results generated using multi-step ELISAs or expensive platforms like MSD. Additional advantages are a simplified workflow involving a single washing step and no need for secondary antibodies. Additional work is needed to benchmark the NanoLuc bridging immunoassay against other platforms like MSD and ELISA under controlled conditions using identical reagents and matrix lots, and to determine accuracy, precision, matrix interference, and other assay parameters recommended by the FDA. However, we believe that our approach will not only find application in immunogenicity assays, but that immunoassay applications beyond ADA detection will also benefit from the use of antibodies labeled with NanoLuc. | Anti-drug antibodies (ADAs) are generated in vivo as an immune response to therapeutic antibody drugs and can significantly affect the efficacy and safety of the drugs. Hence, detection of ADAs is recommended by regulatory agencies during the drug development process. A widely accepted method for measuring ADAs is the “bridging” immunoassay, which is frequently performed using an enzyme-linked immunosorbent assay (ELISA) or the electrochemiluminescence (ECL) platform developed by Meso Scale Discovery (MSD). ELISA is preferable due to widely available reagents and instruments and broad familiarity with the technology; however, the MSD platform has gained wide acceptability due to a simpler workflow, higher sensitivity, and a broad dynamic range, but it requires proprietary reagents and instruments. We describe the development of a new bridging immunoassay where a small (19 kDa) but ultra-bright NanoLuc luciferase enzyme is used as the antibody label and the signal is luminescence. The method combines the convenience of the ELISA format with assay performance similar to that of the MSD platform.
Advantages of the NanoLuc bridging immunoassay are highlighted by using Trastuzumab and Cetuximab as model drugs and developing assays for detection of anti-Trastuzumab antibodies (ATA) and anti-Cetuximab antibodies (ACA). During development of the assay, several aspects of the method were optimized, including: (a) two different approaches for labeling drugs with NanoLuc; (b) sensitivity and dynamic range; and (c) compatibility with the acid dissociation step for improved drug tolerance. The assays showed a high sensitivity of at least 1.0 ng/mL, a dynamic range of greater than four log orders, and a drug tolerance of > 500. |
504 | Reducing the environmental impact of hydraulic fracturing through design optimisation of positive displacement pumps | The technology of hydraulic fracturing was first demonstrated in the 1950s and has subsequently been used to enhance the permeability of a range of geological resources, including potable water, geothermal heat, and conventional onshore and offshore hydrocarbon resources .In the past decade combination of horizontal drilling technologies and hydraulic fracturing has transformed energy markets by enabling the economic extraction of unconventional gas resources, including coal bed methane and more notably shale gas.The International Energy Agency has estimated that, by 2035, gas demand will have increased by 50% on 2011 levels.Such growth would impact on the global energy mix and see gas overtake coal as the second largest energy source after oil.The same report also suggested that after 2020 unconventional gas extraction will account for 32% of the total gas production.If the figures suggested by the IEA report are to be realized, gas extraction from unconventional sources will have to double by 2020."Interest in unconventional sources of hydrocarbons has also been motivated by the desire to ensure the security of Europe's gas supply .Although estimates suggest there are significant potential shale gas reserves in Europe, exploration has been limited and to date no large scale extraction operations have commenced.This is largely because concerns about a range of environmental and social impacts have prevented the granting of legal licence for the process in a number of countries.While there are some potential subsurface risks, arguably, surface installations pose the greatest potential environmental and social risks .These risks include surface water pollution, light and noise pollution, traffic, and air quality .In the UK, for example, operators have been refused licences to carry out hydraulic fracturing operations because of concerns about the noise of the machinery , and road traffic .Thus the potential environmental impacts must be minimised if shale gas extraction operations are to be permitted in Europe.There are also concerns about the climate change implications of unconventional gas extractions; from the direct and indirect greenhouse gas emissions from the shale gas extraction process itself, and more generally from the continued exploitation of fossil fuel reserves and the subsequent increase of the global gas market.GHG emissions are a key element of industrial impact, so it is essential that the onshore oil and gas sector develops scenarios for CO2 reduction, similar to those adopted in other industries .The methodologies for doing this are well understood.For example, the development of a computational model for estimating CO2 emission from oil and gas extraction was discussed in Gavenas et al., which allowed the main sources of GHG emissions to be identified, managed and mitigated .Since it is forecast that gas will remain a significant fuel in the future, it is important to minimise the emissions intensity of the shale gas extraction process in order for the resource to be developed in-line with current carbon emissions reductions targets.Life cycle assessments are an important tool that can inform the relative carbon intensity of different energy choices, and so identify means of reducing overall emissions.There is some uncertainty around the magnitude of GHG emissions from shale gas extraction and currently the majority of reported shale gas LCAs 
have been performed using North American data and practices.Issues such as differences in assumptions and scope of the LCAs can make their results difficult to compare, and estimates of lifecycle emissions are evolving as new measurements become available and as commercial practices change in response to environmental regulation or technological advances.Furthermore, these LCAs must be adapted to the European context, which differ from North America in terms of the resource, environmental regulations, and social factors.A recent comparative meta-analysis of LCAs found that the median difference between electricity generated from unconventional and conventional gas in North America was 3% .These results are similar to LCAs adapted for shale gas extraction in the EU .Indeed, LCAs adapted for shale gas extraction in the UK and Scotland find that the carbon intensity of shale gas could be lower than imported conventional natural gas.These LCAs identify that besides fugitive leaks of methane during gas extraction and transport, which could be the greatest source of GHG emissions from shale gas, the majority of GHG emissions arise from activities during the preparation of the well pad and construction of the well, rather than during gas production .To further reduce the carbon intensity of shale gas and the environmental footprint of the industry, operators should seek to minimise the area of the well pad, the amount of surface infrastructure, size and mass of the construction materials, distances that materials are transported, and the pad power requirements.Local air quality, noise and traffic issues associated with hydraulic fracturing activity impact on communities local to shale gas developments, and concerns around these impacts are causing delays to planning applications in the UK and negatively affecting public acceptance of the industry .The construction and operation of the surface facility requires significant truck movements and transport distances.For example in North America over 1000 truck round trips are required for a single hydraulic fracturing site .Diesel fumes from trucks, drilling, frac-pump engines and emissions from gas processing equipment can significantly reduce the air quality around a hydraulic fracturing site; both for the workers, and local residents .While some significant air quality issues in America are related to practices that would not be permitted in Europe due to environmental legislation, the effect of diesel engines from trucks and pump engines will result in a decrease of local air quality as well as contributing to noise pollution.Recent work by Rodriguez et al. 
measured fuel consumption and on-site emissions for two hydraulic fracturing sites in North America, and found that the fracturing pumps contribute 90% of total emissions on site. The pumping equipment may also generate the most significant noise on site during the lifetime of the shale gas operations, depending on the number of pumps in operation at any time. In North America, the development of surface hardware has, to date, largely been driven by incremental responses to the need for hydraulic fracturing at higher pressures and greater depths. These requirements place great demands on the mechanical structures of the pumps, and therefore the pumps require frequent maintenance and have finite lives. However, there is no reason why the site machinery deployed in the EU needs to be built to the same specifications as at the North American sites. For example, an enhanced pump design could contribute to reducing the environmental footprint of the well construction and completion, and also of any re-fracturing during the lifetime of the shale gas well. Given the relative infancy of the shale gas industry in Europe, it is timely to consider opportunities for improved design of the required hardware. In this paper, we consider how site machinery, and pumps in particular, could be designed to meet both functional and environmental specifications. There is relatively little information available in the published peer-reviewed literature about the practical 'on site' aspects of the equipment, energy and water requirements for the exploration of European shale gas reserves. Thus, we first provide an overview of the industrial plant required to carry out a hydraulic fracturing operation. We then consider the functional requirements of the equipment adapted to the European geologic context, before applying a parametric model to analyse the design space of a pump's reciprocating components and solve for both functional performance and efficiency. We present the changes to the pump design, and then discuss the associated benefits of these more efficient pumps in terms of the physical and environmental footprint of the pumping operations. Shale gas extraction by hydraulic fracturing is an emerging industry in Europe, whereas it is well established in North America. In North America, the engineering choices implicit in the current designs of high-pressure fluid pumps did not focus on minimising the physical and environmental footprint of the operation, since their design was largely a response to the need for hydraulic fracturing at higher pressures and greater depths. However, there is no reason why the site machinery deployed in the EU has to be identical to that used in North America. In this paper, we consider how more efficient pumps could be designed that meet functional and environmental specifications. We find that there is considerable scope for redesign of current hydraulic fracturing technology. The analysis presented in this paper has demonstrated that a 4.6% improvement in energy efficiency is theoretically obtainable by optimizing the relative proportions of the established design. In a 17-stage hydraulic fracturing process, as reported by Rodriguez, such a change would: reduce diesel fuel consumption by 4,500 l; reduce CO emissions by 1.5 kg; and reduce NOx emissions by 8.16 kg, along with other associated pollutants from diesel combustion.
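As a back-of-the-envelope check on how an efficiency gain of this size maps onto fuel and carbon savings, the sketch below applies the 4.6% figure to an assumed baseline diesel consumption for a multi-stage job; the baseline volume and the CO2 emission factor are illustrative assumptions, not values reported by Rodriguez or by this paper.

```python
# Rough sanity check: fuel and CO2 avoided by a pump efficiency improvement.
# The baseline consumption and emission factor are assumed, illustrative values.

baseline_diesel_l = 100_000      # assumed diesel use for a multi-stage frac job (l)
efficiency_gain = 0.046          # 4.6% improvement identified by the analysis
co2_per_litre = 2.7              # kg CO2 per litre of diesel, approximate factor

fuel_saved_l = baseline_diesel_l * efficiency_gain
co2_saved_t = fuel_saved_l * co2_per_litre / 1000.0

print(f"Fuel saved: {fuel_saved_l:,.0f} l")
print(f"CO2 avoided: {co2_saved_t:.1f} t")
```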
Qualitative discussion of the potential environmental and social implications of these changes suggests that more efficient, and potentially more reliable, pumps have a lower associated environmental impact in terms of direct and indirect greenhouse gas emissions and also nuisance impacts for local communities, including air quality, noise and traffic. We also identify that further improvements could be made by reducing the pump mass and size. Quantification of these benefits is a subject for future work. In conclusion, this paper has outlined the engineering rationale for creating a compact, low-energy hydraulic fracturing technology, which is important for shale gas operations and other geological resources. Optimum pump design ought to be established for better process management and enhanced efficiency of the system. In short, key economic and environmental advances in hydraulic fracturing could come from innovative design and improved operation of site equipment. The location of a well stimulation operation by means of hydraulic fracturing is commonly referred to as a “frac-site”. The frac-site consists of an array of pumps, engines, liquids, sand, pipework and wellbore hardware that can weigh over a thousand tonnes, involve 30–40 operators and cover an area of a few thousand square metres. The mechanical pumps which create the required pressures and flows are central to the process. The depth, and therefore hardness, of the rock formations being stimulated have steadily increased since the 1950s, requiring larger pressures and flow rates. The pumping equipment has matched these increasing demands through incremental development of existing designs. Although there are a number of commercial pump suppliers, there is remarkable uniformity in the mechanical design. Rather than simply adopting the industry's default values, this paper investigates the “design space” of several critical interacting parameters to identify an optimum solution. To do this, the following methodology was adopted: (1) establish the duty cycle of hydraulic fracturing hardware in the context of single and multi-stage fracs; (2) identify the typical pressure and flow required to fracture low-permeability rock at the required depths; (3) detail all the elements of the mechanical systems used to generate the high pressure used during hydraulic fracturing; (4) use a parametric mathematical model to quantify the behaviour of the pumps for any configuration; (5) develop an optimisation algorithm to explore possible efficiency improvements and identify the best set of design parameters; (6) use exemplar scenarios based on a frac-site case study to compare power and performance requirements of current and next-generation pumps; and (7) quantify the environmental benefits that enhanced pump performance could offer for hydraulic fracturing operations. The approach of modelling mechanical systems and then optimizing their parameters to improve performance has been employed in other process industries. For example, Santa et al.
employs this methodology for determining the most efficient choice of design parameter values for a heat pump .The physical and performance characteristics of the optimised pump design are examined, with particular emphasis on pump efficiency.The potential impacts of more efficient pumps on the environmental and social impacts of hydraulic fracturing operations are then qualitatively assessed.We also propose opportunities for further improvements to high pressure pump design and operation.This approach will result in an improved pump design that is applicable to any hydraulic fracturing activities.This section gives an overview of the process of hydraulic fracturing.We initially detail a single “stage” of hydraulic fracturing and then discuss how the process is conducted across a number of stages covering the entire “pay zone” of the well.Typical values for the major process parameters are presented for each step.These have been obtained from site visits and available literature, and the sources are identified in the text.In order to hydraulically fracture a well, fluids are injected under high pressure to stress the rock until it cracks.Once hairline fractures have been formed they need to be held open for gas to flow out, otherwise rock will close due to the pressure exerted by the weight of the rock above.To do this the fractures are propped open with sand, that is added to the frac-fluid .Gas then flows from the rock into the well bore, via these propped fractures, once fluid pressure is reduced.After a clean-up phase the well is ready for production.The hydraulic fracturing process can be illustrated concisely by referring to one of the performance monitoring graphs recorded in the control truck.On the right hand axis of Fig. 1 slurry and proppant concentration volumes are plotted against time during a two and a half hour fracturing operation.On the left hand axis, pressure is plotted.Slurry rate in Fig. 1 refers to total flow of frac-fluid from the pump array.Proppant concentration refers to the percent of sand combined with the frac-fluid .The pressure plot in Fig. 1 reaches its peak early in the stage after which it reduces and is held roughly constant to ensure fracture propagation.Flow rate is also held constant from the moment the cracks are initiated to ensure correct fracture size.Proppant is introduced towards the middle of the cycle, and the particle size of the proppant is systematically varied during the hydraulic fracturing process, starting with larger and ending with finer grain size.The proppant concentration increases continually while the grain size is reduced, which is necessary to ensure created fissures are “propped” open with the grains supporting the overburden.Wells are usually fractured in many places along the length of the well.The well is divided into a number of isolated sections, known as stages, which are then fractured individually.The number of sections depends on the length of the well, and can range from 1 up to 50 stages.Wells are fractured in stages to ensure fractures are created along the length of the bore.To enable pressure containment within the desired area, a section of the well bore is closed off using packers .Once that section is fractured and propped, the completed stage needs to be isolated to ensure that the next area is not affected by the previous stage .Fig. 2 illustrates the process for an entire well where the boxed areas represent a single stage, described earlier in Fig. 
1. Hydraulic fracturing starts from the far end of the well and progressively moves to the heel of the wellbore, stage by stage. At the end of the hydraulic fracturing process all the internal parts are removed, and the frac-fluid first flows to the surface and is then pumped from the well, allowing the free movement of gas along the length of the well to the surface. Any investigation into the mechanical redesign of hydraulic fracturing equipment must start by considering the necessary performance requirements. The following section provides estimates of the pressures and flow rates required to successfully stimulate a typical shale well. In order to establish the pressure needed to create a fracture, the depth and the properties of the target rock formation must be determined. Although the structure of rock is very variable, the typical density, porosity and compressive stress values that define the material can be used to illustrate the order of magnitude of these parameters. Even in the same basin, the depth of the prospective formations will vary significantly in terms of the upper and lower limits. For instance, in the Bowland Basin, the upper limit of the formation range is around 1000 m, with the maximum thickness up to 4000 m. Furthermore, the rock properties will vary within the basin due to heterogeneities in the rock itself caused by natural variations in its formation, and so the pressure required is not simply a function of depth. Thus, if σz is acting in the vertical direction, the joint impact of both the σx and σy stresses can be estimated. These stresses are present in the entire reservoir. So, if the tensile strength of the formation is considered, it can be concluded that fracturing will occur whenever σθθ is equal to the tensile strength of the rock. The effect of pore pressure also needs to be accounted for when estimating fracture pressure. In 1923, Terzaghi introduced the concept of effective stress, stating that the weight of the overburden is carried by both the rock material and the pore pressure. To refine this concept, in 1941 Biot introduced a poroelastic constant, β, that describes the efficiency of fluid pressure. The poroelastic constant β can be obtained experimentally. Eq. can now be developed to include additional factors reflecting fluid pressure, Eq., the tensile strength of the rock and the Terzaghi/Biot stress distribution. Finally, the breakdown pressure required to cause formation failure can be expressed by Eq. Furthermore, Eq. states that the horizontal stress is affected by the vertical stresses of the overlying formation and the pore pressure in the horizontal direction.
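Because the equation numbers from the original paper are not reproduced in this text, it may help to state the standard textbook form of the relations being described. A common way to write the breakdown condition for a vertical borehole, with compression taken as positive, is the Hubbert-Willis/Haimson-Fairhurst expression with the Terzaghi/Biot effective-stress correction; this is given below as a reference form, not necessarily the exact equation used in the study.

```latex
% Vertical (overburden) stress from the average density gradient:
\sigma_z \;=\; \int_0^{z} \rho(z')\, g \, dz' \;\approx\; \bar{\rho}\, g\, z

% Breakdown pressure for a vertical borehole with a non-penetrating fluid
% (compression positive; T_0 = tensile strength, p_p = pore pressure,
%  \beta = Biot poroelastic constant):
p_b \;=\; 3\,\sigma_{h,\min} \;-\; \sigma_{H,\max} \;+\; T_0 \;-\; \beta\, p_p
```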
A logging tool could be used to measure the formation density of the individual layers in the overburden. However, due to the well depth and time involved, it is more common to use an average pressure gradient factor, as expressed in Eq. It can be concluded that depth is the driving factor in determining the actual requirements of the well. In the case study, sets of data are evaluated using these theoretical equations. Having established the theoretical pressure needed to fracture the rock, the second pumping parameter, fluid volume, can now be investigated. There is no single property of shale rock that is able to accurately describe the volume of water required to hydraulically fracture each individual well. Due to geological differences in the properties of the rock, its structure and the relative location of the prospective shale layers, predictions need to be adjusted appropriately. There is currently little publicly available information about the properties of shale in Europe, and so North American shale data must be used to estimate the properties of the European equivalent shale. According to the API guidelines, the magnitude of the liquid volume required to successfully hydraulically fracture a well is somewhere between 9 million and 18 million litres; other papers report similar volumes. The frac-fluid volume requirement can be divided into two quantities: first, the amount of water needed to fill all the hoses, pipelines and well casing up to the target zone; second, the water absorbed into the cracked rock during the hydraulic fracturing. This approach requires both quantitative and qualitative assessment of the actual water requirement, depending on the changes in the well properties. To calculate the volume required to fill the pipework and bore on site, it is necessary to examine all the lines leading from the water storage units on site to the shale reservoir rock. Because surface leads and lines are measured in tens of metres, the following discussion focuses only on estimating casing volume. To evaluate the second volume of water needed during hydraulic fracturing, it is necessary to examine actual field data. Field data were collected from three different hydraulic fracturing operations in structurally different basins during April 2013. In each case the operational time of the hydraulic fracture for a single stage was between 60 and 210 min. A number of flow rates were recorded during operations, but for brevity this paper will present only one stage per well. It can be seen that the average volume flow rate is between 6,000 and 10,000 l/min. The volume of fluid needed to fill the casing, Eq., would typically be measured in only tens of thousands of litres in total; in other words, only a fraction of the overall fluid requirement.
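As a simple illustration of the casing-volume estimate referred to above, the sketch below computes the internal volume of a cased bore of assumed diameter and measured depth; the dimensions are placeholders rather than measurements from the three field operations.

```python
# Illustrative casing-volume estimate: V = pi * (d/2)^2 * L.
# The casing inner diameter and measured depth are assumed values.
import math

casing_inner_diameter_m = 0.12   # ~4.7 in inner diameter (assumed)
measured_depth_m = 4000.0        # vertical plus lateral length of the bore (assumed)

volume_litres = math.pi * (casing_inner_diameter_m / 2.0) ** 2 * measured_depth_m * 1000.0
print(f"Casing volume: {volume_litres:,.0f} litres")
# Roughly 45,000 litres for these assumptions - i.e. tens of thousands of litres,
# a small fraction of the 9-18 million litres quoted for a complete well.
```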
would typically be only measure in tens of thousands of litres in total in other words only a fraction of the overall fluid requirements.Machinery used throughout hydraulic fracturing can be divided into four categories:Transport equipment,Fluid servicing equipment,Pipeline equipment,Pressure pumping equipment.The entire process of hydraulic fracturing is designed to be portable because it will be active and present on site for only a few weeks .On process completion the equipment is disassembled and transported to the next location.The time spent on site is dependent on the length of the well bore, number of wells, number of stages and the geology of the site.Pumps, blender and pipe manifold are all mounted on trailers.Similarly, water, chemicals and sand are transported in separate containers.Hydraulic fracturing is just one of many procedures used to prepare a well for production.The size and weight of the individual units is in many instances the key design constraint, i.e. the component size and weight is limited by the truck specifications.In North America the maximum truck load limits are different from state to state.Consequently equipment manufactures try to design lighter and therefore universally usable components.There are multiple units on site that provide the various fluid services shown of the right hand side of Fig. 6.Storage tanks - All the consumables are transported to the frac-site in plastic or steel containers depending on their chemical property.Additives commonly added to water are used to enhance viscosity so that proppant is suspended in the fluid, decrease viscosity to clean up the bore, chemical breakers to release the sand from the slurry mixture and biocides to eliminate any bacteria from the water .Blender - This unit is used to mix all the ingredients into one consistent fluid commonly referred to as “slurry”.Depending on the desired effect downhole this fluid may be more or less viscous than water."Proppant is transported into the blender's tub using augers, while chemicals and water use separate lines to supply the tub.Once the slurry mixture has been mixed, centrifugal pumps transfer the fluid to a common pipeline which feeds all the pumps.The intakes and outlets of all the pumps used to create the necessary pressures and flows are connected to the manifold trailer, Fig. 
5.There are two separate circuits for low and high pressure in the manifold trailer.The low pressure line of the manifold trailer transports fluid from the blender to the suction side of the positive displacement pumps.Depending on the configuration of the manifold trailer different numbers of inlet and outlet ports can be present.The line leading from blender to PD pumps is also known as the low pressure line.Pressure coming from the blender rarely exceeds 10 bar, therefore ports on this side of the manifold trailer in most instances are simple butterfly valves.The high pressure line of the manifold trailer connects fluid coming from the discharge side of the PD pumps towards the wellhead.The high pressure line is positioned underneath the low pressure line.A hydraulic fracturing sites may have as many as 20 independent PD pumps with each pump capable of creating pressures up to 1000 bar .Since significant fluid energy is being transmitted around the site special procedures are used to ensure safe operation.Constraining rings are incorporated in the manifold trailer and restraining ropes are used to tie down all the pipework leading fluid from the discharge side of the pump to the manifold trailer.Once slurry is mixed in the blender unit, fluid flows via a manifold trailer, at a low pressure, to the positive displacement pumps.These pumps have variable speeds that allows them to produce different flow rates.Each pump is powered by an individual diesel engine via a transmission gearbox that is connected to the input shaft of the PD pump.All of these components are jointly mounted on a trailer and transported as single unit.Individual triplex PD pump consumes up to 1,677 kW as shown in Fig. 7.Fig. 6 illustrates a fracturing site layout.On the right side of the schematic all of the consumables are stored prior to being merged and mixed in a blender unit.As discussed earlier, fluid is then transferred via a manifold trailer that ultimately supplies each individual pump with frac-fluid.High pressure pumping equipment is required to pump range of volumes of frac-fluid to pressurize the well formation until the surrounding rock fractures.After fracturing has occurred, pumps are needed to propel and deposit proppant into the newly opened fissures in the rock to keep the formation open.Some pump types, such as centrifugal or rotary pumps, decline significantly in performance once operated outside the point of peak efficiency.However, PD pumps have a broader operating range and are able to provide both high flow rates and pressure for sustained periods.Generally, hydraulic fracturing operation use three or five cylinder pumps, .A typical 3-cylinder pump is shown in Fig. 7.The fundamental physics of fluid movement means that all pumps are designed to operate in predefined ranges as shown in Fig. 
8.Operating PD pumps outside their design range can lead to premature failure caused by over stressing their structures .In a hydraulic fracturing operation, pumps must be capable of providing both high pressure and high flow output.The initial phase of a fracturing stage, known as the ‘breakdown’ phase, requires a high pressure to initially crack the rock.Although this duty lasts for only couple of minutes it is crucial to the success of the entire operation.The next part of the operation is referred to as the “fracture propagation” or “extension phase” .In this phase, the cracks initiated in the ‘breakdown phase’ are propagated to create the desired fracture network necessary for maximum gas flow.Thus, this part of the hydraulic fracturing operation is also crucial as it directly determines the effectiveness of the well stimulation .During this phase, the fluid pressure must be maintained at a lower level for a couple of hours while the flow rate increases between 4 and 6 times than in the breakdown phase.These flow rates are achieved either by increasing the speed of the pump, Fig. 8, or by introducing additional pumps to the operation.An experimental study by Fan and Zhang highlights pressure variation due to different injection flow rate dynamics .The negative effect of pressure oscillations are manifested in the form of unpredictable shale fracture development and are also damaging to the pumps and other process equipment generally used during hydraulic fracturing.Consequently the relationship between injection pressure and injection flow rate is critical for successful well stimulation.As previously noted, there is no advantage to designing larger pumps, since, in order to be portable, their size is limited by truck specifications in North America."The functional constrains to the pump's design can be divided into two categories, fluid and strength limitations.In the following section we examine each in turn before considering how the system can be modelled.Although fluid properties such as inertia or viscosity create theoretical boundaries for the flows and pressures that a pump can deliver, some of the most serious practical constraints are secondary to the movement of the fluid.For example, erosion is common even though pumps are manufactured from hardened-alloy steel.This is because, as described in Section 5, the frac-fluid is a slurry of water, chemicals and proppants, that erode and corrode the pump components in two principal ways :During the high flow operating regime sand and proppant particles cause erosion and wear in the fluid chamber.The addition of acid to the frac-fluid in some hydraulic fracturing operation causes corrosion that ultimately reduces the fatigue life of the pump.Together, these processes wear the internal surfaces of the fluid chamber after a number of hours, leading to so called pump “wash out”.The effects of wear include leaking valves and deteriorated plunger seal."This limits the pressure at the outlet of the manifold trailer.When the pressure drops below a critical threshold, cavitation problems occur in the fluid chamber .Perhaps the most serious consequence is that wear varies in proportion to the second or even third order of fluid speed .In other words a small increase in fluid speed might have a dramatic increase in the rates of erosion and these issues lead to ineffective pumps, loss of volumetric efficiency and unbalanced operation.These design challenges must be overcome to achieve consistent flow pattern and avoid oscillation and 
vibration issues.The structural strength constraints of the pump can also affect operations in several ways.For example:Each pump has a pressure restriction due to the maximum rod load that its drive can transmit without buckling ."Each cylinder is controlled by a crankshaft that is powered from the diesel engine's driveshaft via a gearbox.However, due to the relative incompressibility of water, the pressure in the fluid chamber loads the piston early in the compression stroke, which in turn transmits loads to the entire cylinder assembly including the crankshaft .The pump housing is directly affected by periodic loads, particularly throughout the discharge stroke as shown in Fig. 8 and.The resulting strain frequently causes the pump housing to experience twisting and deflection.The cyclic loads on the structure, due to the drive mechanism, means that the power delivery is non-linear ."The unsteady power delivery from the engine and transmission will impact on a pump's life through fatigue limits and shorter component life.A hydraulic fracturing pump is clearly a complex machine with many interacting elements.Consequently any efforts to optimize the process must take a system view and understand how changes in one area will affect others.The following section describes the analytical methods used to model the system.Pressure in the cylinder is determined not only by the plunger area and displaced volume, but also by the pressure resistance downstream.The downstream pressure is calculated based on different well characteristics rather than pump performance directly, and so it is necessary to use a fixed value for this variable.These equations describe the interaction of the design parameters of a positive displacement pump and lay foundation for exploring alternative configurations.In the subsequent sections the paper will examine alternative concepts based on the current design and quantify the potential impact of changes to the performance.An optimised design needs to incorporate both high pressure capability and sufficient flow capacity.An increase in volume capacity will lead to better time management on site.It is clear that pumping pressure, speed, plunger diameter, stroke length and rod load all interact, so what is the best combination of values?,And could there be scope within the design space to select values that result in a smaller more compact pump which are appropriate for European transport specification, environmental and societal constraints?, "To investigate this hypothesis a numerical model was used to systematically explore the system's design space with the aim of optimizing the size of the reciprocating components for a given pressure and flow.This process of multivariable analysis has five steps:Identify current design specification,Create a computational model of the system,Coarse grid exploration of design space,Identification of sets of candidate parameter values for system improvement,Finer grid search through Monte Carlo optimisation,The following sections detail each step of this process.Identifying parameters values associated with current equipment is the first step in development of the full multivariable analysis.Fig. 
10 shows a hydraulic horsepower power curve and the key design parameters used as a starting point for the analysis presented.The red dot represents the single operating state that will be used as a representative example of pump capabilities.A mathematical model was developed to explore the design space using the parameters in Table 2.We adopted a discrete fixed step approach because incremental changes to the output makes the impact of the parameters easier to distinguish.The results show that, as expected), a wider plunger is associated with a relative increase in Rod Load as the pressure rises.Similarly, it is unsurprising that the stress on the crankshaft increases as the plunger area increases, and this stress ultimately limits the maximum operating pressure.Since changes in the design parameters will result in different output characteristics, four areas of output characteristic can be identified in Fig. 11.Large plunger area and low speed: low flow and high rod load performance.Medium - large plunger area and a range of speeds: large variations in rod load and flow rate.Small - medium plunger area and mid to low speed: relatively low rod load and low flow rates.Small - medium plunger area and high speeds: relatively low rod load and high flow rates,For each area, the parameters can be expanded to explore in more detail the possibilities of different pump designs.The aim is to maximize flow rate while minimising rod load; an optimised design needs to be able to deliver both high pressure capability and sufficient flow capacity, since the flow rate of the pump is a significant factor in the overall time taken to complete a stage.The next step is to identify whether the same level of performance can be obtained with the improvements in the equipment footprint."This is achieved by running another simulation with the system's objective functions defined.This second phase of the multivariable analysis involves a more detailed exploration of the reduced parameter space identifier through the previous coarse grid search,.Optimisation was done using a Monte Carlo analysis with filtering to provide information about the model sensitivity and parameter ranges around optimum values.The process has three distinct steps:Explore the reduced parameter space using a Latin Hypercube ,Filter and weighting the simulation according to the chosen criteria,Infer the posterior distributions for each parameter according to the calculated weights.Values Q0 and Q1 present acceptable range for the new design.These values are centred around the current operating range shown in Fig. 10, where Q = 1,472 l/min.The posterior distributions were inferred by sampling with replacement the simulation input vectors, defined by the initial Latin Hypercube design, using probabilities proportional to the calculated weights.The optimal value and range for each parameter were calculated by taking respectively the mode and the 95% confidence interval for such distribution.The value of coefficient N) was elected following a number of model trials.N = 2 was deemed to adequately define the posterior distribution.The optimised values of the PD pump parameters are presented in Table 3.In addition to the qualitative benefits the mechanical structure of the pump that will result from the reduction in plunger diameter the analysis suggests a 4.6% energy saving.Detailed sensitivity analysis for studied parameters is presented in Fig. 12.Fig. 
11 illustrates a projection of the six dimensional design space.Each point of the plot represents one set of input parameters.Two of the current functional and physical limits are shown on the graph to illustrate the boundaries of the current design.Lines for constant pump speeds are marked in black, in increments of 25 rpm for the appropriate speed limits.The red dashed line in 11 illustrates the impact of increasing the maximum pump speed by roughly 33% to 380 rpm.Since pressure is directly dependent on the rod load limit, decreasing rod load requirements could achieve an increase in performance.Similarly, the same pressure output could be attained by optimizing the crankshaft to save extra weight and size.The multi-variable model presented gives the initial basis for the optimised pump design.The advantage of this approach is the overall flexibility of the model and the ability to quickly assess design configuration independent of physical limitations.The design space presented in Section 7 has been explored for solutions that minimise power requirements while delivering appropriate performance.To investigate the impact of the proposed design on a hydraulic fracturing process case studies are used.The mechanical properties associated with a rock formation in Woodford Basin are summarized in Table 4.Zhang et al. presents an “energy” study for which typical hydraulic fracturing was modelled using the STIMPLAN software .The reservoir properties in their study are similar to the recorded reservoir data used in our model."The analysis in this paper will use a single stage in “Well 3's” stimulation program, shown in Table 1, as a representative example for energy estimation.The pumping rate for a single stage of hydraulic fracturing will be determined in advance of the propagation phase.The overall time is influenced by the size of the well and the mechanical properties of the rock.For this case study the time of the stage is set to 210 min.Experience in North American shale reservoirs suggest that this estimate is towards the upper limits of a pump stage, i.e. longer than the average time required.For our case study, propagation pressure is therefore approx. 43 MPa.This pressure will be maintained throughout the propagation stage.The entire hydraulic fracturing process can be modelled using the calculated volume requirement parameter and formation breakdown pressure.The pump pressure needed to fracture this well is obtained from the mid-range of the performance curve of the pump, Fig. 
8, confirming that the optimised pumps will be capable of delivering this required pressure to the wellbore.Given that the volume of liquid needed is approximately 2.45 Ml and the time to deliver this volume is 210 min, the pumping rate must be 16,000 l/min.To generate this flow, a total of 14 positive displacement pumps would have to be used in parallel requiring power of 25 MW.Since we have determined the overall fluid volume needed to fracture a single stage in the example well, and the number of pumps required to achieve these flow rates, it is important to consider the physical issues of delivering the equipment to site.One of the principal impacts on the local community is nuisance and air pollution from trucking .Additionally, road traffic accidents are one of the most likely risks to the environmental posed by hydraulic fracturing operations .Thus infrastructure delivery to site has important implications for the environmental and social impact of hydraulic fracturing activities, which operators should seek to minimise.Further, the pumps require a great deal of power to operate.This power is usually provided by diesel generators.Minimising the number of pumps would not only reduce transport strains but also the overall power requirements of the pad.All the units on the hydraulic fracturing site are mounted on trailers that are limited in size by transport legislation.A tanker, in accordance with EU road legislation , is able to transport a maximum of 32,000 l of water or petrol.For this case study, 78 water tankers would be needed to transport the required amount of fluid to the well location.There will be additional trucks to transport the frac-chemicals and proppant - the volumes of which will be proportional to the total fluid volume pumped.However, the volume of both sand and chemicals required are an order, or even two orders of magnitude smaller than the water needed.Due to strict road and transport regulations, pump manufactures and final assembly companies are very conscious of the physical size of the frac-trucks.The EU Council Directive 96/53/EC specifies a maximum authorized dimension for national and international road traffic.Similarly pump assembly manufactures specify maximum overall dimensions of their units to fit the size limits.These limits are approaching the very limit of the acceptable range for the European roads.The mechanical properties of the rock and the time scheduled for each stage of the hydraulic fracturing largely dictates the amount of pumping hardware required.While it may be preferable to process a stage in a shorter time, doing so would require more pumps in operation at a given time.For the purpose of this study, an example hydraulic fracturing process from North America has been adopted.For this operation, 2.45 Ml volume of liquid must be delivered to the rock over a period of 210 min, requiring pump flow rates of 16,000 l/min.All the positive displacement pumps on the site individually must be capable of exceeding the formation breakdown pressure.After the breakdown phase, pumping shifts from a low speed, high pressure regime to a high speed, high flow rate.The pumping profile associated with this case study is shown in Fig. 
13, which details the fluid pressure, flow rate and fluid density requirements.The case study demonstrates that an optimised pump could deliver adequate pressures and flows for a typical job.The number of pumps and their duty cycle can be used to determine the power needed to run the site.These will determine both the traffic and environmental footprint of a single hydraulic fracturing stage.All the other variables present in the process such as sand and chemicals are affected by the size of the reservoir and the total water requirements.In order to develop shale gas resources in Europe it is necessary to establish energy efficient operations with minimal environmental and social impact.Europe has committed to carbon emissions reductions targets, and so should the shale gas industry be developed, it is important that it is done so in a way that minimised life-cycle emissions of the process.The slower planning and permitting process in the EU and differences in the geological resource , make it particularly important to make the process as economically efficient as possible so as to ensure profitability.If one assumes that basic mechanism of the stimulation process remains the same then any improvements must come from the changes to the equipment.The preceding sections have shown how a reduction in cylinder diameter could result in an energy saving, however, it would also allow mass savings.The smaller diameter will result in lower hoop stress around the cylinders and so allow reduction in the mass.Consider, for example, the economic benefits associated with reduction in size of the equipment:Truck Size: Pressure pumping equipment and water are transported to site by heavy duty trucks.The North American frac-truck is near the limits of acceptance for EU roads.Therefore, more compact equipment will result in better utilization of transported weight and volume.The material costs during pump and truck manufacture could also decrease due to reduced mass.Energy Consumption: Pumping is powered by industrial diesel engines.These units have significant fuel consumption and emission generation.Consequently, a reduction in the power requirements would in turn reduce fuel needs and the pollution/noise associated with 6–20 large industrial engines running simultaneously in a full load condition.Carbon Footprint: The embedded carbon in the pump and pump truck will be lower if they are reduced in material mass.For example, 1.9 tonnes of CO2 are emitted for every tonne of steel manufactured in 2014 .This is discussed in Section 9.2,The preceding discussion has established that the pumps used for hydraulic fracturing are required to operate in several modes, each with different performance requirements:Pad Mode: to fill the well bore with fluid prior to pressurization.Breakdown Mode: to create the fracture pressure at which cracks are initiated.Propagation Mode: to extend the length and width of the cracks.The general approach established in North America is to use the same pump for all three modes.Consequently, all the pumps on a hydraulic fracturing site are designed to have operating profiles that, dependent on the drive speed, can provide both high pressures and high flows.A consequence of this “mono-pump” approach is that all the power-ends and all the fluid-ends are physically larger than they need to be.For example:When operating in Pad Mode: Large diameter plungers would be preferable to generate high flows with a large swept volume running at a moderate speed.The pressure during the pad creation 
is low so components can be sized to carry modest mechanical loads.When operating in Breakdown Mode: Small diameter plungers would be ideal because the flow rate requirements are low so only a relatively modest swept volume is needed.The physical size of the other components would also reduce because the mechanical strength requirements will scale with the load seen by the drive which in turn will be the product of plunger area and pressure.When operating in Propagation Mode: Plunger diameter must be optimised to match the power curve of the drive with the pressure and flow characteristics of the pump.The multi-variable analysis of the design space illustrated how pump design parameters interact.One direction of design improvement is suggested by the history of mechanical engineering.In the past dramatic improvements to size, energy and emission have resulted from increases in the speeds of reciprocating systems.The mechanical benefits of increased speed are well illustrated by the development of the internal combustion engine.For example, around early 1900s Rolls Royce car engines were significantly larger in size but produced only 20 bhp."In contrast, today's Formula 1 engines are 1,600 cc turbocharged V6 machines and produce up to 600 bhp .Although new engines have incorporated improvements in electronic regulation, valve timing and precision manufacturing, one of the key change is the output speed of the engine."Compared to Rolls-Royce engines from 1900s which were outputting 1000 rpm today's Formula 1 engine are revving up to 15,000 rpm.By applying a similar approach to PD pump the authors have assessed the potential for redesign of current technology to maximize efficiency.Consider how rod load and speed would have to vary to maintain a constant flow as the plunger diameter is reduced:Reducing plunger diameter by 10% implies the pump speed must increase by 23% to provide the same flow but the rod load will reduce by 19%,Reducing plunger diameter by 23% implies the pump speed must increase by 56% to provide the same flow but the rod load will reduce by 36%,Reducing plunger diameter by 30% implies the pump speed must increase by 100% to provide the same flow but rod load will reduce by 50%.Such reductions in rod load and the associated hoop stress in the cylinder would significantly reduce the stresses in the pump.However, increased fluid speed will also be associated with increased wear so the creation of high speed pumps for hydraulic fracturing would have to be associated with the adaption of technology that allowed sand and frac-fluid to be introduced after the pumps.Such a change would reduce erosion and corrosion rates that currently occur due to the abrasive fluid moving through the pump.As discussed in Section 1, the direct and indirect greenhouse gas emissions associated with the construction and completion of the shale gas well can be significant .To reduce the carbon intensity of these activities, and thus the environmental footprint of shale gas, operators could seek to, for example, reduce the surface area of the well pad, the size and mass of surface infrastructure, transport distances of materials, and the pad power requirements.It is also important that these activities minimise the disruption to local communities.Impacts to local air quality, noise and traffic issues are associated with hydraulic fracturing, and, where possible, these impacts should be mitigated or reduced.Noise and emissions mostly source from the transport and operation of site equipment, as well as 
site materials.Our modelling specifically optimised for efficiency, since more efficient pumps will have environmental benefits and social benefits.For example, the enhanced pump design that we present here could reduce the environmental footprint of the high pressure fluid pumps on site during the well completion stage, and any future re-fracking if required during the operation of the well in several ways:The enhanced pump design is more efficient than the current pump design.This will in turn reduce the fuel requirement for a hydraulic fracturing job, and thus the emissions from fuel combustion.Not only will this reduce the greenhouse gas emissions associated with the operation, but also pollutant emissions that affect local air quality and impacts on on-site workers and communities local to the developments.The enhanced pump design may be more reliable because of the reduce load on the components.Increased pump reliability could demand less standby pumps, again reducing the bulk materials for transport and the associated issues.Improved reliability may also decrease the risk of surface spillages and leaks from pump wash out.In an attempt to quantify the reduction in direct greenhouse gas emissions and other pollutants from improved pump efficiency, we can apply the 4.6% reduction in energy requirements to the on-site diesel consumption during typical hydraulic fracturing.A study by Rodriguez et al. report fuel consumption and on site emissions for 14 pumps operating on a 17 stage well at two hydraulic fracturing sites in North America; in the Marcellus and the Eagle Ford shale.Diesel consumption for these operations was estimated to be 95100 m3 respectively .The study also calculated on-site emissions of CO2, CO, SOx, NOx and other pollutants and, as previously noted, found that powering the pumps contributed 90% of total emissions on site.Thus, introducing a pump power saving of 4.6% would, according to the values measured by Rodriguez et al., save up to 4.6 m3 of diesel per frac.If the EIA figures for diesel price in 2012 are applied, this would save operators $4,000 per frac.Reducing the quantity of diesel combusted to power the pumps would also decrease the quantities of nitrous oxides emitted by 8.16 kg, HC by 0.3 kg, carbon monoxide by 1.5 kg and particulate matter by 0.27 kg.On site diesel consumption will vary site by site, and frac-by-frac, and so in the absence of other published data information, these values are only indicative.Regardless, improved pump efficiency can offer significantly reduced emissions and operational cost, illustrating the multi-faceted value of optimised design.We did not optimize the pump to reduce other parameters such as pump mass and dimensions.However, the reduced plunge diameter may in turn reduce the mass and dimensions of the pumps, which will bring associated environmental and economic benefits.Future research should explore the changes to these parameters further, but here we qualitatively discuss the potential environmental benefits from these changes, for example:Reducing the mass of the pump will in turn reduce the embedded carbon of the equipment, and the emissions associated with transporting the pump to the site.This would reduce the carbon footprint of pump transport and also reduce the impact of their transport on local air quality.Further, lighter pumps could reduce the damage to local roads that arises from transporting heavy goods and can cause disruption to local livelihood and noise problems.Reducing the size of the pump could 
enable smaller trucks to transport the pumps, further reducing the fuel requirements for pump transport and potentially also reducing the pad area required for the hydraulic fracturing pump array. The environmental footprint of shale gas operations is also affected by the source of power for the site. The utilization of recovered gas to power the frac site can bring economic and environmental benefits, improving air quality and reducing site noise and traffic. Leading industrial engine manufacturers have already made this technological development possible by promoting “hybrid” power stations and dual-fuel systems that can use natural gas in addition to conventional diesel fuel. Should the improved pump design be powered by gas, the nuisance impacts for local communities would be reduced further. | The current approach to hydraulic fracturing requires large amounts of industrial hardware to be transported, installed and operated in temporary locations. A significant proportion of this equipment is comprised of the fleet of pumps required to provide the high pressures and flows necessary for well stimulation. Studies have shown that over 90% of the emissions of CO2 and other pollutants that occur during a hydraulic fracturing operation are associated with these pumps. Pollution and transport concerns are of paramount importance for the emerging hydraulic fracturing industry in Europe, and so it is timely to consider these factors when assessing the design of high pressure pumps for the European resources. This paper gives an overview of the industrial plant required to carry out a hydraulic fracturing operation. This is followed by an analysis of the pump's design space that could result in improved pump efficiency. We find that reducing the plunger diameter and running the pump at higher speeds can increase the overall pump efficiency by up to 4.6%. Such changes to the pump's parameters would result in several environmental benefits beyond the obvious economic gains of lower fuel consumption. The paper concludes with a case study that quantifies these benefits.
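As a minimal sketch of the triplex positive-displacement pump relations discussed in the hydraulic fracturing article above (flow rate from plunger area, stroke and speed; rod load from discharge pressure acting on the plunger face; and the diameter/speed trade-off at constant flow), the following Python snippet may help make the scaling concrete. It is not the authors' multivariable or Monte Carlo model, and every numerical input (plunger diameter, stroke length, speed, volumetric efficiency, discharge pressure) and the function name triplex_pump are hypothetical assumptions rather than values taken from the paper's tables.

import math

def triplex_pump(d_plunger_m, stroke_m, speed_rpm, p_discharge_pa, vol_eff=0.95, n_cyl=3):
    """Return (flow in l/min, rod load in kN, hydraulic power in kW) for a plunger pump."""
    area = math.pi / 4.0 * d_plunger_m ** 2            # plunger face area, m^2
    q_m3_min = n_cyl * area * stroke_m * speed_rpm * vol_eff
    rod_load_kn = p_discharge_pa * area / 1e3          # discharge pressure acting on the plunger face
    hyd_power_kw = p_discharge_pa * q_m3_min / 60.0 / 1e3
    return q_m3_min * 1000.0, rod_load_kn, hyd_power_kw

# Hypothetical baseline: 4.5 in (0.1143 m) plunger, 8 in (0.2032 m) stroke, 250 rpm, 43 MPa
q0, rl0, pw0 = triplex_pump(0.1143, 0.2032, 250, 43e6)
print(f"baseline: {q0:.0f} l/min, rod load {rl0:.0f} kN, {pw0:.0f} kW hydraulic")

# Shrink the plunger and raise the speed so that the delivered flow stays constant
for r in (0.10, 0.20, 0.30):
    d = 0.1143 * (1.0 - r)
    n = 250 / (1.0 - r) ** 2                           # Q ~ d^2 * N, so N must scale with 1/(1-r)^2
    q1, rl1, _ = triplex_pump(d, 0.2032, n, 43e6)
    print(f"-{r:.0%} diameter: speed x{n / 250:.2f}, rod load {rl1 / rl0 - 1.0:+.0%}, flow {q1 / q0 - 1.0:+.0%}")

Because flow scales with plunger area times speed while rod load scales with area alone, holding flow constant means speed rises as 1/(1 - r)^2 while rod load falls as (1 - r)^2, which reproduces the kind of speed/rod-load trade-off quoted in the text for modest reductions in plunger diameter.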
505 | A bisphosphonate for 19F-magnetic resonance imaging | MRI is a medical imaging technique that offers high-resolution images of soft tissues without the need for ionising radiation.In addition, and unlike other techniques such as those based on radionuclides, it does not require the injection of contrast agents in order to obtain meaningful images.However, for some imaging procedures such as angiography or molecular imaging, chemical compounds can be used to enhance the contrast of the specific tissue of interest.In this context, one area that MRI currently lags behind other imaging modalities, particularly positron emission tomography and single photon emission computed tomography, is the quantitative measurement of the signal provided by these contrast agents.This is a key requirement for molecular imaging applications.Current contrast-based MR techniques rely on the detection of imaging agents containing paramagnetic ions such as gadolinium, manganese or iron.However, interpretation of the results is difficult due to the varying underlying signal hyper- and hypo-intensities in MRI.In answer to this 19F-MRI has been implemented.The use of fluorine as the nucleus for magnetic resonance has several advantages over protons.First, the lack of endogenous MR-visible fluorine provides an unambiguous readout of the introduced fluorine-containing compounds location.In addition the 19F MR signal can be quantified, giving a measure of the contrast agent’s concentration.This is in contrast to paramagnetic contrast agents used in 1H-MRI and based on Gd, Mn and particularly Fe, where in vivo absolute quantification is not achievable.The main uses of 19F-MRI in biomedical imaging to date has been for cell tracking visualisation of inflammation and for imaging angiogenesis all using 19F nanoparticles.This is an obvious choice due to the capacity of nanoparticles to carry the many fluorine atoms required to obtain sufficient signal.More recently attempts have been made to image smaller compounds by modulating the 19F signal using lanthanide metals and used for the detection of gene expression .Despite these early promising results and clear advantages for molecular imaging compared to 1H-MRI, 19F-MRI remains underused in clinical practice.This is due to a major disadvantage, which is low sensitivity .As a consequence most 19F-MRI probes designed to date need to have many fluorine atoms to provide enough signal in the tissues of interest.However, the number of fluorine atoms that a molecule can carry is limited for several reasons.First is solubility, as the fluorine content of a molecule increases, the water solubility decreases.The second limitation is the number of 19F signals, the ideal 19F-MRI contrast agent having one single narrow resonance to maximise signal and avoid imaging artifacts.To achieve this all the fluorine atoms must be in the same chemical and magnetic environment.Another limitation of 19F-MRI is related to the long longitudinal relaxation times of the fluorine nucleus.This translates into long acquisition times for the MRI procedure due to the 5–10 s required between radiofrequency pulses, which results in long times or more complex non-standard MRI sequences.We are interested in developing 19F-MRI contrast agents for molecular imaging that show single and narrow 19F resonances and short T1 relaxation times.Previously we have shown that 1,1-bisphosphonates bind very strongly to metabolically active bone and calcium phosphate materials such as hydroxyapatite using SPECT and PET 
imaging .In addition, we found that BPs also bind very strongly to many nanomaterials based on lanthanide metal oxides of the type M2O3 with known relaxation rate-enhancement properties .We hypothesised that a fluorinated BP molecule could be an useful tool in the development of 19F-MRI probes, that would allow to combine of the amplification properties of nanoparticle-based platforms with the relaxation-enhancement properties of lanthanide-based materials without affecting their water solubility.In this way we could potentially achieve 19F-MRI probes with high signal intensity and sensitivity that could be imaged in a short time.In addition, their solution and in vivo properties could be easily controlled by surface modification using the same BP chemistry.In this work, we report our first attempts at achieving this aim by synthesizing and characterising a new fluorinated BP and evaluate for the first time its properties as a single molecule for 19F-MRI in vitro and in vivo.The reaction scheme for the synthesis of 19F-BP is shown in Scheme 1.Tetraethyl aminomethyl-bisphosphonate was synthesized following published methods .Briefly, diethyl phosphite, triethylorthoformate and dibenzylamine were reacted for 29 h at 150–160 °C to yield the benzylated bisphosphonate.The amino group of 1 was deprotected with H2 and 10% Pd/C catalyst to yield 2.After removal of the catalyst, 2 was reacted with 2.9 equivalents of trifluoroacetic anhydride in dry DCM for 3 h. Excess TFAA was used in order to prevent low reaction yields due to potential hydrolysis of the anhydride.After evaporation of the volatiles and work-up, 3 was recrystallised from cold hexanes in good yields.The compound was characterised by NMR, HR-MS and the structure confirmed by X-ray crystallography,The ethyl-protected bisphosphonate group of 3 was deprotected by reacting with excess bromotrimethylsilane followed by methanolisis at room temperature.The reaction gave quantitative yields of 19F-BP as assessed by NMR and MS, confirming complete removal of the ethyl protecting groups.19F-NMR and 31P-NMR also confirmed the stability of the trifluoromethyl and bisphosphonic groups, respectively.The solubility properties of 3 changed from hydrophobic to hydrophilic after deprotection, as expected for bisphosphonic acids, and allowed us to perform our imaging studies in water.One of the main advantages of this compound over most 19F-MRI contrast agents reported to date based on perfluorinated molecules is the chemical equivalence of its F atoms.Non-equivalent F atoms result in broad and/or multiple resonances that have a negative effect on the final 19F-MRI signal.In 19F-BP, however, having a narrow single 19F resonance, maximises imaging signal and minimises the appearance of image artefacts.Phantom MRI studies were performed to evaluate the contrast properties of 19F-BP.The compound was dissolved in water at pH 7 at several concentrations and imaged in a preclinical 9.4 T MRI scanner.A clear concentration-dependent increase in signal intensity and signal to noise ratio was found, demonstrating that 19F-BP can be imaged in the high mM concentration range.Stability studies were also performed using these samples.The 1H NMR and 19F-MRI spectra remained stable for 5 h at pH 7 and 37 °C, confirming the stability of 19F-BP at these conditions.This gave us confidence to study its biodistribution properties in vivo.Preliminary in vivo studies were carried out in a 9.4 T scanner with a healthy mouse.We have recently shown that bifunctional BPs 
accumulate in areas of high bone metabolism such as the end of long bones and bone metastases using SPECT imaging .Hence, we expected 19F-BP to accumulate in bone.However, after intravenous injection, only signals in the bladder/urinary system and liver areas were detected, the former most probably due to renal excretion as expected for a molecule of this size although this cannot be confirmed with the data available.In addition, uptake in other tissues/organs of the same area such as the uterus cannot be ruled out.It is important to note that the 19F and 1H acquisitions were not performed simultaneously and each modality was acquired with different slice thicknesses, complicating the interpretation of the images.Motion artifacts could also be responsible for the suboptimal overlay of the two modalities.The signal observed in the liver area, which is a much bigger organ and hence less affected by these issues, is more conclusive to uptake by this organ.Liver uptake is common for lipophilic molecules, and since fluorination is known to increase the lipophilicity of compounds, it is likely to be the result of the trifluoromethyl group.We believe that the lack of bone uptake may be the result of its high lipophilicity, compared to non-fluorinated BPs, resulting in higher liver uptake, and/or fast renal clearance.Indeed, recent reports support the notion that fluorinated groups increase the renal excretion of molecules in vivo .Another interesting possibility is that bone binding could have resulted in a chemical shift of the 19F resonance that could result in a lack of signal from bone.However, the presence of the expected single resonance in the broad sweep width spectrum performed prior to the imaging session strongly suggests this is not the case.Another potential reason for the lack of bone uptake observed could be a low signal to noise ratio.SNR measurements are important in 19F-MRI and provide a measure of sensitivity.SNR values of a phantom sample with 19F-BP were found to be in the 50–150 range and 15–40 range for different slice thicknesses.The size of the matrix size is indirectly proportional to the sensitivity, hence the higher values obtained at 32 × 32.For the mouse studies these values were found to be in the 10–40 and 2–12 range, and compare favourably to other animal studies from Bible et al. and Giraudeau et al. 
.It is important to note, however, that 19F-BP was found to be toxic at concentrations required to achieve in vivo MRI signal.While other BPs used for nuclear imaging such as 99mTc-MDP are required in micromolar concentrations to obtain image contrast, the amount of BPs required for MRI contrast or therapy is much higher.Toxicity has been observed in animal studies with an amino-bisphosphonate used for therapeutic purposes and injected intravenously, at doses of 20 mg/kg.However, doses of 150 mg/kg are required for detecting the 19F-MRI signal of 19F-BP.Hence, toxicity is likely to be the result of the bisphosphonate and not the trifluoromethyl group, although further studies are required to confirm this.These results prompted us to abandon the study of 19F-BP for bone imaging and look for potential strategies in order to increase its sensitivity.The most obvious strategy to improve the sensitivity of 19F-BP is to increase the number of F atoms in the molecule.Interestingly, there are some recent synthetic strategies that would allow us to synthesise a similar BP with several chemically-equivalent F atoms .However, an increase in fluorine content will likely have two main adverse effects.First is solubility, as we anticipate the water solubility will decrease and eventually may result in water-insoluble compounds.The second effect is related to this lower hydrophilicity.We have observed a high degree of liver uptake and hence lipophilicity with a trifluoromethyl group, addition of more fluorine atoms will probably worsen this effect.Another potential adverse effect would be the observed increased in vivo rate of excretion of fluorinated agents others and we have observed .A recent proposed method to improve the sensitivity of 19F-MRI contrast agents is by positioning the F atoms near a lanthanide in order to enhance their relaxation rates.This technique has been recently explored by Parker and Blamire et al. 
showing this strategy can result in lower acquisition times and detection limits by as much as 2 orders of magnitude .We hypothesised that, given the known ability of BPs to chelate Ln3+ metals and lanthanide oxide materials , we could explore this property to enhance the relaxation rate of 19F-BP and hence increase its sensitivity.This method, of course, would not be useful for bone imaging, unless other bifunctional BPs that contain a Ln3+ binding group such as a macrocycle chelate between the F-containing motif and the BP are designed.However, it could provide a very useful method to label lanthanide-containing nanomaterials with large numbers of 19F atoms and fast acquisition times for other purposes such as cell tracking or molecular imaging using 19F-MRI.A preliminary in vitro MR study in which we measured the longitudinal and transverse relaxation rates of 19F-BP in the absence and presence of 1 molar equivalent of different lanthanide salts supports the potential of this approach as the presence of Ln3+ metals in the solution enhance both relaxation rates by as much as 3 orders of magnitude.It is important to note, however, that well-defined and characterised 19F-BP-Ln3+ complexes would be required in order to validate these findings.We believe this is a strategy that would be particularly useful in conjunction with nanoparticle systems that can combine large numbers of CF3 groups with lanthanide metals at the surface and the required distance from each other.Using this combination, high sensitivity may be achieved thanks to the high numbers of chemically-equivalent 19F atoms with the relaxation capabilities of paramagnetic metals.In addition to the use of paramagnetic ion relaxation, the sensitivity could be further increased in the future by using more efficient MR protocols such as ultrafast sequences recently developed .19F-BP was successfully synthesised and characterised.The compound is water soluble and stable and shows a single and narrow fluorine resonance ideally suited for 19F-MRI.Phantom studies show that 19F-BP can be imaged using a 9.4 T magnet in the high mM range with SNR ratios similar to other reported probes.An in vivo 19F-MRI study strongly suggests that 19F-BP was rapidly excreted renally although uptake by other organs/tissue in the area cannot be completely ruled out with our data.Uptake in the liver was also observed which is probably a result of the lipophilicity of the trifluoromethyl group.This data suggests that the lack of bone uptake observed, the natural target of BPs, may be due to the presence of the fluorinated group resulting in fast clearance, as other studies have recently found .More importantly, 19F-BP was found to be toxic at the concentrations used in this study.From these results it is clear that, while 19F-BP may not be useful for bone imaging by itself it may be an useful compound to provide 19F signal to many inorganic materials of known affinity towards BPs such as calcium phosphates and metal oxides, as our recent work suggests.Future work is aimed at using 19F-BP and related BPs to fully exploit this approach.Reagents and starting materials were obtained from commercial sources and used as received unless otherwise noted.Organic solvents were of HPLC grade.Water was obtained from an ELGA Purelab Option-Q system.Dittmer-Lester’s TLC reagent for the detection of phosphorus was prepared following the original literature protocol .NMR spectra were obtained in a 400 MHz Bruker Avance III.1H chemical shifts are referenced with respect to 
the residual solvent peak .31P resonances were referenced to an external solution of 85% H3PO4.13C chemical shifts were referenced to the residual solvent peak or left unreferenced.19F resonances were referenced to an external solution of TFA.High-resolution mass spectra were obtained using an Agilent 6500 Accurate-Mass Q-TOF LC–MS system using electrospray ionization.Tetraethylmethylene)bisphosphonate and tetraethylbisphosphonate were synthesised following published methods .2 was dissolved in dry drychloromethane under nitrogen and the flask cooled to 0 °C.After 5 min, trifluoroacetic anhydride was added in small portions over 2 min.The ice bath was then removed and the solution was left stirring at room temperature for 2 h during which time the reaction mixture turned slightly yellow.The volatiles were then removed under reduced pressure leaving a clear yellow residue.This residue was dissolved in 2 cm3 of dichloromethane and to this mixture were added increasing amounts of a 1% solution of sodium bicarbonate followed by shaking, until the pH of the aqueous layer was 7.The organic layer was separated and washed with 3 cm3 of water, dried over sodium sulfate, filtered and evaporated under reduced pressure.The residue recrystallised from hexanes after 24 h standing at 4 °C, yielding large quantities of X-ray diffraction-quality crystals.1H NMR δH 4.162)), 3.562)2CHNH)), 1.342)); 13C NMR δC 159.1), 115.6), 65.32)), 43.93)2CHNH)), 16.02)); 31P-NMR δP 14.02; 19F-NMR δF −76.47; HR-MS 400.0939, 400.0932.422.0759, 422.0721.3 was dissolved in dry drychloromethane under nitrogen and the flask cooled to 0 °C.After 5 min, bromotrimethylsilane was added dropwise over 5 min.The ice bath was then removed and the solution was left stirring under nitrogen at room temperature for 24 h during which time the solution turned yellow.The volatiles were then removed under reduced pressure and the residue dissolved in 1.5 cm3 of methanol, resulting in a colourless solution.The reaction was left stirring at room temperature for a further 1.5 h followed by evaporation under reduced pressure yielding the product in quantitative yield as a clear sticky oil.1H NMR δH 4.602)2CHNH)); 13C NMR δC 158.3), 115.8), 47.32)2CHNH)); 31P-NMR δP 12.07; 19F-NMR δF −76.15; HR-MS 287.9664, 287.9650.Crystal data for 3: C11H22F3NO7P2, M = 399.24, triclinic, P-1, a = 10.2420, b = 10.2485, c = 10.5343 Å, α = 65.615, β = 71.735, γ = 70.761°, V = 930.40 Å3, Z = 2, Dc = 1.425 g cm−3, μ = 2.700 mm−1, T = 173 K, colourless blocks, Oxford Diffraction Xcalibur PX Ultra diffractometer; 3670 independent measured reflections, F2 refinement, 29R1 = 0.0352, wR2 = 0.0972, 3257 independent observed absorption-corrected reflections , 249 parameters.Crystallographic data for the structures in this paper have been deposited with the Cambridge Crystallographic Data Centre.Copies of the data can be obtained, free of charge, on application to CCDC, 12 Union Road, Cambridge CB2 1EZ, UK.The relaxation times T1 and T2 of 19F in 19F-BP and 19F-BP + Ln3+ mixtures were measured in H2O at pH 7 at 400 MHz on a Bruker Avance and converted to the R1 and R2 rates.T1 measurements were performed using an inversion recovery technique with 8 inversion times between 0.001 and 4 s, TR = 7 s and 256 averages.T2 measurements were performed with a spin echo technique with 12 TEs between 0.002 and 0.2 s, TR = 7 s and 8 averages.Analysis was performed using Top Spin software.19F-BP at different concentrations in 250 μL PCR tubes were positioned in a 9.4T Bruker Avance 
vertical bore scanner using a quadrature volume coil alongside a PCR tube containing water.For 1H imaging for localisation a RARE sequence was used with TR = 1500 ms, TE = 8.5 ms, NSA = 1, matrix = 256 × 256, FOV = 30 × 30 mm, slc = 1 mm.For the 19F imaging the coil was tuned to the 19F resonance frequency and a spin echo sequence used with a TR = 3000 ms, TE = 7.6 ms, NSA = 100, matrix = 32 × 32, FOV = 30 × 30 mm, slc = 6 mm, total scan time = 2 h 40 min.All animal experiments were performed with licences issued in accordance with the United Kingdom Animals Act 1986.One female Balb/c mice, 8–10 weeks old, was anaesthetised using 5% and maintained with 1–2% isoflurane, and injected with 100 μL compound via the tail vein before being transferred to the MRI scanner).For 1H imaging a FLASH sequence was used with TR = 350 ms, TE = 5.4 ms, FA = 40°, NSA = 5, matrix = 256 × 256, FOV = 30 × 30 mm, slc = 1 mm, 30 slices.For the 19F imaging the coil was tuned to the 19F resonance frequency and a RARE sequence used with a TR = 1500 ms, TE = 8.5 ms, RARE factor = 4, NSA = 200, matrix = 32 × 32, FOV = 30 × 30 mm, slc = 5 mm, 6 slices, total scan time = 30 min.In addition the same sequence was run, but with FOV = 64 × 64, which had a total scan time of an hour.19F MR images were overlayed on to the 1H MR images using ImageJ software.To calculate the signal to noise ratios of the phantom, bladder and liver ROIs were drawn around the object and also in the background and then values inputted into the following equation taking into account Edelsteins correction factor: SNR = Intensity ROI//√ . | 19F-magnetic resonance imaging (MRI) is a promising technique that may allow us to measure the concentration of exogenous fluorinated imaging probes quantitatively in vivo. Here, we describe the synthesis and characterisation of a novel geminal bisphosphonate (19F-BP) that contains chemically-equivalent fluorine atoms that show a single and narrow 19F resonance and a bisphosphonate group that may be used for labelling inorganic materials based in calcium phosphates and metal oxides. The potential of 19F-BP to provide contrast was analysed in vitro and in vivo using 19F-MRI. In vitro studies demonstrated the potential of 19F-BP as an MRI contrast agent in the millimolar concentration range with signal-to-noise ratios (SNR) comparable to previously reported fluorinated probes. The preliminary in vivo MRI study reported here allowed us to visualise the biodistribution of 19F-BP, showing uptake in the liver and in the bladder/urinary system areas. However, bone uptake was not observed. In addition, 19F-BP showed undesirable toxicity effects in mice that prevent further studies with this compound at the required concentrations for MRI contrast. This study highlights the importance of developing 19F MRI probes with the highest signal intensity achievable. " 2016 The Authors. |
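As a minimal sketch of the ROI-based SNR measurement described in the 19F-MRI article above: the exact expression in the source is garbled, so the usual Edelstein/Henkelman correction for Rayleigh-distributed magnitude noise, SNR = mean(signal ROI) x sqrt(2 - pi/2) / SD(background ROI), is assumed here rather than taken from the paper. The function name snr_edelstein and all image values are illustrative; only numpy is required.

import math
import numpy as np

def snr_edelstein(image, signal_mask, noise_mask):
    """SNR of a magnitude MR image from a signal ROI and a background-noise ROI."""
    signal = float(np.mean(image[signal_mask]))
    noise_sd = float(np.std(image[noise_mask], ddof=1))
    return signal * math.sqrt(2.0 - math.pi / 2.0) / noise_sd   # ~0.655 Rayleigh correction

# Synthetic 32 x 32 magnitude image with a bright "phantom" region (illustrative only)
rng = np.random.default_rng(0)
img = np.abs(rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32)))
img[12:20, 12:20] += 40.0                               # fake 19F signal region
sig = np.zeros(img.shape, dtype=bool); sig[13:19, 13:19] = True
bkg = np.zeros(img.shape, dtype=bool); bkg[:6, :6] = True
print(f"SNR ~ {snr_edelstein(img, sig, bkg):.1f}")

The correction factor sqrt(2 - pi/2), approximately 0.655, compensates for the fact that pure-noise regions of a magnitude image follow a Rayleigh rather than a Gaussian distribution; whether the authors applied exactly this form should be treated as an assumption.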
506 | Data on IL-6 c.-174 G>C genotype and allele frequencies in patients with coronary heart disease in dependence of cardiovascular outcome | Patients who were carrier of the IL-6 c.-174 CC genotype suffered more frequently from a new cardiovascular event whereas carriers of the genotype CG experienced the combined endpoint less often.There was no significant association between the genotype GG and incidence of the combined endpoint.Regarding the allele distribution we obtained a positive association between C allele and new adverse events.There were no significant differences regarding IL-6 serum levels between carriers of the genotypes GG, CG, and CC.The investigation was carried out in accordance with the ethical guidelines of the “Declaration of Helsinki” and its amendment in “Tokyo and Venice” and were approved by the local ethics committee.This subanalysis comprised 942 in-patients with CHD at study entry from October 2009 to February 2011.Inclusion criteria were age ≥18 years and known CHD as defined by a stenosis of ≥50% of a main coronary artery by coronary angiography or percutaneous coronary intervention or coronary artery bypass surgery.At least four own teeth except for the third molars needed to be present.Exclusion criteria were pregnancy, antibiotic therapy during the last 3 months, subgingival scaling and root planing during the last 6 months or psychological reasons rendering study participation impractical.Patients with current alcohol or drug abuse might be not completely able to understand the aim of the study and the necessity of an additional dental examination.If a drug or alcohol abuse was known from patient׳s file or a patient reported during the interview about a current drug or alcohol abuse he/she was not included in the study.A follow-up was performed after three years from November 2013 to January 2015.The incidence of the predefined combined endpoint was calculated.This information was obtained from electronic patient files, physicians, relatives, and civil registration offices.For acquiring follow-up data we sent out a standardized questionnaire.If patients did not return the questionnaires, we conducted a telephone interview with the patient or his/her relatives or contacted the patient׳s physician.If follow-up information could not be obtained from these persons, we contacted civil registration offices and requested information about current address or date of death.From 895 of 942 initial included patients follow-up data were available after three years follow-up.The incidence of the combined endpoint was 16.1%.Blood samples for determination of IL-6 serum level and IL-6 genotyping were taken at begin of the study from all study participants during their hospital stay.Serum level for IL-6 was determined with electrochemiluminescent immunoassay using a Cobas e 602 module in the central laboratory of University Clinics Halle.The determination of IL-6 c.-174 G>C polymorphism was carried out with PCR-SSP using the CYTOKINE Genotyping array CTS-PCR-SSP kit in the laboratory of the Department of Operative Dentistry and Periodontology.Statistical analyses were carried out using commercial available software.The IL-6 genotype and allele frequencies were calculated by direct counting and then dividing by the number of subjects to produce genotype frequency, or by the number of chromosomes to produce allele frequency.Differences between patients and controls were determined by chi-square test.The values for IL-6 serum level were checked for normal distribution 
using the Kolmogorov–Smirnov test and the Shapiro–Wilk test. As they were not normally distributed, comparisons across the IL-6 genotypes were carried out with the Kruskal–Wallis test. In general, p values ≤0.05 were accepted as statistically significant. | In this data article we present data on the distribution of alleles and genotypes of the interleukin (IL)-6 c.-174 G>C polymorphism (rs1800795) in patients with coronary heart disease (CHD) in relation to the incidence of new cardiovascular events (combined endpoint: myocardial infarction, stroke/TIA, cardiac death, death due to stroke) within three years of follow-up. Moreover, we investigated putative associations between individual IL-6 genotypes and IL-6 serum levels. This investigation is a subanalysis of the article entitled “The Interleukin 6 c.-174 CC genotype is a predictor for new cardiovascular events in patients with coronary heart disease within three years follow-up” (ClinicalTrials.gov identifier: NCT01045070) (Reichert et al., 2016) [1]. |
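A minimal sketch of the counting and testing steps described in this record, using invented genotype counts and serum values rather than the study data: frequencies are obtained by direct counting, the genotype-by-endpoint table is tested with a chi-square test, and serum IL-6 is checked for normality before a Kruskal–Wallis comparison.

```python
import numpy as np
from scipy import stats

# Hypothetical IL-6 c.-174 G>C genotype counts (not the study data),
# split by whether the combined cardiovascular endpoint occurred.
counts = {            #  GG   GC   CC
    "endpoint":        [ 60,  55,  29],
    "no_endpoint":     [270, 330, 151],
}

# Genotype frequencies: direct counts divided by the number of subjects.
n_subjects = sum(sum(v) for v in counts.values())
genotype_totals = np.array(counts["endpoint"]) + np.array(counts["no_endpoint"])
genotype_freq = genotype_totals / n_subjects

# Allele frequencies: counts divided by the number of chromosomes (2 per subject).
g_alleles = 2 * genotype_totals[0] + genotype_totals[1]
c_alleles = 2 * genotype_totals[2] + genotype_totals[1]
allele_freq = np.array([g_alleles, c_alleles]) / (2 * n_subjects)

# Chi-square test of genotype distribution versus endpoint incidence.
chi2, p_chi2, dof, _ = stats.chi2_contingency([counts["endpoint"], counts["no_endpoint"]])

# Serum IL-6 by genotype: normality check, then Kruskal-Wallis if non-normal.
rng = np.random.default_rng(0)
il6_by_genotype = [rng.lognormal(1.0, 0.5, 50) for _ in range(3)]   # placeholder values
normal = all(stats.shapiro(g).pvalue > 0.05 for g in il6_by_genotype)
h_stat, p_kw = stats.kruskal(*il6_by_genotype)

print(genotype_freq, allele_freq, p_chi2, normal, p_kw)
```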
507 | Do Clean Development Mechanism Projects Generate Local Employment? Testing for Sectoral Effects across Brazilian Municipalities | In order to ensure that global climate average does not exceed the 2 °C target, mitigation measures must be taken to achieve the necessary reduction in emissions to cope with climate change by both industrialized and developing countries.With the Paris Agreement, mitigation efforts are required from both industrialized and developing countries, and industrialized countries are to assist developing countries in their efforts via international climate finance and technology exchange.To understand the impacts of projects funded by such climate finance, this paper draws on experience from the Clean Development Mechanism, which is the primary instrument to support mitigation efforts in developing countries within the Kyoto Protocol.The CDM has a dual objective of helping developed countries fulfill their commitments to reduce greenhouse gas emissions as well as to aid developing countries in achieving sustainable development.Employment generation is recognized as one of the most crucial approaches to attaining sustainable development; that is why, its key role has been featured by the eighth Sustainable Development Goal, which aims at “promoting sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all”."Job creation is one of the benefits most commonly claimed by different types of CDM projects since these investments are expected to bring a significant stimulus to the local economy along project's life.Although CDM projects have this two-fold goal, only the emission reductions objective is linked to pricing mechanisms, which incorporates economic incentives to encourage fulfillment of this objective.While CO2 emission reductions are verified by the UNFCCC and generate revenues to project developers in the form of Certified Emission Reductions1, contributions to local sustainable development lacks monitoring of accomplishment or such a monetary incentive.The objective of this paper is therefore to contribute to understand the impacts of CDM projects on development.In particular, we focus on assessing effects on cross-sectoral employment at the municipal level in Brazil, which is the third largest country worldwide regarding registered CDM projects and hosted the first project worldwide in 2004.2,With nearly 350 CDM projects implemented over a decade, Brazil constitutes an interesting case study to evaluate impacts over time.At the sectoral level, in developing countries like Brazil, the CDM projects typically target the renewable energy sector and the waste handling and disposal sector, 2015).Although all these project types are capable of reducing greenhouse gas emissions and thereby generating CERs, the potential effects for employment generation may differ considerably among types.In this paper, we focus on these two largest categories of CDM projects in Brazil: hydro in the category of renewable energy projects and methane avoidance in the category of waste handling and disposal projects.Renewable energy projects tend to be more labor intensive than conventional energy sources, so they could potentially stimulate local employment in construction, operation and maintenance phases.Moreover, these projects could also induce employment benefits in other sectors such as agricultural and/or industrial through indirect demand of goods and services.But in addition to these positive effects on employment, other 
effects may be triggered by renewable energy projects.These projects might have the potential not only to induce expansive but also to induce contractive effects on employment, affecting energy-intensive sectors such as manufacturing.The net result on local employment will depend on how much the contractive effect offsets the positive impact at the local level.Waste handling and disposal projects are labor intensive, but previous studies have found a comparatively smaller potential for employment generation.The main difference to renewable energy projects is however that the required skill level e.g. in waste sorting is lower and therefore unskilled workers, who previously worked in other sectors like agriculture, can be employed and trained on the job.This paper therefore attempts to address two important research gaps.Several papers have tried to investigate the achievements of CDM projects on employment generation inside and outside the renewable energy sector.However, there are very few studies in the literature that have explored the economic impacts of waste handling and disposal projects at the local level.In this paper, first, we therefore address this research gap by providing empirical evidence on employment effects triggered by methane avoidance projects and we compare them to the effects generated by renewable energy projects."Second, most of the empirical studies which investigate employment effects generated by CDM projects are ex-ante analyses based on information provided by the Project Design Document of the CDM project, which is basically data on project's expected or potential impacts at the local level, and thus it does not reflect what effectively occurred after project's implementation.Nussbaumer argues that the information provided by the PDDs is accurate and relatively reliable since it represents official documents that are evaluated by the Designated National Authorities before approval and registration of any CDM project in host countries; however, CDM project developers might have incentives to overstate potential achievements in local sustainable development since the fulfillment of this goal is one requirement to obtain validation and registration from the corresponding DNA.In this paper, we focus therefore on estimating effects on cross-sectoral employment by using empirical data that does not draw on the PDD but uses municipal employment data provided by statistical offices.This paper is structured as follows: Section 2 presents the literature review on impacts of CDM on employment generation in the manufacturing sector, while Section 3 characterizes the CDM project portfolio in Brazil.Following that Section 4 illustrates the methodological approach and data, while results from the regression analysis are shown in Section 5.Finally, discussion and some conclusions are made in Section 6.As a pre-requisite for validation and final registration in the pipeline, all CDM projects should deliver sustainable development benefits in the PDD.One of the most prominent and probably best claimed effects is the positive impact of renewable energy projects on local employment due to their labor intensive features of renewable energy technologies that notably contrast with conventional energy sources.While the PDDs do not delineate the causal mechanisms how different types of CDM projects lead to employment generation, there is extensive literature on employment effects in the context of renewable energy projects.Although employment generation is claimed as one of the 
benefits of promoting renewable energy, it is not straightforward how the causality works.Behind the overall or net employment impact of implementing a renewable energy project, there are direct, indirect and induced effects to be taken into account."The direct effect describes the direct impact on employment of a project; the indirect effect refers to employment generation that takes places in other sectors, while the induced effect refers to those jobs created due to spending that comes from household's earnings from working in the project.Since the overall impact depends on the direction and size of each effect, it is not possible to determine the net impact on employment a priori.While the net or overall employment effect is the sum of the three effects already discussed, the gross employment only considers the positive effects ignoring any possible negative impact.In the context of CDM projects, the most visible and direct effects on employment are generated during the construction phase and also during operation and maintenance activities which requires fewer but highly skilled workers.CDM projects may also generate indirect employment in the context of cross-sectoral employment benefits in sectors such as agriculture, industry, services or construction as well as induced employment through the creation of indirect demand of goods and services.For example, in the case of biomass energy technology, the agricultural sector can gain from the biomass production through planting and harvesting as well as from the switch from traditional to high profit crops for biomass industry.In wind energy, manufacturing can benefit from fabrication and/or assembly of components, while the construction sector could profit from the construction and installation of wind farms.But in addition to these positive effects, also other employment effects may be triggered by renewable energy projects.These projects might have the potential not only to generate expansive3 but also contractive effects that could affect energy-intensive sectors such as manufacturing.This contractive effect describes how the expansion of renewable energy could increase electricity prices and might affect manufacturing production costs, leading to a fall in production as well as a decrease in sectoral employment.The net total result on local employment, which is the sum of direct, indirect and induced employment, will depend on how much the contractive effect offsets the positive impact at the local level."A second issue is that although renewable energy projects generate demand for manufacturing goods and services, it is likely that these goods have to be imported from other regions because specific manufacturing components are not produced everywhere; so this might benefit other localities outside the project's site.Therefore, the cross-sectoral employment generation may not necessarily stimulate and promote local industry."Finally, a third issue is related to the durability or temporariness of the employment generated during project's life.Although these projects may contribute to job creation, not all renewable energy technologies might be able to generate sustained employment effects at the local level.That might be the case for wind projects, which may greatly stimulate job creation mainly during construction phase, but not significantly during operation and maintenance stage; in contrast, biomass projects might tend to generate more stable job positions because of the extent of its production chain.Regarding empirical studies, 
there are two main groups of research that have assessed the impacts of CDM projects on local employment.They can be classified into ex-ante and ex-post evaluations.Ex-ante studies are the most predominant type of assessments in the empirical literature on CDM; these are mainly qualitative studies that use PDD data for the analysis, which is data based on potential or expected project results.In contrast, there are very few ex-post assessments that used empirical data and have applied quantitative techniques.Most common methodologies applied in ex-ante studies are checklists, scoring pattern methods or the Multi Criteria Assessment method and its further adaptations.Regarding main findings, these are inconclusive.Some studies have reported positive contributions at the local level, while some have found no effects associated with the implementation of CDM projects.As already argued in the introduction, ex-post empirical studies on local employment effects are much scarcer.Du and Takeuchi estimate the impacts of CDM in rural communities in China by combining a difference-in-differences model with propensity score matching techniques.Findings show that while CDM biomass projects have stimulated local job creation also for unskilled laborers, large-scale CDM hydro and solar projects have contributed to employment generation in primary industry at the local level.A comparatively smaller literature assesses economy-wide employment effects by using CGE models and input-output models.With a global CGE model, Mattoo et al. assessed the impacts of climate change financing on the industrial sector in developing countries and reported that CDM host countries may experience reductions in the manufacturing output and exports due to Dutch disease-type effects.Wang et al. applied an input-output model to estimate the impacts of CDM energy projects in China and showed that although CDM has caused direct job losses, it has also created indirect jobs.These impacts differed by project type: wind and biomass energy projects showed positive and significant effects in indirect employment generation that offset the negative effect in direct employment.In contrast, hydro projects had both direct and indirect job losses, particularly in the secondary energy industry and the mining industry.In the particular case of Brazil, very few quantitative assessments have been conducted yet in the context of CDM.For instance, with a focus on development and poverty indicators, Mori Clement estimated impacts of CDM across Brazilian municipalities and identified a positive effect on labor and income indexes4 for biomass, landfill and methane avoidance projects."Again using data from the PDDs as well as stakeholders' interviews, Fernandez et al. 
find that CDM projects have succeed in delivering positive employment effects in the short-term, during construction and operation phase, but failed to promote long-term benefits in some Brazilian states.Brazil is a pioneering country in hosting CDM projects worldwide.The first CDM project was registered in Rio de Janeiro in November 2004, a landfill gas project located in the municipality of Nova Iguacú.As of 2015, there are totally 338 CDM projects registered in the Executive Board, 2015).They can be divided according to their sectoral scope into two main categories: renewable energy or power projects and waste handling and disposal projects.The rest of CDM investments are projects in the chemical and manufacturing industries."Main project types in the renewable energy or power sector are hydro, wind and biomass energy; most predominant project's subtypes in this sector are: run-of-river hydroelectric power, wind and bagasse power. "Regarding the waste handling and disposal sector, methane avoidance and landfill gas projects are the most representative types; while main project's subtypes in this sector are landfill flaring, landfill power, and manure.In terms of geographic distribution of projects along the Brazilian territory, the distribution by macro region is quite uneven.Macro regions where CDM projects were implemented are the South-east with 39.3% of the total; the North-east with 21.6%, the South with 19.2% and the Central-west with 14.5%.Few projects were implemented in the North, region characterized by its very high forest density.More than 50% of renewable energy projects are located in the South-east and South region, while 28% in the North-east.In the case of waste handling and disposal projects, 51% of total are located in the South-east, 18% in Central-west and 17% in the South.Almost 80% of the CDM projects in the North-east are investments in the renewable energy sector; this reflects the high potential of this region to host energy projects such as hydro and wind.Moreover, the distribution of CDM projects reflects a general division of the country, where the south and southeast are much more developed and industrialized than the north.At the national level, 7.6% has at least one CDM project that was implemented during period 2004–2014.This number exceeds the total number of registered CDM projects because some projects involved more than one municipality.Regarding the temporal development of CDM investments in Brazil, the number of registered CDM projects started decreasing from 2013.One driver was the collapse of the CER prices which started in 2012,5 with prices in secondary markets remaining at very low levels.A crucial determinant in this trend was the introduction of an EU restriction in the use of international credits under the Phase III of the EU-ETS, where only CERs from projects registered after 2012 are eligible if they were hosted by Least Developed Countries6.As a consequence, the overall size of CDM investments to Brazil declined relative to the period before.Despite the collapse of CER prices in 2012 and thus the high risk of project discontinuity due to disincentives to invest in verification and issuance of these credits, most CDM projects in Brazil continued running.According to Warnecke et al., this is due to the fact that some project types are particularly resilient to the development of the CER price.Projects with high capital investment, such as hydro, wind or solar, experienced a low vulnerability of discontinuity due to high revenues for 
electricity sales as well as low operating costs."However, other project types, such as biomass energy and methane avoidance, may experience variable vulnerability, due to project subtype and local specific conditions.In order to assess the impacts of CDM projects on employment, we investigate effects on total and cross-sectoral employment at the municipality level using a dynamic panel regression model for period 2004–2014.A detailed description of all variables is displayed in Table 2.Regarding employment variables, we use the total employment growth rate, which is the annual growth rate of total employment7 at the municipality level.To explore cross-sectoral effects, we evaluate impacts on sectoral employment shares for the following sectors: industry, agriculture, services, construction and commerce.The selection of these sectors is based on the empirical literature on renewable energy projects and its potential effects on sectoral employment.Main source of employment data is the Brazilian Ministry of Labor and Employment from the Annual Report on Social Information."Regarding explanatory variables in the model, we use a proxy variable for CDM which is a dichotomous variable8 that assigns “1” to those municipalities with a CDM project at time t; this starts from project's registration9 year onwards.Before the CDM registration, a “0” was assigned.In our analysis, only municipalities with one CDM project have been included in order to avoid potential bias due to cross-effects from other CDM projects.10,Two category of projects were analyzed: a) hydro, and b) methane avoidance.11, "To distinguish municipalities with and without CDM projects, first, we use the database from the CDM pipeline, which is a database at the project level that provides information about the municipality where each project has been implemented.Once we have identified those municipalities with CDM projects by year, this database was merged with other datasets in order to build the panel.In a last step, we split the sample into two sub-samples: municipalities with CDM hydro projects and with CDM methane avoidance projects A detailed table with the total number of municipalities with CDM investments for both project types at the federate state level is provided in the Appendix.To evaluate the effect of CER credits on the local economy we use a dichotomous variable, where “1” indicates that a municipality has a CDM project that generated CER credits at time t or during its corresponding crediting period.Through this variable, we attempt to capture activity of CDM projects in terms of CER credits issuance.Data on CDM and CER credits come from the CDM Pipeline Analysis and Database of the United Nations Environment Programme.To capture a potential structural break after the collapse of the CER price, we introduce a dummy variable which takes the value of “0” before the crisis and “1” afterwards.Regarding other explanatory variables relevant for general trends in employment generation, we include both economic as well as demographic indicators such as population growth at the municipal level.13,All economic and demographic data come from the Brazilian Institute of Geography and Statistics.Some descriptive statistics are displayed in Table 3.Further details on employment and GDP growth at the federal state level in municipalities with CDM projects can be found in Figs. 
A.1 and A.2 in the Appendix.The unobserved individual-specific effects are correlated with the autoregressive term by construction; thus, the Arellano-Bond estimator is constructed by first differencing to remove the panel-level effects and using instruments to form moment conditions.Lagged values of the dependent variable are used to form the GMM-type instruments.One important model assumption is that the error terms are independent across individuals, so they are serially uncorrelated.Although the coefficients of the autoregressive component are not directly interpreted, its incorporation allows for dynamics that might be relevant for recovering consistent estimates of other parameters in the model.Some advantages of using GMM are that it can correct for unobserved heterogeneity, omitted variables bias as well as potential endogeneity problems.Regarding the no serial autocorrelation assumption, when testing validity, we calculate the Arellano-Bond test for first and second-order serial autocorrelation in the first-differenced residuals, which tests the null hypothesis of no autocorrelation.The Wald chi-squared test is also included to test for joint validity of the models.The regression analysis is run using Stata software version 14 and command “xtabond2”.Results are displayed and discussed in the next sections.To estimate cross-sectoral effects of CDM projects on municipal employment over time, we run the dynamic regression models separately for two subsamples: municipalities with hydro projects and municipalities with methane avoidance projects.In both cases, we estimate models of the impacts on sectoral employment.Other model specifications14 are displayed in the Appendix section.Results for the hydro project subsample show a negative and significant impact in the immediate CDM coefficient on total employment growth at the municipality level.Other significant explanatory variables in this model are the autoregressive term, total municipal GDP growth as well as population growth.No significant effects are found for the the CER proxy and the time dummy for the CER crisis in 2012.At the sectoral level, we find effects of CDM on industry, agriculture and commerce employment models; while no effects in the service and construction models.In the industry employment model, the CDM coefficient depicts small and significant effects in its 2-year and 3-year lags, meaning that CDM projects had a delayed indirect positive impact on manufacturing employment during the 2nd year after registration of the project and then this effect turned negative during the 3rd year.A CDM project could potentially contribute to generate employment at the local level through direct and/or indirect job creation during construction, operation and maintenance phases, but could also have a contractive effect in some industries.With respect to this positive effect, the transitory impact found during the 2nd year after registration of the project in the industry employment model is in line with the empirical research on the impacts of renewable energy projects other than the CDM framework, where most significant and positive benefits of hydro projects on local employment took place during construction phase.This temporary effect during construction phase can be explained by the generation of demand for intermediate goods and services in the industry.While this mechanism explains the positive effect of CDM projects in the second-year lag, we find a negative sign for the third-year lag in the industry employment model.For 
this negative effect, there are two potential explanations.This can be due to a temporal overshooting effect, meaning that employment increase in the second period is partially offset by a slight decline in the third period, e.g. because employment is redirected from other industry production rather than generating additional employment by CDM."Another potential explanation is that the demand for manufacturing goods might not take place within the project's municipality; thus some degree of manufacturing imports might be experienced, with potential negative effects on local industry.Other significant explanatory variables in the industry employment model are the autoregressive term and the industry GDP growth at the municipal level, whose effects are positive as expected according to theory.Moreover, a small and positive effect which is significant at the 10% level is found for the CER proxy on industry employment, while no significant effect is identified for the time-dummy for the CER crisis.This means that CER revenues generated employment and that this effect persisted even after the decline of the CER price in 2012.This finding is less surprising when considering that nearly 80% of hydro CDM projects in Brazil started before 2012, so that a considerable short term effect on employment was already realized before the crisis.In the agriculture employment model, CDM hydro projects show a negative and significant impact only in the immediate coefficient.A possible explanation for this indirect and temporary cross-sectoral effect is that agricultural wage rates are lower than in other sectors and that therefore employment could be relocated from agriculture to other sectors like commerce and industry.No significant impacts are found for the CER proxy on agricultural employment and the time dummy for the CER crisis.Regarding the service sector, as mentioned before, no CDM effects were found.Since hydro projects are very capital intensive, consequently, sectors such as service may not necessarily benefit from job creation.Although the CDM does not show any significant impact, the CER proxy depicts a small and positive effect, which is significant at the 10% level.Other significant explanatory variables in the model are the autoregressive term and the GDP growth in the service sector.No significant effect was identified for the time-dummy for CER crisis.Similarly, no CDM effects were found in the construction employment model.The only significant variable in the construction model is GDP growth in the industry sector.Finally, regarding the commerce model, the CDM shows a small and positive impact in the immediate and 1st lagged coefficient, which means that CDM has contribute to generate a positive induced effect that reached this sector.The positive employment effect found within the commerce sector could be generated by induced employment effects due to the wage income generated by hydro projects.The only other significant variable in the commerce model is municipal GDP growth.Results for the methane avoidance subsample show significant effects of CDM in both total and sectoral employment at the municipality level."In the case of total employment growth, the coefficient of the CDM variable depict a small, but significant impact in the immediate term, so municipalities with CDM projects exhibit negative and transitory effects directly after project's registration.This negative effect of CDM projects on total employment growth may be driven by a contractive effect in some sectors that 
outweighs the positive effects in other sectors.Other significant explanatory variables in the total employment growth model are the autoregressive term, total federal state GDP growth as well as population growth.No significant effects are found for the CER proxy and the time dummy for the CER crisis in 2012.At the sectoral level, transitory effects of CDM projects on employment are found in the agriculture, services, construction and commerce sectors, but no impacts for the industry employment model."For the agricultural employment model, the impact of methane avoidance CDM projects is significant and negative in the registration's year, while no significant effects are reported for any lagged CDM variables.Other significant variables in this model are the autoregressive term, agricultural GDP growth and population growth.In the case of the service employment model, the immediate term and the lag structure of the CDM variable present significant effects up to the second lag.This may reflect employment demands generated during the construction and operation phase.Methane avoidance projects involve more labor intensive and low-skilled activities provided by other sectors; consequently, the service sector can directly benefit from job creation through activities that do not require high qualifications such as collection, separation, among others.This effect alternates, starting from positive, turning negative and then positive.This can be again due to a temporal overshooting effect.Other significant variables in this model are the autoregressive term and the GDP growth rate in the service sector.The construction and the commerce employment models show a significant and positive impact of CDM also in the immediate coefficient and no significant effects are found for lagged CDM variables.The proxy for CER credits does not show any significant impact in any methane avoidance model.Although some CDM projects promised to share carbon revenues from the generation of CER credits with the municipal government to further contribute to the local development, it seems that the transfer may have not taken place.A potential explanation of this insignificant effect is that this rent was probably captured by the private sector in several ways.Similarly, no significant effects are found for the time dummy for the CER crisis in 2012 on total and sectoral employment.The regression analysis showed that CDM projects in Brazil had mixed and transitory effects in sectoral employment at the local level."The ability of CDM investments to create employment opportunities depends on several variables such as technology type and project's stage.Based on the assessment of two CDM project types, our analysis shows that CDM hydro projects have a small, but mixed impact on industry employment, positive impact in commerce employment, while a negative impacts in agricultural employment.No CDM effects are found in other sectors such as services and construction.Regarding CDM methane avoidance projects, although no impacts are identified on industry employment, small but significant and temporary effects are identified for the agriculture, service, construction and commerce employment.In general, for both hydro and methane avoidance projects, effects in employment are mainly temporary.In accordance with the literature on renewable energy impacts on employment, we therefore find that the cross-sectoral effect of CDM projects on employment is mixed.This is also in line with empirical evidence on the consequences of a shift from 
traditional to green technologies which will require adjustments to the labor market, which in turn may modify labor demand, thus configuring a situation with winners and losers, in particular in carbon-intensive sectors."Depending on whether the direct employment effect is presumed to be strong, such as for landfill gas or biomass energy which are relatively labor intensive, or whether this direct employment effect is presumed to be small, as for capital-intensive technologies like wind or hydro, sustained and significant impacts along a project's lifetime may emerge for some projects but not for others.Our findings for hydro projects are also in line with some empirical analysis of the impacts of hydro investments on employment, whose effects were very modest and temporary and impacted negatively some industries.In addition to the type of technology and project stage, employment effects of implementing renewable energy projects will also depend on the interdependency that already exists among economic sectors at the local level, as well as on local socio-economic conditions, resource endowments and cultural features.Therefore, before implementation of any renewable energy or waste management project, part of the challenge is to identify local needs as well as resource potentialities in order to choose a suitable technology with a value chain that could contribute to enhance local economic performance.Only when a project type matches the local conditions, both positive direct and indirect employment effect may be generated and thus a net positive effect on overall employment, instead of only a shift of employment from one sector to the other, may be found.Regarding the impacts of CER credits on employment, we find no significant results for methane avoidance, but a very small, positive and slightly significant influence in industry and service sectors in municipalities with hydro projects.Although some CDM projects promised to share carbon revenues from the generation of CER credits with municipal governments to further contribute to the local development, it seems that transfers may have not taken place; probably these inflows were captured by the private sector in several ways.If these revenues were spent at the local level, induced employment effects could be generated due to additional demand.Regarding of the impact of the CER crisis, we find that this dummy variable has no significant impact on sectoral employment for both project types.An explanation for this result is provided by Warnecke et al., who argue that projects with high capital investment such as hydro experienced a low vulnerability of discontinuity due to high revenues for electricity sales as well as low operating costs.Given the heterogeneous level of economic growth among developing countries, further research might attempt to investigate impacts not only in emerging economies like Brazil, but also in least developed countries to compare effects under different socio-economic conditions and resource endowments.Moreover, one potential further explanation why CDM projects generate employment in some municipalities but not in others is the role of the local government which could attract or deter potential investors.Political and institutional barriers have been found important in case study research on CDM projects.Therefore, the influence of the political process at the local level should be investigated in more detail in future research.Finally, as this is one of very few ex-post studies that have attempted to 
estimate the impacts of CDM over time using real data, more case study research is needed on understanding the mechanisms that drive cross-sectoral employment effects, particularly on the dynamics and cross-sectoral interactions at the local level. | Clean Development Mechanism (CDM) projects have a two-fold objective: reducing greenhouse gas emissions and contributing to sustainable development. But while the contribution to mitigation has been analyzed extensively in the literature, the impact on development has seldomly been quantified empirically. This paper addresses this gap by investigating the impacts of CDM projects on local employment. We use a dynamic panel regression model across Brazilian municipalities for the period 2004–2014 to estimate cross-sectoral employment effects of two project types: hydro projects and methane avoidance projects. We find that CDM projects have mixed effects on sectoral employment. Municipalities with hydro projects show a positive impact on commerce and a negative on agricultural employment. In a similar way, these effects have also been identified in municipalities with methane avoidance projects, as well as positive effects in the service and the construction sector. Regardless of project type, the sectoral employment effects are found to be small and transitory, i.e. these took place immediately or within the first, second or third year after the registration of the project, corresponding to the construction phase and early years of operation. Revenues from Certified Emission Reductions (CER) seem to have no or a very small positive impact on sectoral employment, and no significant impact is found for the CER price fall in 2012. |
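A hedged sketch of the panel construction behind the methodology described in this paper: it codes the CDM and CER-crisis dummies from a hypothetical municipality-year table and builds the first differences and lagged levels that an Arellano–Bond style difference GMM relies on. Column names and values are invented; the estimation itself was run in Stata with xtabond2, which this sketch does not reproduce.

```python
import pandas as pd

# Hypothetical municipality-year panel (not the study data): one row per
# municipality and year, with employment growth, the registration year of a
# CDM project (NaN if the municipality never hosted one) and the CER proxy.
panel = pd.DataFrame({
    "municipality": ["A"] * 4 + ["B"] * 4,
    "year":         [2010, 2011, 2012, 2013] * 2,
    "emp_growth":   [0.02, 0.03, 0.01, 0.02, 0.04, 0.01, 0.00, 0.03],
    "cdm_reg_year": [2011, 2011, 2011, 2011, None, None, None, None],
    "cer_issued":   [0, 1, 1, 0, 0, 0, 0, 0],   # CER proxy: credits issued at time t
})

# CDM dummy: 1 from the registration year onwards, 0 before (and 0 if no project).
panel["cdm"] = ((panel["cdm_reg_year"].notna())
                & (panel["year"] >= panel["cdm_reg_year"])).astype(int)

# Dummy for the post-2012 collapse of the CER price.
panel["cer_crisis"] = (panel["year"] >= 2012).astype(int)

# Ingredients of an Arellano-Bond style difference GMM: first differences of the
# dependent variable remove the municipality fixed effects, and deeper lags of
# the level serve as GMM-type instruments.
panel = panel.sort_values(["municipality", "year"])
g = panel.groupby("municipality")["emp_growth"]
panel["d_emp_growth"] = g.diff()
panel["emp_growth_l2"] = g.shift(2)

print(panel[["municipality", "year", "cdm", "cer_crisis", "d_emp_growth", "emp_growth_l2"]])
```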
508 | Replacement of fish meal with soy protein concentrate in diet of juvenile rice field eel Monopterus albus | Fish meal is typically regarded as the main protein source in diets for aquaculture species.As aquaculture production continues to increase over the last decades, the demand for FM industry expands constantly.Due to the stagnant supply of FM, however, prices will inevitably increase with demand.Replacement of FM with cheaper plant protein sources would be beneficial in reducing the feed costs and has got wide interest globally.Soybean meal has been considered as one of the most promising alternative fish meal sources due to its availability and reasonable price.However, imbalanced amino acids profile and the presence of anti-nutritional factors have limited the use of soybean meal as a plant protein source in aquatic feed.To further exploit and develop the abundant potential protein source, more attention has been focused on soy protein concentrate, a product through aqueous ethanol or methanol extraction of solvent-extracted soybean meal.Compared to soybean meal, several anti-nutritional factors in SPC are almost inactivated through the extraction process, such as trypsin inhibitor, lectin, saponins, β-conglycinin, glycinin, oligosacharides and beany flavor.Moreover, SPC contains high crude protein far more than soybean meal.Considerable success in partially or completely replacing fish meal with SPC without inhibiting fish growth performance has been reported in some fish species, such as black sea bream Acanthopagrus schlegelii, seabream Sparus aurata L and Atlantic Cod Gadus morhua.During the development of skeletal muscle, four transcription factors, Myf5, myogenin, MRF4, and MyoD, play important roles in regulating genes responsible for commitment of proliferating myogenic precursor cells to the myogenic lineage and subsequent differentiation.Myostatin functions as a negative regulator of skeletal muscle development and growth through inhibiting myoblast differentiation by down-regulating MyoD expression.In mammal, regulations of skeletal muscle growth by nutritional and environmental factors are well demonstrated.However, such regulations have not been reported in fish, where comprehensive studies are still quite needed.Rice field eel Monopterus albus has been known as economically valuable carnivorous freshwater fish in China in virtue of commercial importance and delicious meat.Its annual yield rises to more than 386,137 tons.At present, studies with M. albus were mainly focused on sex reversal.However, few reports on nutritional requirement have been published.Our previous study indicated that M. albus can tolerate 18.6% extracted soybean meal with the replacement of 24% of fish meal, while additional soybean meal supplementation resulted in an inferior growth and feed utilization.To our knowledge, the study on the SPC in M. albus have not been reported.Thus, the study was aimed to monitor the responses of M. 
albus to the diets containing graded levels of SPC in term of growth performance, antioxidant capacity, digestive ability and skeletal muscle growth by measuring the expression of transcription factors that govern the process of myocyte addition in the present study.In the present study, fish meal, shrimp head meal, SPC and wheat middling were used as protein source, α-starch as binder and carbohydrate source and fish oil as lipid source.Six isoprotein and isolipidic experimental diets were formulated.In order to compare the efficiency of soybean meal and SPC in replacing fish meal in M. albus, the control diet was set to contain the same level of fish meal with our previous study on soybean meal.Whereas in the other five diets, SPC were included at 8.5 g/kg, 17 g/kg, 25.5 g/kg, 34 g/kg, 42.5 g/kg to substitute 15%, 30%, 45%, 60% and 75% of fish meal at the expense of wheat middling, respectively, designed as S8.5, S17, S25.5, S34 and S42.5.All ingredients were ground into fine powder and sieved through a 320-μm mesh.The experimental diets were prepared by thoroughly blending the ingredients with fish oil by hand until homogenesis, and then kept in a sample bag and stored at -20 ℃ until used.Before feeding, tap water was added to the experimental diets to make soft dough.The feeding experiment was carried out at Xihu fish farm in Changde, Hunan.Juvenile M. albus were purchased from a commercial farm and reared in floating net cages in pond for 3 weeks to acclimate to the experimental condition.The depth of the water under the cages was 0.6 m.The cages were filled with Alternanthera philoxeroides,Griseb to simulate the natural living conditions of wild M. albus.During the acclimatization, the fish were fed earthworm and fresh fish paste at a ratio of 1:1 for the first week.And then, the control diet was added and the earthworm and fresh fish paste decreased gradually until the fish could eat the experimental diet completely.Fish with similar size were selected, weighted group and randomly assigned into 18 cages with 100 fish per cage.Each diet was randomly distributed to triplicate cages.Fish were fed to apparent satiation once daily by hand for 56 days and uneaten feed was collected to calculate feed conversion rate and feed intake.Water quality parameters were assessed daily following standard methods.During the feeding trial, water temperature was 28–32 ℃, dissolved oxygen 6.3 ± 0.25 mg L−1, alkalinity 71.5 ± 6 mg L−1, ammonia nitrogen 0.47 ± 0.02 mg L−1 and pH 7.2 ± 0.5.The natural light rhythm was followed throughout the feeding trial.The experiments complied with ARRIVE guidelines and carried out in accordance with the National Institutes of Health guide for the care and use of Laboratory animals.At the end of feeding trial, fish were fasted for 24 h. 
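The cage-level records described above (group weights, fish counts and uneaten feed) feed the standard performance indices; a minimal sketch with invented numbers is given below, using conventional formulas that are assumptions here, since the exact definitions are not spelled out in the text.

```python
# Hypothetical per-cage records (not the study data).
n_initial, n_final = 100, 95
w_initial, w_final = 2556.0, 5890.0      # total wet weight per cage (g)
feed_consumed = 4200.0                   # dry feed eaten over the 56-day trial (g)
days = 56

# Conventional indices (assumed formulas, not quoted from the paper).
survival = 100.0 * n_final / n_initial
weight_gain = 100.0 * (w_final - w_initial) / w_initial
fcr = feed_consumed / (w_final - w_initial)                        # feed conversion rate
feed_intake = 100.0 * feed_consumed / ((w_initial + w_final) / 2.0) / days  # % BW per day

print(survival, weight_gain, fcr, feed_intake)
```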
Fish per cage were anesthetized with MS-222, then weighed as a group and counted for the determination of survival and weight gain. The liver and visceral weights of five fish per cage were recorded for calculating hepatosomatic index and viscerosomatic index, respectively. Five fish per cage were sampled at random and frozen at −20 °C for whole-body composition analysis. Chemical analysis of formulated diets and fish samples was conducted by standard methods. Crude protein was determined by the Kjeldahl method and crude lipid by the ether-extraction method. Moisture was determined by oven-drying the fish body at 105 °C to a constant weight, and crude ash was obtained by combustion at 550 °C. The amino acid composition of fish meal, SPC and the six experimental diets was determined according to the method described by Mai et al. Briefly, for amino acids, the soy protein concentrate, fish meal and experimental diets were hydrolyzed with 6 N HCl at 110 °C for 24 h, and the chromatographic separation and analysis of the amino acids were performed after o-phthaldialdehyde derivatization using reverse-phase high performance liquid chromatography, following the modified procedure of Gardner and Miller. For methionine, the samples were oxidized with performic acid at −10 °C for 3 h to obtain methionine sulfone and then freeze-dried twice with deionized water. The freeze-dried samples were hydrolyzed and analyzed in the same way as the other amino acids. Serum was obtained according to Tan et al. (2007). At the end of the feeding trial, blood from five fish per cage was collected from the caudal vein using a 1 ml syringe, pooled in a 10 ml centrifuge tube, allowed to clot at room temperature for 6 h and then centrifuged at 5000 g for 10 min at 4 °C. The obtained serum was stored at −80 °C before the determination of blood indices using commercial kits. Intestinal tracts of five fish per cage were collected, cut into small pieces, pooled in a 10 ml centrifuge tube and stored at −80 °C for the analysis of the activities of trypsin, amylase and lipase according to the instructions of the commercial kits. For gene expression analysis, a portion of dorsal muscle from the same position was removed from five fish per cage at the end of the feeding experiment. The sample was immediately snap-frozen in liquid nitrogen and then stored at −80 °C until RNA extraction. Total RNA was extracted using TRIzol reagent. Its purity and quantity were assessed using a NanoDrop spectrophotometer and agarose gel electrophoresis. The PrimeScript RT reagent Kit with gDNA Eraser was used to synthesize cDNA. Real-time quantitative PCR was performed with a CFX96™ Real-Time System. Each PCR reaction consisted of 12.5 μl SYBR Mix, 1 μl forward primer, 1 μl reverse primer and 1 μl cDNA as template. Double-distilled water was added to adjust the total volume of each reaction to 25 μl. The program was 95 °C for 30 s followed by 35 cycles of 95 °C for 5 s, 58 °C for 15 s and 72 °C for 20 s.
Melting curve analysis of PCR products was performed at the end of each PCR reaction to confirm the specificity.The gene-specific primers were listed in Table 4.β-actin was used as a reference gene.A total volume of 20 μl PCR reaction consisted of 10 μl of SYBR Mix, 0.5 μl of each primer and 3 μl of cDNA as temple.Data were analyzed using SPSS 19.0 software."Homogeneity of variances was tested using the Leven's test.Significant differences were evaluated by one-way analysis of variance followed by Duncan’s multiple-range test.The relationship between the SPC inclusion levels and weight gain were analyzed by the broken-line method.Statistical significance was set at P < 0.05 and the data are presented as means ± S.E.M.Survival rate ranged from 91.67% to 97.22% and was not significantly different among dietary treatments.Fish fed S42.5 had significantly lower WG compared to that in the control, whereas no significant difference were observed at or less 34% inclusion levels.The broken-line model curves, R2 = 0.96) indicated that the optimal inclusion level of SPC was 26% without inverse effects on the growth performance.Fish fed S34 and S42.5 had significantly higher FCR compared to the other treatments.Feed intake decreased with the increasing fish meal replacement levels with SPC, but statistically significant difference was only observed in fish fed S42.5 compared to the control.Viscerosomatic index was significantly lower in S34 and S42.5 in relative to the control.Hepatosomatic index increased as the SPC inclusion levels increased from 0 to 25.5% and then decreased slightly.Replacing dietary fish meal by SPC caused no significantly changes in body composition.Inclusion of SPC in diets to replace fish meal significantly attenuated trypsin activity in intestine even at the minimum replacement level.Lipase and amylase activities significantly increased firstly and then decreased with the increasing fish meal replacement levels and peaked in S17 group.Total cholesterol, triglyceride and low density lipoprotein cholesterol concentrations in serum showed a decreased trend with the increasing of dietary SPC inclusion.The content of TC, TG and LDL-C was significantly reduced relative to the control when SPC inclusion levels were equal or more than 8.5%, 17% and 25.5%, respectively.No significantly difference on HDL-C was found among dietary treatments.When the inclusion levels were equal and more than 17%, the catalase activity was significantly higher than the control and superoxide dismutase activity had a rising trend compared to the control.Total antioxidant capacity was significantly higher, but MDA content was significantly lower than that in the control group even at the minimum replacement level.When the inclusion levels were equal to and above 8.5% or 17%, GPT or GOT activities significantly decreased, respectively.Given the growth performance of M. albus responding to the increasing SPC inclusion, four dietary treatments were selected to assess the growth-related genes expression profile modulated by SPC in skeletal muscle of M. 
albus.Dietary SPC inclusion replacing fish meal imposed a significant influence on the transcript levels of MyoD1, MyoD2, Myog and MSTN, but not Myf5 in muscle.There were no statistically significant differences regarding the expression of transcription factor Myf5.Myog mRNA level was significantly lower in S34 and S42.5 group than that in the control, and no significant difference were observed between the control and S17.MyoD1 mRNA level significantly decreased with the dietary SPC inclusion levels up to 34%, whereas there was no difference between S0 and S42.5.MyoD2 mRNA levels were significantly lower in S17, S34 and S42.5 compared to the control.The lowest values in MyoD1, MyoD2 and Myog mRNA expression levels were found in S34.Conversely, the fish fed S34 and S42.5 showed a significantly higher MSTN mRNA level than the control and S17, the highest value of which was also observed in S34 group.In this study, the weight gain of M. albus ranging from 110.96% to 132.73% was much lower than other studied fish species in the same period of time, while it conforms to the normal growth rhythm of M. albus under captive condition.Results of growth performance demonstrated that SPC has the potential to substitute for fish meal in diet for M. albus and up to 34% SPC could be incorporated in diet to substitute 60% of fish meal without compromising the weight gain and feed utilization of M. albus.This observation is in accordance with the data reported previously in rain trout Oncorhynchus mykiss, turbot Scophthalmus maximus L. and black sea bream Spondyliosoma cantharus, lower than that in Atlantic Cod Gadus morhua and African Catfish Clarias gariepinus and higher than that in Japanese flounder Paralichthys olivaceus.The different conclusion on optimal SPC inclusion levels among various studies is closely related to the dietary composition and fish species.Further investigation, using broken-line model analysis of WG, we demonstrated that 26% SPC inclusion level replacing 48.53% of fish meal was optimal.As expected, M. albus more readily accepts SPC as a dietary protein source than soybean meal observed in our previous study.When the SPC inclusion levels further increased, the growth performance was suppressed.This observation is in line with the study in african catfish and turbot.It could be mainly ascribed to the decreased feed intake when fed high SPC diets.Studies with other fish species revealed that high fish meal replacement level with SPC reduced diet palatability, consequently decreasing feed intake and causing reduced growth.This phenomenon should be more applicable to M. albus due to the nature of poor vision and sensitivity to smell of M. albus.Besides, in the present experiment, amino acids analysis revealed that the content of methionine and lysine in SPC were significantly lower than that in fish meal and a decrease of nearly 31% in methionine and 11% in lysine occurred as the SPC inclusion levels increased from 0 to 34%.It has been demonstrated that an inadequate supply of amino acids is associated to reduction in protein synthesis.Supposedly, the deficiency of the two limiting amino acids decreased protein utilization, thus also affecting the growth of M. albus.In this study, trypsin activity was detected to decrease with the increasing dietary SPC levels and positively correlated with growth performance of M. 
albus.This agrees well with results in sucker Myxocyprinus asiaticus, sturgeon Acipenser schrenckii and seabass Lateolabrax japonicus.SPC contains low levels of trypsin inhibitor or phytic acid, which could combine with trypsin to generate inactive compounds or bind to alkali protein residue respectively, reducing the activity of trypsin.The presence of phytic and the remaining trypsin inhibitor might be considered as another important factor partially accounting for the reduced growth performance of M. albus through negatively influencing the trypsin activity and feed utilization.Amylase and lipase activities were enhanced firstly and then decreased.These results suggested that M. albus might have the capacity to adapt their digestive physiology to changes in nutrition composition to some extent.It has been well established the beneficial effects of dietary soy protein on lipid metabolism in liver and adipose tissue.Similarly, soybean meal inclusion in diet for M. albus showed a hypolipidemic effect manifested by reduced concentration of serum TC, TG and LDL-C.Torres et al. reported that soy protein reduced the insulin/glucagon ratio, in turn, induced down-regulated genes expression involving lipogenic enzymes, by which decreased serum TG, LDL-C and VLDL-TG.Furthermore, soy protein increases the bile acid secretion and stimulates the transcription factor SREBP-2 induced LDL receptor signal pathway, which is responsible for serum cholesterol clearance.Higher inclusion levels of SPC enhanced the antioxidant capacity reflected by the increased antioxidative enzymes activities.It has been reported that the potential antioxidant effect of SPC might be related to the isoflavone component of the soybeans via up-regulation of antioxidant gene expression through activation of ERK1/2 and NF-kB pathway.On the contrary, Lopez et al. did not observed the positive effect of relatively high soy protein on plasma antioxidant capacity.The exact mechanism to explain this phenomenon deserves further investigation.Elevated activities of serum GOT and GPT are generally an indication of liver tissue injure.In the present study, the decreased activities of two enzyme suggested the improved liver function in response to dietary SPC supplementation, in line with the observation in rat and pig.This may be associated with the reduced oxidative damage in liver via the enhanced antioxidant capacity by SPC supplementation.Fish meal replacement by plant protein sources in fish species is a topic of concerted interest.However, the effect of plant protein sources on muscle growth have been poor studied mechanically.Plant protein mixtures and changes in indispensable amino acid / dispensable amino acids ratio largely influenced white muscle cellularity and abundance of MyoD gene expression in rainbow trout.Consistent with the observation in rainbow trout, high inclusion of dietary SPC increased the relative transcript abundance of MSTN and lowered the relative MyoD and Myog transcript abundance in M. albus.Supposedly, SPC inclusion substituting for fish meal might decrease the satellite cell activation, potentially influencing myocyte addition and skeletal muscle growth in M. 
albus. Despite this, the molecular mechanism by which SPC inclusion affects muscle growth cannot be fully elucidated in the current study and needs further investigation. In conclusion, the present study revealed that up to 34% SPC inclusion in the diet did not hamper growth or reduce feed utilization, and that the optimal SPC inclusion level is 26%. Replacing fish meal with SPC improved the blood lipid profile, enhanced the antioxidant status, modulated digestive enzyme activities and altered the expression pattern of muscle growth-related genes. | Among plant protein sources, soy protein concentrate (SPC) has lower anti-nutritional factors and higher protein content. The aim of this study was to evaluate the effect of replacing fish meal in the diet of rice field eel Monopterus albus with soy protein concentrate on growth performance, serum biochemical indices, intestinal digestive enzymes and growth-related gene expression in skeletal muscle. Six isonitrogenous (45% crude protein) and isolipidic (5.5% crude lipid) diets were formulated with 0 g/kg, 8.5 g/kg, 17 g/kg, 25.5 g/kg, 34 g/kg and 42.5 g/kg SPC inclusion to replace 0%, 15%, 30%, 45%, 60% and 75% of fish meal (S0 (control), S8.5, S17, S25.5, S34 and S42.5, respectively). Each diet was randomly assigned to triplicate groups of 100 fish per net cage (mean initial weight 25.56 ± 0.12 g). The fish were fed once daily at 18:00 for 56 days. Results showed that weight gain and feed intake significantly decreased in fish fed S42.5 (P < 0.05). Feed conversion rate was significantly higher in the S34 and S42.5 groups compared to the other treatments (P < 0.05). No significant effects were found on hepatosomatic index and body composition among treatments. Viscerosomatic index significantly decreased with increasing levels of SPC inclusion. Amylase and lipase activities in the intestine peaked in the S17 group. Dietary SPC supplementation significantly reduced the activities of intestinal trypsin and of serum glutamic oxalacetic transaminase and glutamate pyruvate transaminase, as well as the content of serum triglyceride, total cholesterol, low density lipoprotein cholesterol and malondialdehyde, but significantly elevated catalase activity and total antioxidant capacity (P < 0.05). Superoxide dismutase activity showed a rising trend compared to the control. High supplementation levels of dietary SPC down-regulated myogenic determination factor and myogenin mRNA expression, but up-regulated myostatin mRNA expression. These results suggested that fish meal could be partially replaced by SPC in the diet of M. albus and that the optimal supplementation level of SPC was 26%, as estimated by a broken-line model. Replacing fish meal with dietary SPC is beneficial in enhancing serum antioxidant capacity, improving the serum lipid profile and modulating the growth-related gene expression pattern in skeletal muscle of M. albus. |
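The broken-line estimate of a 26% optimum described in this paper can be illustrated with a two-segment piecewise linear fit of weight gain against SPC inclusion level; the sketch below uses hypothetical weight-gain values, not the measured data, and a single-breakpoint model assumed to mirror the broken-line method cited in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical mean weight gain (%) at each SPC inclusion level (% of diet).
spc = np.array([0.0, 8.5, 17.0, 25.5, 34.0, 42.5])
wg  = np.array([132.7, 131.9, 130.8, 129.5, 124.0, 111.0])

def broken_line(x, plateau, slope, breakpoint):
    # Flat response up to the breakpoint, then a linear decline beyond it.
    return np.where(x <= breakpoint, plateau, plateau + slope * (x - breakpoint))

p0 = [132.0, -1.0, 26.0]                      # starting values for the optimiser
params, _ = curve_fit(broken_line, spc, wg, p0=p0)
plateau, slope, breakpoint = params

residuals = wg - broken_line(spc, *params)
r_squared = 1.0 - residuals.var() / wg.var()
print(f"estimated breakpoint = {breakpoint:.1f}% SPC inclusion, R2 = {r_squared:.2f}")
```

The fitted breakpoint is read as the highest inclusion level with no growth penalty, which is how the 26% figure is interpreted above.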
509 | Can Private Vehicle-augmenting Technical Progress Reduce Household and Total Fuel Use? | This paper has three main aims.The first is to model the use of energy-intensive consumer services in a more appropriate manner than in the existing literature.In particular, we operationalise the approach suggested in Gillingham et al. by explicitly incorporating both energy and non-energy inputs to both the supply of energy-intensive services and the determination of their price.We take, as an example, the household production of private transport services using inputs of refined fuel and motor vehicles.The second aim is to analyse the impact of technical change in the household provision of this energy-intensive service, focussing on improvements in vehicle efficiency.To be clear, we have in mind efficiency improvements in the use of these inputs in the act of consumption, not in the production of the vehicles that are consumed.1,Adapting a general result derived in Holden and Swales to this particular setting, we identify the condition under which such an efficiency increase reduces the household fuel use in a partial equilibrium analysis.This occurs where the elasticity of substitution between fuel and vehicles in the household production of private transport is greater than the elasticity of substitution between private transport and the composite of all other goods in household consumption.The third aim is to extend the analysis through simulation using the UK-ENVI Computable General Equilibrium model.These simulations investigate the wider implications of household vehicle-augmenting efficiency improvements where prices, real and nominal incomes are endogenous.This captures the impact on the system-wide change in fuel use, including its use as an intermediate in production.The subsequent reduction in the price of private transport services allows the real wage, measured against the adjusted consumer price index, to rise, enabling employment to increase.However, simultaneously the nominal wage, measured against foreign prices, can fall, stimulating UK international competitiveness, increasing exports and reducing import penetration.The increase in household vehicle efficiency thereby provides an additional combined demand- and supply-side stimulus to production, employment and household income.In general, the CGE work supports and extends the partial equilibrium findings.Many studies have analysed the impact of energy-saving technical improvements in consumption so as to assess the potential impact on final energy use.2,These technical improvements simply mean that the same amount of fuel services can be delivered with less physical fuel.However, households typically use energy as one element in the technology that delivers energy-intensive consumption services.Examples of such services include domestic space heating, air-conditioning, lighting and cooking.3,In the present paper we treat these consumption services as though they are produced by the household using the appropriate inputs.Therefore in this case we assume households produce private transport using inputs of fuel and vehicles.4,A small number of papers do attempt to model domestic energy use explicitly in the context of the generation of energy-intensive services.However, the technology implicitly used in these papers is extremely rudimentary.Output is a linear function of energy use, so that technical improvements simply reduce that coefficient.Therefore, for example, in Walker and Wirl private transport is obtained by 
combining fuel and technology.This technology converts fuel use into miles travelled.In this approach, the price of private transport is calculated as the price of fuel divided by the fuel efficiency of vehicles.The cost of the vehicle, its role in determining the price of private transport and the possible substitution between expenditure on the vehicle and fuel is not discussed.Wirl makes the case for explicitly treating household energy use as a derived demand, as one element of the inputs to domestically produced consumer services and Gillingham et al. similarly argues that producing vehicles using a lighter material would improve fuel efficiency of motoring services and increase the number of miles travelled per unit of fuel.This approach implies that the price of the energy-intensive service depends on the price of energy and all the other inputs that combine to deliver the service.Although it does not discuss specifically how this should be modelled and is mostly interested in the implications of energy efficiency for the calculation of the rebound effect, Gillingham et al. offers an interesting starting point.In the present paper we operationalise this approach, beginning with a partial equilibrium analysis and them moving to a Computable General Equilibrium simulation.In this model households produce private transport, measured here as miles travelled, m, over a given time period, by combining vehicles, v, and fuel, f. Consumption demand for fuel is therefore a derived demand stemming from the household requirement for private transport.It is important to stress that this is essentially an illustrative example and it has been chosen primarily because of data availability in the general equilibrium modelling.We use a conventional, well-behaved production function to determine the relationship between the inputs of vehicles and fuel and the miles travelled.This is a standard approach in economics, but we detail some of its key features for two main reasons.First, the notion of a production function is being applied here in an unusual setting.Second, given the way in which the relationship is characterised we adopt particular definitions of improvements in fuel and vehicle efficiency.These may differ from the definitions used in other disciplines.There are a number of general features of a well-behaved production function that are of interest here.First it is linear homogeneous and therefore exhibits constant returns to scale.If all inputs are doubled, output is doubled.This implies that the household private-transport technology can be studied by focussing on the unit-isoquant, the set of techniques that could be used to produce one unit, say 100 miles travelled per week.Given our formulation, more expensive vehicles are less fuel intensive.5,The consumer chooses the combination of vehicles and fuel that maximises the amount of miles travelled, m, given her budget constraint.This involves a trade-off between the increased vehicle cost and the lower fuel cost per mile.In Eq., p indicates a price, ε is an efficiency parameter and n is a superscript for natural units.In the base period εz = 1 ∀z so that initially natural and efficiency units are the same for both inputs.7,To increase the efficiency of a particular input z, we increase the value of εz.Expression implies that for any input whose efficiency is increased, technical progress is reflected in a change in its price, expressed in efficiency units.Technical changes can therefore be represented through adjustments in the budget 
constraint, specified in efficiency units.If the price of one input falls, its use per unit of physical output will rise.However, the share of the unit cost that goes to that input will fall only if the inputs are complements and rise if they are competitors.Fig. 1 shows vehicles and fuel as competitors with vehicle efficiency increasing.We parametrise the model so that the initial quantity of fuel, vehicles and motoring are all equal to unity, so that in the absence of efficiency changes, natural and efficiency units are equal.The vertical axis represents vehicles in natural and efficiency units, while the horizontal axis simply represents fuel in natural units as fuel efficiency does not change in this analysis.Initially the consumer is at point m on the isoquant I1.The technical improvement in vehicles, represented by an increase in εv, pivots the budget constraint, expressed in efficiency units, clockwise, as the price of vehicle in efficiency units decreases.At point m1 the consumer chooses the combination of f1n and v1e that maximises the output of private transport.This is where the new budget constraint is tangent to the highest attainable isoquant, I2.If we project the fuel consumption figure onto the initial budget constraint expressed in natural units, we see that private transport output m1 is produced at m* using f1n and v1ninputs, both measured in natural units.At this point it would be useful to clarify the nature of pure vehicle augmenting technical change.This does not depend on how the efficiency improvement is delivered.That is to say, changes in vehicle design, fuel composition or household behaviour can all generate efficiency changes that are purely vehicle augmenting.Imagine a technical change that does not reduce the cost of the vehicle but improves its durability, thereby reducing maintenance and depreciation costs, but has no direct impact on fuel efficiency.Such a change would be purely vehicle augmenting.This could be embodied in vehicle design through the use of more robust materials, result from changes in fuel refining which reduce engine wear or adjustments in owner/driver behaviour leading to lower maintenance or depreciation.With a standard production function, and constant input prices measured in natural units, such vehicle-augmenting technical change will always reduce fuel use per mile travelled.This is because the price of vehicles has fallen, leading to the substitution of vehicles for fuel in the households production of private transport.Note that this is not due to energy augmenting technical change but rather an endogenous choice of less fuel intensive, but already existing, technology.8,However, fuel use per £1 spent on motoring does not necessarily fall.In Fig. 1 we assume that the two goods are competitive.In this case, the efficiency improvement in vehicles reduces the quantity of fuels necessary to deliver the increase in private transport services, while the use of vehicles, measured in natural units, increases.Clearly for energy-intensive household services in general, technical improvements in the non-energy inputs generate endogenous changes in fuel use which can be positive or negative.In this case, the consumption of fuel depends not only on the substitution between vehicles and fuel, σv, f, but also on the degree of substitution between private transport and all the other goods, σm, a. Fig. 2 presents a graphical analysis which extends that shown in Fig. 
1.The diagram has two panels.The top panel has vehicles in efficiency units on the vertical axis and refined fuel in natural units on the horizontal axis.In the bottom panel the price of motoring pm is on the downward-pointing vertical axis.Again, we parametrise the model so that the initial quantity, price, and therefore the total budget for private transport are all unity.The consumer initially produces using the technique m1 which includes f1n fuel together with a quantity of vehicles.With a fixed nominal budget, technical progress in vehicles has the effect of pivoting the budget line from b1b1 to b1b3.This replicates Fig. 1 and implies that a constant budget can now produce more private transport because the increased efficiency of vehicles reduces the price of private transport.At this point, if the new budget line is moved parallel downwards until it is just tangent to the initial isoquant, we identify the cost-minimising way for the household to produce one physical unit of private transport.Here we are essentially using the budget constraint as an isocost curve.The unit cost-minimising point is m2.In the lower part of the diagram, the 45 degree line through the origin simply transfers the private transport price, given by the point where the minimum unit isocost curve hits the fuel axis onto the vertical axis.The B curve then gives the total expenditure associated with private transport at this price.Where this expenditure figure is translated to the horizontal axis, it gives the point where the new budget constraint line cuts the fuel axis.In this case we are assuming motoring consumption is elastic, so expenditure rises generating a new budget constraint, b4b4, parallel to b2b2 but further from the origin The point that maximises the private transport output is at m4 with an input of fuel of f4n.If the private transport production function, as represented in Eq., is linear homogeneous, m2, m3 and m4 will all lie on a straight line through the origin, each having the same fuel/vehicle ratio.Also the ratios of the distances from the origin indicate the change, so that in this case output of private transport increases by 0m4/0m2.If the private transport price elasticity of demand has unitary elasticity, the B curve is vertical and passes through b1 and also A.For unitary elasticity, the total expenditure on private transport remains constant and the new budget constraint is b1b3.If the demand for private transport were price inelastic, the B curve would still go through point A but would slope in the opposite direction to the curve shown in Fig. 2.Total expenditure on private transport would fall as efficiency increases.In Fig. 2 energy use decreases from f1n to f4n following technical progress in vehicles.However, while in Fig. 
1 the only condition for a reduction in fuel use is for the elasticity of substitution between refined fuels and vehicles to be > 1, here we need to account also for the substitutability between private transport and all other goods.It transpires that in the partial equilibrium setting, whether fuel use rises or falls in response to an increase in vehicle efficiency depends solely on the values of σv, f and σm,a.Holden and Swales address this issue in a more conventional industrial production setting, where output is produced with capital and labour and sold in a perfectly competitive product market.An expression is then derived for the cross price elasticity of one input with respect to a change in the price of a second input.A key result is that a reduction in the price of one input leads to an increase in the use of the second input where the price elasticity of demand for the output is greater than the elasticity of substitution between the two inputs.10,This result translates directly to the household production of energy-intensive services in general and to private transport in particular.In a partial equilibrium setting, if σv, f > σm,a then the negative substitution effect dominates the output effect, and as vehicles become more efficient, and their efficiency price falls, fuel use will also fall.On the other hand, if σv, f < σm,a, any efficiency improvements in vehicles is accompanied by an increase in fuel use.This has the implication that even if the household production of energy services has unitary elasticity of substitution, so that σv, f = 1, the fuel-use response to an increase in vehicle efficiency is ambiguous; it will rise or fall depending on whether σm,a is greater than or less than one.As noted, this partial equilibrium approach is based on the assumption of a fixed nominal income and unchanging market prices.In Sections 4, 5 and 6 we extend this analysis within a general equilibrium framework.This allows the assessment of the impact of three additional effects.It also allows us to track the impact on total fuel demand, which includes its use as an intermediate input in production.First, in general equilibrium the production side of the economy is endogenous to the model, implying that nominal income and intermediate demands are is also endogenous, affecting both the consumption and total demand for fuel.Second, input prices in natural units which are exogenous in the partial equilibrium are endogenous in general equilibrium and are likely to change responding to macroeconomic factors.In the standard formulation of our CGE model, we have no prior expectation as to whether incorporating these two effect will have positive or negative impacts on the level of economic activity or prices.In fact, this will depend on the composition of the demand shifts triggered by the reduction in the efficiency price of household vehicles and by the production characteristics of the commodities whose demand is changing.A third issue is linked to the calculation of the consumer price index.Gordon argues that efficiency improvements in household services, especially energy-intensive services such as domestic lighting, heating and air conditioning, are a significant source of bias in the calculation of the consumer price index.The claim is that national statisticians generally fail to account fully for these technical improvements, although there is a more concerted attempt to identify important efficiency improvements in private transport.Standard CGE simulation models also do not 
typically incorporate the impact of improvements in household efficiency on the CPI.This is because such improvements do not directly change the production technology, and therefore the price, of commodities produced by firms.And it is these prices which comprise the CPI in the standard CGE treatment, CPI c.However, in the present simulations we can incorporate the private transport price in an adjusted consumer price index, CPIτ.An efficiency increase in vehicles will reduce the price of private transport, which will lead to a reduction in CPIτ.It is important to note that the prices in the UK-ENVI model are measured relative to foreign prices.11, "If the CPIτ falls with no change in the nominal wage, the worker's real wage, which is here measured relative to the CPIτ, increases.But this leads to disequilibrium in the labour market: the real wage has increased with no change in the underlying labour market conditions.If bargaining in the labour market occurs over the real wage, the nominal wage will fall and the quantity demanded of labour will rise until the tightening of the labour market matches the increase in the real wage.In the model the stimulus to output will be seen as an increase in exports and a reduction in import penetration and there will be an additional boost to employment as the nominal wage falls by more than the cost of capital.We operationalise the general equilibrium approach using UK-ENVI.This is a dynamic CGE model designed specifically for analysing the impacts of environmental policies, parameterised on a 2010 UK Social Accounting Matrix with 30 production sectors.12,In the following sections we outline the main features of the model, focussing particularly on the structure of household consumption.Total consumption is then allocated to sectors as shown in Fig. 3.Essentially we assume that households produce, and then directly consume, private transport through purchasing vehicles and fuel inputs.The price of private transport is unobserved in the standard production accounts.However, it can be modelled through this adjustment to the consumption structure and is equal to the unit cost of self-production.We note that vehicles are consumer durables and should be treated as household investments.For this reason we focus in this paper on long-run equilibrium results where the household stock of vehicles is at its equilibrium level.At this point the level of expenditure on vehicles just equals depreciation.Further, household consumption comprises goods produced in the UK and imported goods from the rest of the World, and these are taken to be imperfect substitutes, via an Armington link.In each sector, the production structure is as outlined in Fig. 
4. Output is produced via a capital, labour, energy and materials CES function. At the top level, value added and intermediate inputs combine to generate output. At the second level, labour and capital produce value added, while energy and materials form a composite of intermediate inputs. Again, imported and locally produced intermediate inputs are assumed to be imperfect substitutes. In Eq., investment is a function of the gap between the desired, Ki,t*, and actual, Ki,t, capital stock, plus depreciation, which occurs at the rate ϕ. The parameter β determines the speed at which the capital stock adjusts to its desired level. Steady-state equilibrium requires that the desired and actual capital stock levels are equal, so that Ki,t* = Ki,t and therefore Ii,t = ϕKi,t. In this equation, the bargaining power of workers, and hence the real consumption wage, is negatively related to the rate of unemployment, u. The parameter θ is calibrated to the steady state and γ, the elasticity of the wage with respect to the level of unemployment, u, takes the value 0.069. Improvements in fuel or vehicle efficiency in the household production of private transport have no direct impact on CPIc but will reduce CPIτ. We assume that the Government faces a balanced budget constraint with constant tax rates, so that any variation in revenues driven by changes in economic activity is absorbed by proportionate adjustments to Government current spending on goods and services. There are two main sets of simulations reported in Section 6. In all the simulations we introduce an exogenous 10% permanent step increase in the efficiency of the vehicle input in the household production of private transport. We report long-run equilibrium results where the conditions discussed around Eq. are satisfied. We are primarily concerned with the steady-state impacts, rather than the short-term dynamics of adjustment. However, earlier test simulations suggest that the short- and long-run results are in fact very similar. In Section 6.1 we attempt to replicate, in a general equilibrium setting, the partial equilibrium analytical results reported in Section 4. Specifically, we initially hold the real wage constant, as in Eq., and use the unadjusted CPIc, so that the input prices, measured in natural units, remain unchanged. In a set of simulations, the values of σv,f and σm,a are systematically varied and the impact on fuel use is tracked. These simulations are designed to produce a minimal effect on aggregate variables, and in fact the impact on these variables is small. In Section 6.2 we quantify the cumulative effect of introducing a more appropriate adjustment to the CPIτ and an active labour market closure. In these simulations we detail the results for the four combinations of σv,f and σm,a values shown in Fig. 5, where they are labelled A to D.
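For reference, the capital-adjustment rule and the wage curve described above can be written in one plausible algebraic form. This is a hedged reconstruction from the verbal description only; the exact specification used in UK-ENVI may differ.

```latex
% Capital-stock adjustment: investment closes part of the gap to the desired
% stock (at speed beta) and covers depreciation at rate phi.
I_{i,t} = \beta\,(K^{*}_{i,t} - K_{i,t}) + \phi K_{i,t},
\qquad K^{*}_{i,t} = K_{i,t} \;\Rightarrow\; I_{i,t} = \phi K_{i,t} \ \text{(steady state)}

% Bargained real wage: negatively related to unemployment, with elasticity
% gamma = 0.069 and theta calibrated to the steady state.
\ln\!\left(\frac{w_{t}}{CPI_{t}}\right) = \theta - \gamma \ln(u_{t})
```

Under the second relationship, any fall in unemployment raises the bargained real wage, which is the mechanism that moderates the competitiveness gains in Scenario 3 below.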
For each of these key elasticities we choose two specific values, one elastic and the other inelastic.The values for σv, f are 1.2 and 0.3 and for σm,a 1.5 and 0.5.We then run simulations for each of the four possible combinations.With each simulation it is therefore straightforward to show the impact of varying one, or both of the elasticities.Note that from the partial equilibrium analysis we expect that with models A, C and D an increase in vehicle efficiency should be associated with increased fuel use.Only with the elasticities given in model B do we expect a reduction in fuel use.In the Section 6.2 we report the simulation results from three separate scenarios.The aim is to show the effect of introducing additional macroeconomic elements whose impacts are excluded from the partial equilibrium analysis but which can be identified through the CGE simulations.In Scenario 1, we assume that the real wage is fixed and calculated using the standard CPIc.That is to say, the same model specification as used to generate the results in Section 6.1.In Scenario 2 we again impose a fixed real wage, but in this case calculated using the adjusted CPIτ, as defined in Eq.The fall in the price of private transport caused by the increase in vehicle efficiency reduces CPIτ which has knock-on effects on the nominal wage and competitiveness.In Scenario 3, we incorporate the wage bargaining function, detailed in Eq., but again use the adjusted CPIτ to calculate the real wage.In this case, any aggregate stimulus to the domestic economy that generates a reduction in the unemployment rate will be partly mitigated by an increase in the real wage and an accompanying reduction in competitiveness.To investigate the sensitivity of household fuel use to changes in the consumption elasticity values in a general equilibrium context, we conduct a sensitivity exercise where we systematically vary both σm,a and σv, f.In these simulations the elasticities take 0.2 increments between the values of 0.1 to 1.3 inclusive.14,Results are represented in Fig. 6, where the percentage change in the use of refined fuels is plotted for each combination of σm,a and σv, f.This shows that the percentage change in fuel consumption is positively related to the value of σm,a and negatively related to the value of σv, f.In particular, within the accuracy of the elasticity values used here, where σm,a > σv, f then fuel use increases with an increase in vehicle efficiency; where σv, f > σm,a, fuel use falls.Within this range of elasticity values the largest fall in fuel use, 4.27%, occurs where σv, f = 1.3 and σm,a = 0.1.These simulation results clearly reinforce the partial equilibrium analysis in Section 3.Table 1 gives the values of six key endogenous variables under the three macroeconomic scenarios.It reports the percentage changes in three fuel use and three aggregate economic variables, all measured as percentage deviations from their baseline values.These are household, total and total non-household fuel use, and the CPI, nominal wage and GDP.A more detailed set of results is given in Appendix A.As discussed in Section 5, there are three scenarios and four combinations of substitution parameters, so that we report results from twelve simulations in all.The results from Scenario 1 are shown along the top lines in Table 1 panel a and b.The combinations of substitution elasticities are as shown in Fig. 
5, labelled A to D, and the real wage is held constant using the conventional consumer price index measure.As a result, the impact of the efficiency increase on the price of inputs does not vary across the four simulations.There is no change in the price of fuel and vehicles in natural units and the price of vehicles measured in efficiency units falls by 10%.There are differences in the change in the price of private transport, reflecting the different elasticities of substitution between vehicles and fuel, but this price variation is quite limited.Essentially, the differences between the outcomes in the individual simulations in this scenario reflect how consumers react to the same reduction in the price of vehicles, in efficiency units, and the corresponding similar – across simulations - reductions in the price of private transport.In this scenario, the impact on household fuel consumption is very close to that given in the partial equilibrium analysis.15,Clearly, as shown in Section 6.1, fuel use is positively related to σm,a and negatively to σv, f.The interaction between the size of the increase in demand for private sector transport and the fuel intensity of its household production determines the overall change in the fuel use.This falls only in Simulation B, where the value of σv, f is high and σm, a is low.The more detailed results in Appendix A show that the value of σm,a controls the size and composition of the changes in demand for private transport and all other goods, whilst the value of σv, f determines the changes in vehicle and fuel intensity of the household production of private transport.In the model used in Scenario 1 the macro-economic impact is similar to that which would be generated by a change in tastes that affects the composition of household consumption.If the change in vehicle efficiency leads to the household consumption vector having a higher direct, indirect and induced domestic content, then economic activity will rise: if the change in consumption choice leads to a reduction in domestic content, aggregate economic activity will fall.16,There is no additional accompanying supply-side shock.In the simulations A and D, the consumption of all other goods falls and the consumption of fuel rises.Both simulations exhibit a small decline in GDP, together with employment, investment, household income and aggregate household consumption.On the other hand, in simulation B, where the consumption of all goods increases and the consumption of fuel falls, all indicators of aggregate economic activity rise.In simulation C the consumption of both all other goods and fuel increases and this produces a neutral impact on economic activity.17,These results are consistent with the intuitive notion that fuel has a relatively low, and all other goods a relatively high, domestic content.Outcomes which shift consumption towards the former and away from the latter have a stimulating impact on aggregate economic activity, though this is very small.Note that in this scenario there is no conflict between energy reduction and economic expansion: in these simulations, where fuel use falls, output increases.Table 1 indicates that any variation in household fuel consumption is accompanied by a change of between a third and a half in non-household fuel use.For example, in simulation B the 2.51% reduction in household fuel consumption also generates a 0.20% fall in non-household fuel use, so that total fuel use falls by 0.63%.18,The fact that household and non-household fuel use move in 
the same direction suggests that this result is driven by the high fuel intensity of fuel production itself.The results from Scenario 2 are shown on the second rows of Tables 1, panel a and b.In this scenario we use the adjusted consumer price index, CPIτ, in which the fuel and vehicle prices are replaced by the price of private transport.This adjusted price index is then used to calculate the adjusted nominal wage corresponding to the fixed real wage, as explained in Section 4.4.The private transport price reduction directly triggers a drop in the CPIτ.Maintaining the real wage leads to a reduction in the nominal wage - that is the wage relative to foreign prices – equal to the fall in the CPIτ and this further reduces commodity prices.Across all the simulations the CPIτ decreases by 0.10%.This has three primary impacts.First, the reduction in product prices, triggered by the fall in the cost of labour, generates competitiveness-driven expansionary effects.This is reflected in an increase in export demand, which rises in the long run by 0.09% in all the simulations in Scenario 2.Second, the lower nominal wage leads producers to substitute labour for capital in production and reduce the relative price of labour intensive commodities.This results in higher employment and in a corresponding reduction in unemployment.Third, household nominal income increases as employment rises, stimulated by the substitution and output effects already identified, so that household total consumption increases.In all the simulations covered by Scenario 2, GDP is higher, by 0.12 or 0.13 percentage points, than the comparable figure for Scenario 1.This means that there is a positive increase in GDP for all the simulations of between 0.09% and 0.15%.Further, the adjustment to the consumer price index increases the consumption of particular commodities, as compared to the results for Scenario 1; the consumption of vehicles, fuel and all other goods are between 0.03% and 0.07% higher.This leads to an increase in total fuel use of around 0.10 percentage points across all simulations, as compared to Scenario 1.However, these changes are relatively small so as not to affect the qualitative fuel-use results.In Scenario 2 the economic stimulus from the increased competitiveness delivers a boost to GDP and all the other measures of aggregate economic activity.In Scenario 3 we further add a bargained real wage, determined by the wage curve as specified in Eq.The central point is that in this case, if employment increases with a fixed labour force the accompanying fall in the unemployment rate drives an increase in the real wage.In the simulations in Scenario 3 this increase in the real wage reduces some of the impact of the efficiency improvement on competitiveness.The results for Scenario 3 are shown in the last rows of Tables 1, panel a and b. 
Note first that the long-run adjusted real wage now increases for all the simulations as employment rises; the nominal wage falls by less than the adjusted consumer price index.Whilst in Scenario 2 the nominal wage across all simulations falls by 0.10%, this reduction now lies between 0.05% and 0.01%, which limits the fall in product prices as reflected in the CPIτ.Also, in the fixed real wage Scenario 2, exports increased by 0.09% across all simulations whilst in Scenario 3, the long-run stimulus to exports is now much lower, between 0.01% and 0.04%.Whilst all simulations in Scenario 3 register increases in GDP and the other indicators of aggregate economic activity, these are smaller than the corresponding figures in Scenario 2.The long-run Scenario 3 values for all the fuel use variables lie between the Scenario 1 and Scenario 2 figures.The simulations results show the impact of modelling private transport as an energy-intensive self-produced household service.Investigating variation across the simulations produces an increased understanding of the relationship between the inputs in the production of this service.Specifically, when considering improvements in the efficiency in the production of private transport, a vehicle-augmenting technical improvement can lead to a reduction in fuel consumption, depending upon the values of key substitution elasticities.Any reduction here in the fuel-intensity of private transport, and the possible lower household and total use of refined fuels in aggregate, is not brought about by an exogenous improvement in fuel efficiency.Rather it is driven by an endogenous reaction to an improvement in the efficiency of a closely-linked good, either as a substitute or complement, in this case vehicles.This shows the importance of modelling energy-intensive household services in general, and private transport in particular, as the output of a number of inputs.Moreover, in determining the overall impact of technical progress in vehicles on the demand for fuel, it is fundamental to take into account changes in the quantity demanded of private transport.Such changes in the demand for the energy-intensive service generate an additional increase or reduction in the derived demand for the input goods.Whilst there are general equilibrium effects on household fuel consumption, these are dominated by the impacts identified in partial equilibrium.Using general equilibrium simulation to incorporate endogenous variation in intermediate fuel use suggests that these reinforce changes in household fuel consumption.When the CPIc is calculated using the conventional method and the real wages are held constant, the macroeconomic impact of the technical improvement simply reflects the switching of demand between different commodities within the household budget.Commodities, which have, directly or indirectly, more domestic content will have a larger impact on GDP.In the present case, this switching depends on the degree of substitution between private transport and the composite commodity “all other goods”, and between fuel and vehicles in the production of private transport.When, as a result of the efficiency change, the consumer reduces expenditure on the consumption of all other goods competing with private transport, and increases the consumption of fuel, GDP falls.However, we need to recognise that the structure of consumption adopted here is extremely rudimentary.In practice the demand impact will depend heavily on changes in demand for other commodities that are close 
substitutes and complements to private transport.For example, we would expect consumers to substitute between public and private transport.When the adjusted CPIτ is used, the price of private transport, which is normally unobserved, is incorporated into the calculation of the real wage.With a fixed real wage, we then report an increase in competitiveness and a productivity-led economic stimulus.This arises because the nominal wage falls, lowering domestic prices, stimulating the demand for exports, and reducing the demand for imports.It also leads to some substitution of labour for capital.When workers are able to bargain, the real wage will rise as the unemployment rate falls, limiting the reduction in the CPIτ, the nominal wage and the subsequent increase in economic activity.This work provides a more sophisticated treatment of private transport demand, as a household self-produced energy-intensive service.Although we use the example of private transport, our framework can be applied to other energy-intensive services such as home heating.Other extensions include recognising that the adoption of new technological vintages, such as in vehicles, require investment.The accumulation of the new stock of vehicles should be modelled as a formal investment process similar to the way in which investment is modelled in the production side of the economy.However, whilst this will influence the time path of the introduction of the more efficient technology, it does not affect the long-run analysis applied here.Finally, in the specific case of motor vehicles, fuel saving from efficiency improvement has often been offset by the increase in size and weight of vehicles.A more nuanced way of modelling private transport services should therefore employ a framework which incorporates variations in other inputs and vehicle characteristics and their impact on fuel intensity and use. | This paper demonstrates the importance of modelling energy-intensive household services in general, and private transportation in particular, as combinations of energy and other inputs. Initially a partial equilibrium approach is used to analyse private transport consumption as a self-produced commodity formed by household vehicle and fuel use. We particularly focus on the impact of private vehicle-augmenting technical progress in this framework. We show that household fuel use will fall if it is easier to substitute between vehicles and fuel in the household production of private transport services than it is to substitute between private transport and the composite of all other goods in overall household consumption. The analysis is then extended, through Computable General Equilibrium simulation, to investigate the wider implications of similar efficiency improvements when intermediate demand, prices and nominal income are endogenous. The subsequent reduction in the price of private transport service (not observable in market prices) allows the wage measured relative to the CPI to rise whilst the wage relative to the price of foreign goods falls. This simultaneously increases UK international competitiveness, encouraging increased exports and reduced import penetration whilst allowing employment to rise. This provides an additional supply-side stimulus to production, employment and household income. |
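The partial-equilibrium condition discussed above (household fuel use falls only when σv,f exceeds σm,a) can be illustrated with a minimal two-level CES sketch. This is not the UK-ENVI model: the share parameters, prices, budget and functional forms below are hypothetical calibration choices made only for this example, using the two elastic and two inelastic substitution values reported in the paper.

```python
# Minimal partial-equilibrium sketch (not the UK-ENVI model): households buy
# private transport m, self-produced from vehicles v and fuel f with CES
# elasticity sig_vf, and all other goods a, with top-level CES elasticity
# sig_ma. All prices, shares and the budget below are hypothetical.

def fuel_use(eps_v, sig_vf, sig_ma, budget=1.0,
             p_f=1.0, p_v=1.0, p_a=1.0,
             a_v=0.5, a_f=0.5, a_m=0.5, a_a=0.5):
    """Natural-unit fuel demand after a vehicle-augmenting efficiency gain eps_v."""
    p_v_eff = p_v / eps_v  # vehicle price in efficiency units
    # CES unit cost (price) of private transport (sig_vf != 1 assumed)
    p_m = (a_v * p_v_eff**(1 - sig_vf) + a_f * p_f**(1 - sig_vf))**(1 / (1 - sig_vf))
    # Top-level expenditure share on private transport (sig_ma != 1 assumed)
    share_m = a_m * p_m**(1 - sig_ma) / (a_m * p_m**(1 - sig_ma) + a_a * p_a**(1 - sig_ma))
    # Within-transport expenditure share on fuel
    share_f = a_f * p_f**(1 - sig_vf) / (a_v * p_v_eff**(1 - sig_vf) + a_f * p_f**(1 - sig_vf))
    return share_f * share_m * budget / p_f

for sig_vf, sig_ma in [(0.3, 1.5), (0.3, 0.5), (1.2, 1.5), (1.2, 0.5)]:
    change = fuel_use(1.1, sig_vf, sig_ma) / fuel_use(1.0, sig_vf, sig_ma) - 1
    print(f"sig_vf={sig_vf}, sig_ma={sig_ma}: fuel change {100 * change:+.2f}%")
```

With these illustrative numbers, a 10% vehicle efficiency gain reduces fuel use only for the combination σv,f = 1.2 and σm,a = 0.5, i.e. the one case where σv,f > σm,a, mirroring the partial-equilibrium condition derived above.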
510 | Spectroscopic and AFM characterization of polypeptide-surface interactions: Controls and lipid quantitative analyses | The size distributions of large unilamellar vesicles prepared by extrusion are presented in.Solution phase 31P NMR was used to determine the lipid compositions of the LUVs and to elucidate whether modulations in lipid composition occurred during vesicle preparation.The fold and membrane perturbation capability of the linking segments, Cage-only and Lnk-Only, from the A-Lnk-C and A-Cage-C peptides was investigated by urea denaturation and a Förster Resonance Energy Transfer -based membrane leakage assay.The tryptophan environment and secondary structure of A-Lnk-C and A-Cage-C in the presence and absence of zwitterionic LUVs is shown in.Then, the efficacy of depositing lipid bilayers on mica by spin-coating was elucidated by Atomic Force Microscopy.The presence or absence of bilayers were assessed by probing the prepared surfaces using the tip of the AFM cantilever, and height-profiling of the bilayers.Finally, the aggregation of the polypeptides at neutral and acidic conditions in the presence of mica was investigated by AFM in solution.A-Lnk-C and A-Cage-C were obtained using recombinant expression in E. coli as described in .Cage, based on a Trp-Cage fold, were produced by CPC Scientific using solid-phase Fmoc/t-Boc synthesis.Egg yolk phosphatidylcholine and porcine brain phosphatidylserine were obtained from Avanti Polar Lipids.AFM cantilevers TR400PSA were obtained from Olympus and Mica grade V1 were obtained from Electron Microscopy Sciences.All other reagents were purchased from Sigma-Aldrich.Briefly, vesicles consisting of EYPC and PBPS lipids or EYPC only were prepared by extrusion of hydrated multilamellar lipid aggregates using a protocol adapted from .The hydrated multilamellar lipids were prepared as follows: In a glass vial covered in foil, appropriate amounts of lipids, dissolved in chloroform, was added.The solvent was evaporated using a stream of N2 gas, producing a lipid film.After addition of buffer at pH 4.5 or 7.4, the solution was flushed with N2 gas, stoppered, and incubated overnight at 37 °C and 250 rpm.Finally, the hydrated lipid mixture was subjected to freeze-thawing in liquid N2 and water for 6 cycles.A Mini Extruder was used to pass the hydrate lipid mixture through a double polycarbonate filter 9 times, and its visual appearance changed gradually from milky to transparent during this process.For a description of how to assemble, use, and clean the extruder, please refer to .The extruded solutions was collected in a foil-wrapped glass tube, flushed with N2 gas, and used the same or following day for experiments.Total lipid concentration was 1 mM for both neutral and acidic pH vesicle suspensions.To perform FRET-based leakage assays, it is necessary to include a fluorophore and a quencher into the vesicles.For the preparation of LUVs with ANTS and DPX encapsulated, vesicles were produced as above, except that the buffers in question also contained 12.5 mM ANTS and 45 mM DPX.After the final extrusion step, the unencapsulated ANTS and DPX was removed by gel-filtration using a PD-10 column.The size distribution of the LUVs was determined by Dynamic Light Scattering using a Zetasizer Nano ZS.The following settings were used: Material Refractive Index: 1.45; Solvent Refractive Index 1.333; Viscosity, 0.9238; Scatter Angle 173°.Experiments were performed at 25 °C using an equilibration time of 10 s, and 20 runs with an individual run duration of 
20 s. The lower and upper threshold settings were 0.1 nm and 6000 nm, respectively. Samples of the lipid mixtures were collected from both lipid films and from vesicles. The latter were freeze-dried before preparation of the NMR samples. The dried lipid mixtures were dissolved using the CUBO solvent system, consisting of 1 mL dimethylformamide and 0.3 mL trimethylamine by volume, and 100 mg guanidinium chloride. 500 µL of CUBO solvent and ca. 15 mg/mL lipid per sample were used, as this setup dissolves all lipid species and provides good signal dispersion for quantitative 31P NMR. 1D 31P NMR was performed on a Bruker AV500 instrument fitted with a BBO probe at room temperature. Acquisition parameters were as follows: TD: 32k, ns: 4196, ds: 2, d1: 4 s, SW: 19.98 ppm, and proton decoupling was achieved using the Waltz-16 pulse program. The spectra were processed in Topspin, applying 1.5 Hz line broadening to the FID prior to Fourier transformation, followed by deconvolution and peak integration for signal quantification. Each signal was assigned a percentage of the sum of all phospholipid integrations. FRET-based vesicle membrane integrity assays were performed to determine whether Cage-only or Lnk-only were able to induce leakage comparable to that of A-Cage-C and A-Lnk-C in LUVs. Vesicles consisting of EYPC and PBPS lipids with ANTS and DPX encapsulated were prepared as described above in Section 2.1. At high concentrations inside the vesicles, the ANTS/DPX fluorophore/quencher pair has a low fluorescence response. If the vesicle bilayer is perturbed by a membrane-active substance, ANTS and DPX are released and diluted. DPX can then no longer effectively quench ANTS, and there is an increase in the fluorescence of this substance. Initial samples of 800 μL containing 250 μM EYPC:PBPS were prepared, and A-Lnk-C, A-Cage-C, Cage-only or Lnk-only was added stepwise between the measurements up to a peptide concentration of 40 µM. Then, the detergent Triton-X was added to the cuvette to release all remaining ANTS and DPX. Fluorescence spectroscopy was carried out using a LS50B fluorescence spectrometer. Samples were excited at 355 nm and scanned from 450 nm to 550 nm, using slit widths of 5 nm, a scan speed of 200 nm min−1, and 3 averaged scans per experiment. The experiments were carried out at 25 °C. Buffer-only samples were used as blanks and subtracted using the FL WinLab software. In the data analysis the measured intensities were adjusted to account for the dilution as peptide was added, using a simple linear relationship. The measurements containing only LUVs in buffer were arbitrarily set as 0% leakage and the measurements where Triton-X was added as 100%, based on the intensities at 510 nm, which is the λmax of ANTS (a short calculation sketch of this normalisation is given below). Each peptide investigated contains a single tryptophan in its sequence. Tryptophans are fluorophores sensitive to changes in the polarity of the local environment and are thus useful reporters of whether the residue is protected from the solvent by a fold or by embedment in a lipid bilayer. Intrinsic tryptophan fluorescence spectroscopy was performed using a LS50B fluorescence spectrometer. The excitation wavelength was set to 295 nm and emission was measured from 310 nm to 380 nm using a scan rate of 25 nm/min, and the slit width was 5 nm for both excitation and emission. Samples of 5 µM peptide at either pH 4.5 or 7.4 were prepared in the presence and absence of various concentrations of EYPC LUVs, loaded in a cuvette and scanned three times. In the case of urea denaturation, experiments were performed
in buffer at pH 7.4 with increasing amounts of urea. Polypeptide concentration was 2 µM. All samples were blank corrected using appropriate blank samples. Data were plotted using the approach described in Mårtensson et al. Briefly, wavelengths corresponding to a tryptophan in a hydrophobic environment and in a solvent-exposed environment are picked. The intensity at the hydrophobic wavelength is divided by the intensity at the exposed wavelength. Since fluorescence emission produced very broad spectra, this ratio is used as a useful proxy variable for tracking changes in the spectrum, instead of relying on visual inspection of very broad and overlapping peaks. In order to track changes in secondary structure, CD spectroscopy in the far-UV region was performed; the application of CD to polypeptides has been reviewed elsewhere. The experiments were carried out on a Jasco J-810 spectropolarimeter at 25 °C. Samples with 50 µM peptide and various concentrations of EYPC LUVs at pH 4.5 or 7.4 were loaded in a 1 mm quartz cuvette. Each sample was scanned three times at 50 nm/min using a bandwidth of 0.2 nm and a data pitch of 0.1 nm. The recorded HT voltage and absorption showed that the samples had low levels of light scattering and absorption down to 200 nm, and tolerable levels down to 190 nm. The data series were acquired using the same peptide stocks. Their concentration was determined by UV–vis absorption at 280 nm, using a NanoDrop ND-1000 spectrophotometer. The data were blank corrected using the instrument manufacturer's software, and the measured ellipticity was converted to mean residual ellipticity using the equation ϴ = ɛ/(10·C·n·l). Here, ɛ is the measured ellipticity, l is the path length, C is the determined peptide concentration, and n is the number of amino acids in the peptide, which is 53 and 56 for the two peptides. AFM data were acquired using an MFP-3D-Bio instrument equipped with a TR400PSA cantilever. Imaging was performed in tapping mode and force curves were obtained in contact mode, operating in a liquid environment using a Fluid Cell Lite at room temperature. The samples were hydrated for at least 1 h in buffer before imaging and injection of peptide, with a total volume of 1.5 mL. Lowering the buffer pH from neutral to acidic was achieved by titration using a solution of 150 mM NaCl and 100 mM citric acid. The AFM data were processed using Igor Pro and Gwyddion. The samples subjected to AFM force measurements were prepared using an EYPC:PBPS lipid mixture spin-coated on grade V1 mica. To verify the presence of a lipid bilayer at a given point on the mica-supported preparations, the cantilever tip was pressed onto the surface in contact mode. If lipids are present, the cantilever pushes through the bilayer when a certain critical force is attained. This breakthrough can be observed as an abrupt jump in force-distance plots. | This article is related to http://dx.doi.org/10.1016/j.bbamem.2017.01.005 (Ø. Strømland, Ø.S. Handegård, M.L. Govasli, H. Wen, Ø. Halskau, 2017) [1]. In protein and polypeptide-membrane interaction studies, negatively charged lipids are often used as they are a known driver for membrane interaction. When using fluorescence spectroscopy and CD as indicators of polypeptide binding and conformational change, respectively, the effect of zwitterionic lipids only should be documented.
The present data documents several aspects of how two engineered polypeptides (A-Cage-C and A-Lnk-C) derived from the membrane-associating protein alpha-lactalbumin affect and are affected by the presence of zwitterionic bilayers in the form of vesicles. We here document the behavior of the Cage and Lnk segments with respect to membrane interaction and their residual fold, using intrinsic tryptophan fluorescence assays. This data description also documents the coverage of solid-supported bilayers prepared by spin-coating mica using binary lipid mixes, a necessary step to ensure that AFM is performed on areas that are covered by lipid bilayers when performing experiments. Uncovered patches are detectable by both force curve measurements and height measurements. We tested naked mica's ability to cause aggregation as seen by AFM, and found this to be low compared to preparations containing negatively charged lipids. Work with lipids also carries the risk of chemical degradation taking place during vesicle preparation or other handling of the lipids. We therefore use 31P NMR to quantify the head-group content of commonly used commercial extracts before and after a standard protocol for vesicle production is applied. |
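As referenced in the Methods above, leakage values are obtained by normalising the dilution-corrected ANTS emission at 510 nm between the LUV-only baseline (0%) and the Triton-X end point (100%). The minimal sketch below illustrates that normalisation; the function, variable names and the simple linear dilution correction are illustrative assumptions, not code from the original study.

```python
def percent_leakage(intensity_510, added_volume_ul, initial_volume_ul,
                    f_zero, f_triton):
    """Dilution-corrected ANTS intensity at 510 nm mapped onto a 0-100% leakage scale.

    f_zero   : intensity of the LUV-only sample (defined as 0% leakage)
    f_triton : intensity after Triton-X addition (defined as 100% leakage)
    """
    # Simple linear correction for the volume added during the peptide titration
    corrected = intensity_510 * (initial_volume_ul + added_volume_ul) / initial_volume_ul
    return 100.0 * (corrected - f_zero) / (f_triton - f_zero)

# Hypothetical readings for one titration point (arbitrary fluorescence units),
# starting from the 800 uL sample volume used in the assay.
print(percent_leakage(intensity_510=310.0, added_volume_ul=20.0,
                      initial_volume_ul=800.0, f_zero=150.0, f_triton=900.0))
```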
511 | Transcriptomic data of Arabidopsis hypocotyl overexpressing a heterologous CsEXPA1 gene | Data reported here describe the sequencing results obtained from the control and dex-treated pOpON::CsEXPA1 Arabidopsis hypocotyls harvested on day 3 and day 5, each set with three biological replicates. This transcriptomic dataset was generated by QuantSeq 3′ mRNA sequencing. A total of twelve raw sequence datasets were deposited into the NCBI SRA database and can be accessed with the BioProject accession number SRP076440 under the BioSample accession number SAMN05192734. This study utilised the previously reported transgenic Arabidopsis line pOpON::CsEXPA1 containing a dex-inducible transactivating system, which allowed the induced ectopic overexpression of a heterologous cucumber expansin gene. Seed sowing, growing media and conditions followed those previously described, in the dark with petri dishes double wrapped in aluminium foil and placed vertically. For induction, growth media were supplemented with 10 µM dexamethasone. Control media were supplemented with an equivalent concentration of the solvent DMSO. Etiolated hypocotyl samples were harvested on day 3 and day 5 after seed sowing. A total of 100 hypocotyls were pooled as one biological replicate. Three biological replicates were sampled for each treatment at each time point, totalling twelve samples. RNA from pools of 100 hypocotyls was extracted using TRIzol according to the manufacturer's instructions. RNA purity and integrity were measured using the ND-1000 Nanodrop spectrophotometer and Agilent 2100 Bioanalyzer, respectively. RNA samples were cleaned using a DNase I kit according to the RapidOut DNA Removal Kit instructions and converted into cDNA using the QuantSeq 3′ mRNA-Seq Reverse Library Prep Kit according to the manufacturer's instructions, to generate a compatible library for Illumina sequencing. cDNA libraries were assessed using a TapeStation before 100 bp single-end sequencing on the Illumina HiSeq 2500 system at the Australian Genome Research Facility (AGRF) based on standard protocols. Raw sequencing reads were processed individually to check per base sequence quality and screened for the presence of any Illumina adaptor/overrepresented sequences and cross-species contamination through the AGRF quality control pipeline, as per the Lexogen QuantSeq data analysis workflow. To quantify transcript abundance, the processed reads were mapped to the Arabidopsis reference genome. The mapping was performed using bowtie2 with stringent "end-to-end" alignment, and all other parameters were set to default values according to the data analysis workflow recommended by Lexogen. The counts of reads mapping to each known gene were summarised as CPM values using the TAIR10 gene annotation with the featureCounts utility of the subread package (a short calculation sketch of this step is given below). This transcript abundance dataset can be utilised to study the genome-wide changes in gene expression during etiolated hypocotyl development from day 3 to day 5, and to identify differentially expressed genes which are affected by the overexpression of a heterologous CsEXPA1 gene from cucumber. Data from this study can be compared with previously reported expression data from the suppression of endogenous expansin genes. | Expansin increases cell wall extensibility to allow cell wall loosening and cell expansion even in the absence of hydrolytic activity.
Previous studies showed that excessive overexpression of an expansin gene resulted in defective growth (Goh et al., 2014; Rochange et al., 2001) [1,2] and altered cell wall chemical composition (Zenoni et al., 2011) [3]. However, the molecular mechanism by which overexpression of the non-enzymatic cell wall protein expansin can result in widespread effects on the plant cell wall and organ growth remains unclear. We acquired transcriptomic data on a previously reported transgenic Arabidopsis line (Goh et al., 2014) [1] to investigate the effects of overexpressing a heterologous cucumber expansin gene (CsEXPA1) on the global gene expression pattern during early and late phases of etiolated hypocotyl growth. |
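The counts-to-CPM step referenced in the text above can be reproduced with a few lines of code. The sketch below assumes a featureCounts-style tab-separated output (one leading comment line, annotation columns, then one count column per sample); the file names are illustrative assumptions, not files distributed with this dataset.

```python
import pandas as pd

# featureCounts output: a "#" comment line, then a header with
# Geneid, Chr, Start, End, Strand, Length, followed by one column per BAM file.
counts = pd.read_csv("gene_counts.txt", sep="\t", comment="#").set_index("Geneid")
sample_cols = counts.columns[5:]          # skip the five annotation columns

# Counts per million: scale each sample's counts by its total mapped count.
cpm = counts[sample_cols].div(counts[sample_cols].sum(axis=0), axis=1) * 1e6
cpm.to_csv("gene_cpm.csv")
```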
512 | Understanding glycation kinetics of individual peptides in protein hydrolysates | The reaction kinetics of the Maillard reaction has typically been studied using pure peptides and proteins.However, protein hydrolysates are used as ingredients in many food products, e.g., hypoallergenic infant formula.During industrial processing in the presence of reducing carbohydrates, the peptides can become glycated through the Maillard reaction."In a glycated whey protein hydrolysate, 6 times more advanced Maillard products were found compared with the glycated intact protein.Unlike intact proteins, peptides with various amino acid sequences are present in protein hydrolysates.In addition, there are more α-amino groups present in the hydrolysates than in the intact proteins.This could be the cause of the observed differences in the formation of advanced Maillard products.For synthetic peptides, glycation kinetics were suggested to be influenced by the peptide length, and the type and number of amino groups.It is unclear how much these peptide properties and the hydrolysate composition affect the glycation kinetics of individual peptides in hydrolysates.To answer this, this study followed the glycation kinetics of individual peptides in protein hydrolysates.In the literature, the Maillard reaction was reported to occur more in protein hydrolysates than in intact proteins.For instance, for whey protein hydrolysate with a degree of hydrolysis of 9.3%, the level of advanced Maillard products formed were 6 times more than in glycated whey protein isolate.However, in that study the differences in the extent of the Maillard reaction was mostly observed by the formation of brown colour, but not by the changes in the amount of available lysine.Hence, the conclusions were based on the extent to which the secondary reactions occurred.It was unclear whether the observed difference was similar in the initial stage of the reaction, i.e., the glycation, where the reducing carbohydrates react with free amino groups of proteins or peptides.Studies with synthetic peptides showed differences between the chemical reactivity of α- and ε-amino groups.In the reaction with triazinyl dye, the reaction rate constant of the ε-amino group on lysine was ∼10 times higher than that of the α-amino group.For the Maillard reaction, many studies observed a brown colour formation besides the loss of amino groups and/or carbohydrates.This means that secondary reactions already took place in the samples.For instance, the browning intensities of lysine and 8 other AAs during 0–12 h of heating with d-glucose were measured.The reaction rate constant of the ε-amino group, estimated from the browning intensities versus the heating time fitted with a 1st order reaction equation, was ∼2.5 times higher than that of the α-amino group.The fitted maximum browning intensity of the two types of amino groups were similar.Due to the occurrence of secondary reactions, it is difficult to draw conclusions on the glycation reactivity of the two types of amino groups.For studies that investigated the reaction kinetics of glycation, the substrates used were mainly pure intact proteins.Some of the studies applied enzymatic hydrolysis after the glycation of proteins to investigate the differences in the glycation positions.However, since the enzymatic hydrolysis was applied after the proteins were glycated, only the ε-amino groups of the lysine residues on the peptides could be glycated.Therefore, it is difficult to draw conclusions on the glycation 
reactivity of the α- or ε-amino groups when a mixture of different peptides was glycated in a hydrolysate.For synthetic peptides that do not contain lysine residues, the glycation of α-amino groups was reported to depend on the type of N-terminal AA and the peptide length.For instance, the glycation rate constants of dipeptides with histidine as the N-terminal AA were 1–4 times higher than that of dipeptides with the other 7 AAs tested as the N-terminal AA, which all had similar rate constants.The effect of peptide length was proposed to be due to the increase in pKa of the α-amino group, caused by the carboxyl group of the C-terminal AA.This means that fewer α-amino groups were unprotonated at the pH during glycation.The rate constants of di- and tri-glycine, calculated from 1st order reaction equation of the percentages of unbound glucose versus heating time, were ∼2 times higher than that of glycine.Since protonation is a very fast equilibrium, the shift in pKa is not expected to affect the extent of reaction.However, the final percentage of unbound glucose in glycine solution was ∼25% higher than that in di- or tri-glycine solutions.The influence of this shift in pKa on glycation was also discussed in synthetic peptides that contained lysines.A shift in pKa was observed for lysines in 78 proteins, ranging from 5.7 to 12.1.This shift in pKa was attributed to the surrounding AAs of lysine residues, i.e., neighbouring AAs.For example, the extent of glycation of synthetic peptides that had aspartic acid as a neighbouring AA was lower than that of peptides surrounded by neutral AAs.Besides the charge of surrounding AAs, the hydrophobicity was also reported to affect the glycation.For dipeptides with lysine as the N-terminal AA, if the C-terminal AA was hydrophobic, the dipeptides were fully glycated after 30 min.When other AAs were used as the C-terminal AA, 30–90% of the peptides remained non-glycated, with an exception of arginine as the C-terminal AA, which 10% of the dipeptide remained non-glycated.In this study, to identify the influences of peptide properties on the glycation kinetics of peptides in protein hydrolysates, the glycation of individual peptides in hydrolysates was followed.Hydrolysates that contained peptides with different properties, i.e., peptide length, type and number of amino groups in the peptide, and different relative abundances, were prepared and glycated under identical conditions.α-Lactalbumin was obtained from Davisco Foods International Inc.Approximately 72% of it was in apo form and the rest was in holo form, based on results from circular dichroism.Bacillus licheniformis protease was obtained from Novozymes.BLP is specific for glutamic and aspartic acids residues.The BLP powder was partly insoluble, and was purified as described previously.The suspension of BLP was centrifuged and the supernatant was dialysed against 150 mM NaCl solution, then against demineralised water, and then freeze-dried.The freeze-dried powder had a protein content of ∼60% based on the Dumas results.The enzyme activity was 3.9 AU mg−1 min−1, as determined using the azocasein assay.The purity of BLP was 100% based on the UV280 peak area determined using the PDA detector attached to the ultra-performance liquid chromatography system and was 92% based on the UV214 peak area.In the UV214 chromatogram, two peaks were found, of which the main peak was identified to be the BLP and the minor peak was the pro-peptide.All other chemicals were of analytical grade and purchased from Sigma or 
Merck.The DG_Pav,max and kg of peptides were fitted with a normal distribution curve.The structure of the residuals of the fit was used to identify the number of populations in the data set.Once the number of populations was identified, the Matlab method k-means clustering was used to categorise the populations.The k-means clustering is a partitioning method, which uses the squared Euclidean metric to determine the distances and the k-means++ algorithm for the cluster centre initialisation.This is used to group data points into populations with minimised total sum of distances.Ethylenediaminetetraacetic acid was used to chelate the calcium ions attached to the α-lactalbumin molecules.EDTA was added to 1 L of 10% protein solution at an α-lactalbumin to EDTA ratio of 1:5, and the solution was stirred overnight at 4 °C.To remove EDTA, the solution was ultra-filtered with 7 L of Millipore water over a UFP-10-C-5 membrane.The prepared apo α-lactalbumin was freeze-dried and stored at −20 °C before further analysis.The protein content was ∼90% based on the Dumas method.Of the total protein content, ∼90% was α-LA, based on the A214 determined using the PDA detector attached to the UPLC system, and the remaining 10% consisted of β-lactoglobulin and bovine serum albumin.It has been previously published that the extent and rate of glycation were similar for α-lactalbumin and β-lactoglobulin.In this article, the focus was on the glycation kinetics of peptides generated from α-LA.The enzymatic hydrolysis was performed as described elsewhere.The α-LA was dissolved in 20 mL Millipore water at a concentration of 1%.The solution was adjusted to pH 8.0 and equilibrated for 0.5 h at 37 °C in a pH-stat device.The BLP was dissolved in Millipore water and added to the equilibrated protein solution to reach an enzyme/substrate ratio of 1:100.The hydrolysis was performed at 37 °C using a pH-stat device with 0.2 M NaOH to keep the pH constant.The degree of hydrolysis was calculated using the equation published previously.The hydrolysis was stopped at DH 2, 4, 6 and 8% by adjusting the pH to 2 using 5 M HCl.The pH of the hydrolysate solution was re-adjusted to 8.0 after ≥10 min of inactivation.As a control experiment, 1% α-LA solution was incubated in the absence of BLP.The added volume of NaOH from the control experiment was subtracted from the added volume of NaOH at all time points during hydrolysis, to compensate for the consumption of NaOH due to the dissolution of CO2.The samples were stored at −20 °C.The glycation of intact and hydrolysed α-LA was performed as described previously.To a 5 mL protein solution at 1%, 0.05 mL of a 1 m sodium phosphate buffer was added to reach a molarity of phosphate buffer of 10 mM, with a negligible change in protein concentration.d-Glucose was dissolved in 10 mM sodium phosphate buffer at concentrations of 3.5, 3.8, 4.4, 4.9 and 5.5 mg mL−1.Five mL of each d-glucose solution was mixed with 5 mL of the hydrolysate solution with a DHstat of 0, 2, 4, 6 and 8%, respectively, to reach a molar ratio of free amino groups to reducing ends of 1:2.The concentration of free amino groups in each hydrolysate was calculated based on the number of lysine residues and N-termini per original protein molecule.The number of N-termini is equal to 1+DHstat/100 × 122.The mixtures were freeze-dried, followed by incubation in a dry state at 50 °C under 65% relative humidity for 0, 2, 4, 6 and 8 h in a humidity control chamber.The glycated samples were labelled DH2_G0-8-DH8_G0-8.The non-hydrolysed 
α-LA was glycated for 0–24 h and the reaction was performed in duplicate.As a control, the hydrolysates were also incubated without d-glucose at 50 °C under 65% relative humidity for 8 h. All samples were freeze-dried and stored at −20 °C.The protein content was determined using a Flash EA 1112 NC Analyser.The nitrogen-protein conversion factor of 6.25 for non-hydrolysed α-LA was used, calculated from the AA composition of α-LA.Nitrogen-protein conversion factors of 6.27, 6.29, 6.31 and 6.33 were used for hydrolysates with a DHstat of 2, 4, 6 and 8%, respectively, based on the number of water molecules introduced to the peptides during hydrolysis.The samples were analysed on an H class Acquity UPLC® system equipped with a BEH C18 column.An Acquity PDA® detector was attached to the ultra-high performance liquid chromatography system.The eluents and elution gradient were the same as described previously.Eluent A was 1% acetonitrile containing 0.1% trifluoreacetic acid in Millipore water and eluent B was 100% ACN containing 0.1% TFA.Samples were incubated for 2 h with 100 mM ditiothreitol in 50 mM Tris–HCl buffer at pH 8.0 to reduce the disulphide bridges, diluted to a protein concentration of 0.1% and centrifuged before injection.The gradient was as follows: 0–2 min isocratic on 3% B; 2–10 min linear gradient from 3% to 22% B; 10–16 min linear gradient 22–30% B; 16–19 min linear gradient 30–100% B; 19–24 min isocratic on 100% B; 24–26 min linear gradient 100–3% B and 26–30 min isocratic on 3% B.The flow rate was 350 μL min−1.The UV absorbance at 214 nm was monitored for the absolute quantification of peptide concentrations.The mass spectra of the samples were measured using an online SYNAPT G2-Si high definition mass spectrometry coupled to the RP-UPLC system.Sodium iodide was used for the MS calibration.The MS and MS/MS data were collected using the methods developed previously.The MSe method was not dedicated to optimise the signal of fragments of each specific peptide, but to reach an efficient detection of fragments of all peptides in hydrolysates.Online lock mass was acquired.Based on the differences in the measured and theoretical lock mass, corrections were applied on m/z of the peptides during measurements.The detection was under positive ion mode.The capillary voltage was set to 3 kV and the source temperature was set at 120 °C.The sample cone was operated at 35 V.The desolvation gas and cone gas was nitrogen.The trap gas was set at 1.5 mL min−1.MS and MS/MS were performed between m/z 100 and 3000 with a scan time of 0.3 s.The trap collision energy was set at 6 V in single MS mode and ranged from 20 to 30 V in MSe mode.The transfer collision energy was 4 V in MS mode and switched between 4 and 10 V MSe mode.The data were analysed manually using MassLynx software v4.1.All hydrolysates were injected at an equal concentration .To compare samples, Cpeptide was re-calculated using a correction for the changed protein content in glycated hydrolysates, based on the Dumas results.A214 is the UV peak area at 214 nm, Vinj is the injected volume of the sample, Q is the flow rate and l is the path length of the UV cell, which is 1 cm according to the manufacturer.ε214 is the molar extinction coefficient at 214 nm, calculated as described previously.The molar extinction coefficient of glycated α-LA was previously shown to be the same as the untreated α-LA by determining the UV absorbances of a dilution series of non-glycated and glycated α-LA solutions with known concentrations.Therefore, the 
molar extinction coefficients of the glycated and non-glycated peptides were assumed to be the same.Due to multiple reflections by the coating of the cell, the effective path length of the light through the cell is not the same as the length specified by the producer.To correct for this effect, the cell constant of the UV detector was determined using a series of standard solutions made by β-lactoglobulin, β-casein and angiotensin II, and the approach described elsewhere.The ratio between the measured and expected A214 was taken as the value of kcell.For the UV cell used in this work, the kcell was determined to be 0.78.The linear region of the A214 in the PDA detector ranges from 5 × 101 to 6 × 105 μAU min.Peptide quantification was done for peptides with a A214≥5 × 101 μAU min.For all hydrolysates, on average 95 ± 10% of the expected total A214 was found.Of the total A214, on average 94 ± 3% in the samples was assigned to annotated peptides.In certain cases, a glycated peptide co-eluted with the non-glycated peptide.In such cases, the A214 was divided over the co-eluting peptide depending on the intensity of the total ion count of each peptide.The standard error on the peptide concentration for one hydrolysate injected in triplicate with this quantification method was previously determined to be ∼6%.In addition, for duplicate hydrolyses, the average standard error of the peptide concentration was ∼15%.The preparation of the α-LA hydrolysates was reproducible, as indicated by the similarity in the hydrolysis curves for each of the four hydrolysates prepared.In the hydrolysates DH2_G0, DH4_G0, DH6_G0 and DH8_G0, 17, 20, 17 and 13 peptides were identified and quantified, respectively.Combining all hydrolysates, in addition to the remaining intact protein, 24 unique peptides were annotated, of which 23 were found in more than one hydrolysate.The relative abundance of these peptides ranged from 0.1% to 24% of the total amount of peptides.Out of the 25 unique peptides, 5 peptides did not contain any lysine.The length of these peptides ranged from 4 to 24 AAs.For the peptides that contained 1 or 2 lysine, the peptide lengths ranged from 2 to 8 and from 10 to 42 AAs, respectively.The quality of peptide analysis was evaluated based on the average AA, peptide and molar sequence coverages, and the DH values determined based on peptide analysis.For all hydrolysates, the AA sequence coverage was 100%.The peptide sequence coverages decreased from 99% to 89% with increasing DHstat values, and the molar sequence coverages ranged from 85% to 58%.The average peptide and molar sequence coverages were 94% and 72%, respectively.Based on previous data, the thresholds of acceptance for the average peptide and molar sequence coverages were set at 80% and 70%, respectively.Both average coverages in this study were above the thresholds.To further check the quality of peptide analysis, the DH values analysed with peptide analysis of each hydrolysate was calculated, and compared with the DHstat based on pH-stat titration.Averaged over all samples, the standard error between DHstat and DHMS was ∼14%.This standard error was similar to previous results.Based on these quality checks, the peptide analysis for non-glycated hydrolysates was sufficient to allow further analysis on the glycation on each peptide.In the control samples DH0_8-DH8_8, no glycated or lactosylated protein/peptides were found, confirming that non-glycated samples did not contain any glucose or lactose.The glycation of non-hydrolysed α-LA for 24 h 
only resulted in a DGMS of 85%.Fitting the data also showed an expected plateau at 86%).Since the DG was expected to be 100%, the data were also fitted using a fixed DGav,max of 100%.However, this fit did not describe the data well.This was not due to aggregation or incomplete analysis of the sample in UPLC.It was also not due to side-reactions, since no unknown masses were found in the MS spectra.The low DGav,max reached was also not caused by the lack of reactants.For intact proteins as well as for hydrolysates, the amount of added d-glucose was 2 times higher than the amount of free amino groups.In addition, the high-performance anion-exchange chromatography showed that there was still unreacted glucose.Apparently, the glycation process either stopped, or reached an equilibrium while there were still free reactants in the system.A plateau value for the DGMS was previously also observed during the glycation of 3 different proteins with 7 different carbohydrates until 48 h of glycation.In that case, the plateau value of DGav,max was even more defined due to the longer incubation time.Values for DGav,max in that study ranged from 6% to 92% for different protein-carbohydrate combinations.From the literature, we have not seen clear analysis of such information, or have we observed explanations for such an effect.The total degree of glycation of hydrolysates based on the OPA method reached ∼55% after 8 h heating in the presence of d-glucose.This value was similar to the value for the non-hydrolysed α-LA.This indicated that the free α-amino groups created by hydrolysis did not make a major contribution to the total glycation, which implies that the glycation of ε-amino group dominates the total degree of glycation.In the glycated hydrolysates, glycated variants of all peptides were found.The glycated variants of each peptide eluted either slightly before, or at the same moment as that original peptide, illustrated for peptide α-LA in Fig. 
4A. The glycation was clear from the mass spectra. This peptide contains 1 lysine residue, meaning that glycation can occur on the ε-amino group of that lysine and/or on the α-amino group of the N-terminal AA. After 2 h incubation, 17% of this peptide had 1 glucose unit attached. The intensity ratio between the glycated and the original peptide increased over the incubation time. After 6 h, the peptide with 2 glucose units attached was detected. Similar to the quality checks for non-glycated hydrolysates, the AA, peptide and molar sequence coverages, and the comparison between DHMS and DHstat, were checked for all glycated hydrolysates. All hydrolysates had an AA sequence coverage of 100%. Peptide and molar coverages were averaged over the non-glycated and glycated α-LA hydrolysates. The average peptide sequence coverages were 99 ± 0, 93 ± 0, 91 ± 2 and 89 ± 0% for hydrolysates with a DHstat of 2, 4, 6 and 8%, respectively, and the average molar sequence coverages were 85 ± 6, 77 ± 7, 70 ± 4 and 62 ± 3%. Averaged over all hydrolysates, the peptide and molar sequence coverages were 93 ± 4 and 73 ± 10%, respectively, which were above the thresholds for acceptance. The DHMS of each sample was comparable with the DHstat. In addition to the above-mentioned quality checks, for glycated hydrolysates, the concentrations of glycated and non-glycated peptides at different Maillard incubation times were compared. Overall, there was only a minor variation in the quantified concentration of peptides in the samples with increasing incubation time, as illustrated for DH2_G0-8. This is similar to the standard error in peptide concentrations determined in duplicate hydrolysates, confirming that the quality of the peptide analysis was acceptable. Furthermore, using the DG_Pav values, the total degrees of glycation of hydrolysates were calculated, and compared with the total degrees of glycation analysed using the OPA method. Averaged over all samples, the standard error between the DG_T determined using the two methods was ∼13%. The maximum average degree of glycation and the glycation rate constant for individual peptides in all hydrolysates were determined. The glycation of individual peptides in the hydrolysates is illustrated using the glycation curves of peptides α-LA , α-LA and α-LA as examples of peptides containing 0, 1 and 2 lysines, respectively. The DG_Pav values of the peptides increased with the number of lysines in the peptides. For peptides that were present in multiple hydrolysates, the glycation curves in all hydrolysates were compared. Although their concentrations ranged from 4.0 to 23.0 μM, the changes in DG_Pav versus incubation time were similar. The average standard errors for DG_Pav,max and kg of the same peptides in different hydrolysates were both 14%. This shows that the glycation of peptides in protein hydrolysates was independent of the hydrolysate composition. Therefore, if peptides were present in several hydrolysates, the DG_Pav values obtained in all hydrolysates were combined to perform the fit for DG_Pav,max and kg. To identify whether there were clusters of populations of kg and DG_Pav,max values, the data were fitted with a normal distribution curve. For kg and DG_Pav,max, 3 and 5 populations were identified, respectively. The data were categorised by the Matlab method k-means clustering with 3 and 5 clusters for kg and DG_Pav,max, respectively. The 3 kg clusters had average values of 1.6 × 10−1, 2.5 × 10−1 and 5.3 × 10−1 s−1, respectively. Peptides α-LA and α-LA had kg values of 6.0 × 10−1 s−1 and 4.5 × 10−1 s−1,
respectively.These values were 2–3 times higher than the kg values of the other 23 peptides.For peptide α-LA , it could be that it is a dipeptide with leucine as the C-terminal AA and lysine as the N-terminal AA.In the literature, peptide KL has been reported to have the highest glycation rate among 11 dipeptides with different AAs as the C-terminal AA.For peptide α-LA , no explanation was found for the high kg value.For the other 23 peptides, based on the clustering results, the kg was independent of the number of lysines and the peptide length.The 5 DG_Pav,max clusters had average values of 12.8, 26.8, 36.5, 44.3 and 55.3%, respectively.For the DG_Pav,max values, clearly, the clustering showed a quite good agreement with the number of lysine residues in the peptide.Similarly to the intact protein, even though the amount of added d-glucose was 2 times higher than the amount of free amino groups, the DG_Pav,max for all peptides did not reach 100%.For peptides that did not contain lysines, i.e., peptides α-LA , α-LA , α-LA , α-LA and α-LA , the DG_Pav,max values were comparable: 12.8 ± 2.6%.Because the variation in DG_Pav,max of these peptides was small, no clear effect of the type of N-terminal AA was observed.Since these peptides only have an α-amino group as the glycation site, it means that the maximum degree of glycation for α-amino group was 12.8%.It should be noted that peptide α-LA contains an arginine within its AA sequence.The fact that the DG_Pav,max of α-LA was the same as the other peptides which only contained an α-amino group as the glycation site suggests that arginine did not react during the glycation process.In previous studies with free AA and intact proteins, the glycation of arginine was identified.The reason that different results were obtained between the current and previous studies could be the differences in glycation conditions used.For peptides that contained lysine residues, the DG_Pav,max ranged from 23.0% to 58.1%.The maximum degree of glycation of ε-amino groups in each peptide was calculated using equation.It was found that the DGmax,ε-NH2 averaged over peptides that contained ε-amino groups was 53.9 ± 9.0%.To check if the DGmax,ε-NH2 did not reach 100% was because of the heterogeneity of the samples, the DGmax,ε-NH2 of each peptide was also calculated after correction for the amount of remaining non-glycated peptides.The percentage of glycated ε-amino groups of the glycated peptides was 72.2 ± 13.2%.Since this was also lower than 100%, it was concluded that all lysines in the hydrolysates were glycated to similar extents, but the glycation process either stopped before all available amino groups reacted or reached an equilibrium.This also indicated that the glycation reactivity of each lysine residue in α-LA was not significantly different from each other.This indication was previously suggested in another article, where the bovine tryptic hydrolysis of glycated α-LA followed the theoretical scenario in which all lysines were glycated to the same extent.These data also showed that, on average, only 12.8% of the α-amino groups and 53.8% of the ε-amino groups were glycated.The DGmax,ε-NH2 was also obtained by fitting equation to the experimental data of all peptides.The R2 of the fit was 0.91.The DGmax,ε-NH2 was fitted to be 60.0%, which was close to the average value obtained from individual peptides.Because the DGmax,ε-NH2 was ∼5 times higher than the DGmax,α-NH2, it was concluded that the number of lysine residues of a peptide is the dominant property 
that determines the glycation of individual peptides in a hydrolysate.As a result, the final extents of glycation reached were similar for hydrolysates and intact proteins.This indicated that, in previous examples, when a high extent of browning intensities was shown for hydrolysates than for intact proteins, the differences were indeed on the level of secondary reactions, but not on the glycation stage.Hydrolysates contain peptides with different numbers of lysines, meaning that the average DG_Pav,max of these peptides would be lower than the intact proteins.However, this does not mean that the total degrees of glycation of the hydrolysates were lower than that of the intact proteins.Actually, in both hydrolysates and intact proteins, all lysines were glycated to ∼60.0% and all N-termini were glycated to ∼12.8%.Still, for peptides with the same number of lysines, there were variations in the DG_Pav,max values.This variation was not correlated with peptide length.This means that the glycation of peptides in a hydrolysate under the experimental condition used in this study was independent of the peptide length.In this study, using a full quantitative peptide analysis, the glycation kinetics of individual peptides in protein hydrolysates was successfully followed.The extent of glycation for all peptides did not reach 100%, meaning that the glycation process either stopped before all amino groups reacted or reached an equilibrium.The rate and extent of glycation of individual peptides in hydrolysates were independent of the hydrolysate composition.For a peptide in a hydrolysate, the number of lysine residues in the peptide is the dominant factor that determines the extent of glycation.This is because the glycation reactivity of ε-amino group is ∼5 times higher than the reactivity of α-amino group.The glycation rate constants were independent of the peptide length or the number of amino groups on the peptide.The outcomes of this work can be used to predict the extent of glycation of peptides in hydrolysates based on the amino acid sequences of the peptides. | Protein hydrolysates contain peptides with different lengths, type (α/ε-) and number of amino groups; these properties might influence the peptide glycation kinetics during the Maillard reaction. To identify the effects of peptide properties and hydrolysate composition on glycation kinetics, the glycation kinetics of individual peptides in hydrolysates was followed using quantitative peptide analysis. α-Lactalbumin was hydrolysed and glycated with D-glucose (0–8 h, 50 °C, dry heating with 65% humidity). The hydrolysates (degree of hydrolysis 2, 4, 6, and 8%) contained 25 unique peptides, ranging from 2 to 123 AAs with 0–12 lysine(s). The glycation rate constant (k g ) and the maximum average degree of glycation (DG_P av,max ) of peptides were independent of the hydrolysate composition. The maximum DG of α-NH 2 and ε-NH 2 groups was 12.8% and 60.0%, respectively. With this information, the DG_P av,max of individual peptides [9–59% for peptides with 0–2 lysine(s)] could be predicted. |
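The absolute peptide quantification described above combines the integrated A214 peak area, the flow rate Q, the injected volume Vinj, the molar extinction coefficient ε214, the path length l and the cell constant kcell (0.78). The sketch below shows one plausible way to carry out that calculation; the exact equation used in the study is cited rather than reproduced there, and the default injected volume is an assumed placeholder.

```python
def peptide_concentration_uM(A214_uAU_min, eps214_M_cm, *,
                             Q_uL_min=350.0, V_inj_uL=4.0,
                             path_cm=1.0, k_cell=0.78):
    """
    Estimate the injected peptide concentration from an integrated UV peak.

    A214_uAU_min : integrated peak area at 214 nm [uAU*min]
    eps214_M_cm  : molar extinction coefficient at 214 nm [1/(M*cm)]
    Q_uL_min     : eluent flow rate [uL/min] (350 uL/min in the text)
    V_inj_uL     : injected volume [uL] (assumed value, not given explicitly)
    path_cm      : nominal UV cell path length [cm]
    k_cell       : empirically determined cell constant (0.78 in the text)
    """
    A214_AU_min = A214_uAU_min * 1e-6                 # uAU*min -> AU*min
    # AU*min * L/min / (1/(M*cm) * cm) has units of mol (AU treated as dimensionless)
    moles = A214_AU_min * (Q_uL_min * 1e-6) / (eps214_M_cm * path_cm * k_cell)
    conc_M = moles / (V_inj_uL * 1e-6)                # mol / L injected
    return conc_M * 1e6                               # -> uM

# Usage with arbitrary illustrative numbers:
# peptide_concentration_uM(A214_uAU_min=2.5e4, eps214_M_cm=3.1e4)
```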
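The DG_Pav,max and kg values were obtained by fitting the average degree of glycation of each peptide against incubation time. The sketch below assumes a simple first-order approach-to-plateau model, DG(t) = DG_max·(1 − exp(−kg·t)), with made-up data points; the authors' exact fitting equation and units are not reproduced here, so treat it as an illustration of the fitting step only.

```python
import numpy as np
from scipy.optimize import curve_fit

def dg_first_order(t, dg_max, kg):
    """First-order approach to a plateau: DG(t) = DG_max * (1 - exp(-kg * t))."""
    return dg_max * (1.0 - np.exp(-kg * t))

# Illustrative data: incubation times [h] and DG_Pav [%] for one peptide (made up)
t_h = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
dg = np.array([0.0, 17.0, 27.0, 32.0, 35.0])

(dg_max, kg), cov = curve_fit(dg_first_order, t_h, dg, p0=(40.0, 0.3),
                              bounds=([0.0, 0.0], [100.0, np.inf]))
# kg comes out in the inverse of whatever time unit the data use (here 1/h)
print(f"DG_Pav,max = {dg_max:.1f} %, kg = {kg:.2f} 1/h")
```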
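Because the α- and ε-amino groups plateau at roughly 12.8% and 60% glycation, respectively, the maximum average degree of glycation of a peptide can be estimated from its amino acid sequence, which is the predictive use suggested in the conclusion. The helper below assumes a simple weighted average over the amino groups (one N-terminus plus one ε-amino group per lysine); the example sequences are illustrative only. For 0, 1 and 2 lysines this gives 12.8%, 36.4% and 44.3%, close to the reported cluster averages.

```python
DG_MAX_ALPHA = 12.8   # % maximum glycation of the alpha-amino group (N-terminus)
DG_MAX_EPS = 60.0     # % maximum glycation of an epsilon-amino group (lysine)

def predicted_dg_max(sequence: str) -> float:
    """Predict DG_Pav,max [%] of a peptide from its amino acid sequence as a
    weighted average over its amino groups (1 N-terminus + n lysine residues)."""
    n_lys = sequence.upper().count("K")
    n_groups = 1 + n_lys
    return (DG_MAX_ALPHA + n_lys * DG_MAX_EPS) / n_groups

for seq in ["GYDTQAIVQ", "KL", "ILDKVGINYWLAHK"]:   # illustrative sequences
    print(seq, f"{predicted_dg_max(seq):.1f} %")
```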
513 | Characterisation of the Medipix3 detector for 60 and 80 keV electrons | Direct electron detection can be achieved using the conventional film or various solid-state detection architectures including monolithic active pixel sensors or variants of hybrid pixel detector technology such as the Medipix3 sensors.MAPS technology forms the basis of many current direct detector systems that are widely applied for cryogenic transmission electron microscopy imaging in life sciences and are beginning to be used in selected materials science applications .This family of detectors typically feature pixels with 6–10 µm lateral size, containing several transistors per pixel and with array sizes greater than 1 megapixel.Silicon wafer thinning processes have been developed such that the entire detector thickness can be reduced to around ca. 25 µm .For this class of detectors primary electrons at energies, >200 keV, are mostly transmitted through the sensor depositing only a fraction of their energy with lateral spread within 1–2 pixels.Counting of single electron events can be achieved using off-chip image processing hardware to process multiple short exposure frames .However, for MAPS detectors operated at lower primary electron energies imaging performance is compromised due to increased scattering leading to signal across many pixels.Primary energies lower than 100 keV can provide greater contrast for thin biological samples or the avoidance of knock-on damage, for example in imaging 2-dimensional materials containing light elements .For these applications, the alternative architecture of hybrid pixel detectors may offer advantages.In contrast to MAPS, hybrid pixel detectors consist of a relatively thick, sensitive semiconductor layer connected to a separate but immediately adjacent readout ASIC that processes the signal in the sensor.In this study, we demonstrate, using the Medipix3 sensor that a thick silicon hybrid with coarse pixel geometry is ideal for low voltage Transmission Electron Microscopy imaging up to 80 keV where the Modulation Transfer Function is almost invariant and yields high Detective Quantum Efficiencies.The Medipix3 detector was designed at CERN within the framework of the Medipix3 collaboration for photon and particle detection and fabricated using commercial 0.13 µm CMOS technology.The sensitive matrix consists of 256 × 256 pixels at 55 µm pitch with an overall area of 15.88 × 14.1 mm2.The readout chip is connected to a 300 µm thick silicon layer.Each pixel contains analogue circuitry consisting of a charge sensitive preamplifier, and a semi-Gaussian shaper which produces a voltage pulse proportional to the electron current collected at the pixel bump bond.The voltage pulse is compared to two discriminators that control the lower and upper threshold levels.Each discriminator has a 5-bit digital to analogue converter whose values are adjusted during initial equalization to reduce the threshold dispersion caused by any mismatch in the pixel transistors.When operating in single pixel mode, if the deposited energy exceeds the preset lower threshold energy value, TH0, then a count is registered in the digital pixel circuitry.Energy calibration of the thresholds is performed using flat-field illumination of the detector with an X-ray source providing a range of photon energies .Each pixel contains two configurable depth registers which can also function to enable a continuous read–write capability in which one register acts as a counter whilst the other shifts the data for 
readout.Alternatively, the two counters can be linked to provide 24-bit depth counting.When compared to the earlier Medipix2 detector , the Medipix3 design contains additional analogue and digital circuitry for the implementation of a charge summing mode.This is designed to mitigate the effects of charge sharing.Charge sharing occurs when either the primary radiation undergoes lateral dispersion as they lose energy in the silicon slab or when the resulting secondary electron charge cloud is broadened by diffusion.Both processes spread charge across several pixels leading to degradation in both energy and spatial resolution.For incident electrons, in contrast to X-ray photons, the primary effect is most significant, whereby the electrons lose energy sporadically through inelastic scattering events distributed over tens of micrometres.The CSM implemented in the Medipix3 detector has been designed to minimize the effect of charge sharing by summing charge deposited in clusters of immediate neighbouring pixels at pixel corners and allocating the reconstructed charge to the pixel with the highest collected charge.This is accomplished in several steps.As an example, if charge created from the initial electron event encompasses four pixels then the individual pixel charges are compared to TH0.The digital circuitry within the pixels processes the charge distribution to identify the pixel with the largest charge and inhibits the pixels with lower signal.In parallel the charge is reconstructed in analogue summing circuits located at the corners of each pixel and compared to a second energy threshold, TH1.The pixel with the highest local charge increases its counter if the reconstructed charge in at least one of its adjacent summing nodes is above TH1 .We have used a single chip Medipix3 detector to investigate its performance for TEM applications at 60 and 80 keV.The basic metrics used for quantifying the detector performance are the MTF and the DQE.The MTF is defined as the ratio of output to input modulation as a function of spatial frequency and effectively describes how the detection system attenuates the amplitudes of an infinite sinusoidal series.In the present work, we used the established knife edge method to derive the MTF as well as a new alternative technique to calculate the point spread function directly from short exposure flat-field images capturing single electron events.The NPS was calculated from the Fourier transform of flatfield images recorded under uniform illumination.The MTF and the DQE were evaluated in the spatial frequency range from 0 to 0.5 pixel−1 where the upper limit represents the Nyquist frequency beyond which aliasing occurs.In the present case, this limit corresponds to 9.1 lp/mm.As described previously , it is difficult to calculate the NPS and hence the DQE at lower spatial frequencies accurately.The observed variance in a flat-field image results in underestimation of the true noise per pixel as the charge produced by an incident electron is seldom confined to a single pixel.We have carried out a similar analysis to using Eq. 
to calculate the DQE at zero spatial frequency, DQE, detailed subsequently in Section 2.The MTF of the detector arises from the manner in which electrons deposit their energy in the sensitive silicon sensor layer and by which the resultant electron–hole pairs diffuse under bias toward the readout ASIC circuitry.Given that the energy required to produce a single electron–hole pair is 3.6 eV in silicon, a single primary electron at 60 keV can produce over 16,000 electron–hole pairs.However due to the high SNR of the Medipix3 sensor we have been able to perform analysis of single electron events during short shutter exposures, with durations in the range 1–10 µs, that show both single and multi-pixel clusters.Characterisation of cluster area and detector response was carried out as a function of threshold energy and synthetic PSFs calculated from which MTFs were subsequently calculated by Fourier transformation of the PSF.The Medipix3 detector was mounted on the JEOL ARM200CF TEM/STEM in a custom mount interfaced to the 35 mm camera port located above the viewing screen.This mount included a vacuum feed through for a 68 way electrical connector for the necessary readout electronics.Operation and high speed data readout of the detector used MERLIN hardware/software produced by quantum detectors .MTF and DQE data was recorded for primary electron beam energies of 60 and 80 keV using both SPM and CSM modes.For each primary electron energy, the MTF data was recorded from images of a 2 mm thick Al knife edge inclined by 10° with respect to the pixel readout columns.For an exposure time of 10 ms, 32 images were acquired across the full range of Medipix3 energy threshold values in the SPM mode.The MTF data acquisition procedure was then repeated in the CSM mode by holding TH0 at a fixed energy and scanning the high threshold DAC across the full range of energy values.Fig. 2 shows the variation of MTF as a function of spatial frequency at 60 and 80 keV using SPM for various TH0 energy thresholds.At the highest value of TH0 the MTF in single pixel mode is better than the theoretical maximum due to the reduction in the effective pixel size .However, the DQE at high TH0 values in SPM is significantly reduced as shown in Figs. 3 and 4.This is a consequence of many electron events not being counted because, for these, the charge is deposited in more than one pixel and therefore falls below the threshold for detection.As a result, there is a balance between optimizing the DQE and MTF.The DQE values shown in Figs. 3 and 4 were calculated independently from the analysis of the flat-field images using Eq.Fig. 5 shows the variation of DQE as a function of threshold DAC values.The degradation of the MTF in Fig. 2 and with increasing primary electron energy is consistent with earlier work and with Monte Carlo simulations using the CASINO software package which show that the lateral charge spread at 60 keV is approximately 25 µm and increases to approximately 42 µm at 80 keV for a 300 µm thick silicon substrate.At higher energies, long range electron scattering occurs where electrons lose energy over considerable distances from their impact point , leading to pixels being triggered far from the initial impact point.The consequent reduction in MTF with increasing electron energy impacts the DQE proportionally) as demonstrated in Figs. 
3 and 4 since electron scattering cross sections decrease with increasing primary electron energy.The reduction of DQE with increasing threshold DAC values is similarly attributed to smaller proportion of electrons that exceed the thresholds being detected.Fig. 5 shows the variation in DQE as a function of TH0 for 60 and 80 keV electrons.It is evident that the slope of the variation of DQE changes when the TH0 threshold is set at half the primary electron energy.Above this point, the NPS is constant since an incoming electron is either recorded in a single pixel or not recorded.Conversely, several pixels may be triggered by a single electron if the threshold is set below this point.One of the major design advances in the Medipix3 sensor over its predecessors is the implementation of the CSM mode as already described.In principle, CSM should provide excellent MTF performance but without the need to set high values for the energy threshold rejecting electrons that deposit energy across many pixels as required in SPM mode.Thus, in CSM mode using low energy threshold values, almost all detected electrons will be retained, maximising simultaneously the DQE and MTF.Figs. 6 and 7 show the MTF and DQE for the CSM mode.At 60 keV it is clear that the CSM mode gives performance equal to a theoretical square pixel detector with little variation between the three TH1 energy thresholds shown.At 80 keV, the MTF performance is reduced with respect to this theoretical detector values but still maintains high values across the measured spatial frequency range.Fig. 7 shows that these MTF values are matched by high DQE across the spatial frequency range measured.At 60 keV and 80 keV, the DQE values occupy a narrow band, being at most 0.2 and 0.35 lower than the theoretical response of a square pixel detector respectively.Comparison of the MTF between SPM and CSM modes is shown in Fig. 8 which plots the MTF at the Nyquist frequency as a function of threshold for 60 and 80 keV electrons.It is clear that MTF enhancement is obtained at the lowest energy thresholds using CSM at 60 and 80 keV."In particular, the MTF at the Nyquist frequency value is ca. 0.6 for 60 keV electrons when the energy threshold is set to its lowest value, just above the Medipix3 chip's thermal electronic noise floor.The DQE performance can be compared across the two modes by referring to Figs. 3 and 7.In SPM mode, Fig. 3, the DQE exhibits a strong inverse dependence on the TH0 energy threshold where low thresholds yield the highest DQE but lowest MTF.In CSM mode, Fig. 7, there is no longer a strong dependence on TH0 and the DQE is similar, but slightly lower than that for the lowest SPM energy thresholds shown in Fig. 3.In order to understand the behaviour of the Medipix 3 sensor, the response to single electron events was studied by acquiring flat field images with an exposure time t = 10 µs.Fig. 9 and show the response of the sensor to single 60 keV electrons when the threshold energy, TH0 is set to 20 keV.Fig. 9 shows clearly that portions of the charge generated when single electrons impact the sensor are deposited in neighbouring pixels, creating multi-pixel clusters clusters can be seen with areas of 1, 2, 3, and 4 pixels).Increasing the threshold energy TH0 to 40 keV and), shows that only single pixel clusters are obtained but that the overall number of clusters is decreased.This is in consistent with previous measurements by McMullan et al. 
of the Medipix2 sensor, in which for TH0 > E0/2 only single pixel hits are obtained. The variation in the number of clusters counted with respect to threshold energy is plotted in Fig. 9 together with the integrated counts obtained from simple summation of all pixel values. Since the average separation of clusters is relatively large for the combination of beam current and shutter time used, the integrated counts can be considered as resulting from the number of clusters counted multiplied by their area in pixels. As such, when TH0 > E0/2 = 30 keV the variation in the number of clusters and the integrated counts become equal, as only single pixel clusters are recorded. Overall, both the number of clusters and the integrated intensity decrease from a maximum value at TH0 = 4.1 keV to zero at TH0 = E0 = 60 keV. Similar behaviour is observed for 80 keV electrons, where a slightly greater number of clusters were counted at TH0 = 4.1 keV due to a higher beam current. The function n in N = N0 n describes the variation in effective pixel area. At TH0 = E0/2, when C = 1 pixel, Σ equals N. However, as the value of TH0 is increased beyond E0/2, the number of pixels registering hits decreases. This is because energies greater than the threshold value can only be transferred if the primary electrons impact pixels in a zone located around the pixel centre with radius less than the pixel half-width. The radius of this zone decreases with increasing TH0 and hence the effective pixel area is continuously reduced. This variation in radius depends on the nature of the energy deposition in the pixels and can be modelled using Monte-Carlo simulations. Calculations were performed using the package CASINO and show that for 60 and 80 keV electrons the average radii for deposition of the full primary electron energy are 11 µm and 20 µm, respectively. Thus, in the limit where the threshold energy TH0 = E0, hits are only detected if the electron strikes the pixel at a maximum distance from its centre defined by the difference between the pixel half-width and the average radius for full energy deposition. This yields values of 16.5 µm or 7.5 µm radius from the pixel centre. As these radii refer to circular sub-pixel areas, the pixels are correspondingly reduced to 8.4% and 1.9% of their full area for 60 and 80 keV electrons, respectively. The agreement of the MTF response predicted from single electron event characterization with values obtained from knife edge measurement is assessed in Figs. 12 and 13, in which the MTF values at the Nyquist frequency are plotted as a function of threshold energy. An almost linear variation of the MTF at the Nyquist frequency with respect to threshold energy is obtained from the single electron event analysis, and the gradients agree to within a factor of 1.7 with those measured from the knife edge data. While the agreement is not absolute, the similarity of these linear trends supports our model for energy deposition in the sensor and also demonstrates prospects for characterization of a counting detector purely by investigating single electron events. Short 1 µs exposures provide insight into the operational performance of the sensor in CSM mode. Fig. 14 shows that at 80 keV, with TH0 and TH1 set to energy values just above the detector thermal noise floor, single electron events are recorded as single pixels. However, this does not necessarily deliver ideal detector performance. Fig.
14 shows the integrated intensity from longer, 10 ms exposure, flat field images for both SPM and CSM modes as a function of threshold energy value.For the data recorded in SPM mode, the integrated image intensity decreases strongly as a function of threshold energy, from 7.9 × 106 counts at TH0 = 3.0 keV to 0 counts at TH0 = 80 keV.This is the same response as that obtained by fitting Σ = C × N as in Figs. 9 and 10 for single electron events.Fig. 14 highlights that CSM operation removes much of the variation in integrated intensity, returning an almost constant number of counts for threshold energies from 5 to 60 keV.Closer inspection of the CSM data shown in Fig. 14 reveals however, that there is a small linear decrease in counts from 3.22 × 106 at TH1 = 19.7 keV to 2.74 × 106 at TH1 = 59.9 keV.This can be attributed to the CSM algorithm not providing perfect correction for ∼15% of electron events at the beam energy used, most likely returning two separated single pixel events.Such events are likely to have been ones in which an incident electron loses significant amounts of energy in pixels separated by two adjacent 2 × 2 CSM pixel blocks.This could lead to the arbitration circuitry identifying two hits, rather than one.For the CSM mode data at TH1 energy threshold values >60 keV) it can be seen that the integrated intensity suddenly decreases, reaching zero at 80 keV.The width of the transition relates to the decreasing probability of the CSM algorithm being able to recover all of the deposited charge where at least some charge is deposited in an adjacent 2 × 2 pixel block.In this section we demonstrate some new imaging capabilities enabled by the pixel architecture of the Medipix3 sensor.Within the sensor each pixel contains two 12 bit counters which offer operational flexibility for different experimental requirements.For example, it is possible to acquire images with zero gap time between them, enabled by counting into one 12-bit register while simultaneously reading out the other 12-bit register containing counts from the previous image.It is also possible to configure the two counters as a single 24-bit counter to access a 1 to 16.7 × 106 dynamic range.This capability directly benefits quantitative recording of diffraction patterns including the undiffracted beam.Fig. 15 shows a diffraction pattern recorded from Au nanocrystals on a carbon support film obtained with a parallel beam illuminating many tens of grid squares and with the current reduced by use of a 10 µm condenser aperture to ensure an electron arrival rate <1 MHz in the central spot.Fig. 15 shows a typical diffraction pattern recorded in this mode where the number of counts varies from a maximum intensity ca. 10 × 106 counts in the undiffracted beam to a minimum intensity of ca. 3000 counts in reflections at the edge of the pattern.Fig. 16 shows TEM images that demonstrate, the variation in MTF response of the detector according to threshold energy selection and mode as quantified in Figs. 2–6.For a primary microscope magnification of 2MX the cross-grating replica sample was imaged in TEM mode at a beam energy of 60 keV.In Fig. 16, in SPM mode with TH0 = 20 keV, exposure time = 500 ms, Au crystals are observed on the amorphous carbon support.Fig. 16 shows the power spectrum for the region enclosed by the solid red box.In Fig. 
16, the effect of changing the threshold energy to >E0/2, TH0 = 40 keV, highlights the improvement in the MTF, with both lattice fringes and Moiré contrast recorded in the Au nanocrystals compared to none being visible for the same regions in Fig. 16).The power spectrum in Fig. 16, for the region enclosed by the red box, now shows an intensity peak, corresponding to lattice fringes, at a radius very close to the Nyquist frequency.The scale bars in the images have been set, assuming the observed fringes to be planes with a physical spacing of 0.235 nm.Selection of a higher threshold energy), resulted in 2.1X lower counts and so an increased exposure time of 1000 ms was used to maintain the signal to noise ratio.Fig. 16 shows that by operating in CSM mode, where both TH0 and TH1 were set to values just above the detector thermal noise floor, type lattice fringes were visible in a single area of an image recorded with a 500 ms exposure time and with mean counts similar to those in Fig. 16.Power spectrum analysis, in Fig. 16, performed for the region enclosed by the red box in Fig. 16, provides evidence for the observation of lattice fringes, albeit with weaker contrast than in Fig. 16.Overall, Fig. 16 demonstrates a simultaneous high DQE and MTF as predicted from the data in Figs. 6–8.We have performed a comprehensive analysis of the imaging response of the Medipix3 sensor at 60 keV and 80 keV electron beam energies.Our measurements of the MTF and DQE in single pixel mode using conventional knife edge and flat field image methods agree with trends already observed for the Medipix2 detector .We have also reported data using the SPM mode by analysing single electron events and producing an empirical model that can be used to directly predict the MTF response of the detector.This empirical model yields data that closely agrees with the data from the accepted knife edge method measurements and also provides insight into the variation of the integrated intensity at low threshold energies due to the area the clusters generated and at high thresholds due to the reduction in the effective pixel size.The latter phenomenon is responsible for obtaining MTF values which exceed the theoretical response of a square pixelated detector, but at the expense of a substantially reduced DQE.Prediction of the MTF from single electron events has been reported previously however, our method, differs in that it synthesises a PSF based on empirical fitting of the integrated intensity and single event counting.It is also easier to implement across datasets where images are acquired as a function of detector threshold energy.We have demonstrated that the charge summing mode implemented the Medipix3 sensor results in significant and simultaneous improvements in the MTF and DQE at both electron beam energies considered.However, due to the mechanism of energy loss in the sensor material, we have shown that the CSM algorithm does not provide perfect identification of all single electron events or recovery of all spatially distributed charge.These factors most likely explain why the CSM MTF and DQE responses are excellent but below that of a theoretical square pixel detector.Thus, it is clear that the CSM mode should have obvious applications for efficient low dose imaging of electron beam sensitive materials.Primary electron energies of 60–80 keV are highly relevant in the imaging of 2D materials such as graphene .However, primary energies between 160–300 keV are more commonly used in many materials science applications of 
radiation resistant materials as they enable higher spatial resolution .These beam energies lead to large average lateral dispersion in a silicon sensor material .In a future study we will investigate both SPM and CSM operation modes at high electron flux in order to understand the extent to which the CSM algorithm can provide performance improvements and the nature of how it will fail in this regime. | In this paper we report quantitative measurements of the imaging performance for the current generation of hybrid pixel detector, Medipix3, used as a direct electron detector. We have measured the modulation transfer function and detective quantum efficiency at beam energies of 60 and 80 keV. In single pixel mode, energy threshold values can be chosen to maximize either the modulation transfer function or the detective quantum efficiency, obtaining values near to, or exceeding those for a theoretical detector with square pixels. The Medipix3 charge summing mode delivers simultaneous, high values of both modulation transfer function and detective quantum efficiency. We have also characterized the detector response to single electron events and describe an empirical model that predicts the detector modulation transfer function and detective quantum efficiency based on energy threshold. Exemplifying our findings we demonstrate the Medipix3 imaging performance recording a fully exposed electron diffraction pattern at 24-bit depth together with images in single pixel and charge summing modes. Our findings highlight that for transmission electron microscopy performed at low energies (energies <100 keV) thick hybrid pixel detectors provide an advantageous architecture for direct electron imaging. |
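The knife-edge MTF measurement described above proceeds from an edge image to an edge spread function, its derivative (the line spread function) and finally the modulus of its Fourier transform, evaluated up to the Nyquist frequency of 0.5 pixel−1. The sketch below follows that chain on a synthetic frame; it omits the oversampling normally gained from the 10° edge tilt and any noise handling, so it is an illustration rather than the analysis used in the study.

```python
import numpy as np

def mtf_from_edge(edge_image: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Estimate the MTF from a knife-edge image.

    edge_image : 2D flat-field image containing a (nearly) vertical edge.
    Returns (spatial frequency [cycles/pixel], MTF) up to the Nyquist limit.
    A tilted edge would normally be used to oversample the ESF; here the rows
    are simply averaged, which is the crudest possible estimate.
    """
    esf = edge_image.mean(axis=0)              # edge spread function
    lsf = np.gradient(esf)                     # line spread function
    lsf = lsf / lsf.sum()                      # normalise area to 1
    otf = np.fft.rfft(lsf)
    mtf = np.abs(otf) / np.abs(otf[0])         # MTF(0) = 1
    freq = np.fft.rfftfreq(lsf.size, d=1.0)    # cycles per pixel
    keep = freq <= 0.5                         # Nyquist frequency = 0.5/pixel
    return freq[keep], mtf[keep]

# Example on a synthetic 32 x 60 frame with a blurred edge
x = np.arange(60)
frame = np.tile(1.0 / (1.0 + np.exp(-(x - 30) / 1.5)), (32, 1))
f, mtf = mtf_from_edge(frame)
print(f"MTF at Nyquist ~ {mtf[-1]:.2f}")
```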
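The single-electron event analysis rests on counting clusters of triggered pixels in short-exposure frames and comparing the number of clusters N with the integrated counts Σ (with Σ approximately equal to N times the mean cluster area when events do not overlap, as used for the Σ = C × N comparison). A minimal sketch of that bookkeeping with connected-component labelling is shown below; the frame contents and threshold handling are placeholders.

```python
import numpy as np
from scipy import ndimage

def cluster_statistics(frame: np.ndarray):
    """Count single-electron clusters in a short-exposure counting frame.

    frame : 2D array of per-pixel counts (non-zero pixels were triggered).
    Returns (number of clusters N, integrated counts Sigma, mean cluster area
    in pixels). Edge-adjacent pixels are grouped into one cluster (4-connectivity).
    """
    hit_mask = frame > 0
    labels, n_clusters = ndimage.label(hit_mask)
    sigma = int(frame.sum())                       # integrated counts
    if n_clusters == 0:
        return 0, sigma, 0.0
    areas = ndimage.sum(hit_mask, labels, index=range(1, n_clusters + 1))
    return n_clusters, sigma, float(np.mean(areas))

# Illustrative frame: one single-pixel event and one 3-pixel cluster
frame = np.zeros((8, 8), dtype=int)
frame[1, 1] = 1
frame[5, 4] = frame[5, 5] = frame[6, 5] = 1
n, sigma, mean_area = cluster_statistics(frame)
print(f"N = {n}, Sigma = {sigma}, mean cluster area = {mean_area:.1f} px")
# With counts of 1 per triggered pixel, Sigma ~ N * mean_area, as used in the text.
```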
514 | Nanometric resolution magnetic resonance imaging methods for mapping functional activity in neuronal networks | A nitrogen vacancy center in diamond acts as an atomic size optically detectable electron spin probe with the ability to sense local small magnetic fields due to other electrons or nuclear spins at nanometric distance.Owed to the NV spin dependent fluorescence, its electronic spin state can be both readout and initialized optically.The application of microwave frequency pulses on its optically initialized state then permits the coherent control of its spin state at the single defect level, when weakly coupled with its surrounding nanometric environment.Thus the NV can be used directly as a sensor.Microwave sequences adapted from nuclear magnetic resonance methods allow detecting alteration in NV spin dephasing time originated from a dilute concentration of nuclear spins producing small magnetic field around the probe.Compared to other probes, this defect allows a wide bandwidth sensing of nuclear magnetic Larmor frequency spins resonance.NV can achieve very high magnetic field sensitivity down to single electron spin sensitivity and the current best magnetic field sensitivity is of 0.9 pT Hz−1/2 .Some of these magnetometer techniques have direct sensing applications within biological samples .A nanoMRI method using a single NV center has been successfully applied to achieve the 2D imaging of 1H NMR signal with a spatial resolution of 12 nm .These results can be achieved by combining conventional confocal or wide field optical microscopy in conjunction with specifically adapted Nuclear Magnetic Resonance methods such as Hanh-echo sequence and universal dynamical decoupling .Optical conventional imaging microscopic techniques based on confocal microscopy and wide field microscopy are however intrinsically diffraction limited.To achieve nanometric resolution in the localization of NV with purely optical methods, super-resolution methods based on Stimulated Depletion Emission Microscopy and Stochastic Optical Reconstruction Microscopy must be employed.Therefore alternative methods combining STED and STORM with spin resonance techniques have also been implemented.However, still some hurdles exist for the full deployment of nanoMRI technology .This manuscript provides a summary of the latest works showing nanoscale resolution in localizing NV in diamond with potential applications in magnetic resonance imaging of nuclear and electron spins.One of these methods, FMI, uses the Fourier phase-encoding of the NV electronic spins in a diamond sensor and it has been applied to magnetic field sensing.We will describe the implementation to neural field detection and discusses the potential extensions of these imaging techniques to quantum spin defects in other, possibly more practical material such as silicon carbide.We also report on two main methods to achieve this.Fourier Magnetic Imaging of NV in diamond and optical super-resolution microscopy methods combined with Nuclear Magnetic Resonance methods .FMI acquisition and processing method applied to NV in diamond achieves imaging in the k-vector space providing a 3.5 nm resolution.STED and STORM imaging methods are achieving nanometric localization directly in real space, based on deterministic and stochastic localization of fluorophores.These methods combined with NMR techniques have also been applied to NV and potentially could provide nanoMRI capabilities and magnetic field sensitivity.NV localization with resolution of 2.4 nm and 
27 nm have been demonstrated, respectively.These last two methods have not yet been fully applied to magnetic sensing to assess their potential ultimate sensitivity to resolve other nearby spins.They are however expected to be relevant for direct spin-spin interaction studies.This manuscript delivers an introduction to this rapidly advancing area and illustrates the case of advanced methods for the plausible and potentially significant applications to neural pathways.This contribution highlights the present advantages and relative performance of these methods.The non-invasive mapping of functional activity in neuronal networks is one possible application of these techniques and we will discuss its current status.The contribution also discusses the further improvements of the technique including the use of atomic defects in a more probe fabrication friendly material such as SiC and concludes that the subject area is now sufficiently mature to engineering a probe and developing a protocol for practical medical applications especially in neuroimaging.Arai et al. have developed a fast and accurate scanning probe microscope that operates at room temperature and provides a unique combination of high magnetic field sensitivity and nanometric resolution that coupled to Fourier magnetic resonance imaging methods may open new horizons.In conventional MRI, in addition to a static high magnetic field defining the quantization axis, and a radiofrequency pulse exciting the nuclear spins, a magnetic field gradient based on three independently controlled gradient coils is used to link the local spins precession frequency to the spins 3D location r and the gradient vector of the magnetic field along the quantization axis.The k-space is defined as the time integral of the time variable magnetic gradient vector, so that the phase acquired over time and space by the spins at a specific location r can be written as ϕ = 2πk·r.The MRI signal is made of a part associated to the initial radiofrequency pulse inducing magnetization of the spins, which constitutes the real image, and a part linked to the spins phase rotation ϕ, that depends on space and time.Therefore, the real space image is the Inverse Fourier Transform of the phase rotation in the k-space.The MRI signals ϕ in the k-space values are sampled during the image measurement with a carefully designed time sequence of radiofrequency and gradient pulses.Data from digitized image signals are stored during data acquisition and then mathematically processed to produce the final image.The IFT is applied after k-space acquisition to derive the final image.This approach has been modified from conventional MRI, where the probe is a coil sensing nuclear spins in the sample, while in nanoscale MRI the NV electron spin is a probe.Thus a key feature of the NV-diamond Fourier imaging method is that the phase encoding is applied to NV spins in the sensor, rather than nuclear spins in the sample as in conventional macro-scale MRI.Phase-encoding of the sensor spins broadens the applicability to many other types of magnetic samples, beyond those where the magnetic fields are created by distributions of weakly-interacting spins, such as current-carrying and ferromagnetic samples, action potentials in neuronal networks and magnetite in many cellular structures.Pulsed magnetic field gradients are specifically used to phase-encode the spatial information on NV electronic spins in the “k-space” .The wave number space measurement is then followed by a fast Fourier transform to 
generate real-space images with nanoscale resolution, a wide field of view and a compressed-sensing speed-up. The key advantages over real-space imaging are the spatially multiplexed detection, which enhances the signal-to-noise ratio for typical NV centre densities, a high data acquisition rate that compressed sensing can further increase, and the concurrent acquisition of signal from all the NV centres in the FoV. The FMI pulse sequence simply consists of a laser initialization pulse, a microwave dynamical-decoupling sequence for spin-state manipulation and finally a laser readout pulse. Fig. 1 presents the outline of the probe and the imaging sequence; a and b are the side-view and top-view outlines of the probe, c the NV centre energy level diagram and d the imaging sequence. In addition to the usual optical polarization of the NV using a pulsed 532 nm laser, the excitation microwave frequency resonant with the NV ground-state electron spin transition, and the optical read-out of the spin state, a magnetic field gradient is used. The pulsed magnetic field gradient encodes the phase rotation of the NV spin before and after a microwave π-pulse in a k-space, as defined in conventional MRI. In addition, to test the NV probe as a magnetometer, an AC magnetic field is applied and imaged using the NV sensors. The microwave frequency, the magnetic field gradient for encoding the NV phase during the π-pulse sequence, and the variable magnetic field to be imaged are delivered via microwave loops, gradient micro-coils and an external field wire, respectively, directly patterned by e-beam lithography on a polycrystalline diamond coverslip. The sensor itself is a single-crystal diamond grown by chemical vapour deposition and implanted with N ions to form NV centres at a depth estimated between 20 and 100 nm. One sample contained a very low density of NV centres, and the FMI method was used to image a single NV with 3.5 nm resolution for 1D imaging and 30 nm for 2D imaging. A second sample was fabricated with arrays of nanopillars, each containing two NV centres on average. The two NVs were imaged by FMI and were found to be separated by 121 nm with 9 nm resolution. Acquisition time was less than 20 ms. This nanopillar containing two precisely located NVs was then used to image the AC magnetic field sent to the external wire, obtaining a magnetic gradient sensitivity of 14 nT/nm/√Hz. To extend the FoV to 15 × 15 μm² while maintaining the same nanoscale resolution, 167 nanopillars were used to image an additional AC field. The imaging was done in a hybrid real- and k-space modality across the array of many nanopillars, providing high-resolution k-space imaging of the magnetic field patterns produced by the external wire. The localization of the NV centres in several nanopillars was achieved with 30 nm resolution. By employing compressed sensing techniques , based on random sampling at a rate lower than the Nyquist rate, a measurement speed-up factor of 16 has been demonstrated without substantial loss of accuracy. An integration of k-space imaging with a wide-field microscope using a complementary metal oxide semiconductor or charge coupled device camera can provide rapid imaging across a large FoV with nanoscale resolution.
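As a concrete illustration of the k-space encoding and inverse-Fourier-transform reconstruction described above, the following short Python sketch builds a synthetic field map, treats its 2D FFT as the phase-encoded k-space data and recovers the real-space image with an inverse FFT. The grid size and field pattern are invented for illustration; they are not NV measurements from the cited work.

import numpy as np

# Minimal sketch of Fourier (k-space) imaging: the fully sampled k-space signal of a
# synthetic field map is generated with a forward 2D FFT (standing in for the
# phase-encoding gradient pulses) and the real-space image is recovered with the
# inverse FFT. All values are synthetic and purely illustrative.
nx, ny = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
field_map = np.exp(-((x - 20) ** 2 + (y - 40) ** 2) / 30.0)  # synthetic magnetic feature

kspace = np.fft.fft2(field_map)      # k-space samples, phase ~ 2*pi*k.r per spin location
image = np.fft.ifft2(kspace).real    # reconstruction by the inverse Fourier transform

print("max reconstruction error:", np.abs(image - field_map).max())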
STED microscopy was previously shown to image single NV centres with 6–8 nm resolution by using, in a scanning-probe modality, a second beam in addition to the excitation beam to induce stimulated emission depletion of the NV. As this beam has a doughnut shape superimposed on the Gaussian excitation beam, the NV photoluminescence is deactivated in part of the diffraction-limited spot, thus providing a resolution below the diffraction limit. An excitation pulsed laser at 532 nm is used, spatially superimposed with a pulsed STED laser . The resolution scales with the square root of the power of the STED beam. However, the ultimate resolution cannot be reached by merely increasing the power: even though the NV is very photo-stable under high power, the high refractive index of diamond and its planar surface introduce aberrations and scattered light that prevent a perfect zero of the STED-beam field. To address this issue and increase the resolution, a solid hemispherical immersion lens has been fabricated on the diamond containing the NV. The SILs range in size from 5 to 8 μm and are fabricated directly within the diamond by focussed ion beams. Two types of diamond were used: the smaller SILs were fabricated in high-purity polycrystalline diamond grown by chemical vapour deposition, while the larger SILs were sculpted in high-purity single-crystal diamond, where NV centres were generated at a depth of 4 μm by 6 MeV N ion implantation. Optically detected magnetic resonance, Rabi and Hahn-echo sequences have been implemented simultaneously with super-resolution of the NV, proving the ability to manipulate single NV spins at this resolution and indicating the opportunity to apply the method in conjunction with the more complex sequences used in nanoMRI. To achieve STED-ODMR, the microwave excitation was integrated with an adapted sequence: the NV centre is initialized to a ground spin state by exposing it to 532 nm light with a few mT DC magnetic field defining the quantisation axis, then a microwave pulse of varying frequency is applied. The spin signal is read out with high spatial resolution by simultaneously illuminating the sample with excitation and STED light. To increase the signal-to-noise ratio, the sequence is typically repeated 10⁴ times for each microwave frequency. A magnetic field sensitivity similar to that achieved in diffraction-limited imaging methods is expected , depending on the microwave sequence used and on the additional presence of an AC magnetic field or a gradient field. In this case, the main impediment could be the excessive distance of the NV from other nuclear spins, given the size of the SIL and the limited scalability of the SIL fabrication method. This could be overcome by using a free-standing SIL rather than a diamond-integrated SIL, thus allowing the NV to be in close proximity to a sample containing nuclear spins. The most likely application of this nanoscale imaging method is the study of spin-spin interactions in quantum computing architectures. The higher photon collection also improves by a factor of 2.2 the acquisition time required to access the spin information, while the nanoscale optical localization is a real-time process. STORM enables fast and super-resolved imaging/localization of single emitters in a wide-field modality, provided that their photoluminescence can be temporally switched "on" and "off". These methods have allowed a 3D spatial resolution of a few tens of nanometres in cellular imaging . Super-resolving single NV centres with sub-20-nanometre resolution in a wide-field localization microscope, based on the photoluminescence blinking of high-pressure high-temperature nanodiamonds, was also demonstrated . In , a STORM technique has been developed for NV centres in bulk diamond, permitting simultaneous NV sub-diffraction imaging and ODMR measurements on super-resolved NV spins.
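To make the localization step behind STORM-type super-resolution concrete, the following sketch localizes a single blinking emitter from a noisy, diffraction-limited camera frame with a simple weighted centroid. The pixel size, PSF width and photon numbers are assumed, purely illustrative values, not parameters from the experiments discussed above; real pipelines typically fit a 2D Gaussian rather than taking a centroid.

import numpy as np

# Weighted-centroid localization of a single emitter from one noisy,
# diffraction-limited frame - the elementary step behind STORM-type localization.
rng = np.random.default_rng(0)
pixel_nm = 100.0                    # assumed camera pixel size (nm)
true_xy = np.array([31.3, 28.7])    # emitter position (x, y) in pixels
sigma_psf = 1.3                     # PSF standard deviation (pixels)

yy, xx = np.mgrid[0:64, 0:64]
psf = np.exp(-((xx - true_xy[0]) ** 2 + (yy - true_xy[1]) ** 2) / (2 * sigma_psf ** 2))
frame = rng.poisson(500 * psf + 10).astype(float)     # shot noise + flat background

# Take a small region of interest around the brightest pixel and compute the
# background-subtracted intensity centroid inside it.
py, px = np.unravel_index(np.argmax(frame), frame.shape)
roi = (slice(py - 5, py + 6), slice(px - 5, px + 6))
sub = np.clip(frame[roi] - np.median(frame), 0, None)
cx = (sub * xx[roi]).sum() / sub.sum()
cy = (sub * yy[roi]).sum() / sub.sum()

err_nm = np.hypot(cx - true_xy[0], cy - true_xy[1]) * pixel_nm
print(f"localization error = {err_nm:.1f} nm, far below the ~300 nm diffraction limit")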
Commercial diamond samples of type IIa, grown by chemical vapour deposition, were used. The samples contain as-grown NV centres and NV centres created artificially by N ion implantation. By applying the STORM method, NV centres were localized with 27 nm resolution, limited by sample drift; the estimated resolution without sample drift is 14 nm. The combination of STORM and spin detection makes it possible to assign spin resonance spectra to individual NVs located at nanometric distances, and to do so in a parallel acquisition mode as opposed to a probe-scanning modality. Therefore spin-STORM methods could allow the implementation of parallel NV sensors for nanometric magnetic resonance imaging in nanoMRI applications. NV-based magnetometry has also been investigated as a non-invasive measurement of the small magnetic fields of action potentials in neuronal networks. In understanding the dynamics of a neural network, it is a challenge to resolve the neural dynamics with subcellular or synapse-scale spatial resolution. The interest resides in determining the action potential dynamics with single-neuron resolution in whole organisms. Electrophysiology methods are currently used, but they are invasive and cannot achieve both high spatial resolution and a wide field of view. A full set of techniques with their figures of merit has been analysed in , providing useful insight into the many techniques available to study neural network activity. The action potential translates into a small, time-varying magnetic field that is within the reach of NV-based magnetometry, thus providing a non-invasive and non-toxic method. The challenge of applying NV-based magnetometry is the typical neuronal pulse duration of 2 ms and the peak neural magnetic field value of ≤10 nT at 100 nm from the axon surface. In the first study of the application of NV magnetometry to measure the probe sensitivity to the axon transmembrane potential , the magnetic field generated by a single axon potential was modelled, and this magnetic field was reproduced by a current-carrying micro-wire on the diamond surface, which reproduces the temporal dynamics of an axon. A single-crystal ultrapure diamond membrane substrate containing an ensemble of NV centres was used. The NV photoluminescence is detected in a wide-field microscope combined with a confocal microscope. The magnetic field readout modality was either continuous ODMR or free induction decay, the two methods providing a similar sensitivity of 10 μT Hz−1/2. To reach single-axon sensitivity, the method needs to be applied with a specific sequence repetition and a specific sensing volume depending on the axon size. However, even in a non-optimised implementation, the NV centres were able to match the spatial structure and temporal dynamics of the simulated neuronal magnetic field. Using NV-based magnetometry in a very simple setup, the magnetic field produced by an action potential of a single excised neuron from a marine worm and a squid has been demonstrated . The same has also been achieved external to the whole body of a live opaque marine worm. Therefore NV magnetometry can currently achieve single-neuron scale, whole-organism scale, no labelling, 10 nm spatial resolution, 30 μs temporal resolution and a 1 mm field of view. The method is non-invasive and non-toxic and allows observation for extended periods without adverse effects on the live animal. The current status of neural network activity measurements with NV-based magnetometry can be further improved using the super-resolution methods described above, in particular FMI and STORM-ODMR, due to the low laser power used.
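The order of magnitude of such neural fields can be sketched with a back-of-the-envelope Biot-Savart estimate that treats the axial action-potential current as a straight wire. The current amplitude below is an assumed illustrative value, chosen so that the field at 100 nm matches the ~10 nT scale quoted above; the model ignores return currents, so it should be read as a rough upper-bound sketch rather than a result of the cited studies.

import numpy as np

# Back-of-the-envelope estimate of the magnetic field near a firing axon, modelling
# the axial action-potential current as an infinite straight wire:
#   B = mu0 * I / (2 * pi * r)
mu0 = 4 * np.pi * 1e-7          # vacuum permeability (T m / A)
I_peak = 5e-9                   # assumed peak axial current (A), illustrative only

for r in (100e-9, 1e-6, 10e-6): # distance from the axon (m)
    B = mu0 * I_peak / (2 * np.pi * r)
    print(f"r = {r * 1e6:6.2f} um  ->  B = {B * 1e9:6.2f} nT")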
However, the magnetic field sensitivity needs to be improved compared to current realizations. This would make magnetic field imaging possible. In Table 1 we compare the NV localisation performance of the methods analysed here. In addition, we stress that nanoscale localisation of the NV can be combined with magnetic resonance imaging at the nanoscale, or with magnetic field imaging at nanoscale resolution. Two modalities can be used: k-space, and real space by scanning probe or wide field. The first permits a large FoV at the expense of localisation; however, it provides faster magnetic field imaging due to parallel sensing, without losing magnetic field sensitivity. It can also be integrated in a hybrid k-space and real-space wide-field modality. A scanning probe using STED-ODMR can reach the highest NV localization, but so far magnetic imaging has not been demonstrated with it, although the technique could achieve very high magnetic field sensitivity . This technique, however, does not allow parallel imaging. STORM-ODMR is very attractive as it also allows parallel imaging and is very suitable for biological applications. By comparing the techniques we can conclude that, so far, FMI provides an excellent compromise for achieving nanometric localization and high-sensitivity magnetic sensing, with a much reduced acquisition time for magnetic field imaging compared to point-by-point scanning, thanks to parallel imaging. Fourier k-space imaging allows high-SNR detection with spatial multiplexing and a high acquisition rate that can be enhanced by compressed sensing. The ability to probe all the NV centres in the FoV at the same time makes it possible to measure the time-correlated dynamics of phenomena across the sample. The technique also has the major advantage of the relative simplicity of the apparatus needed, with the micro gradient coils used for phase encoding integrated with an optical microscope, as shown in Fig. 1, a set-up well suited for mass deployment of the technology.
NV-centre FMI may allow many applications in life sciences, such as nanoscale MRI of individual biomolecules in real time, or the non-invasive mapping of functional activity in neuronal networks with synapse-scale resolution (∼10 nm) and circuit-scale FoV (>1 mm). Table 1 summarizes the performance of the techniques. These results come, however, at the cost of a lower NV spatial localization compared to STED techniques. Therefore, depending on the application and based on current results, we envision that STED-ODMR and STORM-ODMR techniques can be used to study spin-spin interactions in arrays of NV spins in quantum computer architectures, or spin-spin interactions in biological samples. It is also possible to apply spin-STED and spin-STORM to magnetic resonance imaging at the nanoscale by increasing the microwave sequence complexity. Further improvements that can be applied to all techniques include optimization of diamond samples for greater magnetic field sensitivity , enhanced optical set-ups for better collection efficiency and spin-state optical contrast , and extension of the NV coherence time via dynamical-decoupling pulse sequences . Specifically for FMI, further improvement can be achieved by using smaller micro-coils for stronger magnetic field gradients, while parallel real-space image acquisition with a wide-field CMOS or CCD camera may enable the study of neuronal activity spanning length scales from a few nm to many mm. It is important to mention that other methods based on NV-spin sensors have achieved sub-nanometric resolution in nuclear spin spectroscopy/sensitivity. They are based on the use of isolated electron spins on the diamond surface or ancillary nuclear spins coupled to the main NV electron spin sensor. They achieve single-proton magnetic resonance detection with about 1 Å resolution and single-protein NMR spectroscopy, both at room temperature. In , quantum reporters coupled to a nearby NV center are localized with nanometre uncertainty, and their spin is manipulated and read out as well. By measuring the quantum reporters' spin via NV spin read-out, it is possible to increase the sensitivity to the detection of individual nuclear spin magnetic fields. In , a sensor made of two quantum bits is used to improve the read-out fidelity of the NV electron spin by manipulating both electron and nuclear spins before optically resetting the NV electron spin and using a modified dynamical decoupling sequence. The diamond surface is specially treated to extend 10-fold the spin coherence time of shallow NV centers, with a proportionate improvement in resolution. The readout fidelity is improved through quantum logic. The NV coherence time is increased by wet oxidative chemistry in combination with annealing of the diamond surface, thus increasing the ability to sense single-digit numbers of nuclear spins within a single protein. This makes it possible to probe and perform spectroscopy on the isotopically enriched nuclear species within individual ubiquitin proteins attached to the diamond surface and within the NV detection volume. The method allows high-confidence detection of individual proteins. In the context of the relevant application discussed here, studying the dynamics of neural networks using NV-based magnetometry, these techniques can be directly applied to achieve magnetic field imaging with nm resolution. To be able to sense individual mammalian neurons, expected to generate an action potential magnetic field of ∼1 nT, it is necessary to improve the current magnetic sensitivity by using diamond samples with a higher NV concentration and to extend the NV coherence by implementing the other microwave sequences mentioned above. | This contribution highlights and compares some recent achievements in the use of k-space and real space imaging (scanning probe and wide-field microscope techniques), when applied to a luminescent color center in diamond, known as the nitrogen vacancy (NV) center. These techniques, combined with the optically detected magnetic resonance of the NV, provide a unique platform to achieve nanometric magnetic resonance imaging (MRI) resolution of nearby nuclear spins (known as nanoMRI), and nanometric NV real space localization. Atomic size optically detectable spin probe. High magnetic field sensitivity and nanometric resolution. Non-invasive mapping of functional activity in neuronal networks.
515 | Comprehensive optical design model of the goldfish eye and quantitative simulation of the consequences on the accommodation mechanism | Aquatic vertebrates have a rigid, spherical lens to compensate the refractive loss of the cornea in underwater conditions.Therefore, they need to implement other accommodation strategies than terrestrial animals.This mechanism has been qualitatively investigated for teleost fish.One of the first studies was conducted by Beer.He investigated the accommodation mechanism of many different teleost species by retinoscopy after electrical stimulation or administration of medication.He found that most teleosts have a slightly myopic refractive state.His investigations have also shown that the accommodation is not caused by a change of the lens curvature, as in many terrestrial vertebrates, but by a lens shift.Beer was able to prove negative accommodation in the majority of teleost species that he studied.This means that the unaccommodated eye focuses on near objects.In order to resolve distance object, the optical system is actively adapted by moving the lens toward the retina.His results were later confirmed by several experimental studies.One of the investigated teleost species was the goldfish, for which different experimental studies on their accommodation capabilities show inconsistent conclusions.For example, Somiya and Tamura investigated the lens shift of the enucleated eye of the goldfish on the basis of photographs before and after electrical stimulation.They noticed a slight lens movement toward the retina.However, the exact distance could not be determined and a nasal-temporal displacement could not be observed.Also, Sivak observed an accommodation by means of retinoscopy and additional photographic control measurements after administration of atropine and pilocarpine.In contrast, the study of Kimura and Tamura found no lens shift by means of photographs after electrical stimulation of enucleated eyes.Also, Charman and Tucker could not confirm any accommodation mechanism by means of slit lamp examination and retinoscopy.They assumed that the eye’s depth of field is sufficient and, thus, no accommodative movement of the lens is necessary.In addition, Frech and Frech, Vogtsberger, and Neumeyer investigated the refractive state of a single goldfish using infrared photoretinoscopy in a training experiment.They presented objects at various distances from 10 to 40 cm and could not detect any variation in refractive power.Furthermore, contradictory to Beer’s assumption that teleosts are myopic in the unaccommodated state, the experiment suggested that the goldfish eye may be hyperopic in a relaxed state of accommodation.From the evaluation of the aforementioned studies, there is no clear evidence as to whether the goldfish eye accommodates or has a sufficient depth of field.The various results may be due to the use of very diverse experimental methods which, for the most part, do not reflect the natural behavior of goldfish.For most experiments, the lens movement was artificially induced by electrical stimulation or medication and was performed on an enucleated eye.Additionally, Fernald and Frech et al. 
themselves express uncertainty about the applicability of retinoscopy to investigate the refractive power of the goldfish eye because the precise source of the retinal reflection is not known.Should the reflecting layer and the focal plane not coincide, this would lead to a systematic absolute error.Some studies have shown that the goldfish is physiologically able to move the lens, but it is not clear, whether it uses this ability for accommodation.Currently, no quantitative investigations of the accommodation behavior based on optical simulation have been done, which results in an incomplete theoretical understanding of the vision of the goldfish eye, like the necessity of accommodation.To overcome this lack of investigation, we designed a realistic eye model that considered the geometric optic parameters, the refractive gradient index of the lens, the retinal specification, and the resting position.The individual components are widely known.A simplified model was already established by Charman and Tucker based on experimental measurement data.Their schematic model is emmetropic; it assumes the cornea as a single surface and contains a homogeneous lens with an index of 1.69.There are also studies on the GRIN profile of the lens and on the structure of the retina.To the best of our knowledge, an optical model which combines all components and thus enables a quantitative investigation of the optical quality and the consequence on accommodation mechanism, does not exist.This chapter describes our strategy to develop a comprehensive model of the goldfish eye.The databases of the initial parameters and the optimization procedure are discussed.The framework of investigating the resulting image quality and the consequences on the accommodation mechanism are presented in detail.In order to evaluate the resolution capability of the goldfish eye, studies about the retina properties were considered.According to the summary of Palacios, Varelab, Srivastavaa, and Goldsmith, the goldfish retina contains four cone types with a corresponding spectral sensitivity of about 625 nm, 535 nm, 455 nm, and 350 nm.The relative number of the L-, M- and S-cones are 0.45, 0.35, and 0.20, and UV-cones are rare.Regarding visual acuity, the L-, M- and S-cone types have approximately the same contribution.The contribution of the UV-cones is unknown and due to their small number, they will be not considered for the resolution criterion.The simulation was started with the medium wavelength of 535 nm.Additionally, we assume that the resolution of the goldfish eye is limited by the average cone density.The mean inter cone distance is approximately 11 μm corresponding to a resolution of 13.5’.To complete our initial eye model, further details to the cornea, pupil, and the resting position are required.For these components, no specific values are available for the goldfish eye.Therefore, values of other fish and vertebrates were consulted and various assumptions were made.First, we assumed that the front and the back surfaces of the cornea have nearly the same radius of curvature and the refractive index of the cornea is 1.376, as in various other vertebrates.Furthermore, we expect that the pupil is located approximately halfway between lens anterior vertex and equator, adapted from research on the rainbow trout.In conclusion, based on Beer’s research, we assumed that the goldfish has a negative accommodation mechanism and thus its eye focus on near objects at the resting position.Beer’s research is consistent with Fernald 
indicating the poor underwater visibility caused by scattering and turbidity and stating the unlikeliness of the teleosts ability to see clearly in the distance at the unaccommodated state.The exact visual range of the goldfish is still not known.Experimental results carried out by Neumeyer targeting the visual acuity in goldfish uses an object distance of 30 mm.Therefore, one can assume that goldfish can see clearly at this distance.Thus, we used this visual distance as the smallest possible resting position.Consequently, the resting position can be seen as the object distance for which the goldfish focuses without accommodation and will be used for adjusting the eye model.A schematic representation of the goldfish eye is shown in Fig. 1.The illustration contains all relevant optical components: the cornea; the anterior chamber with the aqueous humor; the iris as aperture; the spherical lens, which is responsible for the main refractive power to focus light onto the retina; the vitreous body as stabilizer; and the retina with the receptor cells as imaging layer.To simulate and optimize our eye model, we used the raytracing software Optic Studio, which allows the performance evaluation of the eye model by a geometric optical approach.For the simulations, the initial eye model was optimized regarding the constraint values based on the geometrical data basis and the GRIN distribution of the improved Matthiessen polynomial with the objective of achieving a sufficient resolution.The resolution capability was evaluated by investigating the spot diagrams.The root-mean-square radius of the spot must be smaller than the permissible cone radius of 5.5 μm.All simulations were initially performed for the mean wavelength of 535 nm, a pupil diameter of 1.8 mm, and for object distance of 30 mm.Due to the lack of a common definition about the focus distance in the unaccommodated state, our eye model was optimized for different near-point positions.When changing the resting position, the initial model must be adapted.For this, the lens radius of curvature, the depth of the vitreous body, the anterior chamber depth, the focal length, and the GRIN coefficients are variably set within the given limits and further optimization occurs.The different eye models are denoted in the following with Model_Rx, where x stands for the distance of the respective unaccommodated position in mm.Beyond this range, it is assumed that an object can no longer be resolved.Hence, to extend the visual range, an accommodation mechanism, like refocus by shifting the lens along the optical axis towards the retina, would be required.In that case, the necessary amount of lens movement was determined by optimizing only the anterior chamber depth and adjusting the vitreous body length so that the total length remains unchanged.This section presents the resulting eye model with the obtained parameters, the simulation results for image quality, and the consequence on accommodation behavior.The previously discussed geometrical and optical constraints and assumptions result in a comprehensive goldfish eye model.The parameters for the adjusted model for a wavelength of 535 nm, an aperture diameter of 1.8 mm, and an exemplary resting position of 30 mm are shown in Table 1.The spot diagrams show that the goldfish reaches a sufficiently good resolution.Even up to 3.3° incident angle the RMS radius is smaller than the permissible RMS radius of 5.5 μm.On axis, the RMS radius is 4.83 μm, resulting in a theoretical resolution of 11.54’, and thus 
providing a sufficient image quality. For the representative Model_R30 and Model_R50 described above, all geometric parameters are within the measuring ranges given by Charman and Tucker. The refractive indices of the core and the cortex are close to the values of Jagger and Matthiessen and within the measurement range of Axelrod et al. The pupil is located 0.42 mm behind the lens vertex, which corresponds with the finding that the pupil of the rainbow trout lies halfway between the lens anterior vertex and the equator. Additionally, the GRIN distribution of the lens is comparable with the parabolic profile of the improved Matthiessen polynomial by Jagger. With 11.54′, Model_R30 achieves a lower resolution than Jagger's single lens in medium, which reached a resolution of 2′. However, the spot radius is well below the average cone radius. The lower resolution compared to Jagger may be due to his simplified approach of considering only a single lens in medium and not the entire eye. In addition, the object distances of the two models differ considerably: Jagger's model is adapted for an infinite object distance and our model for a distance of 30 mm. For Model_R50, the resolution could be improved to 3.09′. The optimization of the comprehensive eye model for other resting positions resulted in slightly different values for the geometrical parameters, as shown for Model_R50. All models also provided a sufficient image quality and are within the expected range of the literature values. In general, the results suggest that the models are suitable for further investigation. With the simulated eye model, we investigated how the image quality behaves for different object distances while the optical system remains unchanged. Fig. 4 shows the corresponding spot diagrams for the distances 30, 50, 100, and 400 mm for Model_R30 and Model_R50. Even small changes of the object distance result in a considerable increase of the RMS radius. For Model_R30, a small displacement from 30 to 50 mm leads to an RMS radius increase of 15.68 μm. As a consequence, the depth of field for Model_R30 only reaches from 29.4 to 33.6 mm. For Model_R50, a displacement from 50 to 100 mm leads to an increase of 15.92 μm. Increasing the focus distance for the resting position to 50 mm extends the depth of field slightly to a range of 43.4–59.8 mm. For both models, accommodation by lens movement would be necessary to image distant objects within the resolution criteria. For small resting positions, the depth of field is very limited and accommodation has to occur to resolve distant objects. An enlargement of the resting position results in an increasing depth of field range. From a resting position of 250 mm, a large range of object distances can be imaged sufficiently well on the retina. However, objects closer than 147.5 mm cannot be seen with the maximum resolution capability. Hence, should the goldfish be able to resolve closer objects well, the eye must have a shorter resting position and a corresponding adjustment mechanism. With increasing object distance, the image quality decreases, since the focus shifts towards the lens, resulting in a blurry spot on the retina and an enlargement of the spot diameter. This is illustrated in the spot diagrams in Fig. 6A and the corresponding ray paths for Model_R30. There, an object point located at a distance of 300 mm is imaged in front of the retina. By a lens shift, this focus drift could be compensated and the image would again be located at the retina, as depicted in Fig. 6B.
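The effect of such a lens shift can be illustrated with a minimal thin-lens sketch. The goldfish model in the text is ray-traced with a GRIN lens, so the numbers below, which assume a simple Gaussian lens with an illustrative effective focal length, only reproduce the qualitative trend of the accommodation curves, not the paper's values.

# Thin-lens sketch of accommodation by lens shift: when the eye is focused at its
# resting position, moving the object farther away pulls the image plane toward the
# lens; shifting the lens toward the retina by the same amount restores focus.
# The focal length below is an assumed illustrative value.
f_mm = 3.0  # assumed effective focal length in water (illustrative)

def image_distance(object_distance_mm, f=f_mm):
    """Image distance behind a thin lens for a given object distance (both in mm)."""
    return 1.0 / (1.0 / f - 1.0 / object_distance_mm)

d_rest = image_distance(30.0)   # lens-retina distance when focused at the 30 mm resting position
for d_obj in (50.0, 100.0, 400.0):
    shift_um = (d_rest - image_distance(d_obj)) * 1000.0
    print(f"object at {d_obj:5.0f} mm -> required lens shift toward the retina = {shift_um:5.0f} um")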
Hence, in this case, the image quality for far distances could be clearly improved by accommodation.Therefore, in the next step, we determined the necessary lens movement to accommodate and extend the imaging field range.The results for the Model_R30 and the Model_R50 are illustrated in Fig. 7.There, both curves steeply increase at the beginning and flatten for far object distances.In addition, it becomes clear that the amount of the lens shift depends on the resting position.For Model_R30, the necessary lens shift to image objects at a distance of 50 mm is already 92 μm.The total lens shift for resolving objects from 30 mm to over 350 mm is 209 μm.From the object distance of approximately 350 mm, the depth of field is sufficiently extended to see objects from over 10 m clearly, and no additional accommodation is required.For Model_R50, a lens movement of about 131 μm is sufficient.According to the literature, lens movements from 150 μm could be measured for different teleosts.However, for the goldfish only a small shift was observed, without being able to measure the amount.Therefore, it is possible that the goldfish eye has a greater resting position than 30 mm.The further the resting position is, the smaller the necessary lens movement is.For example, it is possible that the lens shift of 131 μm which was determined for Model_R50 was outside the measuring range of Somiya and Tamura.In the last step, the influence of all three wavelengths on the depth of field was considered.The refractive indices and GRIN parameters resulting from the adjustment are presented in Table 3.Thus, a focal length difference of 5.4% and an Abbe number of approximately 50 for the cortex and 39 for the core could be achieved.The values correspond well with the assumptions made in the literature.As shown in Fig. 
8, for the blue wavelength an RMS radius of 4.47 μm can be achieved, which is in the range of the inter-cone distance. The depth of field for the blue wavelength ranges from 27.0 to 31.0 mm. For the red wavelength, the RMS radius is greater than the inter-cone distance. The slightly poorer image quality for red results from the choice of the refractive indices. By using a lower index for the cortex and a higher index for the core, a better image quality for green and thus a better image quality for red can be achieved. For our model we have tried to also consider the specifications of Jagger and Matthiessen. For Model_R50 the image quality for red and blue is approximately the same. For all models, the consideration of all three wavelengths does not lead to a significant enlargement of the depth of field. Since it is difficult to investigate the accommodation behavior of the goldfish eye experimentally, in this work a comprehensive eye model that allowed a quantitative analysis of the image quality and the consequences on accommodation was simulated. The simulated goldfish eye model includes all relevant specifications from previous experimental investigations: the optical geometrical parameters, the gradient index distribution of the spherical lens, the myopic resting position, and the retina specifications. With the adjusted eye model, a sufficient image quality can be achieved; the resulting RMS radius of approximately 4.83 μm is below the cone radius of 5.5 μm. The good image quality and the correspondence of the individual parameters with the experimentally determined values from the literature justify the conclusion that the created goldfish eye model is suitable for further simulations. The simulation results provided new knowledge regarding aquatic vision, especially accommodation behavior and depth of field. The investigations show that, without an adaptive mechanism, the effective focus range of the goldfish eye is limited depending on the resting position. The smaller the resting position is, the smaller the depth of field range is. This results in a very small depth of field range, especially for the assumed short resting position of 30 mm. Should the depth of field of the eye be sufficient to focus objects over a large distance range, as assumed by Charman and Tucker, then according to our model a more distant resting position is probable. For example, with a resting position of 250 mm, a depth of field ranging from 148 mm to more than 1 m can be well resolved. Since the goldfish in Neumeyer's training experiment achieved a visual resolution of approximately 15′ for an object distance of approximately 6 mm, it can be assumed that the near point of the goldfish is located close to the eye. In this case, the depth of field would be very limited and a lens-shifting accommodation mechanism would be highly advantageous. Furthermore, the simulation results allow a clear quantitative prediction of the expected axial lens shift, which is necessary to ensure a resolution smaller than the inter-cone distance over an extended depth of field. The results of the determination of the depth of field and the potential lens shift can be used directly for the preparation of experimental verifications and validations. | To further extend our understanding of aquatic vision, we introduce a complete optical model of a goldfish eye, which comprises all important optical parameters for the first time.
In particular, a spherical gradient index structure for the crystalline lens was included, thus allowing a detailed analysis of image quality regarding spot size and wavelength-dependent aberration. The simulation results show that our realistic eye model generates a sufficient image quality, with a spot radius of 4.9 μm, which is below the permissible cone radius of 5.5 μm. Furthermore, we optically simulate potential mechanical processes of accommodation and compare the results with the contradictory findings of previous experimental studies. The quantitative simulation of the accommodation capacity shows that the depth of field is strongly dependent on the resting position and becomes significantly smaller when shorter resting positions are assumed. This means that, to enable extended depth perception with high acuity, the goldfish would require an adaptive, lens-shifting mechanism. In addition, our model allows a clear prediction of the expected axial lens shift, which is necessary to ensure a sufficient resolution over a large object range.
516 | Concept embedding to measure semantic relatedness for biomedical information ontologies | Semantic relatedness is a generalization of semantic similarity, referring to the determination of whether two biological terms are related . How semantic relatedness or semantic similarity is calculated is linked to core methods of various technologies, such as bioinformatics, which can separate biological terms into meaningful groups, along with literature-based information retrieval in medical informatics . Calculation methods have been applied in various biomedical fields. Boyack et al. clustered numerous biomedical publications according to their similarity using biological terms. Mathur et al. studied disease similarity levels using methods for finding semantic similarity levels between biological processes. Guo et al. used semantic similarity measures to describe direct or indirect interactions within human regulatory pathways. Shah et al. visualized self-organizing maps of biomedical document clusters based on disease concepts. Semantic relatedness is a measure independent of the hierarchical ontologies of biological terms. It has the advantage of allowing searches of the large-scale Metathesaurus and of semantic networks developed through the integration of various ontologies , as semantic relatedness is measured by quantifying shared information content or by using context vectors of biological terms, and does not depend on a hierarchical ontology and its "is a" relationships, as semantic similarity does . For instance, the Lesk method calculates the number of common words among the definitions of concept pairs, while the Vector method generates a first-order co-occurrence matrix for each word and then builds a second-order co-occurrence matrix, based on extended definitions of biological terms, as a gloss vector . The gloss vector is used to calculate the semantic relatedness score between biological terms. These methods are indispensable for improving the similarity calculation used during searches of the Unified Medical Language System, which is integrated with various ontologies. However, these methods are known to have lower accuracy than those using hierarchical relationships. Accordingly, this low accuracy means that the vectorized concepts show no significant differences when they are utilized . The low accuracy occurs because the concept definition resources for calculating semantic relatedness from UMLS are inadequate. Insufficient coverage is also known to be a critical obstruction in large-scale biomedical text processing methods . The Vector method attempts to mitigate this problem by extending definition information using the available path information of concepts, but this has had a limited effect; only approximately 6.5% of concepts from the 2015AB version of UMLS have definition information when path information is absent . Algorithms are another factor. The weakness of the Lesk method is that it is heavily dependent on dictionary descriptions, even though overlap does not necessarily arise when biological terms are semantically related . Although the Vector method is a fine-grained measure that resolves the problems of the Lesk method, it is also a bag-of-words approach that largely ignores the sequence information of words in a sentence . Thus, it essentially has the limitation of low accuracy when determining the semantic relatedness of biological terms. Here, we propose a concept-embedding model for UMLS semantic relatedness calculations to improve the performance of previous
computational semantic relatedness methods based on vectorization with extended data for UMLS.This method consists of generating inexistent UMLS concept definitions as features and creating vectors of concept unique identifiers as paragraph vectors.Our method is based on the distributed representations of sentences and documents .We also use a scoring function to determine the relatedness measures through the cosine similarity values between CUI vector pairs generated from our model.Using this preprocessing method and model, we confirm that our approach has better coverage even without path or definition information in UMLS, and we obtain better relatedness measures compared to that by the Vector method .Therefore, our extended definition dataset and semantic relatedness calculation model will contribute to the development of biomedical information retrieval technology in the UMLS Metathesaurus.Our method is based on the concept of distributed representations of sentences and documents by Le et al. .We also referred to work by Mclnness et al. entitled UMLS-Interface and UMLS-Similarity to find limitations and devise the validation process of our method .The Unified Medical Language System was developed as an integrated knowledge resource of terms in the medical field .Various measures pertaining to the relationships among UMLS terms have been implemented in biological studies, such as disease similarity predictions based on biological processes, pharmacovigilance signal detection, and the construction of biomedical question-answer systems .Identifying the relationships among concepts corresponding to the terms from UMLS is a promising approach by which to understand the relationships among diverse biological concepts.For this purpose, the UMLS Metathesaurus contains information about various biomedical concepts and the relationships between them.The Metathesaurus uses a unique identifier when a concept is added, and it places concepts at the following four levels.Concept Unique Identifier: An identifier for all linked strings from source ontologies that indicate the same meaning,Lexical Unique Identifier: Links for groups of strings that have lexical variants,String Unique Identifier: Each unique string that appears in each language in the UMLS Metathesaurus,Atom Unique Identifier: Each of the strings in each of the source ontologies,One CUI can contain multiple LUIs, SUIs and AUIs.Each unique AUI is stored in a MRCONSO file that stores concept names and sources based on the UMLS Rich Release Format.Definitions, which are the attributes of each unique AUI, are stored in a MRDEF file.The structure of the unique identifiers and relationships with regard to the RRF file) are depicted in Fig. 1.The relationships between UMLS concept pairs fall into two distinct categories according to the definitions of Pedersen et al. 
: similarity measures and relatedness measures.Similarity measures between a concept pair quantify their closeness in an ontological hierarchy to represent how alike they are.Given that similarity measures are based on path information between biological concepts, they are regarded as ontology-dependent measures .While the semantic similarity measure calculates how much two concepts are alike considering an “is a” relationship, semantic relatedness measures between a term pair quantify how the terms are semantically related based on shared information contents between two terms .Because relatedness measures are based on the definition text information of the concepts, they are regarded as ontology-independent measures .Relatedness measures have several benefits over similarity measures .One evident benefit is the wider coverage of the relatedness measures.Similarity measures can be calculated only if selected concepts meet the requirements, i.e., path information between the concepts exists.In contrast, even in cases where the concepts have no path information, the relatedness measure can be calculated when the concepts have definition information in UMLS .Moreover, relatedness measures consider the semantic information of the definition texts, which is not considered by similarity measures .McInnes et al. developed UMLS-Interface in the form of a Perl package program to provide an API with which to explore locally installed instances of UMLS .The program can be utilized to find information about a CUI, such as its ancestors, depth, definition, extended definition, and all paths to the root and semantic types.The current version of the program is 1.51, and it contains 29 utility programs.UMLS-similarity is a Perl package program which provides new similarity/relatedness measures for comparison with existing methods based on UMLS .The program provides similarity/relatedness scores that are computed from UMLS by extracting the concept information and path according to given method and source.This program includes an application programming interface and a command line interface for users.The current version of the program is 1.47.Semantic similarity and relatedness have been defined in various works, and many methods have been studied accordingly.Rodriguez and Petrakis conducted a study to compute semantic similarity levels from different knowledge sources.They also discussed methods by which to calculate cross-ontology similarity levels using different knowledge sources .Pirró defined similarity and relatedness based on the ontology structure and presented a method to compute their scores."According to Pirró's definition, similarity considers only the subsumption relationship between the two concepts, whereas relatedness considers a broader range of relationships .Banerjee proposed a Lesk measure to determine the relatedness between two concepts.In the Lesk measure, the relatedness between two concepts is determined by the overlap between their gloss definition texts.That is, the relatedness between the two concepts increases as the definition text becomes similar.Fig. 
2 shows a simplified example of the calculation of semantic relatedness via the Lesk method. In the Lesk method, the relatedness score is given by the sum of the squares of the lengths of the overlapping word sequences. In practice, the definitions of the CUI terms themselves as well as the definitions of their related terms are considered. However, because the Lesk method involves simple calculations based on the number of overlapping words, it does not effectively represent the similarity between two definition texts. To overcome the disadvantages of the aforementioned Lesk measure, Patwardhan proposed the Vector measure. In the Vector method, the relatedness between two concepts is calculated as the cosine similarity between their gloss vectors in the word space, which is constructed from the co-occurrence matrix of the gloss definition texts .
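A minimal sketch of the two scores just described is given below, using toy definitions. Real implementations such as UMLS-Similarity handle stop words, multi-word phrase overlaps and second-order co-occurrence vectors far more carefully; here the Lesk score is reduced to single-word overlaps and the gloss vector to a simple bag-of-words count.

import re
from collections import Counter
from math import sqrt

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

def lesk_overlap_score(def_a, def_b):
    """Simplified Lesk-style score: the full measure sums the squared lengths of
    shared multi-word phrases; here every overlap is a single shared word
    (length 1), so the score reduces to the number of common word types."""
    return len(set(tokens(def_a)) & set(tokens(def_b)))

def gloss_vector_cosine(def_a, def_b):
    """Cosine similarity between bag-of-words 'gloss vectors' of two definitions."""
    va, vb = Counter(tokens(def_a)), Counter(tokens(def_b))
    dot = sum(va[w] * vb[w] for w in va.keys() & vb.keys())
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

d1 = "A chronic disease of the heart caused by narrowing of the coronary arteries."
d2 = "Disease of the coronary arteries leading to a reduced blood supply to the heart."
print("Lesk-style overlap score:", lesk_overlap_score(d1, d2))
print("Gloss-vector cosine     :", round(gloss_vector_cosine(d1, d2), 3))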
Pointwise mutual information is a measure of the association between two features in information theory or statistics. It is conceptually similar to semantic relatedness. While statistical association and semantic relatedness are not equivalent concepts, given that a pair of closely related concepts does not necessarily co-occur frequently in text, previous studies have applied PMI to compute semantic relatedness. Pesaranghader used PMI as an adjunct feature to cut off co-occurrence data, improving gloss Vector-based relatedness measures . Word2Vec is a natural language processing method that vectorizes words by preserving the co-occurrences of frequently appearing words . It is based on the distributional hypothesis, which states that words appearing in similar contexts are similar in meaning . Word2Vec has two variants: the Continuous Bag of Words and the Skip-Gram methods. CBOW predicts a word in the center using the surrounding words, whereas Skip-Gram predicts the surrounding words using the center word. In biomedical domain tasks, Skip-Gram reportedly outperforms CBOW . Word2Vec has also been influential beyond general NLP for calculating semantic similarity scores in the biomedical domain. Yu et al. proposed a method of modifying the context vector representations of medical subject heading terms by using the additional information of UMLS and the MeSH hierarchy to improve the semantic similarity between terms . Studies of distributed representation showed that Word2Vec is useful, given its simplicity and the versatility of its vector representation, for determining medical concept similarity levels and for query expansion and literature-based discovery in medical informatics . In addition, the results of similarity-embedded vectors from biomedical datasets without relationship information were comparable to those in previous studies . Glove is another word-embedding method . Glove preserves the co-occurrence information of words, similarly to Word2Vec. The inner product of embedded word vectors in Glove is equal to the logarithm of the probability of co-occurrence. In other words, Glove converts words into vectors by preserving the ratio of the co-occurrence information. Glove was intended to improve on a limitation of Word2Vec, which learns co-occurrences only within the window specified by the user rather than over the entire corpus; the objective function of Glove is defined so as to reflect the statistical information of the entire corpus. However, Word2Vec with a specific language model was observed to perform better than Glove when systematically comparing the similarity and the relatedness of biomedical concepts in the biomedical domain . Doc2Vec is a related embedding method that learns fixed-length representations for variable-length pieces of text, such as sentences, paragraphs, or entire documents . The CBOW and Skip-Gram architectures of Word2Vec correspond to the standard Paragraph Vector with Distributed Memory and the Paragraph Vector with Distributed Bag of Words, respectively. The objective of Doc2Vec is to improve the classification performance between labels by reflecting the characteristics of the input texts. The performance of PV-DM is generally better than that of PV-DBOW, as with Word2Vec . The learning method of Doc2Vec consists of receiving the list of words and the labels of each sentence and then updating a vector representation for each label and each word set.
Yao et al. suggested a method which uses Doc2Vec to obtain the best classification performance for traditional Chinese medical records . The AZTEC platform is an analysis tool for processing multi-omics data and for similarity calculations between digital resources using Doc2Vec . To the best of our knowledge, none of the methods discussed above is intended to compute the semantic relatedness between concepts of UMLS. Therefore, in this manuscript, we propose a method which complements the concept definitions that are not included in UMLS using an external knowledge base and calculates the semantic relatedness between the concepts. The proposed method solves the coverage limitations of conventional vector-based UMLS relatedness measures and shows improved performance. We applied two strategies to address the limitations of vector-based UMLS relatedness measures. First, we extended the definition information of the CUI terms using the Wikipedia database to improve the coverage of the similarity model. Second, we adopted document embedding for the vector representations of the CUI terms, rather than the bag-of-words approach used by the Vector method, to improve the performance of the relatedness measure . The UMLS database contains CUI concept definition information for only a small portion of CUI concepts, limiting the potential coverage of relatedness measures. In the case of UMLS2015, only 162,973 CUI concepts contain definition information. We used two methods to expand the CUI definition texts not offered by the UMLS database. First, we utilized the set of terms related to the given CUI term to obtain text information, rather than utilizing only the given term itself. Relationship information was obtained from UMLS. We used the criteria of Liu et al. to distinguish proper related terms; only known relationships between CUI concepts were used . We then selected the hierarchical relationships from UMLS, which consist of parent/child and broader/narrower relationships. Secondly, we utilized Wikipedia as a source of context texts for the CUI concepts . Wikipedia has been adopted as a source for various vector embedding models, and it has been confirmed as feasible for use with document embedding methods . In this study, we used both the UMLS definition texts and Wikipedia articles to derive CUI embedded vectors. We used open-source Python APIs to parse the content of Wikipedia, and pages from Wikipedia were parsed during August of 2017. Articles from Wikipedia were extracted with the following priority, as sketched in the code below: (1) if there was a Wikipedia article whose title exactly matched a CUI term, the corresponding Wikipedia article was extracted; (2) if there was a redirection link that matched a CUI term, the redirected article was extracted; (3) if there was any search suggestion for a CUI term, even in the absence of a precisely matching article, the first suggestion was extracted.
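The sketch below illustrates this priority order with the third-party 'wikipedia' package (pip install wikipedia). The original work only states that open-source Python APIs were used, so the library choice and the exact calls here are an assumption for illustration, and the error handling is reduced to the essentials.

import wikipedia  # third-party package: pip install wikipedia (assumed choice)

def lead_paragraph_for(term):
    """Return the lead section for a CUI term following the priority above:
    (1)/(2) an exactly matching or redirected article, then (3) the first
    search suggestion; None if nothing usable is found."""
    try:
        page = wikipedia.page(term, auto_suggest=False)   # exact title; redirects followed
        return page.summary
    except wikipedia.exceptions.DisambiguationError as err:
        return wikipedia.summary(err.options[0], auto_suggest=False)
    except wikipedia.exceptions.PageError:
        suggestions = wikipedia.search(term)              # (3) fall back to search
        if suggestions:
            return wikipedia.summary(suggestions[0], auto_suggest=False)
        return None

print(lead_paragraph_for("Thyrotoxicosis factitia"))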
The Wikipedia article text has features which differ from those of the CUI definition text, making it inappropriate to combine the two texts directly into the expanded definition text. First, Wikipedia articles are much larger on average than CUI concept definition texts, and they are also more diverse in size. Second, the range of information contained in the articles is not uniform. Articles on well-known and general subject matter tend to be described from more diverse perspectives, while articles on uncommon subject matter tend to contain only brief information. For example, the Wikipedia entry for 'Coronary artery disease' consists of eleven sections, including the causes, pathophysiology, diagnosis, screening, and prevention of the disease. In contrast, the Wikipedia article on thyrotoxicosis factitia, a type of hyperthyroidism, consists of only two paragraphs. To reflect the Wikipedia and CUI concept definition information uniformly, we collected only the lead paragraph text instead of the full text of each Wikipedia article. Although Wikipedia documents are created by users' free participation, the general structure of each document follows a certain format, which is also specified in the Wikipedia user guidelines . The lead paragraph of a typical Wikipedia article presents the definition and gives a brief introduction to the topic . Because the lead paragraphs of the articles contain information similar in quality to CUI concept definitions and are uniform in size, we considered lead paragraphs to be appropriate for combining the two texts into an expanded definition text. Fig. 6 shows examples of UMLS definition texts and the lead paragraphs extracted from Wikipedia for certain CUI terms. Even in the case of uncommon terms such as 'Thyrotoxicosis factitia', for which the CUI definition text does not exist, lead paragraphs can be extracted from Wikipedia. Utilization of the text information from Wikipedia improved our dataset in terms of both quantity and quality. We extracted 946,785 extended definition texts from Wikipedia with this procedure. Finally, we obtained the text information of the CUI terms by concatenating the texts obtained from the two methods above: the title, the UMLS definition, and the lead paragraph of the Wikipedia article for the set of terms related to the given CUI term. Word embedding, a method of representing a word with a vector, rests on the assumption known as the distributional hypothesis . General and basic word-embedding methods derive from the bag-of-words concept. These methods, however, cannot reflect the semantic information of words due to the absence of word order information. This is a limitation that hinders our understanding of the differences or similarities among the words constituting sentences . Therefore, many researchers have devised word-embedding methods which represent the meaning of the word itself in a reduced multidimensional space. In chronological order, these methods are the Neural Net Language Model, the Recurrent Neural Network based Language Model, and the Continuous Bag-of-Words and Skip-Gram architectures. A document embedding method capable of expressing semantic similarity, based on the above methods, was also developed using dense vectors of variable-length sentences, paragraphs, and documents . We analyzed the relationship between a CUI and the definition of the CUI and concluded that it is a label relationship, not an inclusive relationship: we found that the CUI is a label and that the definition of a CUI is a feature.
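The following sketch illustrates this labelling idea with gensim's Doc2Vec in PV-DBOW mode: each (extended) definition is a tagged document whose single tag is the CUI, and the relatedness of a CUI pair is the cosine similarity of the resulting paragraph vectors. The original implementation used DeepLearning4J, so this is an equivalent sketch, not the authors' code; the three toy definitions and CUI-style tags are placeholders, while the vector size, window, minimum count and learning rate mirror the settings reported further below.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Each extended definition becomes a TaggedDocument whose only tag is the CUI.
# The tags and texts below are placeholders, not real UMLS records.
corpus = [
    TaggedDocument("narrowing of the coronary arteries reducing blood flow to the heart".split(), ["CUI_A"]),
    TaggedDocument("death of heart muscle caused by blocked coronary blood flow".split(), ["CUI_B"]),
    TaggedDocument("hyperthyroidism caused by deliberate intake of thyroid hormone".split(), ["CUI_C"]),
]

model = Doc2Vec(
    corpus,
    dm=0,             # PV-DBOW
    vector_size=300,  # "layer size" of 300
    window=5,
    min_count=1,
    alpha=0.025,      # selected learning rate
    epochs=30,
)

# Relatedness score of a CUI pair = cosine similarity of their paragraph vectors.
print(model.dv.similarity("CUI_A", "CUI_B"))
print(model.dv.similarity("CUI_A", "CUI_C"))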
concluded that it is a label relationship, not an inclusive relationship.We found that the CUI is a label and that the definition of a CUI is a feature.We then utilized a previous method based on state-of-the-art word and document embedding techniques.We used a UMLS definition as a paragraph.All words in a paragraph were processed by a continuous word-embedding method with the Skip-Gram architecture.We built a model for obtaining a paragraph vector from embedded words in a paragraph by PV-DBOW.The CUI was used as the paragraph ID in the model.As a model development framework, we used DeepLearning4J 0.9.1 and modified it to achieve optimal performance for solving the problem at hand .Co-occurrence information between the words in the definitions is preserved by embedding the words using Word2Vec before the model is trained by PV-DBOW .In order to compensate for the differences between UMLS and Wikipedia, we used AdaGrad to apply different gradients among the features .It works as a normalization function by decreasing the effective learning rate of the weights with a high gradient value or by increasing the effective learning rate of slightly updated weights or weights with a low gradient value.We defined the number of words to be processed at one time in the definitions of the CUIs as 1000 and reflected the number in the batch size, with a window size of 5.The word count in the corpus was set to 1 to process all words in the definitions.Subsequently, we optimized the hyperparameters of the learning rate, layer size and epoch.The learning rate is the step size for each update, the layer size is the number of dimensions in the vector space, and the epoch refers to all of the learning samples that have been used once .There were ten iterations for batch updating of the data.We set the number of epochs to 1 and adjusted the single hidden layer size from 100 to 500 within 100 steps.We also used a range of different learning rates from 0.001 to 0.03 with a 0.001 step size, and we selected a layer size of 300 and a learning rate of 0.025 to minimize the loss function of PV-DBOW .We used 3, 5, 10, 20 and 30 epochs.Finally, we compared the resulting coefficients with the benchmark set with 30 medical term pairs by means of the Spearman’s rank correlations .We conducted experiments to assess the coverage expansion, the accuracy improvements and result significance levels of the proposed method.In order to compare the coverage of our method with that of the Vector method, classified as the relatedness measure, we randomly generated CUI pairs and compared a number of existing relatedness scores.We demonstrate the performance of our method by comparing vector measures using Spearman’s rank correlation with manually ranked CUI pairs.Our method has 4.77% more coverage compared to the Vector method.We also found relatively higher performance by the proposed method as compared to the Lesk and Vector methods based on the benchmark set.The result process was derived from Liu’s proposed process .Finally, we validated the result significance levels of our model by comparing the benchmark set with a random set.CUIs with path information in UMLS2015AB, UMLS-Similarity 1.45 and UMLS-Interface 1.47 were applied to measure the performance of our method.The detailed results are discussed below.In this experiment, we attempted to identify the coverage when calculating a relatedness score that had not been calculated before using the expanded definition from Wikipedia.The vector method calculates the relatedness score 
by aggregating the definition information of a CUI and extending the definition information using the available path information from the CUI.We created a total of 998,543 definitions.By extending the definition information with Wikipedia, we obtained 964,785 new definitions after combining 162,973 existing definitions and accounting for overlap.For a coverage difference comparison, we generated ten randomly selected sets of 1000 CUI pairs.We summed number of scores of 0, indicating no definition information in the vector method, and NaN, indicating definition information identical to the vector cases in our method, according to each group.We then calculated the average number of results for all trials.The results show that we have approximately 4.77% more coverage than that by the Vector method.The results demonstrate the importance of complementing the incomplete UMLS, with improved features when comparing each CUI definition .In particular, we used Wikipedia for features, meaning that each definition is potentially reflected by various authors.The potential to reflect its characteristics when embedding the CUI is assumed.This assumption would also include the possibility of more clearly calculating the semantic relatedness of non-related CUIs.Thus, the results here can serve as an important resource in the biomedical text mining field.Semantic relatedness does not simply measure the “is-a” relationship between two biological terms, instead aiming mainly to measure contexts and meanings.Therefore, the development of a semantic-centered comparison method similar to the human thinking process is necessary .To achieve this purpose, we validated that our model shows results similar to those of biomedical benchmark data created by human experts.Benchmark data which contains the semantic relatedness between two biological terms cannot be defined using a generic score and instead uses a rank that is estimated to be relatively close to other term pairs .With regard to measuring semantic relatedness, this process cannot be done by comparing the proposed method with the raw scores of term pairs evaluated by human experts because with semantic relatedness, the two scores tend to change simultaneously, but not in the same way at the same rate compared to the proposed method ."Therefore, we used Spearman's correlation coefficient, which is based on ranked values for each score, to evaluate the relationship between ranks, which represent a sequential variable.We compared the performance of our method with Spearman’s rank correlation coefficients among our model, previous methods, and a benchmark set to confirm the performance improvement.First, we used a CUI pair list from a dataset by Pedersen et al. 
as a benchmark set .This dataset consists of 30 medical term pairs.Each pair was manually evaluated by nine medical coders and three physicians at the Mayo Clinic.Evaluation scores are from, practically synonymous, to, unrelated on a 1.0 scale.We used the Spearman’s rank correlation coefficients among Liu’s relatedness measure results as basis ranks to test the performance of the proposed method .We compared the coefficient results from the Lesk and Vector methods with those from our method.Table 3 contains the ranks from each method.Our method shows significantly higher correlation coefficients with the rank of the benchmark set compared to the Lesk or the Vector method, indicating that our method is capable of higher performance than previous relatedness measure methods.The second benchmark set consists of 36 biomedical term pairs extracted from similarity results with eight medical experts who used evaluation scores ranging from 0 to 1 .We mapped the biomedical terms to the CUIs using Metamap, which maps input biomedical terms to a CUI based on the UMLS2015AB database .When a term maps to more than one CUI, we selected a CUI which belongs to the Disease or Syndrome category in terms of the UMLS semantic type.For example, adenovirus is mapped to C0001483, C0001486, and C1552907, and we chose C0001486.Depending on whether the semantic types of the CUIs are duplicated or not in T047, we used a CUI which has a name most similar to that of the concept of a term.For example, antibiotics are mapped to C0003232, C0003237 and C3540704, and all CUIs belong to the semantic type of antibiotics.In this case, we selected C0003232, which has a name most similar to the term.In addition, if a pair of CUIs is duplicated and one of the terms is mapped to more than one CUI, we selected another CUI of the term."For example, Down's syndrome is mapped to C00013080 and Trisomy 21 is mapped to C001380 and C3537167.In this case, Trisomy 21 is mapped to C3537167.Table 4 summarizes the results from the Lesk, Vector and concept-embedding methods after the generation of the CUI pairs.Table 4 also indicates that the concept-embedding method has better performance, as shown in Table 3.Our results show that applying state-of-art technologies in the field of deep learning can improve performance outcomes.We used PV-DBOW, rooted in the Skip-Gram technique of Word2Vec, to develop the model.We were able to improve the performance with improved Skip-Gram algorithms, unlike the Lesk and Vector methods.For instance, multi-prototype Skip-Gram, which maps identical words to different vectors if they are used with different meanings, or adaptive Skip-Gram, which reflects semantic differences from the same words depending on the relationships between other words in a sufficient number of processed corpuses, can be utilized to improve the performance .We conducted this experiment to verify the significance of the results of the proposed method.We compared the score distributions from the model between a benchmark set and random sets.The score distributions illustrate that the sets are highly distinguishable, which implies that the model is statistically significant.We verified our method based on Wang et al., and they validated their method with a benchmark set and with random sets .The benchmark set consisted of 70 highly similar pairs manually curated from 47 diseases .Similar to their approach, we validated our model by calculating relatedness scores on a benchmark set and another 7000 random sets.We generated a random set with 
70 pairs for 1000 iterations from the MRCUI table of UMLS2015AB.We assumed that if the pairs have high similarity, high relatedness scores would also be assigned on average, while if the pairs are randomly selected, scores lower than those of pairs with high similarity would be assigned.We examined average relatedness scores from both sets on our model.As a result, the average relatedness score of the benchmark set was 0.205 and that of the random sets was 0.021.In Fig. 8, the differences between the benchmark set and the random sets are clear.This confirms that our model generates significance scores if the pairs have some degree of similarity between the pairs.This study proposes a concept-embedding model for UMLS semantic relatedness calculations that uses UMLS concept definitions as features.The main contribution of this research lies in how the proposed method calculates reliable semantic relatedness between UMLS concept pairs regardless of whether the concepts have path information.Moreover, compared to existing context-based relatedness measures, we obtained improved coverage by collecting more extensive context texts.Compared to state-of-the-art methods, our method produces more extensive coverage and shows better performance outcomes on UMLS sets.We also adopted Wikipedia as a knowledge base to extend the context text.In the future, it would be meaningful to utilize other extended corpora to obtain rich text sources.Furthermore, it is notable that our semantic relatedness model can potentially be implemented for the biomedical information retrieval of UMLS Methathesaurus terms.In order to validate our method, we compared our coverage and performance results with those from previous studies.We demonstrated that we can resolve the limited coverage problem unaddressed by the Vector method.We also found that we can obtain better results by comparing Spearman’s rank correlations between the scores by previous methods and that by our model.In the results, we show that the coverage improved by 4.77% on average through a random CUI pair generation test, and we prove the superior performance of our model compared to the correlation coefficients of existing relatedness measure methods.In conclusion, the proposed UMLS semantic relatedness calculation method is a promising method for finding relationships between UMLS concept pairs.Moreover, while we have focused on UMLS similarity in this study, we suggest that our method can also be applied to calculate the degrees of similarity between other biomedical terms.The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | There have been many attempts to identify relationships among concepts corresponding to terms from biomedical information ontologies such as the Unified Medical Language System (UMLS). In particular, vector representation of such concepts using information from UMLS definition texts is widely used to measure the relatedness between two biological concepts. However, conventional relatedness measures have a limited range of applicable word coverage, which limits the performance of these models. In this paper, we propose a concept-embedding model of a UMLS semantic relatedness measure to overcome the limitations of earlier models. We obtained context texts of biological concepts that are not defined in UMLS by utilizing Wikipedia as an external knowledgebase. 
Concept vector representations were then derived from the context texts of the biological concepts. The degree of relatedness between two concepts was defined as the cosine similarity between corresponding concept vectors. As a result, we validated that our method provides higher coverage and better performance than the conventional method. |
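The pipeline described in the row above (expanding each CUI definition with a Wikipedia lead paragraph, embedding the expanded texts with PV-DBOW using the CUI as the paragraph ID, scoring concept pairs by cosine similarity, and validating against expert-ranked pairs with Spearman's rank correlation) can be sketched in a few lines of code. The sketch below is illustrative only and is not the authors' DeepLearning4J 0.9.1 implementation: it assumes Python with gensim (>= 4.0), NumPy, SciPy, and the third-party `wikipedia` package as a stand-in for parsing a Wikipedia dump, and the CUIs, definition texts, and benchmark scores shown are toy placeholders.

```python
# Illustrative sketch only (not the authors' DeepLearning4J implementation): build expanded
# CUI definition texts, embed them with PV-DBOW, and score a small benchmark with Spearman's rho.
import numpy as np
import wikipedia                                    # stand-in for parsing a Wikipedia dump
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from scipy.stats import spearmanr

def wikipedia_lead(term):
    """Fetch a lead-paragraph-like summary following the stated priority:
    exact or redirected title first, otherwise the first search suggestion."""
    try:
        return wikipedia.summary(term, auto_suggest=False)          # exact title or redirect
    except (wikipedia.exceptions.PageError, wikipedia.exceptions.DisambiguationError):
        hits = wikipedia.search(term)
        if hits:
            try:
                return wikipedia.summary(hits[0], auto_suggest=False)  # first suggestion
            except wikipedia.exceptions.DisambiguationError:
                return ""
    return ""

# Toy stand-ins for UMLS content (CUI -> preferred term / definition / related-term text).
umls_terms = {"C0010054": "Coronary artery disease",
              "C0040156": "Thyrotoxicosis",
              "C0020550": "Hyperthyroidism"}
umls_defs = {cui: "" for cui in umls_terms}          # UMLS definition text, where available
related_text = {cui: "" for cui in umls_terms}       # parent/child, broader/narrower term text

# 1) Expanded definition text = term + UMLS definition + Wikipedia lead + related-term text.
corpus = []
for cui, term in umls_terms.items():
    text = " ".join([term, umls_defs[cui], wikipedia_lead(term), related_text[cui]])
    corpus.append(TaggedDocument(words=text.lower().split(), tags=[cui]))  # CUI = paragraph ID

# 2) PV-DBOW document embedding (dm=0); dbow_words=1 also trains Skip-Gram word vectors,
#    mirroring the word-embedding step described above. Hyperparameters follow the text:
#    300-dimensional vectors, window size 5, learning rate 0.025, minimum word count 1.
model = Doc2Vec(corpus, dm=0, dbow_words=1, vector_size=300, window=5,
                min_count=1, alpha=0.025, epochs=20, seed=42)

def relatedness(cui_a, cui_b):
    """Cosine similarity between two CUI paragraph vectors."""
    a, b = model.dv[cui_a], model.dv[cui_b]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# 3) Evaluate against a manually ranked benchmark by Spearman's rank correlation.
#    The pairs and expert scores below are placeholders; in practice, the 30 expert-scored
#    Pedersen term pairs (or the 36-pair set) would be used.
benchmark = [("C0010054", "C0040156", 1.0),
             ("C0040156", "C0020550", 3.5),
             ("C0010054", "C0020550", 2.0)]
model_scores = [relatedness(a, b) for a, b, _ in benchmark]
expert_scores = [s for _, _, s in benchmark]
rho, p = spearmanr(model_scores, expert_scores)
print("Spearman's rho:", rho)
```

In a full run, the corpus would contain every CUI with UMLS or Wikipedia text, and pairs for which neither source yields any text would remain uncovered, which is the coverage gap the Wikipedia expansion is intended to close.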
517 | Spatial and temporal risk as drivers for adoption of foot and mouth disease vaccination | Uncertainty surrounding health decisions stems from unknown gains in personal wellbeing relative to the perceived costs of undertaking the intervention.This is specifically applicable to vaccination decisions.For example, the decision to be vaccinated against seasonal influenza weighs perception of individual risk of disease against the direct costs of the vaccination, the indirect costs of the time necessary to be vaccinated, and any concerns about adverse vaccination effects .The implications of individual vaccination in contributing to population immunity further complicates the decision.Importantly, perceptions of disease risk are dynamic and may markedly increase as disease outbreaks are reported closer to the individual .However, by this time much of the potential for inducing population immunity is lost, and vaccination benefits may only extend to the recipient.Understanding the drivers of vaccination decisions and how these are influenced by proximity of perceived risk is a significant gap in vaccine knowledge relevant to increasing vaccination and decreasing the burden of infectious disease.We chose to address this knowledge gap by estimating pastoralist adoption of a livestock vaccination against foot-and-mouth disease.Similar to seasonal influenza, FMD is episodic and not precisely predictable in either spatial or temporal spread or in its severity , thus creating uncertainty of disease risk.Furthermore, there are multiple FMD virus serotypes with each serotype characterized by evolving strains.FMD vaccines vary in their effectiveness depending on the “match” between the vaccine and the circulating serotype and strain , require repeated immunization to achieve optimal protection, and are similar to seasonal influenza vaccines in having effects at both the individual and population levels .Unlike human vaccination or vaccination for zoonotic livestock diseases that have human health implications, the decision to vaccinate for FMD solely fixates on livestock health, and thus focuses our analysis on externally influenced, dynamic risk perceptions .Importantly, in households that are characterized by high dependence on livestock, vaccination decisions have broad impacts on household income and wealth, food security, and expenditures on human health and education .For FMD specifically, reductions in milk production, lost animal draught power, and closure of livestock markets threaten household income and nutritional security .We surveyed 432 pastoralist households in northern Tanzania to identify determinants of FMD vaccination decisions relative to temporal and spatial risk based on two immunization strategies.We extended a commonly accepted survey method for inferring preferences, willingness to pay, to elicit decision responses for two hypothetical vaccination scenarios.The first is “routine” vaccination in which households would vaccinate cattle biannually, a proactive and planned approach that would support immunity at population scale.The second is “emergency” vaccination in which households would vaccinate in the face of a current nearby outbreak, a situation that presents heightened, individualized risk introduced by spatial proximity and temporal immediacy.In each scenario, the stated efficacy of the vaccine was also varied to reflect the uncertainty of the vaccine matching process and to assess sensitivity to improvements in vaccine risk reduction.Herein we present the results of 
the study and discuss the findings in the context of identifying approaches to influence household vaccine uptake and subsequent improved disease control.The survey questionnaire that was used for data collection targeted key decision makers in cattle owning households to identify behavioral responses and to increase accuracy and precision of those responses.The cross-sectional survey was conducted in April through July 2016 in the Serengeti and Ngorongoro districts of northern Tanzania and contained questions designed to capture household characteristics hypothesized to influence vaccination WTP, including household demographics, livestock management practices, and knowledge of and history with FMD.Within the two districts, a two-stage sampling procedure randomly selected first clusters, then households with the Serengeti district more intensively sampled for analysis purposes .Design and piloting of the survey instrument followed standard statistical practices .Informed consent was obtained after the nature and possible consequences of the study had been explained by local enumerators who were trained and monitored throughout the collection process.Households in both study districts engage in livestock and agricultural activities for subsistence and income, with some additionally earning income from off-farm activities.Households practice open grazing and own 20 cows compared to the national average of 4 cows , in addition to owning sheep, goats, and poultry.Consistent with previous estimates of FMD occurrence in these areas, 69% of the households reported infection within the past year and expected reductions in milk production during outbreaks.All households recognized the clinical signs of FMD, but of the 19% who had vaccinated for any livestock disease in the past year, none reported vaccinating for FMD.This reflects the situation of FMD in East Africa as characterized by poor surveillance systems and limited availability of FMD vaccines.The absence of FMD vaccines in Tanzania during the time of the study led to the use of the stated preference methodology to infer the value households place on vaccination.We used the double bounded dichotomous choice contingent valuation method , which is a standard survey approach that can jointly analyze willingness to pay to adopt a product and determine factors underlying adoption behavior.For both the routine and emergency vaccination strategies, households received an initial, binary choice question eliciting WTP for a single vaccine dose if it protects one cow from FMD over a 6-month period.Households then received a follow up, second binary choice question raising or lowering the offered price depending on the response to the first.Each respondent received the same initial bid price, 2000 Tsh for the routine vaccine and 4000 Tsh for the emergency vaccine, with the follow up bids for the routine vaccines ranging between 500 Tsh and 3500 Tsh, and the emergency between 500 and 7500 Tsh.The binary choice question format of this model is cognitively easier than other question designs by removing the burden on the respondent to formulate a price or choose between multiple hypothetical scenarios .Bias can exist with the double bounded model if the household’s WTP value changes between responding to the first bid and the second, follow up bid question .Use of the interval data model can remove some of this concern by providing robust WTP estimates .Additional pretesting of bid levels and referencing related vaccine prices reduces large deviations 
from the potential market price range .Compared to the single bounded model that only asks one binary choice question, the gain in asymptotic efficiency of the double bounded approach outweighs the potential bias from anchoring on the initial bid .The empirical strategy is modeled according to expected utility theory and estimated using the maximum likelihood estimator.The average emergency WTP is anticipated to be higher than the routine WTP by an amount proportional to the change in risk reduction between the two .In an emergency situation, the immediacy of the risk increases the individual perceived value of vaccination, whereas the delayed reward of routine vaccination reduces its relative value .However, households can reasonably plan for biannual vaccination costs whereas the unanticipated income shock in an emergency, coupled with increased susceptibility, introduces uncertainty towards the relative gain from vaccination .We presented the routine question first to provide a baseline perception of risk, followed by the question regarding emergency vaccination.To reflect the complexity of the vaccine matching process and assess sensitivity to improvements in vaccine-induced risk reduction, households were randomly assigned a stated vaccine efficacy of 50% or 100%.The emergency vaccination question was then conditioned on outbreak distance, either with a neighbor or at the village level.The proximity of neighbors within one kilometer for all households in the study is assumed to imply immediate, unavoidable exposure.An outbreak at the village level attempts to alleviate perceptions of disease risk through the possibility that exposure has not occurred.The 5 km radius roughly follows village boundaries and was provided to the respondent when the household had difficulty conceptualizing village proximity.Following theoretical and practical guidance on perceived risk under routine and emergency vaccination scenarios, we expected differential behavioral processes between the two.To assess the appropriateness of modeling the two strategies separately, we performed a likelihood ratio test.The test rejected the estimation of both vaccination strategies jointly in favor of separate models, supporting the concept that different decision-making processes influence each type of vaccination.Estimation of the two models separately then revealed a higher mean WTP and more variation in the distribution for emergency vaccination relative to routine, further providing fundamental empirical evidence that household behavior is differentiated between routine and emergency strategies.The mean WTP of emergency vaccination was around 5400 Tsh and about 3900 Tsh for a routine strategy.Separate WTP distributions for the two strategies in and of themselves are not sufficient to conclude that households accurately valued the varying levels of uncertainty and risk associated with FMD vaccination.To support this conclusion, we would expect that responses be consistent and sensitive to marginal changes in risk.We therefore first examined the effect of an increase in vaccine bid price on the probability of adopting vaccination.We found that as the bid price increased for both strategies, the probability of vaccine acceptance decreased, consistent with theoretical expectations .We next compared the average WTP values for each strategy with the calculated change in perceived risk reduction between routine and emergency vaccination.Also following theoretical expectations , we found the marginal change in risk reduction 
going from a routine to an emergency strategy to be of comparable magnitude to the change in WTP values, with the risk reduction from an emergency strategy to be higher than a routine strategy.For perceptions of risk from increased susceptibility, we found an outbreak with a neighbor compared to an outbreak at the village level presented no difference in the value of vaccination.Altering the vaccine efficacy level by itself did not influence vaccine valuation for either strategy.Contrary to expectations, households that received a vaccine of 50% stated efficacy did not value vaccination differently than those provided with a vaccine of 100% stated efficacy.However, interacting efficacy level with the head of household gender revealed that a male head of household that received the vaccine of 50% efficacy would pay less than a female head of households for either efficacy level and less than a male head of household receiving a vaccine of 100% efficacy.As expected, the drivers behind routine and emergency vaccination adoption also differed.For any health expenditure decision, diversity and liquidity of household income portfolios affects household capacity to invest in proactive measures, exemplified by routine vaccination, and adjust to near term threats and shocks, represented by emergency vaccination and the risk of major disease outbreaks and loss .Our results show income apart from livestock is a primary determinant of emergency vaccination, while additionally influencing routine adoption but at a lower magnitude.Compared to households with no off-farm income, households with higher levels of off-farm income reported higher valuation of routine and emergency vaccinations.Diversity of income had varying effects on willingness to pay for both strategies.Compared to households with no crop income the prior season, earning some seasonal crop income increased both routine and emergency WTP.Being in the highest income bracket from crop sales in the prior season had no effect on adoption for either vaccine strategy.Beyond income, we assessed the role of exogenously determined motivations to vaccinate by including variables on whether a household receives livestock health information from a government veterinarian and the level of formal education of the head of household .We found that households citing the government veterinarian as their main source of information reported lower WTP values than those relying on other sources, with augmented effects for the emergency vaccine.Similarly, results on the education variable offered corresponding evidence that households maintain negative perceptions towards vaccination.We would expect formal education to be associated with adoption of disease prevention practices.Instead we found no effect of education on emergency vaccination, and, for a routine vaccination, head of households with no formal education reported higher WTP values than those with formal education.Use of government veterinarians and educational attainment were uncorrelated across income levels and villages.In agreement with the human health literature, individual risk perceptions affect the valuation of vaccination .Households place a higher value on vaccines when the threat of disease is perceived as immediate.However, concerns about vaccine efficacy and the vaccination process also factor into decisions to vaccinate with an augmented effect in an emergency situation.Unlike human vaccination or vaccination for zoonotic diseases, FMD vaccination minimizes perception of adverse side 
effects for the vaccinated individual but retains the impact on household welfare in terms of expenditures, loss of income risk, and coping with resource constraints.This is supported by three main findings in our study: income is a major determinant in valuing both routine and emergency vaccination decisions; diversity of income promotes vaccination uptake; and there is a constant increased willingness to pay for emergency over routine vaccination.The fact that complete dependence on livestock income does not consistently promote vaccination further suggests income liquidity and diversity are important to facilitate risk responsiveness .Compared to routine vaccination, weighing an unexpected shock with perceptions of susceptibility and disease severity increases decision uncertainty for emergency vaccination, as reflected by the greater variation in WTP.Similar to human seasonal influenza vaccination, imprecise spatial and temporal risk and disease severity are compounded in an emergency situation and confounds perceptions on the overall gain from vaccination relative to disease impact.As with most vaccines, FMD vaccination requires 7 to 14 days to provide protective immunity .Households do not need to know the precise temporal or spatial information required to induce protection to understand that dense contact networks and the difficulty of self-imposing movement restrictions in pastoralist communities makes the likelihood of near immediate exposure almost certain if an outbreak is nearby.To support this point, we found no variation in the value of vaccination between offering an emergency vaccine when an outbreak is with a neighbor or at the village level.Subsequently, for an emergency vaccine, the perceived risk is consistently higher, and the wider variation in WTP values reflects greater uncertainty about perceived gains from vaccination.In both vaccination scenarios, uncertainty about the effectiveness of vaccination in preventing disease underlies the decision process.This includes doubts about vaccine effectiveness that are common to both human and animal vaccines.Governmental and non-governmental professional institutions attempt to alleviate concerns about effectiveness relative to cost and risk, but similar to influenza, these professional sources often struggle to fully overcome individual or community uncertainty.Pastoralist’s inexperience with FMD vaccines generally and with appropriately strain matched vaccines specifically could contribute to uncertainty in the value of vaccination, even though FMD is a common, episodic disease familiar to the community.This resembles concerns for seasonal influenza about the match of the vaccine strain to the circulating strain .Gender-based differences in WTP may reveal this, whereas men have more experience than women with non-poultry livestock vaccination and appropriately demonstrate sensitivity to improvements in vaccine quality.Furthermore, the disconnection between educational attainment and a positive decision for vaccination, may explain this, as informed households would rationally be less likely to adopt what are perceived as ineffective vaccines.These differences may also reflect overall lack of experience with vaccines.In the few households that had vaccinated for any livestock disease in the past year there was no influence on WTP for either FMD vaccination strategy.Lack of positive experience coupled with a poor understanding about the value of population level immunity—a concern shared with adoption of human vaccines, 
including seasonal influenza—indicates the relative benefits of vaccination remain imprecise.For any vaccination strategy, we emphasize the need for clear messages about the public and private benefits of vaccination.Households seem to apprehend the presence or absence of direct FMD risk and accurately valued vaccination relative to risk.However, similar to other animal vaccination WTP studies, this relationship between perceived disease risk and WTP for effective vaccines coupled with inadequate understanding of vaccines and population level effects suggests households need additional information and assurance that the benefits of the chosen vaccination strategy exceed the costs.Clearly communicating the need to vaccinate early, prior to local outbreaks, along with presenting an effective and serotype specific vaccine priced to capture the most economically vulnerable populations will help decrease the perceived risk of vaccination.In light of the limited positive influence on adoption from professionals, consistent with prior human and animal vaccination studies , dissemination of vaccine knowledge through informal social networks is critical in overcoming existing negative expectations.Improved surveillance to detect circulating FMD serotypes and strains and vaccines that are better matched to local strains should reduce household vaccination uncertainty and increase vaccine uptake.Our research is limited to eliciting stated preferences for vaccines that may differ from actual market outcomes and activities.Additional investigation into vaccine attributes may provide more precise estimates on specific vaccine qualities and delivery options that will further enhance uptake.Specifying outbreak distance with respect to context-specific herd contact distances may also increase our knowledge on thresholds to risk perceptions.Finally, access to panel data or responses directly before, during, and after FMD outbreaks would improve documentation of how perceptions of risk change with respect to real-time temporal immediacy. | Identifying the drivers of vaccine adoption decisions under varying levels of perceived disease risk and benefit provides insight into what can limit or enhance vaccination uptake. To address the relationship of perceived benefit relative to temporal and spatial risk, we surveyed 432 pastoralist households in northern Tanzania on vaccination for foot-and-mouth disease (FMD). Unlike human health vaccination decisions where beliefs regarding adverse, personal health effects factor heavily into perceived risk, decisions for animal vaccination focus disproportionately on dynamic risks to animal productivity. We extended a commonly used stated preference survey methodology, willingness to pay, to elicit responses for a routine vaccination strategy applied biannually and an emergency strategy applied in reaction to spatially variable, hypothetical outbreaks. Our results show that households place a higher value on vaccination as perceived risk and household capacity to cope with resource constraints increase, but that the episodic and unpredictable spatial and temporal spread of FMD contributes to increased levels of uncertainty regarding the benefit of vaccination. In addition, concerns regarding the performance of the vaccine underlie decisions for both routine and emergency vaccination, indicating a need for within community messaging and documentation of the household and population level benefits of FMD vaccination. |
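The double-bounded, interval-data willingness-to-pay model described in the row above (two yes/no bid responses per household that bound the latent WTP, with the mean recovered by maximum likelihood) can be illustrated numerically. The sketch below is not the authors' estimation code: it assumes a normally distributed WTP, is written in Python with NumPy and SciPy, and simulates placeholder responses around the reported routine-vaccine bid design (initial bid of 2000 Tsh with follow-up bids of 500 or 3500 Tsh).

```python
# Minimal sketch of an interval-data (double-bounded) WTP model, assuming WTP is normally
# distributed and estimated by maximum likelihood. The household responses are simulated
# placeholders, not the survey data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulate responses to the routine-vaccine questions (toy data).
n = 400
true_mu, true_sigma = 3900.0, 1500.0
wtp = rng.normal(true_mu, true_sigma, n)            # latent willingness to pay (Tsh)
bid1 = np.full(n, 2000.0)                           # initial bid
yes1 = wtp >= bid1
bid2 = np.where(yes1, 3500.0, 500.0)                # raise after "yes", lower after "no"
yes2 = wtp >= bid2

# Translate the two binary answers into an interval [lower, upper] containing the latent WTP:
# yes/yes -> [3500, inf), yes/no -> [2000, 3500), no/yes -> [500, 2000), no/no -> (-inf, 500).
lower = np.where(yes1, np.where(yes2, bid2, bid1), np.where(yes2, bid2, -np.inf))
upper = np.where(yes1, np.where(yes2, np.inf, bid2), np.where(yes2, bid1, bid2))

def neg_loglik(params):
    """Negative log-likelihood of the interval-censored normal WTP model."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                       # keep sigma positive
    p = norm.cdf(upper, mu, sigma) - norm.cdf(lower, mu, sigma)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

res = minimize(neg_loglik, x0=[2000.0, np.log(1000.0)], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"Estimated mean WTP: {mu_hat:.0f} Tsh (sd {sigma_hat:.0f} Tsh)")
# Household covariates such as off-farm income, crop income, or education can be added by
# letting mu = X @ beta, which is how the determinants of adoption would be tested.
```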
518 | Activation of podocyte Notch mediates early Wt1 glomerulopathy | Wt1 deletion in mature podocytes results in glomerulosclerosis with compromised renal function by day 7 post tamoxifen induction in adult CAGG-CreERTM+/−;Wt1f/f transgenic mice.29,To investigate events leading to the induction of disease in these mice, we first determined the earliest point at which we could detect glomerulosclerosis after Wt1 deletion.Tamoxifen was administered for 3 consecutive days by i.p. injection to 5-week-old CAGG-CreERTM+/−;Wt1f/f transgenic mice and mice were nephrectomized at 4, 5, 6, and 12 days following injection.Successful Wt1 deletion was demonstrated by recombination polymerase chain reaction and the reduction of Wt1 expression in glomeruli.Following light microscopy analysis of periodic acid–Schiff-stained kidney sections, we determined the severity of glomerulosclerosis by a semiquantitative analysis at each time point in CAGG-CreERTM+/−;Wt1f/f mutants, CAGG-CreERTM−/−;Wt1f/f controls, and heterozygous CAGG-CreERTM+/−;Wt1f/+ mice.Heterozygous mice did not develop glomerulosclerosis by day 12 postinduction.At D4 PI, mutants exhibited early segmental glomerulosclerosis with focal foot process effacement and a trend toward higher levels of albuminuria.By D5 PI, glomerular scarring was more extensive in mutants compared with control and heterozygous mice.Progression of disease was further supported by an increase in urine albumin-creatinine ratio in mutants compared with controls.By D6 PI, extensive glomerulosclerosis with tubules containing protein casts were observed in mutants relative to controls with increased albuminuria.Late stage disease with global glomerulosclerosis was observed at D12 PI, with peritubular cells expressing vascular smooth muscle actin, consistent with progression of tubulointerstitial disease.We conclude that podocyte function appears compromised within 6 days of Wt1 deletion in mature podocytes.Therefore, investigations into the mechanisms underlying manifestation of disease should be undertaken within this time frame.Podocyte apoptosis has previously been implicated in the pathogenesis of glomerulosclerosis.30,Therefore, we next investigated whether apoptosis is evident during disease induction in CAGG-CreERTM+/−;Wt1f/f transgenic mice.Expression of Cleaved Caspase-3 protein was observed within mutant glomeruli as early as D4 PI but quantitatively, was not statistically significantly different compared with control glomeruli.By D5 PI, at the onset of early glomerulosclerosis, we observed an increased number of Cleaved Caspase-3/DAPI-positive cells in mutant glomeruli, consistent with a temporal increase in apoptosis.Terminal deoxynucleotidyltransferase–mediated 2′-deoxyuridine 5′-triphosphate nick end-labeling-positive mutant glomerular cells were evident at D5 PI but were absent in controls.TUNEL staining of D6 PI primary mutant podocytes was also higher than for controls.At the same time point, we observed increased Caspase-3/7 positivity in primary mutant podocytes and an increased number of Annexin V/Sytox blue-positive primary mutant podocytes compared with controls.TUNEL staining at D12 PI revealed clusters of TUNEL-positive cells in peripheral segments of mutant glomeruli not evident in the glomeruli of controls.Together, these studies suggest that loss of Wt1 in mature podocytes is associated with podocyte apoptosis and development of glomerulosclerosis.Constitutive Notch activation in terminally differentiated podocytes results in podocyte apoptosis 
and glomerulosclerosis,27,28 supporting a role for Notch in podocyte injury.Following our observation of increased podocyte apoptosis in CAGG-CreERTM+/−;Wt1f/f transgenic mice, we hypothesized that podocyte Notch activation plays a role in early Wt1 glomerulopathy.At D4 PI before manifestation of disease, Notch pathway transcripts in primary podocytes of CAGG-CreERTM+/−;Wt1f/f transgenic mice were not statistically different to those of controls.By D6 PI, concomitant with early glomerulosclerosis and albuminuria, we observed increases in canonical Notch pathway transcripts, Notch1, Nrarp, and Hey2.Increased levels of other Notch basic helix loop helix transcription factors Hes1, Hes3, Hes5, and HeyL were also observed in primary mutant podocytes but not in controls, although these changes did not reach statistical significance.Using double immunofluorescence labeling, we observed cleaved Notch1 protein in nuclei of Nestin-positive podocytes as early as D4 PI in mutant kidney sections.These findings were validated by Western blot analysis showing increased cleaved Notch1 protein in primary mutant podocytes relative to controls.Increased podocyte Manic fringe transcript was also observed in mutants compared with controls.Manic Fringe is a glycosyltransferase that mediates glycosylation of the extracellular domain of Notch receptors during signal transduction.31,32,We also found upregulation of Pofut1 protein in primary mutant podocytes at D6 PI.Pofut1 is an O-fucosyltransferase 1 enzyme that mediates fucosylation of the extracellular domain of the Notch.33,Together, these data suggest that post-translational modification of Notch components are active in early Wt1 glomerulopathy.We next sought to determine whether the Notch ligand Jagged1 is expressed in CAGG-CreERTM+/−;Wt1 f/f transgenic mice owing to previous studies implicating podocyte Jagged1 in glomerulosclerosis.28,We observed a striking upregulation of Jagged1 protein in the parietal epithelium of mutants compared with controls.Within segments of mutant glomeruli, foci of Podocin-positive cells expressing Jagged1 were also observed.Immunoblotting of primary mutant and control podocyte lysates revealed induction of Jagged1 expression at D6 PI.Together, these data suggest that Notch signaling is activated when glomerulosclerosis first manifests following deletion of Wt1 in mature podocytes.Given previous studies that showed that Wt1 and Foxc1a can inhibit NICD activation,34 we explored FoxC2 expression in this model.Using semiquantitative PCR analysis, we found decreased podocyte FoxC2 transcript in CAGG-CreERTM+/−;Wt1 f/f transgenic mice compared with their control littermates.This decrease was observed at the same time point when Notch components were upregulated and concordant with disease manifestation.These findings support the hypothesis that Notch activation in mature podocytes is normally repressed by FoxC2 and WT1 and loss of these proteins is associated with increased podocyte Notch activation.Following detection of podocyte Hes/Hey mRNA expression in primary mutant podocytes, we sought to validate the expression of HES1, a bHLH transcription factor in podocytes by triple immunofluorescence labeling.We observed clusters of HES1-expressing cells that were positive for the podocyte marker, Synaptopodin, at onset of glomerulosclerosis in mutants compared with controls.HES1-positive glomerular epithelial cells were observed in regions distinct from HES1-positive Lotus tetragonolobus lectin-positive tubules.Clusters of 
Synaptopodin-positive HES1-podocytes were distinct from platelet–endothelial cell adhesion molecule–positive glomerular endothelial cells.As HES1 has been previously implicated in epithelial to mesenchymal transition, we next determined that Snail and Slug transcript was upregulated in primary D6 PI mutant podocytes compared with controls.To further understand the role of HES1 in podocytes, we transfected cultured primary Nphs2;rtTA transgenic murine podocytes with constructs expressing either TetOHes1 or green fluorescent protein alone.HES1 was not expressed in untreated TetOHes1 nor doxycycline-treated GFP transfected Nphs2;rtTA podocytes.Treatment with doxycycline led to a dose-dependent increase in podocyte Hes1 mRNA and protein expression.Induction of Hes1 expression led to a 3-fold upregulation of Snail and Slug transcript compared with untreated TetOHes1 transfected and doxycycline-treated GFP transfected Nphs2;rtTA podocytes.These results suggest that podocyte HES1 induction could mediate manifestation of glomerulosclerosis through regulation of EMT genes in podocytes.Following our observation of Notch activation in murine Wt1 glomerulopathy, we next tested biopsy samples from a human subject with FSGS associated with the WT1 c.1390G>T mutation.We found JAGGED1 expression in cells with focal nephrin staining but also found that expression was most marked in the parietal epithelium.The control biopsy from a nondiseased time 0 renal allograft biopsy revealed JAGGED1 expression in regions distal to nephrin staining suggesting endothelial JAGGED1 expression.JAGGED1 expression in the control parietal epithelium was much less than in mutant WT1 patient tissue.Expression of the Notch bHLH transcription factor, HES1, was also observed in nuclei of mutant WT1 glomeruli compared with in control biopsy tissue.We conclude that these data support a role for Notch activation in human WT1-mediated glomerular disease.As podocyte Notch activation is a feature of early disease, we hypothesized that pharmacological Notch inhibition at onset of glomerulosclerosis could influence severity of disease.CAGG-CreERTM+/−;Wt1f/f transgenic mice were treated by i.p. 
injection with the gamma secretase inhibitor GSI-IX, N--S-phenylglycine t-butyl ester, or dimethylsulfoxide late D4 PI and early D5 PI.In vehicle-treated mutants, we observed hyaline-filled tubules representing proteinaceous material and sclerotic glomeruli with mesangial proliferation.These features were not evident in GSI-IX-treated mutants.The proportion of vehicle-treated mutant glomeruli exhibiting extensive glomerulosclerosis was significantly higher than the proportion of GSI-IX-treated mutants.We validated inhibition of canonical Notch pathway transcripts by semiquantitative PCR.Analysis of Wt1 transcript and protein expression did not reveal a significant difference between vehicle-treated or GSI-IX-treated mutant mice.GSI-IX-treated mutants, compared with vehicle-treated mutants, showed an improvement in urine albumin-creatinine ratio at D5 PI.Western blot analysis of urine albumin confirmed absence of albuminuria in GSI-IX-treated mice compared with in vehicle-treated mutants at D5 PI.We conclude that the observed efficacy of GSI inhibition in amelioration of glomerulosclerosis supports the hypothesis that Notch is activated during disease development.We also tested the role of starting treatment in established disease by administering GSI-IX to CAGG-CreER TM+/−;Wt1f/f mutant mice on D7 PI.Following 2 doses of GSI-IX, mice were nephrectomized late on D8 PI, and kidney histology was analyzed for 2 mice per treatment.We did not observe a difference in degree of glomerulosclerosis, albuminuria, nor tubulointerstitial expression of vascular smooth muscle actin immunofluorescence.We hypothesize that podocyte Notch activation plays a role in early Wt1 glomerulopathy.Given the upregulation of MFng and Rbpsuh transcripts at D6 PI, we sought to determine whether knockdown of Mfng or Rbpsuh would influence podocyte EMT gene or apoptosis gene expression.While we did observe evidence of Mfng and Rbpsuh down-regulation with repression of Hes1 and Hes3 transcripts, we did not observe significant repression of podocyte Snail and Slug transcript nor significant upregulation of podocyte-specific transcripts such as Nphs1 nor Nphs2.Reduced primary mutant podocyte viability precluded optimal transfection owing to slow growth and increased apoptosis.We conclude that genetic rescue of Wt1 glomerulopathy by Notch inhibition would be best validated with in vivo strategies.Using an inducible model of Wt1 deletion, we demonstrate a role for Notch activation in the pathogenesis of Wt1 glomerulopathy.The data provided in this study establish the utility of temporal semiquantitative analysis of glomerular scarring as a platform for the study of early pathological events in an inducible model of mature podocyte injury.We show that podocyte apoptosis is evident as early as the fourth day following tamoxifen administration to adult CAGG-CreERTM+/−;Wt1f/f transgenic mice before glomerulosclerosis is evident.This podocyte loss, secondary to Wt1 deletion, increases when overt albuminuria is evident.Furthermore, we show upregulation of Snail and Slug mRNA, genes that are implicated in epithelial mesenchyme transition in primary mutant podocytes coincident with onset of glomerulosclerosis.Ectopic podocyte Notch activation in mice results in podocyte apoptosis, dedifferentiation, and both diffuse mesangial sclerosis and FSGS phenotypes,27,28 which are also associated with mutations in WT1.5,7,Following Wt1 deletion in this model, we show upregulation of several Notch components, including Notch1 and its 
transcriptional target gene, Nrarp,35 as well as Notch bHLH transcription factors.The finding of increased Jagged1 and Pofut1 protein in primary mutant podocytes at disease induction suggested a ligand-dependent mechanism of podocyte Notch activation at disease manifestation.Furthermore, induction of HES1 expression in transgenic Nphs2;rtTA primary podocytes led to increased Snail and Slug expression, suggesting that Notch is activating genes promoting EMT in podocytes.Notch inhibition using gamma secretase inhibitors, on the D4 and D5 following tamoxifen administration, led to a reduction in the severity of glomerulosclerosis and albuminuria.We propose a model in which loss of Wt1 in mature podocytes induces podocyte apoptosis and EMT, which could be mediated via activation of Notch.These findings are consistent with previous reports demonstrating podocyte Notch activation in chemically induced models of glomerulosclerosis and in human biopsies of glomerular disease.28,36–38,Increased podocyte Hes1, Hes3, Hes5 and Hey1, Hey2, and HeyL have been found in streptozotocin- and puromycin aminonucleoside–induced glomerulosclerosis.28,Podocyte apoptosis has been shown to play an instigating role in the pathogenesis of FSGS, and Notch activation in mature podocytes induces apoptosis.28,30,Conditional activation of the intracellular domain of Notch1 in mature podocytes is associated with positive TUNEL staining and upregulation of podocyte Trp53 and Apaf1.Pifithrin-α inhibits apoptosis in podocytes transduced with the Notch1 intracellular domain, thereby suggesting that Notch1 induces podocyte apoptosis via the p53 pathway.28,Conditional deletion of podocyte Rbpj in mice with diabetic nephropathy is associated with reduced podocyte apoptosis providing further support for a role for Notch in podocyte apoptosis.28,In the current study, we speculate a role for p53-mediated apoptosis in the pathogenesis of Wt1 glomerulopathy in our model.It would be interesting to explore whether apoptosis is inhibited with conditional inactivation of Notch1 in podocytes of adult CAGG-CreERTM+/−;Wt1f/f transgenic mice.Indeed, conditional deletion of Notch1, but not Notch2, in podocytes of mice with diabetic nephropathy abrogates glomerulosclerosis.37,Furthermore, reduced expression of primary podocyte Snail1 protein and mRNA has been observed in these mice,37 suggesting that podocyte Notch1 activation is associated with both apoptosis and epithelial mesenchymal transition.Our study adds to previous studies demonstrating that ectopic Notch activation in mature podocytes is associated with the development of glomerular scarring.Conditional inactivation of Notch1 and its transcriptional targets in both early and late stages of Wt1 glomerulopathy would further define the role and window for Notch activation in disease pathogenesis.Pharmacological block of Notch signaling has previously defined a narrow window for Notch in proximal nephron identity.21–23,However, a recent study suggests that Notch is required for the formation of all nephron segments and primes nephron progenitors for differentiation.24,Notch1 and Notch2 are expressed in overlapping patterns in the proximal domain of the S-shaped body where podocyte precursors reside.23,During terminal podocyte differentiation, Hes/Hey genes are progressively down-regulated.25,26,Regulation of vertebrate podocyte differentiation has been proposed to involve a multimeric transcriptional network involving Wt1, FoxC1/C2, and Rbpj.34,39,FOX transcription factor binding motifs 
have been found in a large proportion of WT1-bound regions supporting coordinated action of these transcription factors in regulating podocyte-specific genes.17,18,Double knockdown of either wt1a/rbpj or wt1a/foxc1a in zebrafish caused a reduction in podocyte number in contrast to a single knockdown of any of the 3 genes, supporting a genetic interaction between these transcription factors in regulation of podocyte specification.34,Co-immunoprecipitation studies revealed putative interactions among Wt1, Rbpj, and FoxC2 proteins, and together, combinations of Wt1, FoxC1/2, and NICD can synergistically induce Hey1 expression.34,In Xenopus, knockdown of xWT1 decreased the early glomus-specific expression of XHRT1, the Hey1 orthologue, but did not perturb its late expression in the pronephros anlagen, suggesting that xWT1 mediates expression of XHRT1 early in glomerulogenesis.40,HeyL expression in pretubular aggregates is also regulated by Wt1 during murine metanephric development and studies of Wt1 target genes in embryonic mouse kidney tissue revealed that Wt1 can bind to the HeyL promoter.41,These studies support a role for modulation of Notch transcriptional targets by Wt1 and FoxC1/2 during nephrogenesis.After the proximal nephron forms, podocytes function normally in the absence of Notch.25,26,Following Wt1 deletion in mature podocytes, we find upregulation of Notch pathway components, Notch1, Nrarp, Hey1, Hey2, HeyL, Hes1, Hes3, and Hes5.These findings are consistent with previous studies where Wt1 and Foxc1a can inhibit the ability of NICD1 to activate a synthetic Notch reporter driven by Rbpj sites34 and suggest a model where Wt1 and FoxC1/2 have antagonistic effects on Notch signaling in the mature podocyte.Our study also demonstrated that loss of Wt1 in mature podocytes is associated with a reduction of FoxC2 expression and upregulation of Notch pathway components coincident with onset of glomerulosclerosis and albuminuria.We speculate that FoxC2 could repress expression of Hey2 and other Notch bHLH genes in mature podocytes, based on our observation of increased Hey2 transcript in mutant podocytes at a point when FoxC2 is down-regulated.This would be consistent with a previous report showing that Hey2 is a transcriptional target of FoxC2 in endothelial cells.42,Restoration of FoxC2 levels in adult CAGG-CreERTM+/−;Wt1f/f transgenic mice may be sufficient to restore podocyte-specific gene expression following injury, perhaps via repression of Notch bHLH gene expression.An alternative mechanism of podocyte Notch activation in Wt1-mediated injury could also be mediated via activation of Hippo signaling.Kann et al.18 identified podocyte-specific enrichment for TEAD transcription factor motifs in the vicinity of WT1 chromatin immunoprecipitation sequencing peaks.Notch ligands are transcriptional targets of Hippo signaling,43 and it is possible that loss of Wt1 expression in mature podocytes mediates Notch activation via regulation of Hippo components.Our observation of increased Pofut1 in mutant podocytes supports activation of Notch in CAGG-CreERTM+/−;Wt1f/f transgenic mice.Notch activation relies on O-linked glycosylation of the extracellular domain of Notch receptors and ligands that influence receptor sensitivity to ligand stimulation.24,Pofut1 is an O-fucosyltransferase 1 enzyme that mediates fucosylation of the extracellular domain of the Notch protein and has recently been shown to play an important role in the regulation of cell surface expression of Notch1.33,We also found 
increased Mfng, a β3-N-acetylglucosaminyltransferase that mediates glycosylation of the Notch extracellular domain in mutant podocytes.31,32,In most contexts, Fringe-mediated glycosylation of Notch1 renders it more sensitive to Dll1-mediated activation.32,We have found evidence for Jagged1 expression in CAGG-CreERTM+/−;Wt1f/f transgenic mice and in human biopsies of WT1-mutated glomerulosclerosis.It will be of interest to study the consequences of Mfng deletion in CAGG-CreERTM+/−;Wt1f/f mice on disease manifestation.In the early stages of disease, we observed Jagged1 expression in podocytes and parietal epithelium of mutant glomeruli.This observation is consistent with previous reports for a role for Jagged1 in glomerulosclerosis.28,36,In contrast, we did not observe any quantitative difference in Delta1 protein expression in primary mutant podocytes compared with controls in the early stages of disease.Both Delta1 and Jagged1 share overlapping expression patterns within the middle segment of the S-shaped body during kidney development.23,Combined loss of both ligands in Six2-Cre;Dll1f/f;Jag1f/f transgenic mice results in a severe reduction in numbers of proximal tubules and glomeruli, thereby supporting a role for ligand-mediated Notch activation in defining proximal nephron identity.23,Replacement of one Jag1 allele in the Dll1-null background can rescue some WT1-positive podocytes, suggesting that an important role for Jag1 in podocyte fate induction.23,In the context of podocyte injury, Niranjan et al.28 reported that TGF-β1 mediated treatment of podocytes resulted in Jagged1 upregulation.Future studies will be directed at examining the effects of conditional deletion of Jagged1 in adult CAGG-CreERTM+/−;Wt1f/f mice on disease manifestation.In summary, we identify podocyte apoptosis as an early event in the pathogenesis of glomerulosclerosis mediated by loss of Wt1 function in mature podocytes.At disease onset, we find induction of podocyte EMT gene expression and upregulation of several Notch pathway components.Induction of podocyte HES1 expression is associated with increased Snail and Slug expression, suggesting that HES1 regulates podocyte EMT.Early pharmacological blockade of Notch signaling leads to a reduction in the severity of glomerulosclerosis and albuminuria.We speculate that activation of Notch is mediated by repression of FoxC2.Given the recent advances in our understanding of the complex biological roles of WT1, transgenic mice carrying point mutations relevant to human disease will provide invaluable tools to investigate the transcriptional networks and posttranscriptional mechanisms underlying WT1-related glomerulosclerosis.Wt1 deletion in adult mice was achieved following generation of bitransgenic mice by crossing CAGG promoter–driven CreERTM mice with homozygous Wt1 conditional mice, where the first exon of Wt1 is flanked by LoxP sites.29,Site-specific recombination between the LoxP sites of the Wt1 gene results in a ubiquitous Wt1 null allele.Successful Wt1 deletion was demonstrated by recombination PCR and the depletion of Wt1 expression in podocytes.Mice were maintained on a mixed genetic background consisting of C57BL/6J and CD1.However, for comparison of phenotypes such as proteinuria, littermates were used.Cre recombinase was induced by i.p. 
administration of tamoxifen to 5-week-old mice.All animal work was carried out under the permission of license.Mice were housed and bred in animal facilities at the Western Labs, UCL Great Ormond Street Institute of Child Health.Mice were killed at D4, D5, D6, D8, and D12 PI of tamoxifen.Bilateral nephrectomies were performed under sterile conditions.Nphs2;rtTA transgenic mice expressing the tetracycline transactivator specifically in podocytes were used for primary podocyte cultures and Hes1 overexpression experiments.44,Protein was isolated using radioimmunoprecipitation assay buffer supplemented with phosphatase and protease inhibitors.Lysis was completed by shearing through a 26-gauge syringe needle.Samples were denatured with 10% b-mercaptoethanol in 4X Laemmli sample buffer at 95°C for 10 minutes.Primary podocytes were run on 4% to 15% sodium dodecylsulfate–polyacrylamide gel electrophoresis gradient gels.Immortalized podocytes were run on 10% sodium dodecylsulfate–polyacrylamide gel electrophoresis gels.All gels were transferred onto polyvinylidene fluoride and blocked in 5% nonfat milk in PBS before being probed with primary antibodies and secondary antibodies.Blots were developed with Pierce ECL Western Blotting Substrate.Statistical analyses were performed in GraphPad Prism V.7.For each analysis, we examined at least 6 to 10 independent samples per experimental group; for qRT-PCR analysis, the average of duplicate reactions was used as the value of that sample.Where data is normally distributed, results are expressed as mean ± SD or SEM relative to the specified controls.Where data is not normally distributed, results are reported as medians with respective interquartile ranges.For all statistical analyses, we used a 2-tailed, unpaired Student t-test or Mann-Whitney U test to analyze the difference between 2 groups.The Bonferroni correction was used when more >2 groups were present.Values were regarded significant if <0.05; all error bars represent SDs.Animal experiments were conducted with ethical approval from the Animal Welfare and Ethical Review Body of the University College of London, Great Ormond Street Institute of Child Health and carried out under United Kingdom Home Office license 70/7892.Approval for research on human tissue was obtained from the NHS National Research Ethics Service Committee, North-East York, UK.Urinary albumin and creatinine were determined using the mouse albumin enzyme-linked immunosorbent quantitation set and creatinine assay kits, respectively.Western blot detection of albuminuria was determined following loading 4 microliters of urine samples.Kidneys were fixed in in 4% formaldehyde in phosphate-buffered saline and paraffin-embedded kidney sections were stained with PAS.Fresh kidney cortices were dissected into ice-cold Hanks balance salt solution, decapsulated, and minced.After rinsing thoroughly with fresh Hanks balance salt solution, the pieces were pushed through a 100-μm cell strainer into a chilled beaker.Hanks balance salt solution was added to a total of 6 ml, and this was divided between 2 chilled 15-ml Falcon tubes.Then 2.2 ml of Percoll was added to each tube.The samples were spun at 400 rpm for 10 minutes at 4oC to separate glomeruli across the Percoll gradient.45,The presence of glomeruli in the top of the gradient was verified microscopically.The top 1 to 2 ml were removed and passed again through a 100-μm cell strainer to catch any large tubule fragments.The filtrate was then passed through a 40-μm cell strainer to trap the 
glomeruli.For podocyte culture, harvested glomeruli were placed onto culture dishes coated with 0.1 mg/ml rat tail collagen type I as previously described.Culture medium from D1 to D3 was RPMI 1640 medium containing 15% fetal bovine serum, penicillin streptomycin, and amphotericin B. On D3 of culture, unattached glomeruli were washed away and medium was changed to 10% FBS.Podocytes were examined on D6 of culture after harvest.Tissues were fixed and embedded in paraffin or O.C.T. Compound as previously described.27,For double and triple immunofluorescence labeling, formalin-fixed sections were deparafinized according to previously published protocols.27,Microwave antigen retrieval was carried out in citrate buffer in 4 5-minute cycles at medium-high setting following by a 20-minute cooling period at room temperature.Blocking was performed in Universal Blocking Reagent.O.C.T. Compound embedded cryosections were permeabilized in 0.5% Triton X-100 for 5 minutes, followed by 2 5-minute PBS washes.They were then incubated in blocking buffer for 1 hour at RT.Sections were probed with antibodies diluted in blocking reagent and incubated at 4°C overnight.The following day, sections were washed in PBS and incubated with Alexa Fluor–conjugated secondary antibodies for 1 hour at RT.Slides were mounted using VECTASHIELD and nuclei were stained with DAPI.Confocal imaging was performed using a Zeiss LSM-710 system with an upright DM6000 compound microscope and images were processed with Zen software suite.Z stacks were acquired at 0.5-μm intervals and converted to single planes by maximum projection with FiJi software.Cells were seeded on Matrigel coated 8-well chamber slides.The following day, the cells were fixed in 4% paraformaldehyde for 15 minutes and washed in PBS twice for 5 minutes each time at RT.Cells were then permeabilized with 0.2% Triton X-100 in PBS for 5 minutes, then washed in PBS for 5 minutes at RT.Then 100 μl of Equilibration buffer from the DeadEnd Fluorometric TUNEL kit was added to the cells for 5 to 10 minutes at RT.Following Equilibration buffer, 50 μl rTdT incubation buffer was added to the cells; to ensure even distribution of the buffer, the slides were covered with plastic cover slips and incubated at 37°C for 1 hour.Plastic coverslips were removed by immersing the slides 2x SSC for 15 minutes at RT.Slides were then washed 3 times for 5 minutes each with PBS at RT.Sections were mounted with VECTASHIELD plus DAPI mounting medium.Slides were imaged on a Zeiss fluorescent microscope.Collagen-coated gas-permeable bottom plates were used to culture glomeruli for all apoptosis assays.Glomeruli were harvested from mice 6 days following tamoxifen induction.Following 6 days of glomerular harvest, unattached glomeruli were washed away with Dulbecco PBS and the medium was replaced with Roswell Park Memorial Institute with 0.2% FBS.After 18 hours, CellEvent Caspase-3/7 Green to measure apoptosis were added to each plate per the manufacturers’ instructions.Sections were counterstained with DAPI.Following tissue dissociation, the samples were spun down at 320g for 5 minutes and washed with the Annexin V Binding Buffer.The cells were then stained with Annexin V-PE-Cy7 for 30 minutes on ice, in the dark.After the incubation period, the samples were again washed and resuspended in the Annexin V Binding Buffer.Just prior to sample acquisition, each sample was stained with Sytox Blue at a final concentration of 0.3 mmol/l.The samples were acquired using a 5-laser BD LSRFortessa X-20 Analyser, 
equipped with 355 nm, 405 nm, 488 nm, 561 nm, and 640 nm lasers.Prior to cell acquisition, the cells were filtered through a 35-mm cell strainer to prevent cellular aggregation during sample acquisition.The following antibodies were used: cleaved Notch1 1:100 immunohistochemistry, 1:50 immunofluorescence; Notch1 1:1,000 Western blot; cleaved Notch2 1:150 immunohistochemistry; cleaved Notch2 1:200 immunofluorescence, 1:1,000 Western blot; Jagged 1, 1:100 immunofluorescence; Podoplanin, 1:100 immunofluorescence; CD31, 1:100; Hes1, 1:1000; Synaptopodin; Podocin, 1:100; biotinylated Lotus tetragonolobus lectin, LTL 1:100; and cleaved Caspase-3, 1:400 immunofluorescence.Alexa Fluor–conjugated secondary antibodies (488, 595, and 647) were used.The gamma secretase inhibitor GSI-IX (N-[N-(3,5-difluorophenacetyl)-L-alanyl]-S-phenylglycine t-butyl ester, or DAPT) was purchased from Sigma Aldrich and administered by i.p. injection on the evening of day D4 PI of tamoxifen to CAGG-CreERTM +/−;Wt1f/f transgenic mice.Vehicle treatment with dimethyl sulfoxide administered by i.p. injection on the evening of D4 PI of tamoxifen to CAGG-CreERTM +/−;Wt1f/f transgenic mice was used as the control.For late treatments, CAGG-CreERTM+/−;Wt1f/f mutants were treated by i.p. injection with the gamma secretase inhibitor GSI-IX DAPT on the evening of D7 PI of tamoxifen and treated again with DAPT the next morning.Urine was collected at least 8 hours following the second DAPT treatment and mice were then sacrificed.Following nephrectomy, PAS-stained specimens were examined by light microscopy and scored for severity of glomerulosclerosis.Comparison was made between GSI-DAPT and vehicle-treated CAGG-CreERTM+/−;Wt1f/f transgenic mice.Primary transgenic murine Nphs2;rtTA podocytes were transfected with TetOHes1 plasmid46 or control-GFP-only plasmid with the Lipofectamine 3000 kit.Cells were transfected at 70% confluency.Following 24 hours, both TetOHes1 and control-plasmid transfected cells were treated with doxycycline for 72 and 96 hours' duration.RNA extraction was undertaken according to the manufacturer's instructions using the Qiagen microRNA extraction kit.Protein was extracted as previously described.47,For MFng and Rbpsuh gene knockdown, the Thermo Scientific Open Biosystems pGIPZ MFng, and Rbpsuh and nonsilencing control vectors were used.Primary podocytes from D6 PI CAGG-CreERTM+/−;Wt1f/f transgenic mice were cultured in Roswell Park Memorial Institute medium in 10% FBS and 1% insulin, transferrin, and selenium.Next, 1 × 10^5 cells were seeded per well in 6-well dishes in duplicates, 24 hours prior to being transfected with 4 μg of MFng and Rbpsuh short hairpin RNA and 4 μg nonsilencing control vector.One plate was harvested to determine knockdown efficiency by quantitative real-time-PCR 48 hours following transfection.Following primary podocyte derivation, total RNA was isolated using the RNeasy Micro Kit.cDNA was synthesized by reverse transcription using a high-capacity RNA-to-cDNA kit, and qRT-PCR was performed with 250 ng cDNA on a Bio-Rad qPCR machine using SYBR Green PCR Master Mix and 0.45 μg of the oligonucleotides outlined in Supplementary Tables S1 and S2.For each gene, the reaction was run in duplicate for between 6 and 10 samples, and for each primer pair, a no-template control was included.The data were normalized to Gapdh gene levels within each sample and analyzed using the ΔΔCt method.48,PJS receives lecture fees from Natera Inc.All the other authors declared no competing interests.
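The relative quantification described above (duplicate qRT-PCR reactions normalized to Gapdh and analyzed with the ΔΔCt method) follows the standard 2^-ΔΔCt calculation. The sketch below restates that calculation; the Ct values and function name are hypothetical illustrations, not the authors' analysis script.

```python
import numpy as np

def ddct_fold_change(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Standard 2^-ddCt relative expression.

    Each argument is a list of Ct values; duplicate reactions are averaged
    first, mirroring the duplicate qRT-PCR reactions described above, and
    each sample is normalized to its own Gapdh level.
    """
    dct_sample = np.mean(ct_target) - np.mean(ct_gapdh)
    dct_control = np.mean(ct_target_ctrl) - np.mean(ct_gapdh_ctrl)
    ddct = dct_sample - dct_control
    return 2.0 ** (-ddct)

# Hypothetical duplicate Ct values for one mutant and one control podocyte sample
fold = ddct_fold_change(ct_target=[26.1, 26.3], ct_gapdh=[18.0, 18.1],
                        ct_target_ctrl=[27.9, 28.0], ct_gapdh_ctrl=[18.2, 18.1])
print(f"Fold change vs control: {fold:.2f}")
```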
| The Wilms’ tumor suppressor gene, WT1, encodes a zinc finger protein that regulates podocyte development and is highly expressed in mature podocytes. Mutations in the WT1 gene are associated with the development of renal failure due to the formation of scar tissue within glomeruli, the mechanisms of which are poorly understood. Here, we used a tamoxifen-based CRE-LoxP system to induce deletion of Wt1 in adult mice to investigate the mechanisms underlying evolution of glomerulosclerosis. Podocyte apoptosis was evident as early as the fourth day post-induction and increased during disease progression, supporting a role for Wt1 in mature podocyte survival. Podocyte Notch activation was evident at disease onset with upregulation of Notch1 and its transcriptional targets, including Nrarp. There was repression of podocyte FoxC2 and upregulation of Hey2 supporting a role for a Wt1/FoxC2/Notch transcriptional network in mature podocyte injury. The expression of cleaved Notch1 and HES1 proteins in podocytes of mutant mice was confirmed in early disease. Furthermore, induction of podocyte HES1 expression was associated with upregulation of genes implicated in epithelial mesenchymal transition, thereby suggesting that HES1 mediates podocyte EMT. Lastly, early pharmacological inhibition of Notch signaling ameliorated glomerular scarring and albuminuria. Thus, loss of Wt1 in mature podocytes modulates podocyte Notch activation, which could mediate early events in WT1-related glomerulosclerosis. |
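The group comparisons described in the statistical-analysis section of the entry above (two-tailed unpaired Student t-test or Mann-Whitney U test between two groups, with a Bonferroni correction when more than two groups are compared) can be sketched as follows. The Shapiro-Wilk normality check and the albuminuria values are assumptions added only for illustration; the original analysis was run in GraphPad Prism and may have chosen between tests differently.

```python
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05, n_comparisons=1):
    """Unpaired t-test if both groups look normal, otherwise Mann-Whitney U.

    alpha is divided by n_comparisons to apply a Bonferroni correction when
    several pairwise comparisons are made (i.e. more than two groups).
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    normal = stats.shapiro(a).pvalue > 0.05 and stats.shapiro(b).pvalue > 0.05
    if normal:
        name, res = "unpaired t-test", stats.ttest_ind(a, b)
    else:
        name, res = "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided")
    return name, float(res.pvalue), bool(res.pvalue < alpha / n_comparisons)

# Hypothetical urinary albumin/creatinine ratios, mutant vs control littermates
print(compare_two_groups([4.1, 6.8, 5.2, 7.9, 6.1, 5.5],
                         [1.1, 0.8, 1.4, 0.9, 1.2, 1.0]))
```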
519 | The influence of waves on morphodynamic impacts of energy extraction at a tidal stream turbine site in the Pentland Firth | Tidal stream turbines are maturing as a means of renewable energy generation: several demonstration devices have been deployed and the world's first array will be installed in the Inner Sound of the Pentland Firth with the aim of 386 MW of installed capacity by 2020 ; .Presence of support structures and extraction of energy will impact a range of receptors, both physical and biological .This contribution simulates impact to the morphodynamics of sub-tidal sandbanks using a fully coupled wave – hydrodynamic - sediment transport model.This enables inclusion of wave driven sediment transport and wave-current interaction in the computation.Attention is given to a sandbank in the Inner Sound of the Pentland Firth, close to the Meygen Inner Sound array site .Sub-tidal sandbanks must be considered in environmental impact assessments because they can be important ecological habitats, navigational hazards and sources of aggregates.A substantial amount of work has been conducted on the physical processes governing the morphology of sub-tidal sandbanks.Sandbanks are often formed and maintained by residual current gyres which are caused by tidal asymmetry around headlands.Sub-tidal sandbanks can be found in the centre of these circulation patterns.The importance of the contribution of waves to sandbank morphodynamics and long term evolution is open to debate .Dependant on environmental setting, the background stirring influence of low energy waves may be important or episodic storm events may be more relevant .Under storm conditions, tidal residuals may be reversed both due to WCI and the dominance of wave driven currents.The process of WCI is complex and highly studied phenomenon with both waves affecting currents and currents affecting waves.When waves propagate in a current field, various phenomena can occur, including: altered wind wave growth; current induced refraction; changes to wave steepness which alters rates of dissipation ; and wave blocking.Wave blocking is the prevention of wave energy transport caused when current velocity is equal and opposite to the wave group velocity .The presence of waves can alter currents via two main processes: firstly, additional currents can be induced via gradients in wave radiation stress and secondly the presence of waves increases turbulence at the bed, effectively increasing the friction felt by the current field.Inclusion of WCI in tidal resource estimation studies can lead to alteration in the predicted available resource .Previous work looking at tidal range schemes has shown that changes to currents forced by energy extraction can alter tidal modulation of wave heights .The impact of TSTs on sandbank morphodynamics been considered by various authors .This work has shown that energy extraction at various locations can disrupt residual current gyres.Research has focussed on sediment transport by tidal currents alone with little consideration given to the relevance of including wave effects in the simulation.Purely simulating tide-driven processes may ignore key physical processes, Robins et al. 
, assessed the contribution of waves to bed sheer stress and concluded that wave-driven processes may be important.Fairley and Karunarathna demonstrate, for the same sandbank as tested here, that wave action can magnify the impact of TSTs on bed level changes by considering short term simulations of characteristic storm processes.A 24 h period is simulated for storms from opposing directions and tide only conditions, with and without turbines.The same model set up as presented here is used.Residual current magnitudes are altered by up to 10% when waves are included.Patterns of impact to bed level change are similar with and without wave action and are dictated by the presence of sand waves.The short time period and constant wave action used in that study means that more detailed simulations are required to better assess the importance of wave action for TST environmental impact studies.Here, the analysis of Fairley and Karunarathna is extended to consider morphodynamics over a spring-neap cycle with summer and winter wave conditions.Both baseline and extraction scenarios are considered.The aim of this paper is to both provide realistic simulations of morphological changes in the region and to assess if inclusion of wave processes makes a material difference to prediction of impacts.The MIKE3 2012 release was used in this analysis.Two key factors are involved with the alteration of currents by waves.Firstly wave radiation stress can induce a current.The hydrodynamic module takes radiation stresses from the wave module every time step.A uniform variation in radiation stress with depth is used for the vertical variation.Secondly, waves can increase the apparent bed roughness felt by a current.This is caused by increased turbulence intensity and shear stresses in the boundary layer forced by oscillatory wave motion.This research focuses on the Inner Sound of the Pentland Firth.It is the narrow channel between the north coast of the Scottish mainland and the Orkney Islands which links the North Atlantic and the North Sea."The Pentland Firth is considered one of the world's most attractive sites for tidal energy extraction .The Inner Sound is the sub-channel in the south of the Pentland Firth, formed by the presence of the island of Stroma.The tidal regime in the region is dominated by the M2 component .Phase differences of 2 h between the North Atlantic and North Sea cause a hydraulic gradient which drives currents in the Pentland Firth in excess of 5 ms-1 at spring tide.Water depths in the main channel approach depths of 100 m below MSL, in the Inner Sound depths are less than 35 m below MSL.Interpretation of vessel mounted ADCP surveys has shown that in the Inner Sound there is tidal asymmetry in the region of maximum current between flood and ebb tides and that currents are not bidirectional .Wave conditions in the region are some of the most energetic in Europe, however, sheltering and current effects mean that average wave heights in the Pentland Firth are 2 m , and that this reduces to the East and in the Inner Sound .Winter conditions can be characterised by large, long period waves approaching from the SW – NW, whereas summer wave conditions are typically shorter period and approach from a more northerly direction .Large areas of the Pentland Firth are swept bedrock due to the energetic waves and tidal currents in the region.The Pentland Firth has been identified as a bedload parting zone .In regions of lower flow, sedimentary deposits exist including veneers of sand/gravel and fields 
of sand waves.These are often ephemeral features, only observable in some surveys .Of greater interest when assessing TST impacts are the permanent sandbanks associated with headlands and islands.The largest of these sandbanks is the Sandy Riddle but this is sufficiently far removed from planned array locations that impacts at this early stage of development is unlikely .In this study a sand bank to the east of the Island of Stroma is considered.This is comprised of coarse sand and gravel.Values for median grain size from grab samples taken at three locations from west to east along the sand bank centre line are 4.7 mm, 2.7 mm, 3.2 mm .Recently more detailed surveys have been conducted of the sedimentology of the Inner Sound .This study used grab samples and multi-frequency side-scan to map the seabed of the inner sound.Two sandbanks are identified: the large sandbank considered in this modelling study and a smaller oval sandbank, closer to the island of Stroma.Surveys of the large sandbank showed sand waves are present with wavelengths between 10 and 30 m on the northern flank, 10–15 m on the southern flank and smaller features over the crest with wavelengths around 5 m. Two surveys were conducted and while no difference in plan-shape or location of the bank was identified, a change in orientation of the dunes was noted .They note that much of the retrieved sediment was platelet shaped shell fragments which makes the sand bank more resistant to erosion.The hydrodynamic, spectral wave and sand transport modules of the DHI MIKE3 suite were used in this analysis.These modules are fully coupled: that is, at every time-step currents and water depths for the SW module are read from the HD module; radiation stresses from the SW module fed to the HD module; and wave and current forcing from both the HD and SW modules is used by the ST module to compute sediment transport and bed level changes.Morphological updating is also activated for all three modules every time step.Inclusion of wave current interaction in MIKE2012 is considered in Section 2.The model mesh used is shown in Fig. 3, and a subset of the mesh around the sandbank in Fig. 
4.An unstructured triangular mesh is used in this study which was developed using the DHI MIKE meshing tool and then refined using a MATLAB toolbox from DHI.Element areas ranged from 2,000,000 m2 in the outer regions to less than 500 m2 over the sandbank.The size of the domain is constrained by computational restrictions of running a coupled HD-SW-ST model, however previous work has shown that the mesh is sufficiently large for analysis of tidal steam energy impacts on regional sediment transport .Thus, the 5 mm veneer over the bedrock can be considered to represent the sediment that is present within crevices in the bed rock and within interstitial spaces between cobbles and boulders.The parabolic formulation is included since sediment in these spaces will be less easily transported than exposed sediment.The two tested arrays were implemented as series of individual turbines.MIKE3 has an inbuilt turbine tool that allows inclusion of turbines as sub-grid structures by specification of turbine location, hub height, turbine diameter and lift and drag curve.Actuator disk theory is then used to determine a momentum sink.This momentum sink is spread evenly between all vertical layers occupied by the turbine swept area.The velocity used is the average of the cell velocity of the cells occupied by the turbine for all vertical layers occupied by the turbine.The turbine properties were taken from work within the UK EPSRC funded Terawatt project that defined a generic turbine design for academic work on hydrodynamic impact via discussion with developers .This hypothetical turbine had a rated power of 1 MW and a turbine diameter of 20 m.In this study the turbine hub height was specified as 17 m above the sea bed.The cut in speed was set to 1 ms-1 and the cut-out speed to 4 ms-1.A plot of the thrust co-efficient against speed for the hypothetical turbine is shown in Fig. 6.Array layouts were determined by Marine Scotland Science .Turbines were spaced by 160 m in the direction of flow and 50 m laterally.400 turbines were included in the Inner Sound site and 100 turbines in the Ness of Duncansby site.The array layouts for the two considered leased areas are shown in Fig. 7. .Two time periods are considered: a winter spring neap cycle and a summer spring neap cycle.For both time periods, the model is run for scenarios with and without turbines and with and without the wave module being activated.Therefore, in total, 8 simulations were run: summer, tide only, no turbines; summer, tide only, turbines; summer, waves included, no turbines; summer, waves included, turbines; winter, tide only, no turbines; winter, tide only, turbines; winter, waves included, no turbines; winter, waves included, turbines.Summer and winter scenarios were taken from 2012 for comparison of bed level changes under different conditions.A winter scenario from 12/01/2012–27/01/2012 and a summer scenario from 06/06/2012–21/06/2012 were chosen.These periods were chosen due to co-incidence of availability of input boundary conditions and availability of wave data from a wave buoy deployed by UHI in the Pentland Firth.The astronomical tidal envelope for the nearest National Tidal and Sea Level Facility gauge at Wick is shown in Fig. 8 for 2011–2013 and water levels for the two tested time periods.The Wick Gauge is located further south than the study area, outside the bounds of the maps in Figs. 1 and 2, at 58° 26.458′ N, 3° 5.179′ W. 
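The sub-grid turbine representation described above (an actuator-disc momentum sink computed from the layer-averaged velocity and spread evenly over the vertical layers occupied by the 20 m rotor, with cut-in and cut-out speeds of 1 and 4 ms-1) can be sketched as below. This only illustrates the approach: the thrust-coefficient table is a placeholder rather than the Terawatt curve of Fig. 6, the sea-water density is an assumed value, and the function is not MIKE3's internal implementation.

```python
import numpy as np

RHO = 1025.0                              # assumed sea-water density, kg/m^3
AREA = np.pi * (20.0 / 2.0) ** 2          # swept area of a 20 m rotor, m^2
CUT_IN, CUT_OUT = 1.0, 4.0                # m/s
CT_SPEEDS = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])    # placeholder Ct(U)
CT_VALUES = np.array([0.85, 0.85, 0.80, 0.65, 0.50, 0.40, 0.30])

def momentum_sink_per_layer(u_layers):
    """Sink (N) applied to each vertical layer spanned by the rotor.

    u_layers holds the current speeds of the layers occupied by the swept
    area; their average is the reference speed, and the actuator-disc thrust
    0.5*rho*Ct*A*U^2 is split evenly between those layers.
    """
    u_ref = float(np.mean(u_layers))
    if not CUT_IN <= u_ref <= CUT_OUT:
        return np.zeros(len(u_layers))    # turbine not operating
    ct = np.interp(u_ref, CT_SPEEDS, CT_VALUES)
    thrust = 0.5 * RHO * ct * AREA * u_ref ** 2
    return np.full(len(u_layers), thrust / len(u_layers))

print(momentum_sink_per_layer([2.4, 2.6, 2.7]))   # hypothetical layer speeds
```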
Both time periods are representative of the tidal regime in general, neither containing particularly large spring tides nor small neap tides.The winter scenario has slightly greater tidal ranges than the summer and hence faster currents.For the summer scenario, waves in the inner sound are largely incident from the east.They are lower period than the winter wave conditions, although magnitudes of wave heights are similar.For the winter scenario, wave direction is more variable with waves over the sandbank incident from between the west and north for much of the more energetic times.Tidal boundaries were taken from the DHI global tidal atlas and elevations specified at all boundaries.The elevations varied along the boundaries based on the global tidal atlas data.A wave model created by ABPMer for the Pentland Firth and Orkney Waters using MIKE21 SW was run to provide input wave conditions at all model boundary conditions.Model performance was evaluated by comparison with ADCP data in the centre of the Pentland Firth for current data and a wave buoy to the west of the Pentland Firth.No calibration was conducted for the wave model since only one comparison point was available.Instead it was assumed better to rely on the default values and accuracy of model physics rather than tune for a solitary point which may not be representative of the domain as a whole.Comparison between measured and modelled wave parameters was conducted for the two test periods of 12–27 January 2012 and 6–21 June 2012.Validation was conducted using the coupled wave and tidal model.Measured wave parameters were available on an hourly basis.The coefficient of determination and root mean squared error values for the different parameters and test cases are listed in Table 1.Visually, the model well represents the shape of the wave height record.For the winter period there is a slight over prediction of wave height, especially during storm periods.This over prediction is less in the summer period as evidenced by the lower RMSE.For both summer and winter the higher frequency variability is not represented.For the winter wave direction there is a bias of 8°: the mean of the measured data during this period is 315° and the mean of the modelled data is 307°.A similar bias is shown in the summer wave direction plots.Fig. 10 shows comparison of model results against ADCP data for the three sites marked on Fig. 
2.The ADCP data consisted of time series of 10-min averaged velocity profiles which were depth averaged for the purpose of validation.The ADCP data spanned 30 days from 14/09/2001.Visually the comparison is good, root mean square errors were from 0.26 to 0.33 ms-1, which is considered acceptable.A lag of approximately 7 min between modelled and measured data was observed.At sites one and three, the model over predicted current speeds.For these two sites peak flood and ebb currents are asymmetrical and maximum currents occur on opposing halves of the tidal cycle, caused by presence of a current jet between the two islands in the Pentland Firth This jet is present in the western half of the Pentland Firth on the ebb and on the eastern half on the flood.Undocumented communication suggests that there may be errors in the ADCP measurements at times of peak current caused by unwanted movement of the sub-surface float to which the ADCP was attached.Therefore, it is difficult to determine whether the discrepancy is entirely down to poor model performance.No sediment transport or bed level data was available to calibrate the sediment transport module for this study.However, confidence can be gained by the good validation of waves and currents which means direction of transport is likely to be correct.Additionally, the model equations and architecture have been validated against analytical solutions within the model documentation and against measured sediment transport for combined waves and currents .Various authors have demonstrated the ability of the MIKE suite of models to replicate sediment transport in real world conditions with good results.In this section results from the numerical modelling are presented.The parameters considered are: wave height and direction; depth averaged current velocities; total load magnitude; vertical bed level change; and total sandbank volume change.Focus is primarily given to five points over the sandbank of interest.Parameters are plotted as time series where the parameters are averaged every 10 min.Values at the tested points were calculated via interpolation from surrounding nodes which was conducted automatically within the MIKE software.The five points are shown in Fig. 11.One point is a central point on the crest of the sandbank, one on each lateral flank and one on each longitudinal flank.Particular attention is given to the point on the crest.Due to the irregular morphology of the sandbank, sensitivity of the results to point location over the crest was assessed and found to not dramatically impact results.Predicted wave heights in the inner sound show substantial tidal modulation, especially for the summer case.Fig. 
12 shows predicted wave heights in the Inner Sound for both summer and winter scenarios overlaid on the tidal current speeds.Data is taken from the point on the crest of the sandbank.Drops in wave height are coincident with maximum tidal currents.This pattern is much less noticeable for the winter case.The difference is due to the wave direction.Modelled wave direction in the inner sound for the summer case is consistently from the east which is aligned with and opposed to flow direction of peak flood current speed.Wave direction in the winter case is more variable, being from the north west for much of the time and from the east at the start and end of the time period.A tidal modulation of direction is also observable for both scenarios.Despite the obvious tidal influence on wave conditions, deployment of turbines does not alter currents sufficiently to impact on wave conditions over the sandbank: differences in model prediction of significant wave height are typically less than 2 cm and at most 5 cm when tidal turbine energy extraction is included.Inclusion of the wave module can alter simulated hydrodynamics over the tested sandbank, although noticeable changes only occur under certain storm conditions.Fig. 14 shows plots of depth averaged u and v velocities for the point on the crest of the tested sandbank for both scenarios time periods and the difference caused by inclusion of waves.For the summer case it can be seen that there is minimal difference in depth averaged velocities throughout the record with differences typically much less than 0.01 ms-1.For the winter case, one storm event shows differences in depth averaged velocities of over 0.1 ms-1.This event occurs when waves are incident from the north-west, events from the east do not cause the same difference.The points extracted on the sandbank flanks showed similar results.The primary objective of this paper is to ascertain whether inclusion of waves in simulations affect the impact of TSTs on morphodynamics.To answer this, attention is given to the 6 day subsection of the winter scenario where inclusion of waves are shown to alter u,v velocities to the greatest extent.Fig. 
15 shows both the change in u,v, velocities caused by energy extraction and the difference in that change when waves action is included in the simulation.Points on the north-western flank, the crest, and the south-eastern flank are considered.For the north-western flank, the differences in velocity are primarily positive and correspondingly the change caused by wave action is also primarily positive.The point extracted on the south eastern flank shows the opposite trends whereas the point extracted on the crest is more symmetrical between positive and negative change.The change in impact is relatively small however, with change being within ±5% of the tide only impact for over 80% of the time.There is little clear shape to this difference in impact.Time series of the magnitude of total load sediment transport volumes and the impact of energy extraction show broadly similar patterns for all four scenarios although magnitude of total load varies.Therefore just the summer case with no waves is described here.There is an asymmetry in total load magnitude between flood and ebb tides.For the points on the centre, SE and SW, magnitude of total load is greater on the flood tide.The opposite is true for the NW and NE points.Magnitude of total load is greatest for the point on the SE flank which is furthest into the main channel.Implementation of turbines reduces the magnitude to total load transport.There is still an asymmetry in the total load transport, however for all 5 points the magnitude of total load transport is greatest on the flood tide.This change represents the removal of the residual gyres as described in previous work .The relative magnitude of the total load between points is altered with greatest magnitude of transport observed on the SW point of the case with turbines.The shape of change caused by turbines is not uniform between points.For the central point for the first part of the flood tide there is a reduction in magnitude, with an increase in magnitude for the second part.Inclusion of wave action increases the magnitude of sediment transport.This is the case for all tested cases at all five points over the sandbank.Fig. 17 shows time series of the difference between tide plus wave and tide only driven transport for both summer and winter with and without turbines.The differences are asymmetric with greater differences for the flood tide when currents are directed towards the east for all five points.The magnitude of difference is significantly larger than the magnitude of the tide only total load transport.Morphological changes are variable in both direction and magnitude over the tested sandbank.It is believed this variation in direction is caused by the large sand waves present on the sandbank.Fig. 19 shows an example of this: changes to bed level over the winter test case with waves included is shown for both the natural and energy extraction cases.Only a close-up of the considered sandbank is shown in the figure.For the natural case change is focused on the southern flank of the sandbank whereas for the energy extraction case change is focused over the crest.Further examination of the spatial variation in bed level changes is presented in Fairley and Karunarathna .Examination of time series of bed level change shows that similar responses are predicted for both the summer and winter scenarios.The three points on the longitudinal axis show similar patterns, while the points on the lateral flanks show different behaviour.Therefore in Fig. 
20 only the winter scenario is presented for the central, SE and NW points.For clarity of behaviour, bed level is plotted such that the bed level at t = 0 is set to 0 for all points.Certain patterns are consistent between all points and scenarios: the semidiurnal variability in bed level is greater for the scenarios including wave in their simulations.Rates of change are faster for start and end of the time series during spring tides and flatter in the middle during neap tide.Differences in bed level are greater for the points on the two flanks compared to the central flank.At the central point, the scenarios including waves show that despite differences with and without turbines over the tested time period the end result is very similar with an accretion approaching 0.03 m. For the tide only scenarios, erosion is shown and inclusion of turbines reduces this erosion.Different behaviour is shown on the NW flank point: for the no turbine case there is minimal change or slight accretion whilst when turbines are included erosion is shown with greater erosion when waves are included.On the south eastern flank, the no turbine cases both show accretion, with greater magnitude for the scenario with waves included.Implementation of turbines reduces the level of accretion; the reduction is a similar level for both tide-only and wave scenarios.In order to provide information on bulk changes to the tested sandbank, Table 2 shows volumetric changes over the spring neap cycles for the whole sandbank.Not only are the calculated volume changes for the eight cases presented but differences in volume change between scenarios also given.Differences caused by turbine implementation are shown as are differences in volumetric change when waves are included in the simulation.The area of the sand bank encapsulated by the −25 m MSL was used in the volume calculations.For reference the total volume of the sandbank above the −25 m contour is ∼658,000 m3 and thus the largest volumetric change is about 1.5% of the total volume for the tested 14 day period.For the tide only cases there is a reduction in volume over the two tested spring neap cycles.Inclusion of wave action reverses this and there is an increase in volume over the sandbank.There is a greater increase in volume for the winter case.For the tide only simulations turbines implementation reduces the amount of erosion, for the simulations with waves turbines cause an increase in accretion; thus in both cases there is a positive difference in volume change caused by turbines.The difference in volume change caused by inclusion of turbines ranges from 0.07% to 0.4% of the total volume.The predicted impact is greater for the case with waves.The difference between the tide only and wave cases are greater than the difference between the turbine and no turbine cases being between 1 and 1.8% of the total sandbank volume.Inclusion of wave action increases magnitude of sediment transport.Given the minimal changes to the depth averaged hydrodynamics it is believed that this increase is primarily caused by the enhanced mobilisation of sediment caused by the orbital velocities.Reductions in sandbank volume are predicted for the tide only cases and increases in sandbank volume when waves are activated.This suggests there may be an interplay between periods of calm and periods of wave activity in the long-term stability of the sand bank.Similar interplay has been previously demonstrated by the authors for sand banks in the Bristol Channel .Predicted volumetric changes are 
up to 1.5% of the sandbank volume.Given that this change is over a two-week period, this is a significant change.Assumptions of spherical particles are made in the sediment transport calculations.McIlveny et al. found that the majority of particles are plate shaped, hence having lower form drag and greater resistance to motion.This means that sediment transport and morphological change may well be over predicted in an absolute sense.However, it is expected that direction of change and sediment transport pathways will still be correctly predicted.Given the comparative nature of this study, it is believed that the conclusions are still valid.This research suggests that for tidal stream sites with energetic wave climates, accurate modelling of impact to morphodynamics may require inclusion of wave action in simulations.Greater changes to baseline conditions are observed when waves are included compared to the change when turbines are implemented.Moreover, the change in impact when waves are included is not linear, and hence the contribution to change caused by waves cannot be simply added to hydrodynamic simulations with and without turbines.The results show the impact of the large sand waves on the sandbank on the predicted patterns of erosion and accretion, and these dictate that the patterns of direction of change are similar with and without wave action included.Inclusion of wave action in modelling leads to increases in computational expense, time needed for model set-up and additional data requirements for boundary forcing and calibration studies.Therefore, the decision to include waves is not a trivial one.The relative importance of waves will depend on the wave exposure of the mobile sediment receptors for a given project.Thus, while wave action should not be ignored, it is recommended that assessment of wave climate in these regions is undertaken prior to the modelling decision.The numerical model used in this study does not include the influence of waves on apparent bed roughness that is felt by currents.Wave generated turbulence and the interaction of the wave and current boundary layers will increase the apparent roughness felt by the current and hence reduce current speeds.Omission of this physical phenomenon means that the effect of waves on tidal stream turbine impact may be under represented.This study has investigated the impact of TST energy extraction on a sub-tidal sandbank in the Inner Sound of the Pentland Firth and focused on whether it is necessary to include waves in the simulation of TSTs and their impact on morphodynamics.For the Inner Sound of the Pentland Firth, inclusion of TSTs at the tested level has minimal impact on the wave field.Since the tested level was at the higher limits of likely extraction, the impact of tidal stream turbines on wave climate is seen as unimportant here.Inclusion of wave action can alter tidal currents; however, this alteration largely depends on the wave direction.Minimal differences in tidal current are seen for waves incident from the east, whereas a more noticeable difference is observable when waves are incident from the north west.The difference caused by inclusion of waves on turbine impact on hydrodynamics is small, typically less than 5% of the impact predicted without waves.More consistent differences are observed in the predictions of sediment transport.Inclusion of wave action increases the magnitude of sediment transport.The difference in volumetric sea bed change caused by inclusion of waves in the simulation is greater than the difference in volume change caused by inclusion of turbines, and hence ignoring waves in simulations is likely to produce erroneous results in terms of magnitudes.However, the direction of volumetric change is the same for simulations with and without waves.These conclusions mean that it is recommended that investigators do not ignore the inclusion of waves in simulations of tidal stream turbines on morphodynamics a priori but rather assess the wave climate of a specific site before making a decision on inclusion of waves.The relative importance will depend on the wave exposure of different sites, the depth of the sandbanks and other environmental factors. | Extraction of energy from tidal streams has the potential to impact on the morphodynamics of areas such as sub-tidal sandbanks via alteration of hydrodynamics. Marine sediment transport is forced by both wave and tidal currents. Past work on tidal stream turbine impacts has largely ignored the contribution of waves. Here, a fully coupled hydrodynamic, spectral wave and sediment transport model is used to assess the importance of including waves in simulations of turbine impact on seabed morphodynamics. Assessment of this is important due to the additional expense of including waves in simulations. Focus is given to a sandbank in the Inner Sound of the Pentland Firth. It is found that inclusion of wave action alters hydrodynamics, although the extent of alteration is dependent on wave direction. Magnitude of sediment transport is increased when waves are included in the simulations and this has implications for morphological and volumetric changes. Volumetric changes are substantially increased when wave action is included: the impact of including waves is greater than the impact of including tidal stream turbines. Therefore it is recommended that at tidal turbine array sites exposed to large swell or wind-seas, waves should be considered for inclusion in simulations of physical impact. |
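The validation statistics quoted in the entry above (coefficient of determination and root mean squared error for wave height, and a mean directional bias of around 8° for winter wave direction) follow standard definitions and can be sketched as follows. The series are synthetic, R² is computed here as 1 − SSres/SStot (one common convention), and the directional bias wraps differences to ±180°; none of this reproduces the values of Table 1.

```python
import numpy as np

def rmse(obs, mod):
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    return float(np.sqrt(np.mean((mod - obs) ** 2)))

def r_squared(obs, mod):
    """Coefficient of determination, 1 - SS_res / SS_tot."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    return float(1.0 - np.sum((obs - mod) ** 2) / np.sum((obs - obs.mean()) ** 2))

def direction_bias(obs_deg, mod_deg):
    """Mean signed error in degrees, wrapped to [-180, 180) for directions."""
    diff = (np.asarray(mod_deg, float) - np.asarray(obs_deg, float) + 180.0) % 360.0 - 180.0
    return float(np.mean(diff))

# Synthetic hourly significant wave heights (m) and mean directions (deg)
obs_hs, mod_hs = [1.2, 1.8, 2.4, 3.1, 2.2, 1.6], [1.3, 1.9, 2.7, 3.4, 2.3, 1.7]
print(rmse(obs_hs, mod_hs), r_squared(obs_hs, mod_hs))
print(direction_bias([315, 320, 310], [307, 312, 303]))   # ~ -8 deg bias
```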
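The bulk volumetric changes reported in Table 2 amount to integrating bed-level change over the mesh elements of the sandbank lying above the −25 m MSL contour. A minimal sketch of that bookkeeping on an unstructured triangular mesh is given below; the element areas, bed elevations and bed-level changes are made-up numbers, and this is not the post-processing actually applied to the MIKE output.

```python
import numpy as np

def sandbank_volume_change(element_area, bed_elevation, dz, contour=-25.0):
    """Sum dz * area over elements whose bed lies above the cut-off contour.

    element_area  : element areas of the unstructured mesh (m^2)
    bed_elevation : bed elevation relative to MSL (m, negative below MSL)
    dz            : bed-level change over the simulation (m, positive = accretion)
    """
    area = np.asarray(element_area, float)
    inside = np.asarray(bed_elevation, float) > contour     # above -25 m MSL
    return float(np.sum(area[inside] * np.asarray(dz, float)[inside]))

# Made-up elements: areas (m^2), bed elevations (m MSL), bed-level change (m)
dv = sandbank_volume_change([520.0, 480.0, 650.0, 700.0],
                            [-22.0, -24.5, -26.0, -23.1],
                            [0.03, -0.01, 0.05, 0.02])
print(f"Volume change: {dv:.1f} m^3 ({100 * dv / 658000:.4f}% of ~658,000 m^3)")
```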
520 | Association study on IL-4, IL-4Rα and IL-13 genetic polymorphisms in Swedish patients with colorectal cancer | Colorectal cancer is one of the most commonly diagnosed cancer in the world with increasing incidence rate .Different genetic pathways which affect CRC initiation and progression have been described .Genetic variation, such as single nucleotide polymorphism, is important in individual variability in CRC susceptibility .The connection between inflammation and CRC initiation, progression is well-established .Communication between tumor cells and their microenvironment is thought to be crucial for tumor growth.Particularly, the interaction between tumor cells and infiltrating leukocytes is a powerful relationship that influences CRC progression and prognosis .Inflammation is driven by soluble factors such as cytokines and chemokines which are produced by tumor cells or by the cells recruited to the tumor microenvironment such as lymphocytes .Moreover, polymorphic variants of genes have been referred to as factors that mediate inflammatory response .The search for molecular biomarkers to facilitate early diagnosis, determine prognosis and help in the selection of personalized therapy for patients with CRC is ongoing .It is well established that a subgroup of patients with stage II CRC are at high risk for recurrence and should be considered as candidates for adjuvant chemotherapy .Cancer spread to lymphatic nodes, without evidence of distant metastasis, is classified as stage III with high risk of recurrent disease that motivates routine adjuvant chemotherapy .The decision to use adjuvant therapy could be made more rational using various genetic and molecular biomarkers to disclose subgroups of patients eligible for adjuvant therapy.The current used clinical and pathological markers, such as poorly differentiated tumor, lympho-vascular or perineural invasion, perforation, T4 growth, elevated CEA levels and if fewer than 12 lymph nodes are removed, have all weak prognostic significance .Interleukin-4 and interleukin-13 are two structurally similar cytokines produced by activated T helper type 2 lymphocytes and other different cell types such as B cells, mast cells, basophils and epithelial cells .Several lines of evidence conclude that IL-4 and IL-13 may suppress cancer-directed immunosurveillance and enhance metastasis and tumor invasion.Recent studies have shown that IL-4 and IL-13 have overlapping biological functions in relationship to inflammatory and allergic diseases but also in a wide variety of cancers including CRC .Binding of IL-4 or IL-13 to the Type II IL-4 receptor α which can exist on non-lymphoid cells, initiates the JAK/STAT signaling pathway, particularly STAT6 .Consequently, dysregulated IL-4 and IL-13 signaling is suggested to contribute to mediate inflammation and promoting of the cell survival and tumor proliferation .Various studies have shown an association between IL-4, IL-13 and IL-4Rα gene polymorphisms and diseases such as allergic and cancer diseases including CRC ."Some of these SNPs in the promoter region IL-4 rs2243250 and IL-13 rs1800925 and nonsynonymous IL-4Rα rs1801275 have been investigated to evaluate whether the polymorphic variants confer individual's susceptibility to CRC .However, the data from the published studies have yielded conflicting results.In the present study we hypothesized that functional SNPs in IL-4, IL-4Rα and IL-13 genes can affect CRC risk and long-term survival in patients with CRC.To test this hypothesis, we genotyped 
IL-4 rs2243250, IL-4Rα rs1801275 and IL-13 rs1800925 in healthy controls and CRC patients in a Swedish population.This present study comprised 466 patients with a mean age of 71 years from south-eastern Sweden who underwent surgical resection for primary colorectal adenocarcinomas at the Department of Surgery, Ryhov County Hospital, Jönköping, Sweden between 1996 and 2016."Clinical information was available from 464 patients and all data about clinicopathological characteristics was obtained from the patient's computerized files that cover all the healthcare providers in the region.Follow-up for the estimation of cancer specific survival ended on the date of death or on January 31, 2016.The tumors were located in the colon and rectum and were classified according to The American Joint Committee on Cancer classification system : stage I, stage II, stage III and stage IV.Key clinicopathological and demographics characteristics of the patients are summarized in Tables 2–4.Healthy blood donors at County Hospital Ryhov with no known CRC history from the same geographical region as the CRC patients were selected as control population.The group involved 232 males and 213 females with a mean age of 58 years.Blood samples for the patients were collected at the start of surgery and for the controls at the time of the blood donation.All blood samples were centrifuged to separate plasma and blood cells and stored at −78 °C until analysis.The investigation was approved by the Regional Ethical Review Board in Linköping, Linköping, Sweden and informed consent was obtained from each of the participants.In the present study, genomic DNA was isolated from blood samples using QiaAmp DNA Kit.The TaqMan SNP genotype assays were used for analysis of the IL-4 rs2243250, IL-4Rα rs1801275 and IL-13 rs1800925 single nucleotide polymorphisms.DNA was mixed with Taqman Genotyping Master Mix and was amplified using the 7500 Fast Real-Time PCR system.The performed amplification utilized an initial cycle at 50 °C for 2 min, followed by one cycle at 95 °C for 10 min and finally 40 cycles at 95 °C for 15 s and at 60 °C for 1 min.The manual calling option in the allelic discrimination application ABI PRISM 7500 SDS software, version 1.3.1 was then used to assign the genotypes.The differences in the frequencies of the IL-4, IL-4Rα and IL-13 gene polymorphisms between CRC patients and control subjects and between clinical characteristics within the CRC subgroups were analyzed using Chi-squared test."Survival analysis was performed by Kaplan-Meier analysis with the log-rank test and Cox's regression.Genotype data of controls were examined for Hardy-Weinberg equilibrium by Chi-squared test.Statistical analysis was performed using Stata Statistical Software Release 13, Starta Corp.College Station, TX, USA and SPSS software for Windows, version 14.0.Results were considered significant at p < .05.No significant differences in the genotype frequencies were observed between the cancer patients and the healthy controls for IL-4 and IL-4Rα.However, a significant difference was found for IL-13.Moreover, we found the rate of genotype T/T to be 3.2% and that of C/T + C/C to be 96.8% in healthy controls.In patients, the rate of T/T was 6.9% and the rate of C/T + C/C was 93.1%.This corresponds to an increased risk of CRC in association with T/T with an Odds Ratio = 2.27; 95% Confidence Interval =1.19–4.31, p = .012.The genotype distributions in the CRC patient and the healthy control group were not associated with demographic 
characteristics such as age and gender, with the exception that patients carrying genotype IL-4Rα rs1801275 A/A were more common in men (68.9%) compared to women (58.9%).Stratification analysis of associations between individual SNPs and other patient characteristics according to Tables II, III and IV showed no significant difference.The distribution of the healthy control genotypes for each polymorphism was in agreement with Hardy-Weinberg equilibrium.Based on data from our cohort with up to 20 years of follow-up, the Kaplan-Meier analysis revealed no difference in cancer-specific survival associated with the SNP of IL-4Rα or IL-13.However, Kaplan-Meier analysis showed that the cancer specific survival differed between C/C and CT + TT for the IL-4 SNP.The carriers of the T allele were associated with the highest risk of CRC death with a hazard ratio of 1.57, 95% CI 1.07–2.29, p = .020.In multivariate analysis we also found the highest risk of CRC for the carriers of the T allele with HR = 1.57, 95% CI 1.06–2.36, p = .024.Furthermore, established features such as T4-stage tumor, low differentiation and < 12 lymph nodes were all found to be associated with poor outcome.Stratification analysis of associations between the C/C and CT + TT genotypes for the IL-4 SNP and different TNM stages with regard to cancer specific survival showed no significant difference in stages I, II and IV.However, we found a difference in stage III CRC.Moreover, compared with the C/C carriers, the carriers of the T allele had a poor prognosis in stage III with an HR = 2.17, 95% CI 1.12–4.22, p = .022, and after adjustment for the covariates an HR = 2.13, 95% CI 1.04–4.38, p = .039.Identification of genes involved in the genetic predisposition to or progression of CRC is important in clinical practice.Current prognostic indicators are still ineffective for identifying stage II and stage III CRC patients with high risk of recurrence after resection with curative intent .Cytokines expressed in CRC cells or in the tumor microenvironment seem to play an important role in the local immunoregulation .To the best of our knowledge, there are limited studies in a Swedish population on whether IL-4, IL-13 and IL-4Rα gene polymorphisms can affect CRC risk and long-term survival in patients with CRC.The underlying mechanisms of how IL-13 rs1800925 affects colorectal carcinogenesis are not known, but a significant association of the IL-13 rs1800925 T/T genotype with IL-13 production has been reported .We observed a significant difference in genotype distribution between patients with CRC and healthy controls for IL-13 rs1800925.Moreover, we found a higher rate of the T/T genotype in patients in comparison with healthy controls, which was significantly associated with a higher risk of CRC.These findings are consistent with data from another study in a Polish population .Given that IL-13 and IL-4 share IL-4Rα and are involved in the same signaling pathways , the modulation of these pathways could potentially affect susceptibility to CRC risk by other gene polymorphisms in IL-13 or in combination with IL-4/IL-4Rα gene polymorphisms.In the present study, no significant differences in the genotype frequencies were observed between the cancer patients and the healthy controls for IL-4 rs2243250 and IL-4Rα rs1801275.These results are in line with other studies conducted in Spanish and German populations for IL-4Rα and in Polish, Swedish and Spanish populations for IL-4.Based on data from our cohort with up to 20 years of follow-up, the Kaplan-Meier analysis
revealed that the cancer specific survival differed between genotypes for IL-4 rs2243250.In multivariate analysis we found that the carriers of the T allele were associated with the highest risk of CRC death.Specifically, comparing between different TNM stages we found that the carriers of the T allele revealed the highest risk of CRC death in stage III.Previous studies have demonstrated that T allele of IL-4 rs2243250 polymorphism can increase gene transcription of IL-4 and thereby increase the level of IL-4.One may speculate that this mechanism may be involved in our results showing that the carriers of the T allele were associated with the highest risk of CRC death and shorter survival.However, this observation seems to be in contrast with a study of Wilkening et al. showed that C allele was associated with a shorter overall survival.The difference in results may be due to Wilkening et al. used outcome in terms of overall survival while we used outcome in terms of cancer specific survival after adjustment for several covariates in a multivariate model.In addition, we had access to nearly 50% more patients.Interestingly, IL-4 rs2243250 polymorphism investigated in the present study showed a prognostic value in stage III CRC patients where the carriers of the T allele had a poor prognosis.Nevertheless, prospective trials are needed to validate this genetic biomarker.There are multiple distinct genetic pathways to CRC development.Later stages of cancer differ from early-stage cancer due to different activated pathways .Intra-stage variability in outcomes has been observed that cannot be accurately predicted by the TNM staging system.It is well established that a subgroup of patients with stage II CRC are at high risk for recurrence .Our data raise the question how the difference of survival in stage III CRC of the two subtypes defined by C/C and CT + TT carriers for IL-4 rs2243250 shall be interpreted.An explanation awaits further research, but we speculate that other molecular events within the subtypes can be modified by different IL-4 levels and contribute to the observed subtype-specific survival differences.In addition, the communication between tumor cells and their microenvironment and the interaction between tumor cells and infiltrating leukocytes are powerful relationships that influences CRC progression and prognosis .Consequently, it is likely that IL-4 acts in concert with additional factors in the microenvironment to regulate the tumor-promoting functions of tumor-associated macrophages .The strength of this study is a well-characterized patient cohort with long follow-up time.A further observation is that the CRC cases and controls were selected from one hospital, which may not be representative of other populations.However, both populations came from a defined geographical region which may represent the general population in Sweden well.Our studies have some obvious limitations such as that our patient group carriers a low number of TT genotypes and that the size of the study population is relatively small and that some stratifications were performed that increased the likelihood of false positive outcomes.Additional larger investigations using larger patient group are required to unequivocally determine the role of these SNPs on CRC susceptibility.We will also in the future investigate whether our results are consistent with other ethnic populations and may differ between populations from different regions.In conclusion, our results suggested that IL-13 SNP rs1800925 was 
associated with an increased risk of CRC in a Swedish population. However, this polymorphism does not appear to play a role in the progression of CRC. Moreover, the IL-4 SNP rs2243250 was closely associated with poorer patient survival and could be a useful prognostic indicator for patient selection in stage III CRC. However, our study must be considered a pilot study, and further studies with more cases are warranted to evaluate the clinical significance of our findings. The authors declare that they have no competing interests. | Background: Interleukin 4 (IL-4) and interleukin 13 (IL-13) are anti-inflammatory and immunomodulatory cytokines which share the common cellular receptor IL-4Rα and are involved in the same signaling pathways. Our purpose was to assess whether genetic variants within IL-4, IL-13 and IL-4Rα are associated with the risk or clinical outcome of colorectal cancer (CRC). Methods: Three single nucleotide polymorphisms (SNPs) were screened in 466 patients with CRC and 445 healthy controls. The selected SNPs were IL-4 SNP rs2243250, IL-4Rα SNP rs1801275 and IL-13 SNP rs1800925. Results: We found that the T/T genotype in the IL-13 gene was associated with a higher risk of CRC. Kaplan-Meier analysis showed that cancer-specific survival differed between C/C and CT + TT carriers of the IL-4 SNP. Moreover, carriers of the T allele had the highest risk of CRC death, with a hazard ratio (HR) of 1.57 (95% CI 1.06–2.36, p = .024). The observed effect of the T allele was restricted to stage III patients. Conclusion: Our results indicate the IL-13 SNP rs1800925 as a risk factor for CRC, and suggest that the IL-4 SNP rs2243250 could be a useful prognostic marker in the follow-up and clinical management of patients with CRC, especially in stage III disease. |
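To make the statistical comparisons described above concrete, here is a minimal, hypothetical sketch (not the authors' code) of a Hardy-Weinberg equilibrium check for control genotypes and a Cox proportional-hazards comparison of T-allele carriers versus C/C carriers; the use of SciPy and lifelines, the column names and the toy data are all assumptions made for illustration.

```python
# Minimal sketch of the two analyses described above; names and data are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import chi2
from lifelines import CoxPHFitter

def hwe_chi_square(n_AA: int, n_Aa: int, n_aa: int) -> float:
    """p-value of a 1-d.f. chi-square test for Hardy-Weinberg equilibrium."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)                     # frequency of the A allele
    expected = np.array([p**2, 2 * p * (1 - p), (1 - p)**2]) * n
    observed = np.array([n_AA, n_Aa, n_aa])
    stat = ((observed - expected) ** 2 / expected).sum()
    return float(chi2.sf(stat, df=1))

# Hypothetical genotype counts for healthy controls
print("HWE p-value:", hwe_chi_square(250, 160, 35))

# Hypothetical survival data: follow-up time (months), death from CRC (1/0),
# T-allele carriage (CT + TT = 1, C/C = 0) and tumor stage as a covariate.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "time": rng.exponential(60, size=200),
    "event": rng.integers(0, 2, size=200),
    "T_carrier": rng.integers(0, 2, size=200),
    "stage": rng.integers(1, 5, size=200),
})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")     # remaining columns act as covariates
cph.print_summary()                                     # hazard ratios with 95% CIs and p-values
```

In practice, adjusted hazard ratios of the kind reported above would come from a fit of this kind with stage, differentiation and lymph-node status included as covariates.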
521 | Datasets on hub-height wind speed comparisons for wind farms in California | The datasets reported in this article contain hub-height wind fields, with a special focus on wind farms in California. Two modeling products, three reanalysis datasets, and two observational datasets are described in the article. The interpolation method for calculating hub-height wind speed is also presented, and can potentially be applied to other studies. Power curves used for calculating wind energy capacity factors at each wind farm location are also provided. The data provided in this article include two simulations using the Variable-Resolution CESM (VR-CESM) model. CESM version 1.5.5, a fully coupled atmospheric, land, ocean, and sea ice model, was utilized. Both simulations used the F-component set, which prescribes sea-surface temperatures and sea ice but dynamically evolves the atmosphere and land surface component models. The atmospheric component model is the Community Atmosphere Model, version 5.3, with the spectral-element dynamical core in the variable-resolution configuration. The VR model grid used for this study, depicted in Fig. 2 of the reference article, was generated for use in CAM and CLM with the open-source software package SQuadGen. On this grid the finest horizontal resolution is 0.125°, with a quasi-uniform 1° mesh over the remainder of the globe. Two simulations were conducted using this grid structure. First, the historical run covers the period from October 1st, 1979 to December 31st, 2000, with the first three months discarded as spin-up, for a total of 21 years. This historical time period was chosen to provide an adequate sampling of inter-annual variability, to coincide with the time period covered by the other modeling and reanalysis datasets, and because observed sea surface temperatures were only available through 2005. For projecting future wind energy change, our mid-century simulation was run with the "business as usual" Representative Concentration Pathway 8.5 (RCP8.5) from October 1st, 2029 to December 31st, 2050, again discarding the first three months for a total of 21 years. Greenhouse gas and aerosol forcing are prescribed based on historical or RCP8.5 concentrations for each simulation. More details on VR-CESM can be found elsewhere, and the model has been applied in previous studies. The Det Norske Veritas Germanischer Lloyd (DNV GL) Virtual Met product is derived from a hybrid dynamical-statistical downscaling system based upon the Weather Research and Forecasting (WRF) model and an analog-based ensemble downscaling method. A coarse-resolution WRF simulation is run for the entire period to be downscaled, while a nested version of the same model is run at high resolution for only a subset of that period. The period over which the coarse and high-resolution runs overlap is called the training period, while the remaining portion is termed the downscaling period. For each time in the downscaling period, the best-matching coarse estimates over the training period are found. The downscaled solution is then constructed from the set of high-resolution values that correspond to the best-matching coarse analogs. This method is based upon Delle Monache et al.
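The analog-based downscaling step just described can be illustrated with a short sketch; this is not DNV GL's implementation, and the array shapes, Euclidean distance metric and number of analogs are assumptions chosen only to show the idea of matching coarse states in the training period and averaging the corresponding high-resolution fields.

```python
# Illustrative sketch of analog-ensemble downscaling; shapes and settings are hypothetical.
import numpy as np

def analog_downscale(coarse_train, fine_train, coarse_target, n_analogs=10):
    """coarse_train:  (n_train, n_coarse_features) coarse predictors in the training period
    fine_train:    (n_train, n_fine_points) high-resolution fields at the same times
    coarse_target: (n_target, n_coarse_features) coarse predictors to be downscaled
    Returns an (n_target, n_fine_points) array of downscaled estimates."""
    downscaled = np.empty((coarse_target.shape[0], fine_train.shape[1]))
    for i, state in enumerate(coarse_target):
        # Distance between the target coarse state and every training coarse state
        dist = np.linalg.norm(coarse_train - state, axis=1)
        best = np.argsort(dist)[:n_analogs]              # indices of the best analogs
        downscaled[i] = fine_train[best].mean(axis=0)     # ensemble mean of their fine fields
    return downscaled

# Toy example with random data
rng = np.random.default_rng(0)
coarse_train = rng.normal(size=(1000, 4))
fine_train = rng.normal(size=(1000, 25))
coarse_target = rng.normal(size=(100, 4))
print(analog_downscale(coarse_train, fine_train, coarse_target).shape)  # (100, 25)
```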
The WRF simulation used telescoping, one-way interacting computational grids. Their respective horizontal grid increments are 20 km and 4 km, with the 4 km grid centered over California. The initial and lateral boundary conditions are specified using MERRA-2. The 20 km grid was run for the entire 01 Jan 1980–31 Dec 2015 period and generated hourly output, while the nested 4 km grid was run only during the last year of the full simulation. The high-resolution downscaled dataset is constructed for the entire 36-year period using the 4 km training data and the 20 km simulation. The result is an hourly time series at each 4 km grid point from January 1st 1980 to December 31st 2015. Wind speed and direction at hub heights, including 50 m, 80 m and 140 m, are output. DNV GL served solely as a data provider and is not responsible for any results derived from these data. The Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) is a reanalysis product for the satellite era using the Goddard Earth Observing System Data Assimilation System Version 5, produced by the Global Modeling and Assimilation Office at NASA. MERRA-2 integrates several improvements over the first-version MERRA product. For the fields used in this study, the spatial resolution is ~55 km with 3-hourly output frequency from 1980 to present. Vertical interpolation of MERRA-2 data was performed to calculate hub-height wind speed. Variables used in the vertical interpolation were extracted from two subsets: 3-hourly instantaneous pressure-level assimilation and hourly instantaneous single-level assimilation. The Climate Forecast System Reanalysis (CFSR) from NCEP is a global, coupled reanalysis that spans from 1979 to present, with ~55 km spatial resolution and 6-hourly temporal resolution for the relevant wind fields. Notably, this temporal resolution is the lowest of the five datasets used. The analysis subset was chosen for vertical interpolation at 6-hourly frequency. The North American Regional Reanalysis (NARR), another NCEP reanalysis product, features a slightly higher spatial resolution of ~32 km. It is a dynamically downscaled data product with spatial coverage over North America and 3-hourly temporal resolution from 1979 through present. Hub-height wind speeds from NARR were also calculated at this frequency. The Integrated Surface Database (ISD) from NOAA's National Centers for Environmental Information was used for assessment of hourly 10 m wind speed from the model and reanalysis products. The ISD observational stations are distributed globally, with the highest concentration of stations found in North America. Stations across California that provide full-year data were selected. As not all stations had continuous temporal coverage between 1980 and 2000, each year was calculated separately so as to maximize the number of available stations. To compare 10 m wind speeds from the model and reanalysis datasets to ISD, the nearest grid point values to each of the ISD stations were used. Coastal stations were neglected in the analysis of 10 m winds, due to coastal biases that tend to occur in near-surface coarse-resolution reanalysis. These biases tend to emerge because similarity theory is typically employed to extract 10 m wind speeds, which produces distinctly different results over the ocean and over the land surface. Upper-air soundings from all the available locations across California are incorporated into the comparison. The three available sounding locations in California are OAK at Oakland airport, VBG at Vandenberg Air Force Base, and NKX at San Diego. The time
period from the first two stations spans 1980 to 2000. NKX only has data available from September 1989, so only the full years 1990–2000 were assessed. Soundings were collected at 12-hourly intervals at 00Z and 12Z, and logarithmic vertical interpolation was performed to calculate hub-height wind at each sounding location. However, this logarithmic interpolation from sparsely sampled profile data could introduce uncertainties into the calculation. The wind speed at each wind farm location was determined using the nearest grid point values to each wind farm site. To obtain hub-height wind vectors, vertical interpolation was performed on the 3-hourly VR-CESM, 3-hourly MERRA-2, 6-hourly CFSR, and 3-hourly NARR products from 1980 to 2000. As mentioned above, hub-height wind output is available directly from the DNV GL Virtual Met data product. Vertical interpolation of VR-CESM data uses the 3D wind field on hybrid surfaces and the 10 m wind speed, which is computed from similarity theory. For VR-CESM data, the interpolation procedure is as follows: the CAM5 hybrid coordinates are first converted to pressure coordinates within the column being analyzed; the height of each pressure surface above ground level is computed by subtracting the surface geopotential height from the geopotential height of the model level; the two model levels that bound the desired interpolation altitude are selected (or, if the interpolation altitude is below the lowest model level, the lowest model level and the 10 m wind speed field are used); and logarithmic interpolation is applied to obtain the wind speed at the desired interpolation altitude. The interpolation was done by fitting a logarithmic profile through the two levels bounding the altitude of interest and then evaluating that profile at the desired altitude. Vertically interpolated wind speeds from MERRA-2, CFSR, NARR, and the sounding observations all followed a similar procedure and were calculated at three hub heights. Figs. 1–4 show the interpolated hub-height wind speed at 50 m and 140 m for northern and southern California. For wind speed at 80 m, and further wind speed analysis, please refer to the cosubmitted research article. Wind turbines can contribute energy via the electric power system; this contribution is the total amount of usable energy supplied by the turbine per year. The capacity factor (CF) is often defined as the actual power output divided by the maximum amount of wind power that can be generated by the system. The relationship between wind speed and CF is not continuous, since turbines have cut-in and cut-out wind speeds at which power production begins and ceases, and this is represented by the different power curves associated with each of the wind farm sites. The calculated CF at each wind farm site is based on the characteristic power curve at that site and does not include electrical losses during the power generation process. The normalized power curves at each wind farm site, with each value corresponding to a 1 m/s wind speed bin starting from 0 m/s, are listed in Table 1. To calculate the CF, the wind speed is matched to the corresponding wind speed bin, the power curve value for that bin is applied, and the result is multiplied by 100 to convert it to a percentage. For further details on the CF analysis, please refer to the reference article.
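The logarithmic vertical interpolation and the power-curve-based capacity factor calculation described above can be sketched as follows; this is an illustrative reading rather than the authors' code, and the function names, the example power curve and the treatment of the normalized power-curve value for a bin as the capacity-factor fraction are assumptions.

```python
# Minimal sketch of hub-height log interpolation and a binned capacity-factor lookup.
import numpy as np

def log_interp_wind(z_target, z_low, u_low, z_high, u_high):
    """Fit u(z) = a + b*ln(z) through (z_low, u_low) and (z_high, u_high)
    and evaluate it at z_target (heights in metres above ground level)."""
    b = (u_high - u_low) / (np.log(z_high) - np.log(z_low))
    a = u_low - b * np.log(z_low)
    return a + b * np.log(z_target)

def capacity_factor(wind_speed, power_curve):
    """power_curve[i] is the normalized output for the 1 m/s bin [i, i+1)."""
    i = min(int(wind_speed), len(power_curve) - 1)   # index of the wind speed bin
    return power_curve[i] * 100.0                    # percent

# Example: interpolate to 80 m hub height from winds at 10 m and ~150 m
u80 = log_interp_wind(80.0, 10.0, 5.2, 150.0, 8.1)
print(f"80 m wind speed: {u80:.2f} m/s")

# Hypothetical normalized power curve (25 bins covering 0-24 m/s)
curve = np.concatenate([np.zeros(3), np.linspace(0.05, 1.0, 10), np.ones(9), np.zeros(3)])
print(f"Capacity factor: {capacity_factor(u80, curve):.1f}%")
```

A logarithmic fit is a natural choice here because, under near-neutral conditions, wind speed in the surface layer varies approximately logarithmically with height.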
| This article describes data related to the research article entitled "The future of wind energy in California: Future projections with the Variable-Resolution CESM" [1], with reference number RENE_RENE-D-17–03392. Datasets from the Variable-Resolution CESM, Det Norske Veritas Germanischer Lloyd Virtual Met, MERRA-2, CFSR, NARR, ISD surface observations, and upper-air sounding observations were used for calculating and comparing hub-height wind speed at multiple major wind farms across California. Information on hub-height wind speed interpolation and the power curves at each wind farm site is also presented. All datasets, except the Det Norske Veritas Germanischer Lloyd Virtual Met, are publicly available for future analysis. |
522 | Using a rolling training approach to improve judgmental extrapolations elicited from forecasters with technical knowledge | Surveys suggest that forecasts based either wholly or partly on expert management judgment play a major role in company decision making. Sometimes these judgmental inputs take the form of adjustments to statistical forecasts, ostensibly to take into account special factors that were not considered by the statistical forecast. However, in some circumstances, judgment may be the only process involved in producing the forecasts. At times, a statistical forecast is even provided, but the expert chooses to ignore it. In some cases, judgment is used to extrapolate time series data to produce point forecasts when no other information is provided. This type of task has been the subject of much research over the last thirty years, and a number of biases associated with judgmental extrapolation have been identified. These include tendencies to overweight the most recent observation and to see systematic patterns in the noise associated with series. Such biases can apply even when the forecaster has expertise, whether in the domain within which the forecasts are being made or in forecasting itself. This suggests that, when experts are called upon to make judgmental extrapolations, the elicitation process may benefit from the inclusion of devices that are designed to mitigate these biases. Studies in the expert knowledge elicitation (EKE) literature have examined a number of ways of designing elicitation methods so as to reduce the danger of biased judgments from experts, particularly in relation to the estimation of probabilities or probability distributions (Aspinall, 2010; Bolger & Rowe, 2014; Goodwin & Wright, 2014; Morgan, 2014, Chapter 11). Our focus here is on improving EKE in time series extrapolation. A variety of strategies have been explored in an attempt to mitigate biases in the elicitation of judgmental extrapolations. One promising strategy is to use performance feedback to train forecasters who already have technical expertise in order to improve the accuracy of their extrapolations. The use of feedback to enhance the quality of expert judgments has proved to be successful in other areas of EKE, such as weather forecasting, as well as in applications of the Delphi technique, where the feedback relates to the judgments of other experts. In time series extrapolation, while some studies, such as that of Goodwin and Fildes, have shown that feedback can lead to improvements in the accuracy of point forecasts, more research is needed to identify the most effective form of feedback for improving accuracy. This is a particularly important topic in demand forecasting, where software provides the expert with information on past errors. This paper reports on an experiment that was designed to examine the effectiveness of providing forecasters with rolling feedback on both the outcomes of the variable that they are attempting to predict and their forecasting performance. The objective is to provide a direct training scheme that enables forecasters who already have technical knowledge to understand the underlying pattern of the data better by learning from their forecast errors directly, thus improving the accuracy of their judgments. Two types of performance feedback were compared: feedback on the bias associated with the submitted forecasts, and feedback on their accuracy. The paper is structured as follows. First, a review of the relevant literature is presented. Details of the
experiment and the analysis and results follow.Finally, the practical implications of the findings are discussed, and suggestions are made for further work in this area.In judgmental forecasting, Sanders and Ritzman distinguish between expertise that is founded on contextual knowledge and that which is based on technical knowledge.Expertise relating to contextual knowledge comes from factors such as experience working in an industry or the possession of specific product knowledge.In contrast, expertise based on technical knowledge is present when a forecaster has a knowledge of formal forecasting procedures, including information on how to analyze data judgmentally.Sanders and Ritzman compared the forecasting accuracies of: managers who had contextual expertise but lacked technical expertise, forecasters who lacked contextual expertise but had technical expertise, and forecasters who lacked both contextual and technical expertise.They concluded that expertise based on technical knowledge had little value in improving the accuracy of judgmental forecasts relative to expertise based on contextual knowledge.However, many of the time series that they studied were highly volatile, and contextual factors, rather than time series components, accounted for much of their variation.The forecasters with technical expertise who took part in the study were not privy to these contextual factors.In a review of Sanders and Ritzman’s study, Collopy argues that people may not always be able to apply what they learn in a training process.He cites a report by Culotta, who found that even students who do well in calculus courses cannot apply what they have learned.In Sanders and Ritzman’s study, those who were counted as having technical knowledge had taken an elective course in forecasting, and may therefore have been subject to didactic learning, which is a relatively passive process.This is in contrast to experiential learning, which includes actively participating in the task for which one is being trained, reflecting on the experience, and learning from feedback.Thus, training of this type may be effective in obtaining improvements in accuracy for those with technical expertise.In order for experiential training to be effective, it needs to address the specific challenges of the task.Goodwin and Wright argue that three components of a time series influence the degree of difficulty that is associated with the judgmental time series forecasting task, namely: the complexity of the underlying signal, comprising factors such as its seasonality, cycles and trends, and autocorrelation; the level of noise around the signal; and the stability of the underlying signal.When there are trends in series, studies have consistently found that judgmental forecasters tend to damp them when making extrapolations.This phenomenon appears to apply both to experts working in their specialist field and to participants in experimental studies.This damping may occur either because forecasters anchor on the most recent observation and make insufficient adjustments from this, or because they are unable to handle non-linear change.However, damping may also be caused by forecasters bringing non-time series information, based on their knowledge or experience, to the task.For example, a forecaster’s prior experience may have demonstrated that the sales growth for products tends to be damped.Similarly, in the case of a downward trend in a sales series, people may expect a trend reversal to occur as action is taken to correct the 
decline.Complex seasonal patterns or cyclical components have also been found to lead to inaccurate judgmental forecasts.Several studies have suggested that judgmental forecasters often confuse the noise in the time series with the signal.For example, they often adjust statistical forecasts to take into account recent random movements in series which they perceive to be systematic changes that were not detected by the statistical forecast.Conversely, when systematic changes in the signal do occur, forecasters may delay their responses, perceiving the changes to be noise.Also, they may pay too much attention to the most recent observation, which will contain a certain amount of noise.It seems reasonable to expect that noise could also impair the detection of underlying trends and seasonal patterns, though this was not the case in two studies where the series were presented graphically.Learning through feedback could potentially mitigate these biases.As we indicated above, feedback is a key component of experiential learning, and has been shown to improve the accuracy of point forecasts.However, there are a number of different types of feedback that may be particularly relevant to the task of time series forecasting, and more research is needed to determine the type that is most effective and how it should be delivered.We might expect the effectiveness of different types of performance feedback to vary.Feedback on biases can provide a direct message that one’s forecasts are typically too high or too low, hence suggesting how they might be improved.This is likely to be beneficial for untrended series or series with monotonic trends.However, it may lead to an unwarranted confidence in one’s current forecasting strategy when a series has an alternating or seasonal pattern, because biases in different periods will tend to cancel each other out if an average across the signed errors is used.In contrast, feedback on accuracy provides no such direct message, and its implications may be difficult to discern.In order for forecasters to learn from accuracy feedback, they would need to experiment with alternative approaches, not specified by the feedback, and then establish whether these have improved the accuracy.This requires forecasters to compare their levels of accuracy across different periods, which adds to their cognitive burden.Thus, it seems unlikely that accuracy feedback will be conducive to rapid learning.This may explain the ineffectiveness of performance feedback that was found in a study by Remus et al., which consisted only of an accuracy measure.Task properties feedback relates to providing forecasters with statistical information on the nature of the task.In time series forecasting, this might involve providing the forecaster with the current estimates of the level, trend and seasonal indices obtained from the Holt–Winters method, for example.However, this would essentially modify the task to one of accepting or adjusting statistical forecasts.Task feedback has been researched widely elsewhere, and is not the topic of the current paper.Ultimately, any form of feedback, regardless of the type, is likely to be most effective in enhancing the judgments of those with technical expertise if it can be understood easily and quickly, and is salient, accurate and timely.We therefore propose and test a rolling training scheme, based on performance feedback.This has a number of innovations that are designed to address the problems associated with feedback that have been presented in earlier 
studies.Unlike these studies, we have not supplied metrics that summarise the ‘average’ performance over a given number of periods or tasks.Instead, a performance measure is supplied for each individual judgment made by the forecaster, so that there is no arbitrary censoring of earlier performances, and the balance between the sensitivity and stability of the feedback is no longer an issue.Furthermore, the feedback is ‘rolling’, so that a complete and growing record of the forecaster’s performance is presented and updated at regular intervals.These innovations are important because, as we have seen, a key problem with feedback based on ‘average’ metrics is that it can depend on the number of periods that contribute to the average.Also, when a time series contains cyclical or seasonal patterns, a tendency to forecast too low when the time series rises and too high when it falls will be masked by an ‘average’ metric.In the scheme proposed here, forecasters can link their errors to individual observations and patterns.They can also see easily whether their performance is improving over time without having to memorise previous values of the metric.The current research evaluates two judgmental forecasting approaches.Each participant provided judgmental estimates following both approaches, using a fully symmetric experiment, as will be discussed in Section 3.4.Most relevant studies that have focused on the impact of feedback for judgmental forecasting tasks have made use of simulated series.Moreover, many studies have not examined seasonal series, but have confined their attention to stationary and trended ones.Therefore, the current research focuses on real time series that collectively demonstrate a variety of characteristics.More specifically, 16 quarterly series were selected manually from the M3-Competition data set, so as to ensure the required characteristics.These were confirmed using autocorrelation function plots or Cox–Stuart/Friedman tests, or by fitting an appropriate exponential smoothing model, using all the data.In addition, half of the trended or seasonal series did not exhibit any significant pattern in the first two years, but did so later on.This selection was made in order to examine participants’ ability to recognise developing series characteristics and adapt.The 16 series were split into two categories, each containing eight series.These sets of series allowed for the implementation of a symmetric experimental design, which will be described in Section 3.4.Each set contained exactly two series with the same characteristics, as displayed in Table 1.For analysis purposes, the 16 series were split again into two sets of equal size in terms of noise, as measured by the standardised random component of a classical decomposition.Lastly, four additional series were used in the first stage of the experiment, in order to familiarise the participants with the system.The group of participants consisted of 105 undergraduate students who were enrolled in the Forecasting Techniques module at the School of Electrical and Computer Engineering at the National Technical University of Athens.As part of the module, the students had been taught principles of time series analysis, statistical and judgmental forecasting methods, and ways of evaluating forecasting performances.The experiment was introduced as an elective exercise, with the 50% of participants who produced the most accurate forecasts obtaining bonus credit.In order to attract a large number of participants, we decided to build a 
web application, rather than performing a standard laboratory experiment.The web application was designed specifically for the purpose of this experiment, using the ASP.NET framework for the web development of the front-end and a Microsoft SQL database for storing the time series data and the participants’ point forecasts.The Microsoft Chart Controls library was used for drawing line and bar graphs, as is discussed in the next subsection.The application was hosted in a secure web-server and participants could connect remotely through their internet-enabled personal computers via any web browser.Instead of splitting the participants into two groups, control and test, we adopted a symmetric experimental design, where each participant submitted forecasts for both UJ and RT.The sets of series A and B alternated randomly between UJ and RT, so that half of the series were forecast using UJ by half of the participants and using RT by the other half, and vice-versa for the other series.In order to avoid familiarity with the task, UJ and RT were presented to the participants interchangeably.This means that after a common warm-up round, half of the participants were asked to provide forecasts using the UJ approach for eight time series, then to submit their estimates using the RT approach for the remaining eight series at the next step, while the opposite was the case for the other half of the participants.This symmetric design allowed us to avoid any familiarity with the task effects that could have arisen if the two approaches had been presented in the same order for all of the participants.For the provision of feedback, each participant was assigned randomly to either the signed or unsigned percentage errors treatments.Of the 105 participants, 52 were given feedback on signed errors and 53 on absolute errors.All of the series were presented in a line graph format, using blue for the actual values and green for the submitted forecasts.While there has been no evidence on the relative superiority of graphical or tabular numerical formats, graphical representations are more common in modern forecasting support systems.Historical data points were kept unlabeled in terms of exact values, so that the participants could not export these values to a spreadsheet and use statistical approaches.This is a very important constraint, as the experiment took place in an unobserved environment and a graphical mode of presentation was the only way to guarantee that judgmental extrapolation was used.However, grid lines were provided in order to accommodate numerical estimations.Four text boxes were used for the input of judgmental forecasts for each lead time, while an update button could be used to refresh the graph, so that the participant could check his or her judgmental estimates graphically before submitting.Fig. 
1 presents two typical screenshots of the system implemented, both before and after the input of the four point forecasts. Including the warming-up round, the experiment involved three rounds, each of which is described in detail below. As has been noted, the UJ and RT rounds were presented in reverse order for half of the participants. Warming-up round: Each of the first four series was presented to the participants in turn, withholding the last four observations. The participants were then requested to provide judgmental point forecasts for the next four quarters. A short description of each series was provided, describing any historical patterns. Once the forecasts for each series had been submitted, the forecast errors for each point were calculated automatically and displayed in bar charts, using the color red. As this round was a 'warm-up', the forecasts thus elicited were not taken into account when the results of the study were analysed. Fig. 2 presents a screenshot showing the information provided to the participants after the four point forecasts for a series had been submitted. After completing each of the latter two rounds, the participants filled in a questionnaire, which included questions on their confidence in the accuracy of their submitted forecasts, their expected forecasting performance, the extent to which they had examined the graphs and series patterns, and the time spent in producing their forecasts. In addition, a final questionnaire asked participants about their familiarity with forecasting tasks, their level of forecasting expertise, their perceptions of the effectiveness of rolling training, and their motivation for providing accurate estimates. The two sets of questions posed are given in Table 2. All of the questions had five-step ordinal response choices. The responses were analysed in order to discover any relationships between the variables in question and the actual forecasting performances achieved in the respective rounds of the experiment. The results of this analysis are presented and discussed in Section 4.2. Overall, there is evidence that the RT approach results in statistically significantly better forecasting performance. The improvements are more prominent for series with high noise. Although gains of 5.72% and 9.20% were observed for stationary and trended series, respectively, these were not statistically significant at the 0.05 level. Focusing on the very first row of Table 3, where all horizons are considered, the only case in which RT performs worse is the seasonal series. Even though this difference is not statistically significant, suggesting that UJ and RT perform similarly, we attempt to understand the reason behind this result by examining separately series with and without evident seasonality in the very first years, as was discussed in Section 3.2. The results of this analysis suggest that RT might not be suitable for series with developing seasonality. In terms of the type of feedback provided to the participants, it is apparent that bias feedback demonstrates the most significant improvements, while the improvements for accuracy feedback are generally smaller and less consistent. One could argue that providing errors in an absolute format may lead to confusion, as the participants may not be able to evaluate this kind of information correctly. On the other hand, bias feedback for each point, in the form of signed bar charts, is easier to interpret and understand, and indicates a clear strategy for improving
one's forecasts. It is notable that bias feedback, which involved the provision of signed percentage errors for each individual period, improved the accuracy for seasonal series. It is unlikely that providing the mean of these percentage errors would have been as effective, because any tendency to over-forecast for some seasons and under-forecast for others would have been masked by the averaging process. Another very important observation is that RT results in improvements both for series with high noise and when longer horizons are examined. These gains are statistically significant at the 0.05 level when all types of feedback are pooled together. However, the differences between RT and UJ are not statistically significant at the 0.05 level for the shorter horizons and the low-noise series. Lawrence, Edmundson, and O'Connor suggested that, when the forecasting task is based on graphs, judgmental forecasts can be as good as statistical model forecasts, at least for the shorter horizons. In contrast, unaided judgmental forecasting is likely to be relatively inaccurate for longer horizons and series with high levels of noise. The use of a direct rolling training scheme improves graph-based judgmental long-term forecasting, building on the relative efficiency of judgmental over statistical approaches. The negative association between confidence level and MAPE in UJ changes to no correlation for RT. Moreover, participants tend to have lower expectations of the performance of their submitted forecasts when using RT than when using UJ. These outcomes are very important, as it is clear that RT leads participants to be more cautious in their expectations, thus potentially mitigating a well-known problem of judgmental forecasting, namely the underestimation of uncertainty. As we would expect, a propensity to examine graphs has a negative association with the MAPE, suggesting that improvements in forecasting accuracy are recorded as participants devote more time to this task. However, virtually no differences are observed between the two approaches in terms of the mean frequency of examining graphs and patterns. One would have expected that RT would motivate the participants to examine the graphs and series patterns more carefully; however, such was not the case. The forecasting performances achieved with both UJ and RT are associated with the time that the participants reported spending in producing the forecasts for each series: the more time they spent, the greater the accuracy they achieved. However, the correlation is stronger in the case of UJ, meaning that the forecasting performance achieved using the RT approach can be seen as more time invariant. Also, there is evidence that participants who were less accurate recognised that spending more time on the task might have resulted in a change of their forecasts. The same analysis was performed for the second set of questions. The majority of the participants found the RT approach to be either effective or very effective. However, familiarity with forecasting exercises, perceived effectiveness of RT and motivation to produce accurate forecasts were only weakly or moderately associated with forecasting accuracy. Interestingly, participants' self-reported level of expertise had a strong positive association with their realised MAPE, so that those who considered themselves to have greater expertise produced less accurate forecasts. Further work would be needed to establish why this was the case, but it is consistent with the Dunning–Kruger effect,
where relatively unskilled people mistakenly consider their level of ability to be higher than it really is. Clearly, such an effect would have important implications for EKE if choices are being made between experts' forecasts on the basis of their self-rated expertise. The key finding of this study is that, in tasks involving time series extrapolation where no contextual information is available, the judgmental forecasting accuracy of people with a technical knowledge of forecasting can be improved substantially by providing the forecasters with simple, understandable performance feedback. This suggests that training based on feedback can be a valuable element of the EKE process when time series need to be extrapolated. A number of characteristics of this feedback appear to be crucial. First, in order to be most effective, the feedback should relate to bias rather than accuracy. As was discussed earlier, feedback on bias provides a clear indication of how future forecasts might be improved, whereas feedback on accuracy does not provide any indication of possible improvement strategies. Nor does it provide an indication of whether any improvement in accuracy is even possible. For example, does an APE of 10% represent the limit of the accuracy that can be achieved, given the noise level, or is there scope for further improvement? Second, the attribute of the bias feedback that appeared to contribute most to its effectiveness was the feedback of a set of individual errors, rather than an average of these errors. In series where the signal has autocorrelated elements, such as seasonal series, judgmental biases may lead to positive errors at some stages of the cycle and negative errors at other stages. Presenting individual errors allows each observed bias to be associated with a specific period, and avoids the cancelling out of opposing biases that would be a feature of any averaging. Also, the need to select an appropriate length for averaging the point forecast errors is removed. Third, presenting the bias feedback as a bar chart may have enhanced its effectiveness, though further research would be needed to establish this. For example, a set of four negative bars would be a strong, simple and clear indication that the previous set of forecasts was too high. A table of four numbers would probably provide a less salient message. Fourth, the rolling nature of the feedback enabled it to reflect improvements in performance quickly, while at the same time avoiding the danger of confining a participant's attention to the performance of the most recent forecast. Moreover, rolling across origins for one series before moving on to the second series helped the participants to focus on each series separately and to better understand the improvements in their performance over time. However, this is not a realistic representation of the typical forecasting task; it is more common for feedback to arise across time series. Recent research suggests that the focus on helping people to learn how to avoid bias is appropriate. A study by Sanders and Graman found that accuracy was less important than bias when translating forecast errors into costs. In their survey of forecasters, Fildes and Goodwin expressed surprise at the number of company forecasters who never checked the accuracy of their forecasts. The current study and the findings of Sanders and Graman suggest that monitoring and feeding back levels of bias may be just as important as checking accuracy levels, or even more so, if the objective is to obtain improved forecasts
and minimize the costs of errors.The proposed RT approach offers an innovative direct feedback approach to time series forecasting.Usually, time series forecasting occurs periodically and across series.Thus, any feedback from the performance achieved on the previous periods would probably be regarded as outdated.RT offers direct, timely and salient feedback on the performance over a number of periods, focusing on the performance for a single series.Providing the past forecast errors for each period allows specific periods in which the performance dropped to be identified.These two features of RT enable forecasters to achieve better performances for the longer horizons and the more volatile series.This is due to the fact that RT essentially invites the forecasters to examine the patterns in the series closely across a number of horizons, rather than focusing only on short-term forecasts.In addition, as the performance is provided in a rolling manner, forecasters are able to understand the limits of predictability for each series.As such, RT may have an important role to play, being particularly suitable for forecasting and decision making under low levels of predictability.Judgmental forecasting is employed in a wide range of contexts for estimating the future values of time series.However, numerous studies have shown the limitations of judgment, even when it is elicited from individuals with technical expertise.The current study has examined the effectiveness of a rolling training scheme that provides direct feedback by reporting to participants their performances for given tasks.This involved reporting signed or absolute percentage errors for each period on a rolling basis, as opposed to metrics that summarise performances over several periods.Real time series featuring a number of characteristics were used.The participants provided estimates for both the control case and the test case, leading to an increased power.This was achieved using a symmetric experimental design.Although the analysis was not based on data collected in the field, the experimental approach allowed the effects of feedback of different types to be measured and compared efficiently under controlled conditions.Experiments like these have played a valuable role in areas such as behavioural operations management, as one component of a process of triangulation with field research.An analysis of the judgmental estimates indicates that a rolling training scheme can improve the accuracy of the judgmental extrapolations elicited from forecasters with technical knowledge, especially when this is combined with feedback in the form of signed errors.Because signed errors indicate the biases in the forecasts, they enable participants’ forecasting accuracies to be enhanced.This is particularly obvious in non-stationary series.On the other hand, accuracy feedback based on an absolute form of errors is found to be more difficult to interpret, leading to worse performances in the case of series that exhibit seasonality.Sanders and Ritzman found little advantage in employing judgmental forecasters with technical knowledge.In contrast, the results presented here suggest that it is worth designing EKE schemes that build on the technical expertise acquired though didactic learning by providing experiential learning based on feedback that is accurate, timely, suggestive of how improvements might be made, and easy to interpret.One very interesting finding is that the improvements achieved by using a rolling training procedure are greater 
for longer forecasting horizons and noisy series.On top of the improvements in forecasting performance achieved, the rolling training procedure also made the participants less confident of their forecasts.This is an additional advantage, as there is evidence that people tend to underestimate the levels of uncertainty associated with their forecasts.The current paper has focused on analysing performances over the final set of periods, contrasting unaided judgment with rolling training.However, a further possible objective with the current experimental design would be to analyse how the forecasting performance changes over time within a single series, as a direct result of applying the rolling training procedure.Moreover, policy capturing regression models may provide insights into the forecasting strategies employed by participants with technical knowledge.This could include a large number of potential cues that are linked with time series forecasting.Of course, the time series forecasting task is often carried out in situations where contextual information is available to expert forecasters in addition to time series data, and it would be interesting to test the effectiveness of rolling training in this context. | There are several biases and inefficiencies that are commonly associated with the judgmental extrapolation of time series, even when the forecasters have technical knowledge about forecasting. This study examines the effectiveness of using a rolling training approach, based on feedback, to improve the accuracy of forecasts elicited from people with such knowledge. In an experiment, forecasters were asked to make multiple judgmental extrapolations for a set of time series from different time origins. For each series in turn, the participants were either unaided or provided with feedback. In the latter case, the true outcomes and performance feedback were provided following the submission of each set of forecasts. The objective was to provide a training scheme that would enable forecasters to understand the underlying pattern of the data better by learning from their forecast errors directly. An analysis of the results indicated that this rolling training approach is an effective method for enhancing the judgmental extrapolations elicited from people with technical knowledge, especially when bias feedback is provided. As such, it could be a valuable element in the design of software systems that are intended to support expert knowledge elicitation (EKE) in forecasting. |
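As a concrete illustration of the rolling, per-period feedback used in the study above, the following sketch computes signed (bias) or absolute (accuracy) percentage errors for each forecast period and appends them to a growing record; it is not the authors' web application, and the error sign convention, function names and numbers are assumptions made for illustration.

```python
# Illustrative sketch of per-period rolling feedback; names and data are hypothetical.
from typing import List

def percentage_errors(actuals: List[float], forecasts: List[float], signed: bool = True) -> List[float]:
    """Return one percentage error per forecast period (no averaging across periods)."""
    errors = []
    for a, f in zip(actuals, forecasts):
        pe = 100.0 * (f - a) / a          # signed percentage error for this period
        errors.append(pe if signed else abs(pe))
    return errors

# Rolling record across forecast origins for a single series
feedback_record: List[float] = []
submissions = [
    ([102.0, 110.0, 96.0, 105.0], [100.0, 104.0, 101.0, 99.0]),   # (actuals, forecasts) at origin 1
    ([108.0, 115.0, 99.0, 111.0], [106.0, 112.0, 100.0, 108.0]),  # origin 2, after seeing feedback
]
for actuals, forecasts in submissions:
    feedback_record.extend(percentage_errors(actuals, forecasts, signed=True))
    print([f"{e:+.1f}%" for e in feedback_record])   # shown to the forecaster as a bar chart
```

Presenting the signed errors period by period, rather than as an average, is what keeps seasonal over- and under-forecasting visible, as discussed in the article above.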
523 | Modelling the future impacts of climate and land-use change on suspended sediment transport in the River Thames (UK) | Climate change is expected to alter soil erosion and sediment transport processes, although the extent and magnitude of these variations are poorly understood.According to Nearing et al., changes in precipitation and temperature and their interactions with land use and vegetation cover are the main climate change-related stressors that are likely to affect sediment transport in the future.These factors are expected to alter sediment production and soil loss, as well as in-channel mobilisation of sediment, phosphorus and contaminants.For example, sediment transport is strongly affected by extreme precipitation and river discharge, owing to the non-linear relation between water discharge and sediment transport rate.In many catchments, short and intense precipitation events are responsible for a large part of the total sediment transport.Climate models predict a change in the behaviour of precipitation extremes.For the UK, extreme precipitation is forecast to increase in the next decades.Its impact on soil erosion has been assessed, for example, by Boardman and Burt et al., who detected an upward trend in average rainfall per rain day in southern England which could increase soil erosion.Land cover changes also impact soil erosion and sediment mobilisation processes.Land use and land management are themselves also changing, mainly due to human interventions, but also as a result of the indirect effects of climate and other human-induced environmental changes.For example, soil erosion in South Eastern England has been largely affected by the shift from grassland to arable land in the second half of the 20th century, due to mechanisation and intensification of agriculture.While the impacts of climate change on sediment transport are increasingly reported in the literature, only a few studies have also considered simultaneous changes in land-use and land management.This has been done in the literature by a scenario-type analysis, where plausible future land-use scenarios were hypothesised and used to alter the model parameterisation under different climatic conditions.Furthermore, there is large uncertainty about the forecasted effect of climate change on sediment transport, given that previous studies have demonstrated that different emission scenarios can lead to opposite results.The effects of both climate change and land-use change on soil erosion and sediment transport should be analysed simultaneously, as they can have important synergistic or antagonistic effects.Eventual mitigation measures based on land management and land-use change, such as reduction of arable land, extension of forested areas and introduction of better agricultural practices, must also be evaluated under the framework of climate change to assess their effectiveness and cost.In order to analyse the non-linear interactions between climate-driven processes that affect sediment transport at the catchment scale, hydrological and sediment models have typically been used along with climate projections from global circulation models and regional climate models.In these approaches, climate model outputs obtained under specific greenhouse gas emissions scenarios are used to drive regional or catchment-scale mathematical models, which in turn provide predictions of the variable of interest under the climatic scenarios considered.In the field of soil erosion and sediment transport research, this approach 
was used recently in Nunes et al., Coulthard et al., Bangash et al., Mullan, Bussi et al., Routschek et al., Francipane et al., Paroissien et al. and Simonneaux et al. As mentioned before, some of these studies also incorporated an analysis of the system response to different land-cover scenarios. The majority of these studies found a strong dependence on the climatic scenario concerned, with the sensitivity of the response varying with the combination of climate and land cover. Some of them also found that climate change-induced land-use change and soil management exert a larger control on soil erosion rates than climate variability. Here we employ an alternative, scenario-neutral approach, which is based on the definition of relevant climatic stressors that affect the variable of interest, in order to quantify the joint effect of climate change and land-use change whilst at the same time evaluating the uncertainty associated with the choice of climate scenario. These climatic stressors – potentially including, for example, average temperature and precipitation, precipitation intensity, seasonality and the occurrence of extremes – are perturbed within a Monte Carlo framework to establish the sensitivity of the model's outcomes to their variation. The framework is designed explicitly to quantify interactions between climatic variables and land use. The model results then form a response surface which can be compared with changes predicted using climate models. Recent applications of this scenario-neutral method within hydrology, water resources and water quality research are reported by Bastola et al., Fronzek et al., Wetterhall et al., Brown et al., Brown and Wilby, Prudhomme et al., Poff et al., Prudhomme et al. and Bussi et al. The scenario-neutral methodology has some important advantages for sediment-oriented studies. For example, it allows the system's resilience to be explored across the full range of possible climate scenarios, independently of individual climate modelling results. This can be very important in sediment transport studies, as it makes it possible to highlight the climate drivers associated with critical thresholds in the system, which arise from the non-linearity of the processes. Such thresholds might not appear when using a conventional top-down approach if the drivers are outside the range of available climate change projections. The response surface also acts as a tool for decision-makers that can be used to explore a wide range of possible sensitivities within the system and to guide low- or no-regrets adaptation measures. In this paper we assess the effects of climate change and land-use change on the sediment transport of the River Thames, which has tangible value for human water consumption, for the ecosystem and for conservation. We present an extension of the scenario-neutral methodology which accounts for the joint effects of different climatic stressors and land-use scenarios. Using the hydrological and water quality model INCA, we quantify the response of the specific suspended sediment yield to changes in annual average precipitation, extreme precipitation and annual average temperature. This analysis is repeated under four different scenarios of land use: current land use, a future scenario describing an expansion of arable land, a future scenario considering a substantial reduction in arable land, and a theoretical scenario of total agricultural abandonment. The resulting response surfaces are compared with future climatic scenarios to establish the likelihood of changes of different magnitudes and to
test the hypothesis that climate and land-cover changes exert a joint control on soil erosion and sediment transport. In this study, we propose a new methodological approach based on the scenario-neutral methodology, which can be used to quantify the impact of land-use change under different climatic scenarios while taking into account the model parametric uncertainty. Using this novel tool, we address the following research questions: What climatic variables exert the strongest control on soil erosion and sediment transport in the River Thames catchment? What are the interactions and feedbacks between different climatic variables and land use, and what is their effect on sediment transport? What are the role and extent of land-use change in affecting sediment transport under a changing climate, and how can land-use change be used to counteract sediment transport? The study area of this paper is the River Thames, located in southern England and draining towards the city of London. Its water is used for freshwater supply to fourteen million people and its non-tidal section receives treated wastewater from approximately three million inhabitants. The climate of the River Thames catchment is temperate with both Atlantic and continental influences. The annual precipitation is 730 mm and the average temperature is 10.7 °C, with a difference of around 2 °C between the uplands and the lowlands. The average flow is 67 m3 s−1, with a Q95 of 206 m3 s−1. High flows usually occur in winter to early spring and low flows in summer to late autumn. The geology of the catchment is dominated by a chalk strip that crosses the catchment in its central part from east to west. The headwaters are composed predominantly of limestone, and clay/mudstone and sandstone are also present both upstream and downstream of the chalk strip. The catchment is characterised by arable land use in its upper part, with little urban land in the headwaters but intensively urbanised areas in the lowlands. A non-trivial fraction of the catchment is covered by forest. Meteorological inputs for the hydrological and water quality model, consisting of daily precipitation and temperature time series, were obtained from the UK Met Office. The daily precipitation, minimum temperature and maximum temperature from all the available stations within the Thames catchment were interpolated onto a 5 × 5 km grid using the Thiessen polygon method, and the daily catchment-average precipitation and temperature series were then computed and used as model input. Land cover data were obtained from Fuller et al. For the calibration and validation of the hydrological sub-model, records of continuous daily water discharge at the downstream section of three of the four INCA reaches were obtained from the National River Flow Archive. The sediment sub-model was calibrated using different datasets, including weekly observations of suspended sediment concentration from the Thames Initiative research platform dataset, collected by the UK Centre for Ecology and Hydrology. The suspended sediment concentration was measured by collecting a single water sample for each measurement. The samples were collected using a suspended sediment sampler, with the sampler intake positioned at around 60% of the river depth. The samples were stored in plastic bottles and subsequently taken to the laboratory, where the analysis was carried out by filtering each sample through microfibre filters. The filters were then dried and weighed, and the suspended solid concentrations were calculated. The suspended sediment samples were
collected at a weekly frequency, from March 2009 until May 2014, at six stations along the main stem of the River Thames, corresponding to stations 1, 3, 4, 11, 13 and 19. The samples were collected regardless of the stage, flow or season. In this study, the INCA hydrological and water quality model was employed to reproduce the water and sediment dynamics of the River Thames. The INCA model was initially developed as a nitrogen and phosphorus model, although several other sub-models were added later, such as a soil erosion and sediment transport sub-model. The hydrological and water quality sub-models of INCA have been applied to several basins across the UK and Europe and, in particular, to the River Thames catchment. INCA is a semi-distributed process-based model which simulates the transformation of rainfall into runoff and the propagation of water through a river network. Its inputs are daily time series of precipitation, temperature, hydrologically effective rainfall and soil moisture deficit. The latter two are estimated using another semi-distributed hydrological model, called PERSiST. PERSiST is a semi-distributed catchment-scale rainfall-runoff model which is specifically designed to provide input series for the INCA family of models. It is based on a user-specified number of linear reservoirs which can be used to represent different hydrological processes, such as snow melting, direct runoff generation, soil storage, aquifer storage and stream network movement. The description of its application to the River Thames can be found in Futter et al. The model can be calibrated by adjusting its parameters. Some of the most influential hydrological model parameters are the direct runoff, soil water and groundwater residence times, which control the hydrological response of the catchment, the maximum soil moisture deficit and the flow routing parameters. The model also has several sediment parameters. For example, sediment production is controlled by the splash and flow erosion potential parameters (ESP and EFL), the splash erosion scaling parameter and the flow erosion calibration parameters. The transport of material from the hillslope to the channel network is controlled by the transport capacity calibration coefficients. Sediment transport and deposition in the river channel are controlled by the shear velocity coefficient, the entrainment coefficient and two background release calibration coefficients. The INCA model has already been applied to the River Thames catchment. In this study, the same model structure is proposed, where the catchment is divided into 22 sub-catchments and the river into 22 corresponding reaches. For each of them, different parameters are considered. For example, topography is considered through the average slope of the sub-catchment and the slope of the channels; sub-catchment shape is considered through the ratio between area and length; soil texture is considered as an input parameter, varying with the sub-catchment and the land use; and geology is taken into account by employing different base flow index values depending on the sub-catchment. The following land-use categories were considered: urban, arable, grassland and pasture, wetlands and forest land. A general sensitivity analysis was applied to the INCA model of the River Thames. Following a preliminary sensitivity analysis, and based on the modeller's knowledge, the following parameters were selected as the most influential and the sensitivity of the model results to them
was analysed: direct runoff, soil water and groundwater residence times, rainfall excess proportion, maximum infiltration rate, flow-velocity coefficient, flow threshold for saturation excess direct runoff, flow erosion direct runoff threshold, splash detachment soil erodibility parameter, flow erosion soil erodibility parameter, transport capacity scaling factor, transport capacity non-linear coefficient, channel entrainment coefficient, release scaling factor and release non-linear coefficient. ESP, EFL and qDR are land-use-specific, although some expert knowledge-based rules were set to constrain their values, for example that ESP and EFL for arable land must be greater than ESP and EFL for grassland. The ranges of variation of the model parameters were also based on the modeller's knowledge and previous studies, although they were kept reasonably broad. The feasible space of model parameters was sampled randomly, and 10,000 different parameter sets were generated. Subsequently, the INCA model was run with each of these parameter sets, and its performance was assessed against observed values of flow and sediment at two stations, using data from 2010 to 2014. The metric used for model assessment was the Nash-Sutcliffe Efficiency (NSE). Thresholds of NSE values were used to split the 10,000 parameter sets into behavioural and non-behavioural. In particular, a threshold of 0.6 for the flow and a threshold of 0.1 for the suspended sediment concentration were used. Following the model evaluation guidelines of Moriasi et al., the flow threshold corresponds to a "good" model performance. The selected behavioural models were used in the rest of the study, providing ensemble results of flow and suspended sediment concentration. In contrast with top-down approaches to climate change studies, which use climate model outputs to drive hydrological and environmental models, the scenario-neutral method takes a bottom-up approach in which vulnerability ranges of a given hydrological or environmental indicator are defined. A response surface is then produced which depicts the changes in the relevant indicator subject to the range of climatic and other environmental changes under consideration. The likelihood of these changes is assessed by integrating information about future climate into the results of this methodology. A schematic diagram showing the method used in this study is given in Fig.
2. First, the climatic stressors most likely to impact SSY were identified. Plausible changes in these climatic stressors, described in Section 3.4, were then applied to the observed climatic series of daily precipitation and temperature from 1999 to 2015. This allowed the creation of a large set of perturbed input time series which were used to drive the INCA model. The INCA model, driven with the altered time series, produced a set of time series of daily water discharge and suspended sediment concentration corresponding to each perturbed input series, from which a corresponding SSY value was calculated. This procedure was repeated using the four land use and land management scenarios described in Section 3.5. In order to produce useful results, the choice of climatic alterations used to construct the response surface should be restricted to the main climatic stressors that affect the variable of interest, but these stressors must also sample the full range of possible climate futures effectively. The climatic variables which exert the strongest controls on river flow are precipitation and temperature. Suspended sediment yield is most strongly affected by precipitation and river flow, which control soil erosion and in-channel processes of sediment mobilisation and deposition. However, it is widely known that suspended sediment entrainment and transport occur disproportionately during precipitation and flow extremes, due to the non-linear relationship between flow and sediment transport by water, and so we also consider the effect of changes in extremes. The projected changes in total precipitation and average temperature following the UKCP09 projection are shown in Fig. 3a for the study region. They are in the range of −20% to +20% for the annual precipitation and 0 °C to +2 °C for the temperature. However, we considered a broader set of changes: changes in average temperature between −1 °C and +6 °C and changes in precipitation between −30% and +40%. In each case the range of possible changes was divided uniformly into fifteen divisions. The resulting set of 225 altered precipitation and temperature time series was generated by applying a uniform "delta change" transformation to the observed daily precipitation and temperature values accordingly. The distribution of future changes projected by the UKCP09 probabilistic sample is given in Fig. 2b and c for the study region. As variations in extreme precipitation are not a standard product of UKCP09, these were obtained by analysing 10,000 transient stochastic daily series of precipitation and temperature produced by Glenis et al. and also used by Borgomeo et al. These are daily time series of precipitation and temperature from 1950 to 2060. Therefore, it was possible to estimate the maximum annual precipitation values for the control period and for the future period, compute an empirical cumulative distribution function for both time periods and calculate the difference. In Fig. 2b, the variations of temperature and extreme precipitation are depicted, following the UKCP09 over two different temporal horizons: 2030s and 2050s. In Fig.
2c, the empirical cumulative distribution function of the annual maxima of daily precipitation is represented, both for the control period and for the future period, calculated from the 10,000 climate projections from the UKCP09 as described above. A shift towards higher values of annual maximum daily precipitation is projected. Owing to the importance of hydrological extreme events for erosion processes, changes in extreme precipitation are also considered in this study. Specifically, extreme precipitation events are defined as events with daily rainfall above 15.7 mm, which is the minimum of the annual maxima of observed daily precipitation from 1960 to 2015. The changes were implemented by altering the baseline daily precipitation time series following a transformation function based on the empirical quantile mapping approach. In order to explore a reasonable range of alteration in extreme precipitation values, two transformation functions were used. The two transformation functions used in this study were based on changes in extreme precipitation forecast by the UKCP09. The first alteration, or transformation function, corresponds to a small but likely increase in extreme precipitation. Specifically, the median change forecast by the UKCP09 was selected. The second alteration corresponds to a larger but less likely shift in extreme precipitation than the first one. Specifically, a change larger than the change forecast by 9750 out of 10,000 UKCP09 scenarios was selected. For the sake of readability, these two extreme precipitation scenarios were called "small" and "large" increase in extreme precipitation, respectively. The two transformation functions are shown in Fig. 4 and Table 1. They are consistent with previous studies, including Fowler and Ekström, who reported a change of −10% to +20% in the summer 10-day 5-year return period precipitation in South East England, and a range of 0 to +20% in winter. These transformation functions were applied to precipitation days above the threshold specified above, and therefore they also slightly alter the total precipitation, by 0.9% and 4.5% respectively. One of the limitations of the scenario-neutral methodology is that the number of stressors considered must be kept small for readability and ease of interpretation. This means that not all the climate-related stressors that affect sediment transport have been considered. Other climatic stressors exist that might vary in the future and could have an impact on SSY. For example, a change in precipitation seasonality could change the wetting and drying cycle of the soil and therefore change soil properties such as infiltration and erodibility, as well as in-channel sediment storage and river flow. A variation in the number of rainy days could also have an effect on sediment transport, whether the number of days of precipitation increases or decreases. However, one of the advantages of this methodology is that it allows the effects of single stressors to be isolated, leading to conclusions that can drive decision making. Because changes in the stressors mentioned here are expected to have a secondary role compared with the effect exerted by changes in average precipitation and temperature and changes in extreme precipitation, and because the effects of drying and wetting cycles on soil crusting and soil properties are not accounted for in the sediment delivery and transport model, they were not considered in this study. In this study we considered four land cover scenarios:
present-day land use; arable land expansion; arable land reduction; and agriculture abandonment. Present-day land use was obtained from the UK Land Cover Map 2007. The catchment headwaters are dominated by arable land, while urban and forest land uses assume more importance in the lowlands. The arable land expansion scenario was defined according to the land cover model LandSFACTS, which focuses on crop arrangement under increasing population, considering food security as a dominant driving force for land use change. Arable conversion from other land uses was permitted only on prime land and only from areas previously identified as grasslands, and resulted in an increase from 3526 km2 to 5988 km2 of arable land. This means that the portion of arable land increases from 36% to 60% for the Thames at Teddington. This scenario was introduced into the INCA model by altering the fractions of the catchment assigned to each land use. The INCA model allows runoff production, soil erosion and sediment delivery parameters to be changed depending on the land use, thus representing the effect of changing land use on sediment transport. The arable land reduction scenario was set up by reducing the arable land by 50%, increasing forest land by 20% and assigning the remaining land to grassland. This was done in order to analyse the effect of reducing the arable land as a strategy to reduce sediment export by the River Thames. The agricultural abandonment scenario was implemented by setting arable land to 0%, increasing forest land by 20% and assigning the remaining land to grassland. This theoretical scenario was considered with the aim of analysing the hypothetical sediment transport response of the catchment if it were to be returned to a more natural status. This scenario, although highly unlikely in the foreseeable future for the Thames catchment, has already taken place in other parts of the world, such as the Spanish Pyrenees. The 675 simulations described above were repeated four times, once for each land-use scenario, and their results were compared to assess the net impact of land-use change on sediment transport under a changing climate. The UKCP09 probabilistic change factor scenarios were developed by the UK Met Office to provide projections of climate change over the UK with greater spatial and temporal detail than previous climate scenarios, while accounting for important uncertainties in Global Climate Models. These projections are based on the results of the HadCM3 coupled ocean-atmosphere global circulation model, which was run as a perturbed physics ensemble to sample model and parameter uncertainties. HadCM3 projections were downscaled on a 25 km grid over seven overlapping 30-yr time periods based on an ensemble of 11 variants of the regional climate model HadRM3, and a statistical procedure was applied to build local-scale distributions of changes for various climate variables. UKCP09 gives projections for three of the emissions scenarios of the IPCC's Special Report on Emissions Scenarios (SRES). Among the available outputs, expected changes in average precipitation and temperature under the different emission scenarios are given. In the present study, we assess the risk of changes in SSY by comparison with climatic properties taken from a set of 10,000 change factors under the A1FI emission scenario for two temporal horizons: 2030s and 2050s. The A1FI scenario was chosen as it is the most severe scenario available, but one of the strengths of the scenario-neutral methodology is that the scenario could
be easily replaced without having to re-run all the simulations. The Monte Carlo general sensitivity analysis resulted in 21 behavioural models. Fig. 6 shows their results for the River Thames at reach 4, both in terms of flow and suspended sediment concentration. The results of the sediment sub-model are also shown in Fig. 7, where the distribution functions of the observed values of suspended sediment concentration are compared to the distribution functions of the modelled values. Note that the modelled values were resampled with the same time frequency as the observed values for consistency. The model results indicate an SSY of 0.030 Mg ha−1 yr−1 for reach 19 and 0.033 Mg ha−1 yr−1 for reach 4 for the period 2010–2014. Fig. 7 also shows the spatial validation of the model. It can be observed that the results of the model at reaches 1, 3, 11 and 13 are good in terms of reproduction of the observed distribution function of suspended sediment concentration. Spatial validation showed model biases of 0–27% for reach 1 and 4–65% for reach 13 regarding suspended sediment concentration. The precipitation and temperature changes considered in this study largely affect river flows, causing alterations in the average water discharge of the River Thames at reach 19 from −50% to +83%. The increase in extreme precipitation considered causes a further increase of water discharge. The simulated SSY varies from 0.010 to 0.148 Mg ha−1 yr−1 for the Thames at reach 19, under uniform changes of precipitation and temperature and no increase in extreme precipitation. The small increase in extreme precipitation scenario considered in this study is responsible for an additional average increase of SSY of 2%, given the same condition of uniform precipitation and temperature change, while the large increase in extreme precipitation scenario causes an average additional SSY increase of 11%. Note that here we do not suggest that any of these changes is likely, nor that they are meteorologically plausible; we simply calculate what the catchment's response to such a combination of changes would be were they ever to occur. Fig. 8 shows the results of the scenario-neutral methodology in terms of SSY response to climatic variations for station 4 under current land use. The top plots represent the median change in SSY and the bottom plots show the standard deviation of the change in SSY. The left-hand side plots represent the response of the system to changes in annual precipitation and temperature, with no changes in extreme precipitation; the central plots represent the response of the system to changes in annual precipitation and extreme precipitation, with no changes in annual temperature; and the right-hand side plots represent the response of the system to changes in annual temperature and extreme precipitation, with no changes in annual precipitation. The land-use change scenarios considered in this study have little effect on mean river flows compared to the impact of the climatic stressors, with a slight reduction in water discharge owing to an increase in evapotranspiration. By contrast, the impact of land-cover change on suspended sediment transport is considerable. Land-use change does not alter the pattern of the system response to changes in precipitation and temperature, but does affect the magnitude of SSY values. In Table 2, the net contribution of land-use change to the variations of SSY is shown for reaches 4 and 19, i.e.
the increase or decrease in SSY caused by a change in land use under the same climatic conditions. For example, for reach 4, the arable land expansion scenario causes an average SSY increase of 41%, while under the arable land reduction scenario SSY changes on average by −30%, and under the agriculture abandonment scenario by −59%. In Fig. 9, the joint effect of climatic change and land-use change on SSY is shown for the Thames at reach 4 and reach 19. These histograms describe the range and probability distribution of sediment response to a range of climate and land-use scenarios. Due to the use of a model ensemble rather than a single model, the histograms are represented as an envelope curve rather than a single line. In general, the land-use change impact is small compared with the climate change uncertainty range, but certain land-use changes can systematically exacerbate or reduce the impact of climate change. It is very important to note that there is a clear difference between the different land use management options. The response surface plots shown above can be used for exploring combinations of climate change and land use or land management options. Nevertheless, they do not convey information about the plausibility or probability of the expected climatic changes, unless they are compared with projections from climate models. In this section, we analyse the response of the system to changes in average precipitation, average temperature and extreme precipitation projected by the UKCP09. The values of change in SSY depending on the land use and extreme precipitation scenarios are reported in Table 3, where the median changes and their confidence intervals are shown for each combination of land use and extreme precipitation change. In Fig. 10, the effectiveness of the arable land reduction as a measure to mitigate sediment transport is assessed. In this diagram, the difference between the SSY under the baseline scenario and the arable land reduction scenario is represented with different shades of colour. The SSY decrease is a proxy measure of the average effect of reducing arable land. Each pixel in this figure represents the effectiveness for reach 4 and reach 19 under a specific combination of change in average precipitation and increase in extreme precipitation, given a fixed increase in average temperature. The space of combinations of changes in average precipitation vs changes in extreme precipitation following the UKCP09 is also depicted as black dots. The larger plots represent the results obtained with the median of the model ensemble, while the smaller plots show the minimum and the maximum respectively. The standard deviation of the change in SSY is represented in the bottom plots, as derived from the ensemble of models employed in this study. These two plots provide an estimate of the uncertainty affecting the land management effectiveness assessment. The model results, though satisfactory in terms of reproduction of the water and sediment dynamics of the River Thames catchment, reveal the presence of uncertainty in the model predictions. From Fig. 6, it can be seen that some of the highest values of suspended sediment concentration are underestimated by the model. This could be due to processes that are not well reproduced by the model, such as localised river bank failure or erosion from farm tracks. This appears not to be a concern in the upper reaches, as shown in Fig.
7, while in the lower reaches the model tends to slightly underestimate low-probability concentrations. From the perspective of this study, this is likely to affect the model's representation of the response of the lower Thames to changes in extreme precipitation, leading to small underestimations. Nevertheless, this study is focused on estimating the impact of different stressors on suspended sediment load, which is the product of flow and suspended sediment concentration. The thresholds chosen for model selection for flow and suspended sediment concentration are well within the ranges provided by Moriasi et al., and therefore the model estimates of suspended load can be considered reliable. The SSY obtained by the model is close to estimates of 0.068 Mg ha−1 yr−1 for the Thames at Days Lock by Neal et al., and lower than those for other large catchments in the UK. Furthermore, the sensitivity analysis was carried out using a variety of climatic conditions, from a dry year to a very wet winter, thus ensuring that the model is able to respond to a wide range of climatic conditions. A broad range of climatic variations was explored, to understand the effect of climatic variations not necessarily included in the available climate model projections. From the response surfaces, it can be seen that SSY increases with precipitation, due to larger soil erosion and to larger river channel flow. SSY is also affected by temperature: higher temperatures increase evapotranspiration and thus decrease flow and SSY. Extreme precipitation also affects SSY, given that an increase in extreme precipitation triggers an increase in SSY. Concerning the uncertainty, this seems to be larger when large increases in annual precipitation are considered, and smaller when decreases in annual precipitation are considered. In other words, the ensemble of behavioural models used in this study tends to converge to similar results when a decrease in annual precipitation is considered, while it tends to diverge when a large increase in annual precipitation is taken into account. Fig. 8 shows the magnitude of SSY change in response to climatic alteration and the uncertainty that affects these estimates, but it can also provide a measure of the sensitivity of the system to climatic changes. This can be interpreted by looking at the gradient of the values in the plots. For example, it can be seen that the gradient corresponding to changes in annual precipitation is much larger than the gradient of the other two climatic variables. This means that, within the range of the climatic alterations considered in this study, the most influential, or dominant, variable is average precipitation, suggesting that the River Thames catchment has a much larger sensitivity to changes in average precipitation than to changes in extreme precipitation or temperature. There is more variability within simulations with the same temperature change than between them, meaning that temperature changes and extreme precipitation changes have second-order effects compared to the effects of changes in average precipitation. These results substantially agree with previous studies on the impact of climate change on sediment transport, although the extent of the role played by changes in extreme precipitation is usually highly uncertain. In this study, this was quantified in detail for the River Thames. It must be noted that this effect is expected to be highly local, i.e.
the impact of changes in extreme precipitation is expected to vary depending on the climate and location of the catchment. Even within the same catchment, different sub-catchments respond differently to climatic alterations, due to different land-use configurations. For example, a scenario with an increase of 3 °C and a decrease of 10% in precipitation returns a decrease of 29% in SSY for reach 4 and 20% for reach 19, and a scenario with a decrease of 1 °C and an increase of 20% in precipitation returns an increase in SSY of 44% at reach 4 and 39% at reach 19. Similarly, a large increase in extreme precipitation causes an average increase in SSY of 14% at reach 4 and of 11% at reach 19. This is because the uplands of the River Thames are more sensitive to changes in precipitation than the lowlands, due to the larger extent of arable land in the upper sub-catchments. The impact of land-use change on water discharge is very limited and consistent with that reported by Crooks and Davies. No catchment-scale studies were available regarding the impact of land-use change on the sediment transport of the River Thames, to the authors' knowledge. Land-use change appears to be a key driver of SSY alterations, although it is important to remember that both the land-use change and climate change scenarios were chosen by the authors and do not indicate the likelihood of changes. It is very important to note that, despite modelling uncertainty, a strong signal of change between the different land-use options can be seen, i.e. all the behavioural models lead to the same conclusions in terms of the impact of land-use change on SSY. This means that the approach proposed in the study was able to identify the effect of land use management on SSY while taking into account the model uncertainty, leading to robust conclusions. In terms of spatial variations of the effects of land-use change, Fig. 9 shows that there is a slightly different response of reach 4 and reach 19. This is related mainly to the fraction of the catchment dedicated to arable land use, and the proportion of arable land versus grassland, but it is also connected to other phenomena, such as the balance between sediment availability and the sediment transport capacity of a river reach, which can be altered by changes in climate and land use. The use of a mathematical model, such as the INCA model, allows us to understand and extrapolate the extent of these non-linear interactions between different processes under conditions that have not yet been observed. It is difficult to compare the results of this paper with previous studies, given that just a few modelling studies have been published so far about the joint impact of climate change and land-use change on sediment transport at the catchment scale. However, a few recent studies also noted that land-use change effects can be as relevant as climate change effects, especially in human-impacted catchments and agricultural areas. For example, Paroissien et al. found that the effect of land-use change was much more significant than the effect of precipitation change in an agricultural catchment in Southern France. Serpa et al. also pointed out the role of land-use changes in minimizing the indirect effects of climate changes for two catchments in Portugal and stressed the importance of an integrated approach combining the effects of climate and land-cover change for a realistic evaluation of the future state of natural resources. Similarly, Simonneaux et al.
noted that climate changes alone might be of minor importance compared to changes in land use for an arid catchment in Morocco, especially regarding the evolution of badlands, which are closely conditioned by human actions. Rodríguez-Lloveras et al. showed that land-use change can counterbalance climate change for a catchment in Southern Spain. Nevertheless, these studies were conducted in more erosion-prone areas, and they might not be comparable with the River Thames. Routschek et al. presented a study on a temperate catchment in Germany, which led them to conclude that land-use change and soil management played a more relevant role than climate change alone, and similar conclusions were drawn by Mullan et al. In the present study we show that for the River Thames the extent of the climate and land-use change effects varies depending on the sub-catchment and the river reach, and, although the climate change impact appears to be predominant at the catchment scale, the amount of arable land also controls an important part of the total sediment production and sediment transport. Finally, it is worth mentioning that the INCA model land use and vegetation biomass parameters are static, and are not affected by intra- or inter-annual climatic variability. Therefore, the effect of climate change acting on vegetation and land use, which in turn affects sediment production, was not taken into account. Over relatively short time-scales in a heavily managed setting like southern Britain, where land cover is often not natural, the effects of this feedback were considered to be minimal. This is acknowledged as a priority for future regional-scale sediment model development, although it is considered, for example, in the sediment transport model PESERA. The small increase in extreme precipitation considered in this study has a limited effect. On the other hand, the large increase in extreme precipitation has a very clear effect at both stations, increasing the SSY by around 13% at reach 4 and around 10% at reach 19, for the 2030s. These figures change under the different land-use change scenarios: for example, under the arable land expansion scenario they are +16% and +11% respectively, for the 2030s, while under the arable land reduction scenario they are +11% and +12%. While exploring a broad range of climatic combinations may help in understanding the system response under global change, the incorporation of climate model forecasts such as the UKCP09 into the scenario-neutral methodology provides policy-makers with a clear picture of what the expected changes will be. It has to be acknowledged that the uncertainty of the final results is extremely large, but the methodology described in this paper can at least account for the climate model uncertainty, by using a large set of climate model outcomes such as the UKCP09 product, and for the hydrological model parametric uncertainty, by employing an ensemble of equifinal behavioural models rather than a single model. Fig.
10 shows that the reduction of arable land as a measure to reduce SSY is effective under all the climate change scenarios considered for the Thames, with reductions of SSY ranging from 5% to −35% for reach 4 and between 7% and 25% for reach 19. The effectiveness of the arable land reduction within the area defined by the UKCP09 projections ranges between −20% and 30%. On the other hand, this plot also shows that the effectiveness of the arable land reduction may vary depending on the climate scenario. For both sub-catchments, the proposed arable land reduction is effective under a scenario of reduced precipitation, but it is also effective in the case of an increase in extreme precipitation. In particular, the SSY reduction is more sensitive to changes in annual precipitation than to changes in extreme precipitation. Fig. 10 also shows the modelling uncertainty of the results. For example, it can be seen that for reach 4 the standard deviation of the SSY reduction is larger than for reach 19 in the central area of the plot, i.e., the area of climatic outcomes defined as plausible by the UKCP09. Knowing the modelling uncertainty in reproducing the system response is of paramount importance for catchment managers, as it provides an estimate of the likelihood that a given land management measure will achieve the expected results. This plot also shows the potential of the scenario-neutral methodology for assessing a soil erosion mitigation strategy under changing climate and changing land use. Owing to the particular approach, which analyses several different combinations of climate and land use, it was possible to assess whether this strategy was robust and effective under different climatic and land-use conditions. This provides decision makers and land managers with a simple tool that might inform climate change adaptation policy. However, this was not considered in this study because the representation of soil erosion mitigation measures in the model parameterisation requires a more detailed investigation that was beyond the scope of this paper. This paper investigated the joint control exerted by climate change and land-cover change on suspended sediment discharge in the River Thames catchment, through the use of a scenario-neutral method and the UKCP09 projections. The results showed that UKCP09 changes in average precipitation and average temperature are likely to cause a median reduction in the suspended sediment yield of the River Thames of 6% in the uplands and 4% in the lowlands, although the confidence interval is very broad, owing to the high variability in expected future precipitation and temperature. The UKCP09 projections also project an increase in extreme precipitation, which is likely to increase suspended sediment yield by up to 13% in the uplands, potentially compensating for the reduction due to changes in average precipitation and temperature. The main findings of this study are as follows: This paper has shown a methodological approach to assess the joint impact of climate and land-use change, taking into account the climate and sediment model uncertainties and leading to robust conclusions. If used along with a climate model, this methodology can also offer a measure of the plausibility of expected changes. The control exerted on the soil erosion and sediment transport of the River Thames catchment by a change in average precipitation is larger than the effect of the other stressors considered in this study. Climate change and land use change exert a joint control on
sediment transport, with interactions that cannot be neglected.The extent and magnitude of land-use and land-management impacts also vary depending on the location on the river and on the sub-catchment considered and must be assessed locally.The proposed methodology allowed assessing the robustness of arable land reduction as a measure to control sediment transport.This measure appears effective under different climatic conditions, although with different effectiveness.This study also pointed out that their effectiveness may vary depending on the future climate outcomes, providing a quantification of how it varies across the spectrum of future climatic changes. | The effects of climate change and variability on river flows have been widely studied. However the impacts of such changes on sediment transport have received comparatively little attention. In part this is because modelling sediment production and transport processes introduces additional uncertainty, but it also results from the fact that, alongside the climate change signal, there have been and are projected to be significant changes in land cover which strongly affect sediment-related processes. Here we assess the impact of a range of climatic variations and land covers on the River Thames catchment (UK). We first calculate a response of the system to climatic stressors (average precipitation, average temperature and increase in extreme precipitation) and land-cover stressors (change in the extent of arable land). To do this we use an ensemble of INCA hydrological and sediment behavioural models. The resulting system response, which reveals the nature of interactions between the driving factors, is then compared with climate projections originating from the UKCP09 assessment (UK Climate Projections 2009) to evaluate the likelihood of the range of projected outcomes. The results show that climate and land cover each exert an individual control on sediment transport. Their effects vary depending on the land use and on the level of projected climate change. The suspended sediment yield of the River Thames in its lowermost reach is expected to change by −4% (−16% to +13%, confidence interval, p = 0.95) under the A1FI emission scenario for the 2030s, although these figures could be substantially altered by an increase in extreme precipitation, which could raise the suspended sediment yield up to an additional +10%. A 70% increase in the extension of the arable land is projected to increase sediment yield by around 12% in the lowland reaches. A 50% reduction is projected to decrease sediment yield by around 13%. |
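The scenario-neutral workflow described in the Thames study above can be summarised computationally. The sketch below is only illustrative: run_sediment_model is a hypothetical placeholder standing in for an INCA/PERSiST simulation, the behavioural parameter ensemble is synthetic, and the numbers merely mirror the ranges quoted in the text (precipitation changes of −30% to +40%, temperature changes of −1 °C to +6 °C, a 15 × 15 grid); it does not reproduce the authors' actual implementation.

```python
# Illustrative sketch (not the authors' code) of a scenario-neutral response surface.
# `run_sediment_model` is a hypothetical stand-in for an INCA/PERSiST simulation driven
# by daily precipitation (mm) and temperature (degC) series.
import numpy as np

def run_sediment_model(precip, temp, params):
    # Placeholder returning an annual specific suspended sediment yield (Mg ha-1 yr-1);
    # a real application would run the calibrated hydrological/sediment model here.
    runoff = np.maximum(precip - 0.05 * np.maximum(temp, 0.0), 0.0)
    return params["k"] * float(np.mean(runoff)) ** params["n"]

def delta_change(precip, temp, dp_frac, dt_deg):
    """Uniform 'delta change' perturbation: scale precipitation, shift temperature."""
    return precip * (1.0 + dp_frac), temp + dt_deg

def response_surface(precip, temp, behavioural_params,
                     dp_range=(-0.30, 0.40), dt_range=(-1.0, 6.0), n_steps=15):
    """Ensemble-median relative change in SSY across a grid of climatic perturbations."""
    dps = np.linspace(dp_range[0], dp_range[1], n_steps)
    dts = np.linspace(dt_range[0], dt_range[1], n_steps)
    baseline = np.array([run_sediment_model(precip, temp, p) for p in behavioural_params])
    surface = np.empty((n_steps, n_steps))
    for i, dp in enumerate(dps):
        for j, dt in enumerate(dts):
            pp, tt = delta_change(precip, temp, dp, dt)
            ssy = np.array([run_sediment_model(pp, tt, p) for p in behavioural_params])
            surface[i, j] = np.median((ssy - baseline) / baseline)  # median change across the ensemble
    return dps, dts, surface

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    precip = rng.gamma(shape=0.8, scale=3.0, size=365 * 17)               # synthetic daily rainfall
    temp = 10.7 + 8.0 * np.sin(np.linspace(0.0, 34.0 * np.pi, 365 * 17))  # synthetic seasonal cycle
    ensemble = [{"k": k, "n": n} for k, n in zip(rng.uniform(0.01, 0.05, 21),
                                                 rng.uniform(1.1, 1.5, 21))]
    dps, dts, surf = response_surface(precip, temp, ensemble)
    print(surf.shape)  # 15 x 15 grid of median relative SSY changes
```

In practice, the placeholder would be replaced by runs of the behavioural model ensemble under each land-use scenario, and the resulting surface would then be overlaid with UKCP09 change factors to judge the plausibility of the simulated responses.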
524 | Removing isoflavones from modern soyfood: Why and how? | Soy has constituted a significant part of the Western human and animal diet since its industrialised processing started in the 1940s.Nowadays, soy isoflavones have undoubtedly become the most prevalent and potent xenoestrogens in human food.Xenoestrogens are known to impair reproduction efficiency.It has been argued since the late 1950s, that reproductive endocrine disruption could reduce human fertility, resulting in fewer births in industrialised countries.There are certainly many factors, ranging from environmental to socio-economic, involved in the lessened desire to have children, inducing subsequent demographic decline.However, several scientists are now hypothesising an adverse effect of environmental endocrine disruption that triggers reduced sperm count, or increased incidence of spontaneous early miscarriages.If endocrine disruptors are, at least partly, responsible for this situation, then isoflavones, as the most prevalent xenoestrogens since the late 1950s in the human diet, should be considered as additional potential endocrine disruptors.Genistein and daidzein, the main estrogenic soy isoflavones, have been extensively studied since the 1960s, both for their beneficial and adverse effects.Both isoflavones are found in soy at several mg 100 g−1.The deleterious effects of these compounds, as metabolites of clover isoflavones, were first documented in 1946 by Bennetts and co-workers studying New Zealand ewes expressing clover disease, an infertility syndrome.When equol, a metabolite of Daid, was found in human urine in 1982, health concerns became focused on humans.As early as 1984, Adlercreutz suggested that soy isoflavones had beneficial effects, founding his argument on contemporaneous observations of Asian populations, proceeding on the assumption that isoflavones had always been part and parcel of the Asian diet.He hypothesised that isoflavones could act as anti-estrogens, thereby protecting Asian populations from estrogen-dependent diseases such as breast, prostate and colon cancer.However, it is common knowledge that the characteristics of an Asian diet differ from a Western one in numerous ways, with soy intake only representing a small proportion of this difference.Since then, the scientific community has mostly tended to argue in favour of either the beneficial or deleterious effects of soy isoflavones.Estrogenic effects are now seen as being beneficial for men and women with steroid deficiencies, i.e. 
persons over 50.However, some deleterious effects could also be expected in certain types of cancer or early exposure.Bearing this in mind, the role soy isoflavones play in current human health issues, possibly linked to endocrine-disruptive chemicals, warrants more detailed studies.Accordingly, it is crucial to consider isoflavone consumption in human populations, including the isoflavones contained in the soy added to Western food for nutritional, economic or technological reasons.The interpretation of present-day data would be impacted if it were admitted that soy isoflavones were only infrequently associated with older-style soyfood.In other words, their estrogenic effects, either beneficial or adverse depending on the physiological context, should be reaffirmed.It is well known that: 1) soy cannot be significantly ingested without undergoing a cooking process; 2) cooking processes have been progressively elaborated over the centuries in Asia; 3) there are key differences between traditional and modern soyfood processing techniques.The present study, therefore, examined the impact of several soy cooking stages on estrogenic isoflavone content respecting traditional recipes.A modern process, ultra-filtration, was tested on soy-juice, in order to determine its impact on isoflavone content.The soy products tested for traditional recipes came from local food suppliers.Their isoflavone content was assayed according to the technique described below.Soy-juice used in the ultra-filtration process came from Nutrition & Nature, which belongs to the Nutrition & Santé group.The soy products were dehulled soybean seeds sold in bulk, soy flakes sold in bulk, Celnat soy flour, UHT soy-juice and Tempeh.The home-made Tempeh was prepared by Nurlaili Robert.In this document, isoflavone quantities are expressed in aglycone equivalent because of the enzymatic cleavage step used prior to the assay.Because de-conjugation is known to occur at gut level, expressing isoflavones as aglycones makes sense when either human or animal exposure is considered.All products, including Nigari, were dietary grade foodstuff and were purchased from an organic market.The sodium bicarbonate was purchased as food grade salt from a local supermarket.All other compounds were from Sigma-Aldrich, unless otherwise stated.β-glucuronidase-aryl sulfatase was from Roche.The secondary antibody came from Dako.ELISA antibodies were obtained in our laboratory, and were used as described below.Ultra-filtration steps were performed at the AGIR Food Industry Platform on a TIA pilot apparatus with a loading capacity of 50 L, and working with ceramic filtrating membranes 23–25 mm in diameter and 850 mm long.Filtration parameters were adjusted to the best isoflavone removal efficiency based on isoflavone ELISA measurements.The experimental procedures followed in this study for soy-juice and for Tofu are summarized in Supplementary Figs. 
S1A and S1B.The hypothesis was that prolonged cooking and extensive contact surface between soy protein and water would result in the increased leaking of isoflavones into the cooking water.Therefore, different cooking times were tested on different soybean foodstuff.All tests were made at least in triplicate.Two grams of dehulled soybean seeds were put in 20 mL of distilled water and soaked for 20 h at room temperature.The soaked grains were either left uncooked or were pre-cooked at 90 °C for 5 min, 15 min, 30 min or 60 min.The water was removed and kept for isoflavone assay.Soaked beans, pre-cooked or not, were ground and re-suspended by stirring for 5 min in 14 mL of distilled water.The mixture was additionally cooked to reach a total cooking time of 60 min.It was filtered through a metal grid coated with a fabric which collected Okara.The filtered soy-juice was collected and a 1 mL sample was kept for isoflavone assay.Two grams of soybean flakes, either pounded or unpounded, were put in 20 mL of distilled water and soaked for 20 h at room temperature.The soaked flakes were either left uncooked or were pre-cooked at 90 °C for 5 min, 15 min, 30 min or 60 min.The water was removed and kept for isoflavone assay.Soaked flakes, pre-cooked or not, were ground and re-suspended by stirring for 5 min in 14 mL of distilled water.The mixture was additionally cooked to reach a total cooking time of 60 min.It was filtered through a metal grid coated with a specifically designed fabric.The filtered soy-juice was collected and a 1 mL sample was kept for isoflavone assay.Fifteen litres of soy-juice prepared by the supplier Nutrition & Santé were mixed with 30 L of pre-filtered tap water, and heated to 50 °C.Twenty grams of NaHCO3 per litre were then added.The heated diluted soy-juice was poured into an experimental ultra-filtration apparatus and filtered at 400 kPa through a 5 KDa-pore-size membrane at a temperature of 55 °C.Ultra-filtration parameters were adjusted by progressive changes driven by the isoflavone measurements in the resulting soy-juice.In this case, Tofu was not obtained by pressing in a Tofu maker, but by centrifugation.The hypothesis was that the longer the cooking of the initial soy-juice, the higher the isoflavone transfer rate into the water fraction.Therefore, the duration of the soy-juice cooking step was tested for 4 cooking durations: 3 min, 15 min, 30 min, 60 min.Five mL of soy-juice was heated in a screw-capped 10 mL Pyrex glass-tube, using a heating bath set at 90 °C.After the different cooking durations, coagulation was obtained using 0.125 mL of Nigari solution.The final Nigari concentration was then 5.25 g L−1.The mixture was swirled three times to mix the salt and juice, and put directly onto ice until centrifugation.The curd obtained by adding Nigari was spun for 10 min, at 4 °C, 3000g.The supernatant and curd were collected for isoflavone assay.To evaluate the impact of experimental rinsing on Tofu isoflavone concentrations, 5 mL of soy-juice was placed in a Pyrex glass tube and heated for 1 min to boiling point in a heating bath.One hundred and twenty-five μL of Nigari solution was added, and the resulting mixture was shaken very gently.Samples were cooled on ice.Tubes were centrifuged for 10 min, 4 °C and 3000g.The whey was collected and its volume measured.An original set of three tubes was kept for measurements and the procedure was then continued on 3 additional tubes.A volume of water equivalent to that of the previously collected whey was added, and the curd 
was re-suspended gently before a second centrifugation.This procedure was repeated in order to obtain an experimental Tofu curd that was rinsed 3 times.Tofu samples were collected for isoflavone analysis.The samples rinsed 3 times were compared to those obtained after simple curd formation and to the initial soy-juice.Tofu was prepared from different soy-matter in accordance with traditional recipes and using a traditional Tofu maker.In the first preparation, Tofu was prepared from Biosoy UHT soy-juice, and curd-rinsing steps were included.Two rinsing procedures were tested: the first included only one rinsing-step, the other included 7 rinsing-steps, as reported in Barrett’s review.In a second recipe, Tofu was no longer prepared from soy-juice but from soy flour.This recipe is described by Wentland, and is still used in the region of Debao-Guangxi in China.For the second recipe, the duration of the soy-meal cooking-step was modulated to determine to what extent the isoflavone removal had taken place.To evaluate the impact of rinsing on Tofu isoflavone concentrations, 200 mL of soy-juice was heated to boiling point on a hot plate for 15 min.Ten mL of Nigari solution was added, and the resulting mixture was gently shaken.The curd was put directly on ice and kept on it until ready for the next step.The curd was poured onto a metal grid coated with specialist fabric, and the whey collected and its volume measured.The curd was put in a glass vial and a volume of distilled water, equivalent to the previously collected whey, was added.The curd was gently shaken for 15 s and then poured onto the metal grid coated with fabric, and the residual water was collected.The curd was then placed in a Tofu maker coated with the fabric and extra water was pressed out for 30 min.A Tofu sample was collected for isoflavone analysis.The same protocol was applied for a septuple rinsing procedure.To evaluate the impact of cooking duration on the isoflavone content of Tofu made from soybean-meal in accordance with the Debao-Guangxi recipe, 285 mL of water was heated at 85 °C in a heating bath.One hundred grams of soybean meal was then gently shaken into the mixture.This was cooked for 5 min, 15 min, 30 min or 60 min at 85 °C in a heating bath.Fifteen mL of Nigari solution was added, and the mixture was shaken gently.The curd was put on ice until further processing, to ensure procedural consistency.It was then filtered through a metal grid covered by fabric.A sample of whey was kept for isoflavone measurements.The curd was put in a Tofu maker and pressed until there was no juice left.It was then kept pressed for an additional 30 min.The whey was collected and its volume measured.A sample of Tofu was collected for isoflavone measurement.The recipe was conducted in triplicate for each cooking time.Tempeh was prepared by an Indonesian migrant in accordance with a traditional recipe transmitted over several generations within her Indonesian family.Traditional Tempeh is obtained under tropical climate conditions using a specific ferment, Rhizopus oligosporus.This fungus was initially present on the hibiscus leaves used to wrap Tempeh for the fermentation step and then wrapped into banana leaves.In order to prevent the development of other fungi or bacteria during the fermentation step, the soybean seeds were traditionally rinsed and cooked several times in boiling water.Briefly, 200 g of dehulled soybean seeds was first rinsed 3 times in tap water.A first cooking step was performed in water until boiling point, with 
regular skimming for 20 min. The soybean seeds were then soaked for an additional 20 min in the heated cooking water. The soybean seeds were rinsed 3 times using tap water, and subsequently cooked for 20 min in boiling water. These were then left to soak in the heated water for an additional 20 min until tender, as assessed by a finger crushing test. Once the beans had cooled, the water was removed and the beans were dried in an oven at 80 °C. The resulting preparation was then cooled to room temperature, and the Rhizopus oligosporus ferment was added. The mixture was then placed into a 200 g sealed plastic storage bag, with aeration holes made using a fork. The incubation was carried out at 28 °C for 24 h before the Tempeh was wrapped in an additional plastic bag and vacuum-packed for conservation at 4 °C until assay. Gen and Daid, the two main isoflavones from soy, were assayed using two ELISA tools developed by our laboratory and validated by an international ring test against physico-chemical methods. All data are given as aglycone equivalents. Briefly, the extraction was performed on samples diluted 1/50 in distilled water. For liquid matrices, 1 mL was diluted in 49 mL of water. For solid samples, 1 g, either ground or not, was dispersed in 49 mL of water. The resulting mixture was swirled for 15 min and 500 μL of the dilution was placed in a 10 mL Pyrex glass tube. For the digestion step, 1.5 mL of acetate buffer was added, followed by 10 μL of digestion enzyme. Caps were then screwed onto the Pyrex tubes. The solution was incubated overnight at 37 °C with gentle swirling. A digestion control was run in parallel, using a solution of genistin. Four mL of ethyl acetate was added to the digestion solution. After a 15-s vortex step, the emulsion was separated by a 2 min spin at 4 °C, 3000g. The tubes were frozen for 60 min at −20 °C, and the organic phase was collected in a glass tube and then evaporated at room temperature until dry, using a speed-vac apparatus. The extraction was renewed twice on the same aqueous phase, with the three organic phases being pooled in the same collecting tube and then dried. The empty tubes then received 500 μL of assay buffer and the extract was re-suspended in the assay buffer by sonication. The samples were kept at −20 °C until assay. An extraction control prepared from a Gen solution was run in parallel to check for extraction recovery. In all cases, the extraction rate was found to be between 96% and 102%. The ELISAs follow competitive procedures with a fixed antigen coated in the microwells of the microtitration plates. The coated antigens were Gen and Daid haptens coupled to swine thyroglobulin. These compounds were synthesized in our laboratory. The antibodies were specific to each isoflavone. They were raised in rabbits by our team to recognise haptens coupled to bovine serum albumin, and their characteristics are described in a previous study. The sensitivity of the assays, given as the mid-point of the standard curve, was 8 ng well−1 for Daid and 3.12 ng well−1 for Gen. The detection limit was therefore 2 ng well−1 for Daid, and 1.9 ng well−1 for Gen.
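The quantification step of competitive ELISAs of this kind can be illustrated with a short, generic calculation. The snippet below is a sketch only: it fits a four-parameter logistic standard curve and back-calculates a sample concentration through an assumed dilution chain (50 μL loaded per well, a 1:500 overall dilution, 1 g of sample extracted into 50 mL); the standard-curve readings and the sample absorbance are invented for illustration and do not reproduce the laboratory's own software or data.

```python
# Generic sketch of competitive-ELISA quantification (illustrative values only).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic: absorbance decreases as analyte increases (competitive format)."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Hypothetical genistein standard curve (ng per well vs absorbance).
std_ng = np.array([0.78, 1.56, 3.12, 6.25, 12.5, 25.0, 50.0])
std_abs = np.array([1.80, 1.55, 1.20, 0.85, 0.55, 0.35, 0.22])

popt, _ = curve_fit(four_pl, std_ng, std_abs, p0=[0.1, 2.0, 3.0, 1.0], maxfev=10000)

def ng_per_well(absorbance, bottom, top, ic50, hill):
    """Invert the fitted 4PL to recover ng per well from a sample absorbance."""
    return ic50 * ((top - bottom) / (absorbance - bottom) - 1.0) ** (1.0 / hill)

# Example back-calculation with assumed volumes and dilution factor.
a_sample = 0.90                     # invented sample absorbance
ng_well = ng_per_well(a_sample, *popt)
dilution = 500                      # assumed overall dilution of the extract
ng_per_ml_extract = ng_well / 0.05 * dilution            # 0.05 mL assumed loaded per well
mg_per_100g = ng_per_ml_extract * 50 / 1e6 * 100 / 1.0   # 50 mL extract assumed from 1 g of sample
print(f"{ng_well:.1f} ng/well -> {mg_per_100g:.1f} mg 100 g-1 (aglycone equivalents)")
```

In a real assay, a curve of this kind would be fitted for each plate and each isoflavone, with the appropriate extraction and dilution factors applied before expressing the result in mg 100 g−1.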
All samples had to be diluted to at least 1:500 to allow accurate determination compared to the standard curve.The inter-assay variations were 13.1% and 12.8% for Gen and Daid, respectively.The data obtained were assumed to follow a normal distribution since no bias of any kind could be identified that would lead to another distribution.All experiments were analysed via an ANOVA test using the Statview software.ANOVA analyses were carried out on the data from each test within the same experiment, and for each of the different experiments.When a difference was recorded, the post hoc analysis was performed using non-parametric tests specially designed for small samples and following the Mann-Whitney procedure.The influence of pre-cooking duration on dehulled soybean seeds is given in Fig. S2.This figure shows the impact of pre-cooking before bean crushing on isoflavone content in soy-juice.At this stage, the pre-cooking water was eliminated, along with any isoflavones it might contain.The subsequent cooking was designed to allow equal thermal treatment of the food matrix.If isoflavones passed into the water at the second cooking stage, they remained in the soy-juice and Okara.The juice prepared from the cooking mixture was separated from the Okara by simple filtration.The difference was highly significant in juice from soybean pre-cooked for 60 min when compared to juice from un-pre-cooked beans.Sixty minutes pre-cooking removed 54.32% of the initial isoflavones.With pre-cooking, the quantity of isoflavones increased in the soaking water, and in the Okara.It was significant in Okara when compared to the non-precooked and the 60 min pre-cooked samples.The results indicated, therefore, that the longer the pre-cooking step, the lower the isoflavone content in the resulting soy-juice.They also showed that when the cooking step is omitted just before filtration, the isoflavones remain in the Okara.The influence of pre-cooking duration on the isoflavone content of soy-juice made from pounded dehulled beans is presented in Fig. 1.This figure shows the impact of a pre-cooking-step, included before pounded-soybean crushing, on isoflavone content in soy-juice.The pounded beans were expected to lose their isoflavones more easily because of a greater contact between seed and water.The graph shows that the greatest efficiency in isoflavone removal was obtained during the first 5 min.However, isoflavones were constantly being removed thereafter, although less efficiently, and the removal was related to the cooking duration.At the end of the 60 min cooking period, the removal percentages were 57.43% for Gen, 55.00% for Daid, and 56.68% for Gen + Daid.The influence of pre-cooking duration on the isoflavone content of soy-juice made from flakes, pounded or un-pounded, is presented in Fig. 2.Pounding was expected to increase contact between the soy matrix and water and to increase isoflavone elimination in pre-cooking water.Fig. 
2 shows that the isoflavone levels in juice obtained from flakes ground and pre-cooked for different durations are always lower than those from the corresponding juice made using entire flakes.However, the differences between pounded and un-pounded flakes became significant only after 60 min of cooking.At that time, the values were 13.87 ± 0.15 mg 100 g−1 vs 10.72 ± 0.32 mg 100 g−1 for un-pounded and pounded flakes, respectively.In both cases, however, the isoflavone content after 60 min cooking was significantly lower than in the initial soybean matrix.The goal of the ultra-filtration procedure was to extract 80% of isoflavones from the initial soy-juice.Several attempts were performed to find an efficient protocol.The final combination, validated on 6 different tests, was as follows: temperature 55 °C, filtration cut-off 5 KDa, pressure 400 kPa.First, the diluted soy-juice was filtered until the extra water was removed.Then, 10 L of pre-filtered extra tap water was added to the soy-juice and, once the water was eliminated, this procedure was repeated twice more.Samples were collected after the first step, and extra samples were collected at each water addition until the third and final rinsing step.In these conditions, the final isoflavone content was 18.79% ± 1.99% of the initial quantity.Fig. 3 gives the elimination kinetic of isoflavone with the initial content, the content at mid-process after the first elimination of added tap-water, and the final content after the 3 additional rinsing procedures.Table 1 gives isoflavone content from Tofu made from ultra-filtrated juice, showing that, when compared to the initial juice, the percentage of remaining isoflavone after Tofu processing was only 15.72%.Tofu preparation was tested on experimental Tofu obtained in glass tubes by centrifugation or using a traditional process.This was done to evaluate the impact of curd rinsing on Tofu isoflavone content.According to other authors, Nigari may induce flatulence and diarrhoea.While Nigari has currently been replaced by lemon juice in Indonesia, some cooks in China have reported rinsing the Tofu curd up to 7 times before pressing it.The impact of curd-rinsing on the isoflavone content of Tofu was then analysed.The results obtained after 7 traditional rinsings or 3 experimental rinsings are presented in Table 1.The table shows that when the rinsing procedure was applied on the already formed curd, its mass was affected, even though this was not statistically significant.However, the isoflavone content could be reduced by nearly 50% using the traditional process, and by up to 66% for the experimental procedure in Pyrex glass-tubes.The impact of soy-juice cooking duration on the isoflavone content of Tofu is presented in Figs. S3 and S4.The results are also summarized in Table 2.Fig. S3 shows the evolution of isoflavone content in Tofu made from industrial soy-juice, either cooked for 1 min or cooked for 15 min.This moderate duration already induced a significant decrease in isoflavone content.Fig. 
S4 shows the evolution of isoflavones in Tofu made from soybean-meal.This reproduces the recipes described by Wentland, which are still followed in the Debao-Gangxi region of China.Four cooking durations were tested, from 5 to 60 min.In these cases, the difference was significant after 60 min of cooking.Here, the isoflavone content represents only 30.54% of the isoflavone content of the initial soy-flour.The data obtained during the process included the first rinsing steps, 2 cooking and soaking steps in hot water and, finally, the fermentation steps, which are presented in Fig. 4.The figure shows that with only 34.19 ± 1.82 mg 100 g−1 in Tempeh, the percentage of isoflavone remaining was only 18.07% ± 0.96, whereas the initial value in the soybean seeds was 189.25 ± 4.37 mg 100 g−1.Traditional fried Tempeh contained the same amount of isoflavones as traditional raw Tempeh i.e. 34.68 ± 2.11 mg 100 g−1 isoflavones with 22.43 ± 1.13 mg 100 g−1 Gen and 12.25 ± 0.98 mg 100 g−1 Daid.The industrial Tempeh, assayed here for comparison, contained 29.23 ± 1.21 mg 100 g−1 of Gen and 16.08 ± 0.29 mg 100 g−1 Daid, totalling 45.32 ± 1.50 mg 100 g−1.The isoflavone content from the initial soybeans used to prepare commercial Tempeh is unknown and therefore the efficiency of the modern preparation on isoflavone removal is unknown.Despite on-going controversy about the beneficial or adverse effects of estrogenic isoflavones, most scientists would probably agree that both effects could be expected from weak estrogens present at the mg range in human and animal foodstuff.Estrogenic isoflavones are associated with soybean, their main source within the animal and human diet.The table combines original data and data from Vergne.Isoflavones, with their low estrogenic effects, were considered to be safe, based on the assumption that they had always been consumed at the same amount in Asian countries.However, is modern isoflavone exposure comparable to historical levels in Asian populations?,In fact, it has been found that Asian age-old cooking habits empirically eliminated soybean’s anti-nutritional factors.Domestic soy preparation, including soaking and simmering, lasted several hours.This procedure, effective even in rudimentary conditions, can remove glycosylated isoflavones from soybean because these compounds are water-soluble.However, as glycosylated isoflavones adsorbed to protein need time to be desorbed, are modern industrial production methods as efficient at removing phytoestrogens?,These considerations raise four further questions: 1) How exactly did traditional Asian cooking habits influence isoflavone content?,2) Is modern-day isoflavone exposure resulting from industrial foodstuff comparable to the age-old Asian diet?,3) If not, and since recent data indicate the potentially adverse effects of isoflavone exposure on reproductive parameters, what type of data is required to improve our knowledge of dietary phytoestrogen health effects?,4) Can we consider isoflavone exposure as homogenous in all soy-consuming countries, irrespective of their degree of industrialisation?,Experimental data about Tofu or soy-juice using pounded or un-pounded seeds, showed that prolonged pre-cooking, prior to crushing, eliminated 50–80% of the initial isoflavone content.Present-day manufacturers could use pre-cooking to reduce isoflavone content in soyfood.The longer the soaking and simmering steps before drying solid soyfood, the more complete the elimination of isoflavones.Modern soy-juice, however, being liquid, 
logically presents the highest isoflavone:protein ratios.Tofu curd, prepared from soy-juice, can be rinsed to remove isoflavones at significant rates.Centrifugation of rinsed curd could allow industry to reduce isoflavones.The present experimental ultra-filtration process, which reduced soy-juice isoflavone content by over 80%, confirmed Chinese reports.It also maintained Tofu protein levels and its transformation ability.Tofu curd formation requires prior elimination of the NaHCO3 added to prevent clotting and membrane clogging.KHCO3 was found to be more efficient than NaHCO3, since only 1.77 g L−1 of KHCO3 was necessary.Admittedly, additional work by manufacturers would be needed to ensure that the juice and Tofu obtained via ultra-filtration correspond to consumer taste.Equally, as prolonged cooking and several rinsing steps are not environmentally-friendly, the soy-product industry might not retain them.Although traditional Tempeh preparation effectively removed isoflavones, leaving only 18.07% in the final product, it required 2 L of tap water at each soaking and cooking step.The final water:seed ratio of 16 L:200 g may not, however, be sustainable in industry.Because we do not know the isoflavone levels in the soybean seeds used for industrial Tempeh, no comparison can be made between the modern and traditional processes.The present study shows that although traditional recipes successfully removed isoflavones, these recipes might be difficult to adapt to industrial soy food production procedures.Therefore, to reduce consumer exposure to isoflavones, various legumes containing vegetable proteins could be used as a replacement.These include pea, lupine and beans, commonly used in manufactured human foodstuff.Variety selection makes their nutritional characteristics much closer to those of soybean, with a much lower isoflavone content.The traditional recipes developed here led to low isoflavone levels, thereby confirming earlier findings indicating that soybean curd only contains less than 5% of the soybean isoflavones.This indicates that historical isoflavone exposure in Asia was lower than the current cumulative exposure of regular soyfood consumers, as confirmed in a study of rural Chinese areas pursuing traditional cooking practices.Their isoflavone intake was low.Current isoflavone intake, however, for an adult in China ranges between 20 and 40 mg day−1.When the industrialisation of soybean processing started, knowledge about isoflavones was lacking.This meant that isoflavones were not considered in either traditional or modern industrialised foodstuff.The heating steps were considerably accelerated in industry to reduce energy costs, but fewer isoflavones were removed.Later, when these compounds were discovered in the urine of modern soyfood consumers, it was thought that as these probably corresponded to traditional Asian urine concentrations, both modern Asian and Western isoflavone exposures could be considered safe.Recently, Tofu was reported to be by far the most frequently consumed soyfood in China.In addition, traditional Tempeh, Tofu, Nato and Miso are solid foodstuffs, all obtained after long simmering or soaking steps.The subsequent pressing or drying, used to eliminate water, also removed most of the isoflavones.Modern soy-juice, not a traditional drink in Asia, is made by retaining the cooking water and all its isoflavones, and exhibits the highest isoflavone:protein ratio.Soy-juice was not traditionally consumed in Asia because it is unattractive to Asian 
populations, who have problems in digesting milk.Traditionally, less than 5% of soybean was consumed there as soy-juice, and this still pertains to present-day Japan.In Western countries, 90% of soyfood is derived from soy-juice.The introduction of soy-juice was followed by that of soy-based infant formulas, which led to an infant isoflavone exposure of 2.3–9.3 mg kg−1 day−1.These levels far exceed those shown to disrupt the menstrual cycles in premenopausal women.In Western countries, soybean has become a common part of human diet via soyfood based on soy-juice, and the soy hidden in manufactured food.Isoflavones, now the most prevalent and potent estrogenic compounds in human food, are 1000–10,000 times more concentrated in foodstuff than other xenoestrogens, such as pesticides, themselves 10–100 times less potent than isoflavones.Consequently, since hidden soy is seldom included in food databases, consumers’ isoflavone exposure is generally underestimated.Recently, fertility problems have been reported in humans, and also soy-fed cattle of industrialised countries.The US NTP recently reported toxic effects of Gen on rat reproduction at LOAEL of 35 mg kg−1 bw day−1, thereby allowing initial toxic levels in animals and humans to be defined.The exposure of children, adolescents and adults under 50 to significant xenoestrogen levels is thought to impair fertility.Soy isoflavones are now associated with a reduction in sperm count in Asian and US adults.Over-consumption of soy in women is linked to pituitary, endometrial and menstrual cycle impairments in women.Deleterious effects have also been reported in children fed either soy-based formulas or soy in early life.One study specifically records the effects of soy-based infant formula intake during infancy on adult reproduction, but it does not help in determining whether adult fertility could be affected, as many scientists fear.Reproductive impairments were also reported in domestic animals exposed to phytoestrogens.These data suggest that estrogenic isoflavones should be considered, together with other endocrine disruptors, as a potential cause of such fertility problems.Soy and/or its isoflavones were shown to disrupt the thyroid function.Goitres have been observed in hypothyroid babies fed soybean-based infant formula, and soy isoflavones seem to aggravate any pre-existing hypothyroidism.The controversy about breast cancer is now dying down.The current hypotheses are that Gen may prevent cancer during its promotion phase via epigenetic effects, thereby protecting Asian women exposed from childhood to modern soyfood.However, Gen also induces the expression of genes involved in breast-cancer cell proliferation in women with estrogen-dependent breast cancer.Soyfood has proliferative effects on healthy breast cells in premenopausal women.In addition, Gen and Daid are growth factors for human estrogen-dependent tumour cells both in vitro and in animal models of xenograft nude mice.In Western menopausal women, the effect of soy on breast cancer is unclear.However, most of the existing studies have neglected to include the soy isoflavones hidden in manufactured food, thus reducing the statistical power of their analyses.The Asian diet may, however, be protective for several cancers via other traditional foodstuffs.Although prostate cancer incidence differs in Western and Asian populations, the occurrence of cancer, as analysed by post-mortem diagnosis, shows similar frequencies between populations.Data show that the tumour estradiol receptor 
subtypes are crucial.ERβ-bearing tumours are protected by isoflavones, whereas ERβ2 variant bearing tumours are stimulated.The link between colon cancer and soy consumption is unclear.Recent data correlated soy consumption in women with a lower risk of colon cancer.These results still need more consistent evidence, since soybean may not be the only positive factor involved.Meta-analyses also show positive isoflavone effects on the prevention of hot flushes, and soy extract-based food-supplements are the most popular world-wide for vasomotor menopausal symptoms.The excretion of bone resorption markers is reduced in the peri- and post-menopausal populations by isoflavone intake from food supplements, but this does not prove that isoflavones actually prevent osteoporosis.Finally, for the FDA, the most consensual effect of soy is a consistent reduction of plasma LDLchol when soybean constitutes a meat substitute.Several studies show that the lowering of plasma LDLchol due to soybean can range from 7 to 10%.However, the relevancy of this biomarker for cardio-vascular diseases is currently under debate.Traditional soy preparation in Asia was mostly confined to solid food-stuffs, such as Tofu, Tempeh, Natto, Miso.Soy-juice was, and still is, only occasionally consumed in Asian countries.All of the traditional foodstuffs mentioned above were traditionally prepared after prolonged simmering lasting up to 4 h, or following several rinsing and cooking steps in water.Here we show that simmering in water time-dependently removes isoflavones from the soybean foodstuff.We also show that rinsing and cooking in water allow the glycosilated isoflavones to leak into the water, thereby reducing the isoflavone content in soybeans.As this cooking water is removed from traditional solid soy-based foodstuff, this indicates that the historical exposure to isoflavones was probably low in Asia.Nowadays, in modern Asian countries, the traditional recipes are prepared using industrialised processes, and the rinsing and cooking steps are reduced to save energy and water costs.These new procedures, developed when the effects of isoflavones were largely unknown, retain a high isoflavone:protein ratio in modern soy-based foodstuff.In consequence, the human exposure currently recorded in modern soy-eating countries is most probably higher than in the age-old ones.In addition, in Western countries, where soy-juice and its derived products are mainly consumed, the cooking water is used, together with all of its isoflavones.This explains why soy-juice is the soy-food that exhibits, by far, the highest isoflavone:protein ratio.Here, we showed that precooking soybeans and eliminating the water before crushing them can significantly remove isoflavones from soy-juice, especially if the cooking step that mixes juice and Okara is reduced or omitted.Soy isoflavones are the most potent and prevalent xenoestrogens in the modern consumers’ environment.They can aggravate the thyroid status of hypothyroid patients.Equally, the current isoflavone exposure is most probably a recent one.Therefore, soybean should be considered as a modern source of endocrine disruptors, and studied as such. | Estrogenic isoflavones were found, in the 1940s, to disrupt ewe reproduction and were identified in soy-consumers' urine in 1982. This led to controversy about their safety, often supported by current Asian diet measurements, but not by historical data. Traditional Asian recipes of soy were tested while assaying soy glycosilated isoflavones. 
As these compounds are water-soluble, their concentration is reduced by soaking. Pre-cooking or simmering time-dependently reduces the isoflavone:protein ratio in Tofu. Cooking soy-juice for 15 or 60 min decreases the isoflavone:protein ratios in Tofu from 6.90 to 3.57 and 1.80, respectively (p < 0.001). Traditional Tempeh contains only 18.07% of the original soybean isoflavones (p < 0.001). Soy-juice isoflavones were reduced by ultra-filtration (6.54 vs 1.24 isoflavone:protein; p < 0.001). Soy-protein and isoflavones are dissociated by water rinsing and prolonged cooking, but these have no equivalent in modern processes. As regards human health, a precise definition of the safety level of isoflavone intake requires additional studies. |
525 | Responses in a temporarily open/closed estuary to natural and artificial mouth breaching | The South African coastline is dominated by small microtidal estuaries that remain open to the sea for short periods.These estuaries, known as temporarily open/closed estuaries, are relatively common in Australia, South America, North America, and India.Since TOCEs have a relatively small catchment, they are greatly influenced by factors like the quality of inflowing river water.Water circulation patterns are largely influenced by runoff patterns; typically occurring as high energy events that last a few days at most and are separated by long dry periods.These events directly affect the flushing mechanism and flushing times within estuaries, which in turn influence the water quality.As an example, conservative nutrient behaviour is expected in rapidly flushed systems but internal processes have a larger effect on nutrient cycling in systems with longer flushing times.When low river inflow coincides with periods of high wave action, sand deposition at the mouth leads to TOCEs closing off naturally from the sea by the formation of a sand bar.Provided freshwater input is sufficient, the water level rises until breaching of the mouth occurs.However, this process can be delayed by factors such as evaporation and seepage.In 1988, the Department of Water Affairs announced that a 70 m high and 270 m long dam with a capacity of 23 × 106 m3 was to be built 3 km upstream of the temporarily open/closed Great Brak Estuary.The construction of the Wolwedans Dam in 1989 has led to a reduction in freshwater flow to the estuary, resulting in the estuary becoming predominantly closed.Annual flow to the estuary has been reduced during the last two and a half decades as a result of afforestation, direct abstraction, and damming from 36.8 × 106 m3 under natural conditions to 16.25 × 106 m3 at present.An Environmental Impact Assessment conducted on the dam concluded that reduced flow could be mitigated by artificially breaching the mouth while simultaneously releasing freshwater from the dam.The recommended amount of freshwater to be released, referred to as the ecological reserve, was 2 × 106 m3.In response to this, a mouth management plan was developed, which recommended that the mouth be artificially breached during spring and summer of every year.Prior to the development of the dam the mouth of the estuary remained open for longer periods.After the initial construction of the dam, there was still a significant overflow of freshwater into the estuary that kept the mouth open mainly because there was no great demand or abstraction of freshwater from the dam.The mouth of the estuary was artificially breached in February 2011 with a volume of 0.3 × 106 m3 and it closed a week later.A 1:100 year flood with a volume close to 3 × 106 m3 breached the mouth naturally in June 2011, flushing water and sediment out of the estuary.Two major differences were observed with regard to the volume and duration of the events.Firstly, for the artificial breach, estuary water level was capped at 2 m above MSL just prior to the breach compared to water level exceeding 3.5 m during the natural breach, which resulted in the flooding of low-lying properties.Secondly, flow is cut off from the estuary once the artificial breach is initiated, whereas there was a long tapering off period of flow, lasting for months, following the natural breach event.It must be noted that timed water releases from the dam are usually made following artificial 
breaching events to improve scouring and maintain an open estuary mouth for longer, but on this occasion, there was limited water supply as a result of the extended drought.Reduction in the amount of freshwater entering estuaries has been well documented in South Africa and other parts of the world.Emphasis has been placed on the dependence of these estuaries upon the freshwater flow that serves in conjunction with tidal exchange to flush sediment and water from the estuary.More than two decades ago, Taljaard and Slinger stated that there is still a paucity of information on the requirements of estuaries for effective flushing.The aims of this study were to investigate the two flushing events, determine their effects on the submerged macrophytes and macroalgae, and lastly, compare the conditions of the estuary to those before the construction of the dam.A detailed description of pre-dam environmental conditions can be found in the Ecological Water Requirement Study conducted by DWA.The two mouth breaching events that occurred in 2011, one natural and the other artificial, provided a rare opportunity to compare their effectiveness at flushing water from the estuary.It should be noted that smaller floods such as the 1:10 or 1:20 year events which have estimated flow volumes of 250 m3 s− 1 and 429 m3 s− 1, respectively, are just as important in “resetting” an estuary as the 1:100 flood.The estuary became stratified when it was open in February 2010 and June 2011 with a wedge of mesohaline water present in the surface waters and euhaline water still trapped in the deeper bottom water.Weeks after mouth closure, the estuary once again became homogenous as salinity evened out.In small, shallow systems, the closed mouth state eventually reverts to a well-mixed brackish system, with no distinct salinity gradient or stratification.On both occasions, this pattern was observed in the Great Brak Estuary.Following the artificial breach, low dissolved oxygen concentrations occurred in the water column in April 2011, most likely the result of chemical and biological oxygen demand.The natural breach caused the estuary water column to become well oxygenated and even the bottom water had DO a concentration of 5 mg l− 1 indicative of a well-flushed system.This continued through to July 2011, with only the deeper section of the estuary becoming near anoxic at 3.4 km.Low dissolved oxygen has become synonymous with the closed phase and is even present in the deeper water during the open phase.This occurs mainly because the primary mechanism for flushing the estuary has changed from minor floods, which are now attenuated by the dam, to major floods and episodic overtopping events from coastal storms.This represents a change in the post-dam abiotic environment of the estuary.A steady base flow originating from the catchment, associated with naturally higher water levels would have led to significant scouring of sediment from the mouth region which would have maintained an open mouth.However, this function was eliminated because of the dam construction.The oxygen saturation data showed that saturated conditions were not the general state in the estuary.On the contrary, values below 6 mg l− 1 were rather common.The indications from these data show that both fresh water and sea water exchange are required to retain a medium to high oxygen concentration.The interpretation here is that even in shallow systems, mouth closure causes oxygen levels to fall.The decomposition of organic matter under the relatively high 
temperatures found in the Great Brak results in a rapid consumption of oxygen that cannot be replaced by normal diffusion from the atmosphere and either water exchange or a greater extent of turbulence is required.Of importance is that after the flood “new” oxygenated sea water replaced the “old” trapped deoxygenated estuarine water in the bottom sections and this shows that the system had been well flushed, i.e. based on the salinity and DO concentration in the bottom water.The Great Brak Estuary is a blackwater system, which in its natural state would have been oligotrophic.Even though the water column nutrient concentration still falls within that of an oligotrophic system, the estuary resides in a eutrophic state, made evident by the persistent blooms of filamentous algae.Such is the case in other studies where water column concentrations alone were unreliable as indicators of nutrient enrichment since nutrients are taken up rapidly by macrophytes or adsorbed to particulate sediments.Still there have been changes in the nutrient concentrations associated with the dam construction.The increase in TOxN in the system in February and July 2011 were due to incoming freshwater from the dam and catchment, respectively.Fifty-five millimetres of rain fell in July 2011 introducing nitrate into the estuary.The elevated nitrate concentrations were associated with agricultural activities in the catchment.Similar findings from the Colne Estuary in the UK by Thornton et al. showed that peaks in winter nitrate and nitrite concentrations were associated with winter rainfall and subsequent runoff from the surrounding land, which flushed nitrate from the catchment soils.The dissolved inorganic nitrogen in the water column was mainly composed of NH4+, which is similar to the findings of Taljaard and Slinger and DWA on the Great Brak Estuary and Spooner and Maher in the ICOLL Corunna Lake in Australia.In these latter studies, the authors suggested that this was primarily due to the longer residence time of the water body and the accumulation of organic matter.Ammonium is the major inorganic nutrient released during remineralization.The higher NH4+ and SRP in the bottom water are linked to the low oxygen environment that favours the release of these nutrients from sediments.SRP usually adsorbs to metals such as iron in anoxic sediments.When the water above the sediment becomes oxic, then SRP is released.Nitrogen on the other hand is present in its reduced form NH4+ in the anoxic layer and is oxidised to form TOxN if the water layer becomes oxygenated.Anoxic water above the sediment layer favours all TOxN to be converted to NH4+.This process worsens if there is nutrient-rich organic material accumulating in the deep deposition zones within the estuary which eventually results in the release of both SRP and NH4+ to the water column through bacterial remineralisation.Snow and Taljaard described the estuary as frequently having high NH4+ and SRP concentrations in the deeper pools of the middle and upper reaches that usually coincided with anoxic conditions resulting from organic decomposition and long residence times.They further suggested that the higher concentrations were probably associated with remineralisation processes and that this condition was present even under the open mouth state when vertical stratification caused “trapping” of the bottom waters.The increase in NH4+ in February 2011 in the surface waters was caused by the release of freshwater from the dam.This is similar to findings by DWA where 
subsurface flow releases of low-oxygen water from the dam also contributed to the NH4+ concentrations in the water column.Regular seasonal flooding would usually prevent significant accumulation of organic inputs in other systems but the Wolwedans Dam prevents this regular resetting mechanism in the Great Brak Estuary.A further indication that the natural breach flushed the estuary arises from the lower NH4+ and SRP in the bottom water following such a breach.The change in form of N is primarily due to the presence of higher dissolved oxygen concentrations in the bottom waters.Before the construction of the Wolwedans Dam in 1989, the submerged macrophytes coexisted with the macroalgae Caulerpa filiformis, in the water area below the Island Bridge.These submerged macrophytes typically occur at elevations less than 0.89 m above MSL and would be prevalent during open mouth tidal conditions but are also present when the mouth is closed.During open conditions, R. cirrhosa is restricted to the back water channels and pools of water.However, this has changed over the past three decades.The prolific nuisance algae C. glomerata has taken over the shallow intertidal areas in the lower reaches of the estuary and had previously not been reported in this estuary until the 1990s."This event is increasingly common worldwide and is universally considered as a symptom of eutrophication.During the closed phase, the filamentous macroalgae C. glomerata out-competes the submerged macrophytes R. cirrhosa and Z. capensis.This is mainly due to the calm sheltered conditions and available nutrients.The macroalgae follows a seasonal cycle during the closed mouth state whereby growth occurs during the autumn and winter months and die back occurs during the spring and summer months.As expected, this has been observed elsewhere in many rivers and lakes by other authors who indicate that the die-off seems to occur in the middle of summer.However, C. glomerata attained its highest recorded area cover in March 2011 directly after the summer die-off period indicating that another driver was causing the growth response to occur apart from its seasonal growth cycle.The growth response coincided with the artificial release of freshwater from the dam.The flow from the dam was not sufficiently strong to flush water, sediments, and the alga out of the estuary and resulted in the mouth closing after only a week.As water levels rose, the alga was deposited onto the marsh areas.After breaching occurred, water drained out of the estuary, leaving the alga stranded on the marshes, and as the flood tide entered, the macroalgae was once again redistributed.The alga was then able to utilise the available nutrients in the water column and expand its area cover from 35 000 m2 in February 2011 to 64 000 m2 in March 2011.Similar results measured as percentage cover and wet mass were found in 2009.The high biomass and expansion of the macrolagae indicates the deteriorated ecological state of the estuary.After the estuary mouth had been flushed and remained open, the area cover of the submerged macrophytes increased considerably.The persistence of C. 
glomerata within the lower reaches of the estuary is attributed to its ability to withstand the shear stress found in benthic regions. Another factor contributing to its successful occupancy of the lower reaches of the estuary is the continuous availability of NH4+ in the estuary, which favours the growth of the nuisance alga. The estuary has shifted from being an allochthonous to an autochthonous system. Moreover, Human et al. showed that the filamentous alga has become an important storage compartment for nutrients within the estuary nutrient budget, retaining a large portion of the nutrient load. These changes have undoubtedly occurred because flow has decreased dramatically and conditions have become more stable, favouring the deposition of materials in the estuary and the increased availability of NH4+ and SRP. The natural breach of the estuary mouth caused by the flood in June 2011 flushed water and scoured a sufficient amount of sediment from the estuary, effectively “resetting” the estuary. As a consequence, the C. glomerata had been washed out of the system. Other studies have also concluded that exchange with the ocean in coastal lagoons washes out organic matter that would otherwise cause eutrophication and anoxia. These studies point out that if isolation occurs for long periods, it would favour the development of algal blooms. An increase in nutrients and blooms of filamentous cyanobacteria and diatoms occurred in Imboassica Lagoon, Brazil, after 15 months of mouth closure. Another example was a phytoplankton bloom that occurred in the subtropical Indian River Lagoon in the USA after a residence time of 1 year. A similar condition now exists in the temporarily open/closed Great Brak Estuary, where prolonged residence time and the recycling of nutrients from the benthos make the estuary an ideal environment for the prolific growth of C. glomerata. The fundamental conditions that favour algal blooms in the shallow waters of the lower estuary are the prolonged residence time of the water body during the closed mouth state and the availability of remineralised nutrients. Without effective flushing the estuary will continue to experience these macroalgal boom and bust cycles. The post-flood estuary condition was similar to pre-dam conditions, as the natural breach with a prolonged flow volume flushed the estuary of sufficient sediment and associated organic matter. This supports a strongly tidal and high-oxygen environment, suppressing the release of remineralised nutrients from the benthos. In contrast to the artificial breach, the natural breach was not followed by a C. glomerata bloom in the estuary. Natural breaching conditions could be mimicked in the Great Brak Estuary, but in order for this process to be effective, a greater quantity of water needs to be made available to the estuary from the dam, particularly as base flow following a breaching event. Currently, the allocated reserve of 2 × 106 m3 per year is insufficient to flush and “reset” the estuary. Taljaard et al.
reported that a total of 2.77 × 106 m3 was enough to ensure full flushing of resident saline water from the estuary. Indeed, the present study supports the findings of Taljaard et al.; however, it must be emphasised that the release of freshwater from the dam should be associated with a tapering off of flow in order to maintain open mouth tidal conditions. The flood tide introduces adequate seawater into the bottom water as far up as 4.7 km to flush the deeper sections of the estuary. These findings indicate the direction in which the small microtidal estuaries of South Africa and the world are heading. Given the increase in population and ever increasing dam development, available freshwater inflow to estuaries will become an even scarcer resource; many permanently open estuaries will begin closing periodically, and TOCEs will close more frequently and for longer. It is imperative that the water requirement allocated to estuaries is well researched and documented and encompasses all facets of biology so that adequate management actions can be taken to maintain the health of the entire estuary ecosystem. The Great Brak Estuary is located on the south coast of South Africa, approximately 420 km east of Cape Town. It is 6.2 km long and drains a forested, semi-arid catchment area of 192 km2. The catchment receives more or less equal amounts of rain throughout the year, with slight peaks in spring and autumn. However, the area is subjected to droughts and occasional flooding, and the recorded annual runoff varies from as little as 4.3 × 106 m3 to as much as 44.5 × 106 m3. The Wolwedans Dam, with a capacity of 23 × 106 m3, is located 3 km upstream of the estuary. The estuary has a high tide surface area of 0.6 km2 and a tidal prism of 0.3 × 106 m3. The mouth of the estuary is bounded by a low rocky headland on the east and a sand spit on the west. Immediately inland of the mouth, the estuary widens into a lagoon basin containing a permanently inhabited island about 400 × 250 m in size. The lower estuary is relatively shallow, with some deeper areas in scouring zones near the rocky cliffs and bridges. The middle and upper estuary is generally less than 2 m deep, with some deeper areas up to 4 m deep located between 2 and 4 km from the mouth. The mouth of the Great Brak Estuary generally closes when high waves coincide with periods of low river flow. Artificial breaching has been practised at the Great Brak Estuary for the last two centuries to prevent flooding of low-lying properties. Sampling was done in November 2010, February 2011, April 2011, June 2011, and July 2011. The mouth of the estuary was open at the time of sampling in February, June, and July 2011, while November 2010 and April 2011 represent closed mouth conditions. Salinity and dissolved oxygen were measured using a Hanna multiprobe at each of the 11 sampling stations at 0.5 m depth intervals from the surface to the bottom. Nutrient samples were collected with a pop bottle from the main channel at every 1 m depth. Samples were filtered through 0.45 μm syringe filters and frozen. Filtered water samples were analysed for total oxidised nitrogen using the reduced copper cadmium method as described by Bate and Heelas. Ammonium and soluble reactive phosphorus were analysed using standard spectrophotometric methods. All samples were analysed at the Physiology Laboratory of the Nelson Mandela Metropolitan University Botany Department. The area covered by the submerged macrophytes (Ruppia cirrhosa (Petagna) Grande and Zostera capensis) and the macroalga Cladophora glomerata Kützing was mapped in the field on a
monthly basis. These field maps were overlain on rectified Google Earth images using ArcGIS software version 10. The area cover was then digitised and calculated for each of the submerged macrophytes and the macroalga. A PVC corer was used to harvest submerged macrophyte samples in the lower reaches of the estuary. Six replicates were taken at each site. The samples were stored in a cool environment while being transported to the laboratory. The plants were dried for 48 h at 60 °C and the dry mass determined using an electronic balance. The data were statistically analysed using Statistica software version 11. A Shapiro–Wilk test for normality was used to determine if the data were parametric or non-parametric. When the data showed a non-parametric distribution, a Kruskal–Wallis ANOVA for significant difference was performed. If the data had a normal distribution, a one-way ANOVA and Tukey HSD test were performed (a minimal open-source sketch of this decision workflow is given after this entry). In this way, it could be determined whether sample differences occurred with depth. Before the mouth was artificially breached in November 2010, the estuary was well mixed. After the artificial breach, the estuary became stratified during its open phase in February 2011, with a wedge of mesohaline water being introduced to the surface waters and the older euhaline estuarine water still trapped in the deeper bottom waters. A month after mouth closure, the estuary became homogenous again as salinity levels evened out. The natural breach in June 2011 caused strong stratification, where even the bottom water had been replaced by “new” seawater, as was evident from the cold marine water present in the bottom water. A strong inflow of freshwater during July 2011 caused marine water to be trapped at two deeper pools located at 1.5 and 3.4 km. In November 2010, the system was relatively well mixed and the DO concentration ranged from 4 to 8 mg l− 1 in the surface water and was below 3 mg l− 1 in the deeper section located about 3.4 km upstream. After the artificial breach, there was a significant drop in DO to 4–5 mg l− 1 in the surface water, and conditions became near anoxic in the deeper bottom water at about 3.4–6 km upstream; this continued through to April 2011. The water column was well oxygenated in June 2011 after the mouth had been breached naturally, with concentrations of 5 mg l− 1 in the bottom water. In July 2011, there were very high DO concentrations as a result of the strong freshwater inflow that entered the estuary, with only low DO in the bottom water at 2 and 3.4 km. Water column temperature followed a seasonal trend, with warmer months having higher temperatures than the colder months. After the flood, the temperature of the water column, mainly of marine origin, was 15 °C. The following month, a thermocline developed where cooler surface water, associated with the strong inflow of water from the river and catchment, trapped the previous month's marine water in the deep pool at 3.4 km. The TOxN concentration throughout the estuary water column was below 5 μM for all months, and only in July 2011, with its associated strong flow, was the concentration around 20 μM in the surface waters, indicative of the catchment acting as the main source of nitrate to the estuary. The NH4+ concentration in November 2010 was generally low throughout the water column, with concentrations reaching 7 μM in the bottom water at 3.4 km upstream. After the artificial release of freshwater from the dam in February 2011, the NH4+ concentration increased significantly in the estuary, ranging from 5 to 10 μM in the water column and up to
110 μM in the bottom water at 3.4 km. A month after mouth closure, in April 2011, the ammonium concentration in the surface water ranged between 5 and 10 μM while concentrations in the deeper reaches were still greater than 45 μM. However, after the natural mouth breaching in June 2011, the NH4+ concentration decreased significantly to less than 5 μM in the deeper sections in the middle of the estuary and ranged between 15 and 20 μM in the upper reaches. The concentration of soluble reactive phosphate was low and ranged between 0.5 and 1 μM throughout the estuary, with peaks around 5 μM in the bottom water at 3.4 km and 1.2 km for most months. After the flood, SRP showed a similar trend to NH4+, with a decrease in the bottom water to below 0.5 μM. In July 2011, however, there was a significantly higher SRP concentration than in all other months, and this was linked to the strong freshwater inflow from the river and catchment. The area cover of submerged macrophytes and macroalgae in the Great Brak Estuary shows the competition between the macroalga and the submerged macrophytes over time. The macroalga reached its peak area cover in March 2011 after the release of freshwater from the dam. There was no significant difference in the biomass of C. glomerata over the sampling period. The biomass of R. cirrhosa in all other months was significantly higher than in September and November 2010. The biomass of Z. capensis showed a similar trend, with biomass in the months February to June 2011 being significantly higher than the biomass in September 2010 and November 2010. The interpretation here is that the submerged vegetation increased both its biomass and area cover in the absence of C. glomerata. | The Great Brak Estuary is a temporarily open/closed estuary located on the south coast of South Africa. The construction of the Wolwedans Dam in the catchment has reduced flow to the estuary by 56%, reducing the intensity of flushing events and causing the mouth to breach less often. The estuary experiences dense growth of the macroalga Cladophora glomerata particularly during the closed mouth condition. This study compared the condition of the estuary following artificial and natural mouth breaching events. The estuary was artificially breached on 1 February 2011 but closed within 16 days. When open, a well-developed halocline was present with hypoxic water (< 2 mg l− 1) at depth suggesting little flushing had occurred. A month later, C. glomerata attained its highest area cover (60 000 m2). In June 2011, the estuary was breached by a 1-in-100 year flood, which resulted in a strong tidal exchange of oxygenated water (> 4 mg l− 1). Consequently, this prevented the re-establishment of C. glomerata and supported the growth of submerged macrophytes. This research highlights the direction that small microtidal estuaries around the world are heading; as the number of dams and the amount of water being abstracted increases, the amount of water needed for ecological processes is decreasing and the mouths of estuaries are likely to close more frequently and for longer. |
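The normality-based choice between parametric and non-parametric tests described in the statistical methods above (Shapiro–Wilk, then either one-way ANOVA with Tukey HSD or Kruskal–Wallis) can be illustrated with a short script. This is a hedged sketch using open-source Python tools rather than the Statistica package named in the text; the group labels and biomass values are placeholders, not data from the study.

```python
# Minimal sketch (not the authors' Statistica workflow): choose between a
# parametric one-way ANOVA with Tukey HSD and a non-parametric Kruskal-Wallis
# test on the basis of a Shapiro-Wilk normality check, as described above.
# Group labels and values are illustrative placeholders only.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "Sep2010": np.array([12.1, 14.3, 11.8, 13.0, 12.7, 13.5]),  # e.g. biomass, g DW m-2
    "Feb2011": np.array([25.4, 27.9, 24.8, 26.3, 28.1, 25.9]),
    "Jun2011": np.array([30.2, 29.5, 31.8, 28.7, 30.9, 29.9]),
}

# Pool mean-centred values across groups as a simple normality screen.
residual_like = np.concatenate([v - v.mean() for v in groups.values()])
_, p_normal = stats.shapiro(residual_like)

if p_normal > 0.05:
    f_stat, p_val = stats.f_oneway(*groups.values())
    print(f"one-way ANOVA: F={f_stat:.2f}, p={p_val:.4f}")
    if p_val < 0.05:
        values = np.concatenate(list(groups.values()))
        labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
        print(pairwise_tukeyhsd(values, labels))
else:
    h_stat, p_val = stats.kruskal(*groups.values())
    print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p_val:.4f}")
```

The Shapiro–Wilk check is applied here to mean-centred values pooled across groups, which is only one reasonable way of operationalising the "parametric or non-parametric" decision described above.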
526 | Evaluation of elite rice genotypes for physiological and yield attributes under aerobic and irrigated conditions in tarai areas of western Himalayan region | Rice is one of the principal food crops of world and accounts for almost 60% of the global energy consumption .Flood-irrigated rice utilizes 45% of the total fresh water, accounting for almost two to three times of that consumed by other cereals .However, by the end of the 21st century, decreasing water resources due to anthropogenic and natural factors will reduce the sustainable production of flood-irrigated rice, a heavy user of water .Thus, rice production needs to be increased besides the availability of water, using sustainable water saving technologies and judicious water management practices, in order to feed the increasing global population .Aerobic rice cultivation is an alternative strategy for the conventional methods to deal with water security in the tropical as well as in the sub-tropical agriculture.In the aerobic system, rice is usually directly dry seeded in the non-flooded as well as in the non-puddled fields mimicking the upland conditions, with adequate fertilizer application combined with supplementary irrigation during insufficient rainfall .This technology utilizes reduced surface runoff, seepage, percolation and evaporation leading to substantial water saving .Lafitte et al. reported that several lowland genotypes survive well in irrigated aerobic soils with occasional flooding.However, under aerobic conditions, even high-yielding lowland rice varieties have shown severe yield loss .Therefore, information using morpho-physiological and yield traits to identify and select superior yield performing aerobic rice genotypes is vital for developing aerobic rice cultivars.However, analysis of correlation between the physiological conditions and the yield of rice showed enhanced grain yield under aerobic conditions .In addition, China Agricultural University has developed high- yielding aerobic rice cultivars labelled as “Han Dao” that are being widely grown by the farmers there .In India, 23.3% of the gross cropped area is occupied by rice, which contributes to 43% of the total food grain production and 46% of the total cereal production of India .Moreover, out of the ten million hectares of cultivated rice area in the Indo-Gangetic Basin of India, almost 2.6 million hectares receive either temporal or erratic rains, and are affected by insufficient or irregular surface and ground water supplies during the Kharif season .To meet the increasing food demand under varying climatic conditions, it is essential to develop new rice cultivars with improved water use efficiency and those that can be grown under the Himalayan ecosystem.Therefore, in our current research, we have estimated the influence of continuous flooding and aerobic conditions that differentially affect the different rice genotypes during their growth and yield in the Indo-Gangetic Basin of India.Further, our research aims to provide quantitative information on the productivity of these high-yielding genotypes under flooded and aerobic situations in order to determine whether sustainable yield can be obtained under aerobic conditions by improving water management practices.Field experiment was carried out at the Norman Borlaug Crops Research Center, Pantnagar, Govind Ballabh Pant University of Agriculture and Technology, Pantnagar, Uttarakhand, India during the Kharif season.The soil was silty clay loam with a pH of 7.38, 202.35 kg/ha total N, 20.13 
kg/ha P, 178.61 kg/ha K, and 32.5 meq 100 g−1 cation exchange capacity. Four genotypes, DRRH-2, PA6444, KRH-2 and Jaya (TN1 x T-141), were used for the aerobic treatment, and flooding was taken as the control. Weekly weather data on wind speed, minimum and maximum relative humidity, minimum and maximum temperature, sunshine and rainfall during the cropping season were obtained from the Department of Agrometeorology of the University. Seedlings were raised in dry nursery beds with alternate-day irrigation. Transplanting was done after 25 days in 2 × 3 m plots with a total area of 544 m2. The row-to-row distance was maintained at 20 cm and the plant-to-plant distance at 10 cm during transplanting. A two-meter distance was retained between the aerobic and the flooded fields to avoid water flow by seepage. Fifteen cm high earthen bunds were mounded to avoid runoff loss in flooded plots and runoff gain in aerobic plots. Regular doses of phosphorus, potassium, and zinc were applied in all the plots. Nitrogen in the form of urea was applied at three developmental stages. Manual weeding was done to keep the plots weed-free and the recommended doses of pesticides were applied for optimum crop protection. Surface flooding was applied for irrigation through channels connected to the subsurface pressurized pipe system lifting ground water. The samples were taken in three replicates, and the dates of panicle exsertion, grain filling and physiological maturity were recorded manually. Plant height was recorded from 30 to 90 DAP at intervals of 15 days, from the base of the topmost fully expanded leaf to the soil level until panicle initiation, and thereafter from the soil base to the base of the panicle. Soil bulk density was measured using a cylindrical metal sampler and the soil moisture content was measured as described by Reynolds. Leaf relative water content was determined by measuring the turgid weight of 0.5 g fresh leaf samples after soaking for 4 h in water, followed by oven drying till a constant weight was achieved. Free proline concentration was determined from 30 DAT to 90 DAT at intervals of 15 days through a rapid determination method as described earlier. Various photosynthesis-related parameters, such as the net photosynthetic rate, transpiration rate, and stomatal conductance, were measured from 30 DAT to 90 DAT at intervals of 15 days, using a LI-6400XT portable photosynthesis instrument. Total chlorophyll content, chlorophyll a, chlorophyll b and the chlorophyll a/b ratio were calculated from 30 DAT to 90 DAT at intervals of 15 days. Tiller number and panicle number per hill were measured to evaluate tiller numbers/m2. Panicles were hand-threshed to separate filled spikelets from the unfilled ones. At maturity, the number of filled grains per m2, grain yield per m2 and the total dry matter per m2 were determined and expressed at 14% moisture content, as described earlier. The total number of spikelets per m2 and the harvest index were recorded for each replication at the time of grain filling. The weight of a thousand grains, from the bulk harvest of each replication, was taken on a digital balance. Leaf area index was measured at the time of flowering. The relative water content, on a percentage basis, was calculated using the equation of Schonfeld et al.
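The equation itself is not reproduced in the text; assuming the standard form of the relative water content formula commonly attributed to Schonfeld et al. (an assumption, since the authors do not write it out), it is

\[
\mathrm{RWC}\ (\%) = \frac{W_{\text{fresh}} - W_{\text{dry}}}{W_{\text{turgid}} - W_{\text{dry}}} \times 100
\]

where W_fresh is the fresh leaf weight, W_turgid the weight after the 4 h soak, and W_dry the oven-dry weight.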
All the recorded data were analyzed statistically using a split-plot design. The effect of treatment was determined by analysis of variance of the complete data set from three replications. Critical differences were calculated at the 1% probability level wherever the treatment differences were significant. Correlation coefficients were determined using SPSS. During the cropping season, the soil moisture content was always lower under aerobic conditions than under control conditions, and the soil bulk density was always higher under aerobic conditions than under control conditions. Significantly higher plant height was observed under flooded than under aerobic conditions. The decline in plant height in all the genotypes under the aerobic condition is probably due to the constraint on cell elongation, which must have led to reduced internodal length. Similarly, all the genotypes showed reduced relative water content. Total free proline content was also observed to increase in all the genotypes, with the highest in KRH-2 under aerobic conditions. Proline is a well known compatible osmolyte produced in water-stressed plants. With the increase in water stress, the proline levels were also found to increase in all the genotypes in our experiments. Similarly, during aerobic situations, all the genotypes displayed significantly reduced photosynthetic rates in comparison to those under flooded conditions. This paralleled the decrease in total chlorophyll as well as in chlorophyll a/b ratios under aerobic conditions. These results may suggest that the chlorophyll loss from the leaves subjected to drought stress is mainly in the mesophyll cells, since these cells are farther from the vascular supply of water than the bundle sheath cells, and hence they develop a greater cellular water deficit which leads to greater loss of chlorophyll. Alternatively, it is so because the mesophyll chloroplasts contain more light harvesting Chl a/b protein, which is labile even under mild water stress. In addition, both the transpiration rate and the stomatal conductance were also found to be reduced under aerobic conditions in all the genotypes in comparison to those grown under flooded conditions. Significantly higher tiller number/m2, panicle number/m2 and spikelet number/m2 were observed in DRRH-2 and PA-6444 under aerobic conditions as compared to those under flooded conditions. In contrast, KRH-2 and Jaya showed a drastic reduction in tiller number/m2, panicle number/m2 and spikelet number/m2 under aerobic conditions in comparison to those under flooded conditions. In spite of achieving maturity at the same time under both the above-mentioned conditions, all the genotypes showed a delay in achieving 50% flowering under aerobic conditions. DRRH-2 attained 50% flowering as well as maturity prior to the other genotypes. Similarly, total dry matter was also reduced significantly under aerobic conditions as compared to that under flooded conditions. However, KRH-2 showed the highest TDM, followed by PA-6444. In contrast, the lowest TDM was recorded in Jaya under aerobic conditions. Data obtained on the yield attributes and the yield further confirmed that flood-irrigation gave notably higher values as compared to the aerobic conditions. DRRH-2 showed a higher number of filled grains/m2 in comparison to all other genotypes. Thousand grain weight and the grain yield of all the genotypes were reduced significantly under the aerobic condition. This led to a severe decline in the harvest index in all the
genotypes under aerobic condition.The rice genotype DRRH-2 had the highest grain yield as well as the harvest index under both flooded and aerobic conditions.When we correlated the mean growth related traits with the yield related traits, we observed that in aerobic environment, the days to 50% flowering and days to maturity were negatively correlated with productive tillers, spikelet fertility and the grain yield, whereas the total yield was positively correlated with productive tillers, spikelet numbers and the leaf area index at flowering.Water is an essential commodity for rice production, thus plentiful rainfall and high water table allows successful irrigated lowland rice cultivation in the Northern Region of India.Historically, lowland rice cultivation practices have continued for centuries in Asia in flooded or flooding prone areas during wet/monsoon season.Most suitable areas for wetland rice cultivation are the large rain-fed low-lying regions including the Indo-Gangetic plains and the inland valley areas that are benefited from the flooding soils during submergence in monsoon .However, due to the varying climatic conditions, rainfall is becoming more erratic with the passage of time, resulting in water stress at various stages of rice development .In contrast, as a primary source of calories for more than 2 billion people, it cannot be substituted just because of water shortage.In fact, higher production of rice using less water is the need of the day to feed the increasing human population .Development of productive and sustainable cultivation strategies under water-stress is the original concept for the so-called aerobic rice .Several studies have pointed to the sub optimal growth of rice in aerobic soil through evaluation of drought tolerant medium-yielding varieties for the explicit purpose of reducing the risk of having low yield.In the temperate climate, aerobic rice cultivars show less yield penalty in comparison to those in tropical regions .Recommendation of specific aerobic rice cultivars, suitable for local conditions, in addition to effective crop management practices that can reduce the yield penalty in comparison to the flooded rice, is the key towards successful cultivation of aerobic rice.Keeping water saving strategy in mind, our study, presented here, was conducted by using aerobic rice technology to determine whether few promising high yielding rice genotypes, recommended for lowland cultivation, can give higher productivity under aerobic conditions.The yield barrier between the aerobic and the flooded rice also depends upon the physical properties of the soil, soil moisture content and other environmental conditions.In addition, the lack of soil tillage may induce dryness in the soil, making superficial soil layer more compact under aerobic conditions.However, this compact layer surrounding the seed can be broken by using a chisel with the cutting disk of the no-till planter that allows application of the fertilizer at effective soil depth .The reduction in plant height may be due to the inhibition of stem growth which might, in turn, be due to inhibition of cell length or cell division because of “pressurized” roots under water limited conditions.With an increase in the severity of stress, the hybrids as well as inbred varieties show reduced plant height and number of tillers resulting in yield penalty .Because of a poorer root system, leaf water potential in rice plants decreased significantly with the depletion of soil moisture.In such plants, the reduced 
leaf water potential drastically decreases leaf expansion, photosynthetic rate, dry matter production and grain filling period .This decline was associated with the down regulation of PS II activity and a concomitant increase in free amino acid content, and in reactive oxygen species; further, there was a shortened grain filling period, indicating that water deficit enhanced the leaf senescence .We note that this reduced photosynthetic rate does not recover even after irrigation.Similarly, a decrease in leaf area is due to the extreme sensitivity of rice towards soil moisture content showing significant changes between saturation and field capacity .Further, leaf rolling is a primary indicator of any rice genotype for its ability to maintain water status under water stress .However, soil moisture content in the aerobic plots during the wet season also remains in saturation, due to frequent rainfall and shallow water table in this Indo-Gangetic region.In addition, free proline content has been reported to enhance in rice kernels during ripening under osmotic stress .It is well known that proline acts as an osmoprotectant and its overproduction provides enhanced tolerance against osmotic stress .Proline protects thylakoid membranes and stabilizes proteins and DNA against free radical induced photodamage by quenching singlet oxygen and scavenging OH• radicals.Water deficit primarily affects the sterility of panicles and spikelets and the final grain weight.Few of these spikelets could be filled, leading to high sterility and low harvest index under reduced water levels.The yield gap between the flooded and aerobic rice mainly occurs because of the variation in total dry matter accumulation, sink capacity, panicles/m2 and the thousand grain weight .However, higher sink size is the most critical factor for reduced productivity under aerobic conditions .In contrast, several reports have shown severe yield penalty under aerobic conditions in high-yielding lowland rice cultivars .This decrease is probably due to water stress after panicle initiation, which causes spikelet sterility and reduction in spikelet number and in translocation of assimilates to the grain .In addition, water stress-induced lack of root pressure would dehydrate the panicles and reduce the number of endosperm cells, resulting in a decline in the sink size per kernel and in the total dry matter content .Field studies, however, has demonstrated that under aerobic conditions, the yield is decreased in all the rice genotypes.Xiaoguang et al. and Grassi et al. 
have shown 20–30% decline in yield under aerobic conditions in comparison to irrigated lowland genotypes.This decline in the yield may be due to reduced biomass and grain yield as a result of water stress occurring particularly during flowering to anthesis stage .Similarly, lower harvest values indicate that water stress at booting and flowering stages severely affects translocation of assimilates towards the grains during grain filling stage.Heat waves and dry weather conditions prevail during the end of May to June, in most parts of India; this increases spikelet sterility.Exposure to 41 °C for 4 h at the flowering stage causes irreversible damage and plants become completely sterile .The low grain yield under aerobic conditions has been shown to be due to the prevalence of high atmospheric temperature during the study that affected almost all the growth stages of rice from panicle emergence to ripening and harvesting .The most salient feature of our study was the identification of the genotypes showing improved yield under aerobic situations i.e., combined with rainfall and slight irrigation from the sowing to the harvest stage.These cultivars can be grown in areas with inadequate water availability for lowland rice production.In comparison with this type of rice, water consumption by cultivars, used in our study, is much reduced than their yields showing higher water use efficiency.Better performance of DRRH-2 and KRH-2 under aerobic situation was also evident from their higher yield, plant height and biomass in comparison to Jaya, the control variety, which has been reported to be the most suitable variety under water stress conditions.The physiological and yield parameters of these varieties can further be used for breeding under aerobic environment.Further studies are required to standardize the nutrient uptake and fertilizer response under rainfed hill ecosystems and to optimize the aerobic rice crop management practices.Earlier reports and the current study confirm that sustainable yield can be achieved by scheduling irrigation depending upon the sensitivity of a variety towards water stress.Water saving could also be achieved by reducing irrigation cycles, termed as “dry saving” .Farmers can utilize this dry saving by increasing the irrigated area to increase total productivity, thus, reducing their production cost.Similar results can also be achieved either by shifting the transplanting date to reduce evaporation or by growing shorter duration genotypes or through aerobic cultivation.However, by breeding drought resistant traits of upland varieties with high yielding traits of lowland varieties, new genotypes with enhanced yield under aerobic situations could also be achieved.Thus the development of aerobic rice cultivars is a novel approach from the socio-economic point of view.These aerobic rice varieties can also be grown in upland irrigated areas with less water availability or in completely abandoned areas to meet the growing demand of food globally with the burgeoning population and shrinking resources.Authors declare that there is no conflict of interest between all the authors and each one has read and approved the manuscript. | All the irrigated rice systems are currently facing a worldwide challenge for producing higher yield with lower water availability. Aerobic rice is considered to be promising for rice production under water constrained environments where it can be grown under non-flooded and unsaturated soil. 
All practices for aerobic rice cultivation must start by identifying promising rice varieties that are expected to produce higher grain yield under such conditions. Therefore, we conducted a field experiment with a split-plot experimental design in the Tarai region of the Western Himalayas, India, under two irrigation regimes, i.e., continuous flooding and aerobic conditions, using four high-yielding rice genotypes: DRRH-2, PA6444, KRH-2 and Jaya. A grain yield of 743 to 910 g/m2 was obtained on a typical freely draining soil, i.e., under aerobic conditions. Further, DRRH-2 showed enhanced panicle number, spikelet number and filled grain number under aerobic conditions, resulting in the highest grain yield of 910 g/m2. We conclude from our studies that the higher productivity of rice depends upon the improved sink capacity (grain number x grain weight) of the genotype, and that this acts as a major factor limiting yield potential under aerobic and flooded conditions. |
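As an illustration of the kind of trait-versus-yield correlation analysis described in the text above, and of the sink-capacity definition (grain number x grain weight) used in the summary, a minimal sketch is given below. The data frame, trait values and column names are hypothetical placeholders, not measurements from the study.

```python
import pandas as pd

# Hypothetical plot-level observations used only to illustrate the analysis;
# none of these values are taken from the study.
traits = pd.DataFrame({
    "days_to_50pct_flowering": [92, 88, 95, 90, 86, 93],
    "days_to_maturity":        [125, 120, 128, 123, 118, 126],
    "productive_tillers_m2":   [310, 345, 290, 325, 360, 300],
    "spikelet_fertility_pct":  [78, 84, 72, 80, 86, 75],
    "grain_yield_g_m2":        [760, 890, 745, 820, 910, 770],
})

# Pearson correlations of growth- and yield-related traits with grain yield,
# analogous to the trait correlations reported for the aerobic treatment.
print(traits.corr(method="pearson")["grain_yield_g_m2"].round(2))

# Sink capacity as defined in the summary: grain number x grain weight.
grain_number_per_m2 = 24000          # hypothetical value
thousand_grain_weight_g = 22.5       # hypothetical value
sink_capacity_g_m2 = grain_number_per_m2 * thousand_grain_weight_g / 1000.0
print(f"sink capacity ~ {sink_capacity_g_m2:.0f} g/m2")
```

With data of this shape, negative coefficients for days to 50% flowering and days to maturity, and positive ones for productive tillers, would mirror the pattern of correlations described in the text for the aerobic environment.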
527 | ‘Tagging’ along memories in aging: Synaptic tagging and capture mechanisms in the aged hippocampus | As the percentage of the elderly population continues to rise, aging and the associated health problems pose a huge burden on the socioeconomics of the society and on the quality of living of the individuals and also the caretakers.Deficit in the cognitive functions including memory is one of the most common observations in aging and have been referred to as “age-associated memory impairments” or “age-associated cognitive decline”.However, there is a great variability in the extent of the memory impairments observed, in the brain regions affected and in the age of onset.There is also a significant overlap and diversity in the deficits observed in some of the neurodegenerative disorders and the so called ‘normal aging’ or ‘successful aging’.Efforts have been underway to understand the changes with aging in the structure and function of the brain regions and also the molecular underpinnings.These efforts have uncovered some of the prominent brain regions affected and some potential neurobiological mechanisms.However, what we know so far still appears to be just the tip of an iceberg.A common notion is that aging is associated with significant loss of neurons in the hippocampus.However, studies in rodents, monkeys and humans suggest that there is no substantial neuronal loss with normal aging in humans, monkeys, rats, or mice.Nevertheless, hippocampus exhibits extensive changes in the connectivity, structural organization and functional properties, with differential sub-region specificity.One of the most prominent observations is the alterations in the synaptic plasticity properties in the different synaptic circuits within the hippocampus, Rosenzweig and Barnes).Long-term potentiation, the activity-dependent increase in the strength of the synapses, is widely regarded as the leading cellular correlate of memory.Persistent forms of both LTP and memory require protein synthesis and transcription for the consolidation and long-term maintenance).The initial induction or the short-term forms of LTP and memory, on the other hand, are not dependent on translation or transcriptional activation.At the CA1 Schaffer collateral synapses, there is an age-related reduction in the magnitude of LTP, possibly due to lower depolarization during induction or reduced activation of NMDARs.Drugs that affect the cAMP-dependent intracellular signaling, such as analogs of cAMP, the dopamine D1/D5 receptor agonists and the phosphodiesterase IV inhibitors modulate the late phase of LTP and attenuate the spatial memory deficits in aged mice in a dose-dependent manner.It has been suggested that the basal synaptic properties, LTP induction and E-LTP are mostly preserved in the aged hippocampal circuits but the protein-synthesis-dependent L-LTP mechanisms are impaired.The spatial memory deficit in the aged rodents is associated with a defect in CA1 L-LTP.Aged rodents seem to exhibit greater deficits in certain types of long-term memory than in short-term memory.The ‘Synaptic tagging’ hypothesis proposed by Frey and Morris and now widely referred to as the Synaptic Tagging and Capture hypothesis, provides a conceptual framework for how short-term forms of plasticity are transformed to persistent forms in a synapse-specific and time-dependent manner.This model proposes that glutamatergic activation during LTP/LTD-induction or memory encoding results in instantaneous local ‘tagging’ of activated synapses.These ‘synaptic 
tags’ later ‘capture’ the diffusely transported plasticity-related products synthesized in soma or local dendritic domains.They demonstrated, with their experiments in rat hippocampal slices, that the tag-setting process in itself is protein synthesis-independent and thus the tag is set by both weak and strong forms of activity and that the tags have a decay-time.They further showed that even though only strong forms of activity lead to the synthesis of PRPs, the weakly potentiated synapses will also be able to capture the PRPs, if the PRPs arrive prior to the decay of the tags.Thus, E-LTP induced in one synaptic input can be transformed into L-LTP by a preceding or subsequent strong tetanization of another independent but overlapping synaptic input within a specific time-window.The level of neuronal activity plays a critical role in regulating the maintenance of plasticity and has a critical role in regulating the duration or the induction of the synaptic tag.The STC mechanisms have been shown to be operating in a compartment-specific/clustered manner within the proximal, distal and basal dendritic compartments of the neurons that function as units for temporal and spatial integration of information.Although the original STC theory was proposed to show the interaction and consolidation of E-LTP and L-LTP, Sajikumar and Frey provided evidence for similar interactions between LTP and LTD that has been termed ‘cross-tagging’.It has been suggested that both tag and PRPs could be process-specific.Further, the capturing of plasticity-related factors occurs within a specific time window, from a common pool of PRPs that involves LTP or LTD specific or PRPs common for both plasticity.The increased or decreased availability of PRPs can lead to competitive maintenance of long-term plasticity and STC.Interestingly, dopamine D1/D5 receptor mediated mechanisms can influence these processes by increasing the availability of PRPs that eventually prevents synaptic competition and promotes STC.Behaviorally, STC-like phenomena might help in adding richness to contextual details since moderately significant events occurring around a highly significant event can all acquire the same mnemonic potential.It reveals a complex pattern of temporal associativity in synaptic plasticity.As such, the fate of memory traces can be dynamically regulated by heterosynaptic events that occur before or after encoding.STC-like phenomenon has been demonstrated in anaesthetized and behaving animals.Identical phenomenon in behavioral settings, called ‘behavioral tagging’, showed the reinforcement of a weak inhibitory avoidance memory into a persistent form by the open field exploration paradigm and also in a spatial memory task.Recently, a similar mechanism is also reported in humans to be involved in the retroactive strengthening of emotional memories.Aging is accompanied by the performance impairments in learning and memory tasks that require associative information processing which has been proposed to be due to the functional alterations in the hippocampal circuits.Since STC model provides a framework for how cellular correlates of memory can exhibit associativity of weak and strong learning events in a time-dependent manner, it is imperative to think that age-related alterations in the STC mechanisms could underlie some of the associative memory deficits observed in aging.Some of our investigations along these lines using the acute hippocampal slices of the aged male Wistar rats have provided evidence that plasticity and STC 
mechanisms are indeed impaired in the hippocampal CA1 Schaffer collateral synapses Fig. 2;.These synapses fail to exhibit STC, as demonstrated by the inability of the strong-stimulus-induced L-LTP to consolidate the weak-stimulus-induced E-LTP delivered to two independent but overlapping inputs onto a common neuronal population within a critical time period.Interestingly, however, the strong-stimulation-induced LTP, in itself, was able to maintain for extended durations albeit at lesser magnitude of potentiation compared to that in slices of young adult rats.This was supportive of the findings in earlier reports that persistent forms of plasticity could be established in the synapses of aged hippocampus using sufficiently stronger stimulation paradigms.Lack of STC could be due to a number of reasons: Impairments in the process of tag-setting Instability of the set tag or its early decay Impaired synapse-to-soma signaling leading to inefficient transcriptional or translational activation Impaired transcriptional or translational mechanisms in the synthesis of PRPs Impaired transport, distribution or capture of PRPs.Below we review some of the factors that could contribute to the STC deficits observed in aging.In the CA1 region of aged hippocampus, one of the consistent electrophysiological findings is a decrease in the NMDAR component of the synaptic transmission.NMDARs have been demonstrated to act as synaptic coincidence detectors and play crucial roles in many forms of synaptic plasticity and memory.NMDAR activation and signaling is also shown to be critical for establishment of STC, at least in CA1 region.Particularly, activation of NMDAR is involved in the induction of synaptic plasticity and setting of synaptic tag.A synergistic interaction between dopaminergic and NMDAR signaling is suggested to be involved in the D1/D5 receptor-mediated upregulation of plasticity products.Expression of GluN1 subunit has been reported to be decreased in the DG of aged primates and rodents.GluN2B expression also decreases with aging in the hippocampus and cortex of rodents.Studies with GluN2B transgenic mice have suggested that the changes in its expression contribute to age-related memory deficits.The subunit composition of the NMDARs changes significantly with aging leading to an increased GluN2A:GluN2B ratio.The GluN2A-containing NMDARs have a reduced channel opening time and are highly sensitive to modulation by Zn2+ ions.A recent study provided evidence that the shift in NMDAR subunit may have a profound effect on neuronal excitability.The subunit composition of NMDA receptors is developmentally regulated and the expression of GluN2B declines at the age of sexual maturity.Although it is clear that the ratio GluN2A:GluN2B continues to increase in aging, the shift occurs much earlier in life and may not have a detrimental role in memory.One interesting aspect is that GluN2B has a fundamental role in memory reactivation which leads to memory update and eventually may result in memory loss.The developmental shift from GluN2B expression to GluN2A may limit the acquisition of new memories but it may also limit the dynamics of previously acquired memories.The decrease in NMDAR function has also been suggested to the intracellular redox imbalance, increased Ca2+ release from the intracellular stores and increased Zn2+ levels.Overall, it appears that the alterations in NMDAR-mediated mechanisms could be one of the major contributors to age-related memory deficits.Impairments in LTP and memory in aged rats 
is accompanied by a decrease in protein synthesis.With age, general protein synthesis decreases in certain regions of the brain although the overall level of protein synthesis in CA1 region remains fairly constant.A decrease in protein synthesis with aging has been demonstrated in the dentate gyrus.However, the expression levels of certain genes and proteins involved in plasticity, either increase or decrease.Studies have indicated significant attenuation in the LTP-induced transcription of syntaxin-1B, subunits of NMDA receptor, and alpha-CaMKII in aged animals.Evidences from biochemical studies have demonstrated age-related changes in the signaling pathway involving dopamine-cAMP-PKA-CREB.MAPK-mediated signaling shows age-dependent alterations which has been shown for nerve growth factor signaling in the septo-hippocampal pathway.Some authors have also suggested that the time for transition from short-term to long-term memory is extended in aging, probably due to slowdown in the protein synthesis or transcriptional mechanisms.Another protein, that is a mechanistic target of rapamycin complex 2, controls the actin polymerization required for consolidation of long-term memory and the activity of mTORC2 declines with age as demonstrated in fruit flies and rodents).Neuropsin, an extracellular serine protease, is implicated in the process of LTP tag-setting and suggested to be taking part in synaptic associativity.Expression of neuropsin decreases in the hippocampus of aged mice which correlates with alterations in dendritic morphology.IEGs represent the first line of genomic response to patterned synaptic activity without any intervening synthesis of other transcription factors.They are a group of genes with diverse functions, including synaptic plasticity.Expression of several of the IEGs is known to be either reduced in the basal state or following activity or learning.Zif268 is an IEG involved in synaptic plasticity and memory.Resting levels of both mRNA and protein of the transcription factor Zif268 are decreased in CA1 and CA2 regions of aged animals that display hippocampus-dependent memory deficits.Aged rats also showed lesser c-Fos-positive neurons compared to the younger ones following a radial arm maze learning task.Expression of another important plasticity-related gene encoding activity-regulated cytoskeletal-associated protein is also significantly reduced in aged animals showing impaired spatial memory consolidation, as does the resting expression in CA1 region.Interestingly, the reduced Arc mRNA levels in CA1 were due to the reduction in transcription and not due to the reduction in the number of cells expressing Arc.Arc mRNA is rapidly transported to dendrites and localized to the active dendritic regions, indicating its important role in LTP, LTD and LTM consolidation.Brain-derived neurotrophic factor is another IEG strongly implicated in LTP and memory), and studies indicate age-associated down regulation of Bdnf and its receptor TrkB.Cyclic-AMP response element binding protein is a transcription factor whose activity is demonstrated to be critical for hippocampal memory consolidation, Silva et al.).CREB is activated by phosphorylation on Ser-133 residue in response to a variety of stimuli leading to CREB-dependent plasticity gene expression, Kida and Serita).Evidence suggests age-related changes in the activation of CREB and its downstream mechanisms.Expression of hippocampal phospho-CREB was higher in aged rats compared to the adults following fear conditioning.Resting 
expression of CREB-binding protein, a co-activator of CREB, is also reduced in the hippocampus of aged rats.One study also found a reduction in the number of pCREB-immunoreactive cells following spatial learning in dorsal CA1 of aged mice showing memory impairments.Several alterations in the Ca2+ dynamics and homeostatic mechanisms have been reported in the aged brain.One prominent observation is the increase in the Ca2+ influx through L-type voltage-gated calcium channels.This is proposed to contribute to the increased slow afterhyperpolarization in aged hippocampal neurons.Increased L-VDCC activity also leads to reduced neuronal excitability and trigger Ca2+-induced Ca2+ release via ryanodine receptors.Increased RyR activity in aging drives the increase in sAHP.Age-related increase in the sAHP correlates with Morris Water Maze deficits and nimodipine, a L-VDCC blocker, facilitates learning in aged rodents at a concentration that reduces sAHP.Release of Ca2+ from intracellular stores or influx through VDCCs activates Ca2+-dependent SK potassium channels leading to hyperpolarization of dendrites.Age-related loss of several calcium-binding proteins is also reported.Additionally, expression and activity of the Ca2+-dependent protein phosphatase calcineurin increases in the hippocampus during aging and is also associated with increased activation of CaN-regulated protein phosphatase 1.This could be one of the reasons for the observed deficits in LTP in aged hippocampus, since increase in the phosphatase activity has been shown to shift the threshold of LTP towards LTD.In support of this, raising the bath Ca2+ level or overexpression of active CaN in young animals induces aging-like deficits in LTP.Further, aging is also associated with a trend towards decline in protein kinase activity.Age-associated impairments in the mitogen-activated protein kinase pathways are seen in aged brain and MAPKs are one of the central signal integrators subserving synaptic plasticity and memory Reviewed by Davis and Laroche, Sweatt).CaMKIV, a primarily nuclear calcium/calmodulin-dependent protein kinase, is rapidly activated by nuclear calcium entry following LTP-inducing stimulation and is implicated in the transcription of plasticity genes.Reduced expression of CaMKIV is observed in the hippocampus of aged mice that correlates with memory deficits.Expression of CaMKII, a protein with diverse roles in LTP and LTM, Lisman et al.), decreases in hippocampus and cortex with normal aging.Protein kinase C family of enzymes consists of several isozymes with different roles in synaptic plasticity and memory.In the aging rodent brain, even though no changes in the levels of PKC isoforms were observed, the dynamics of their activation and translocation was found to be impaired along with the alterations in the content of the adaptor protein RACK1.Such changes in the membrane localization and/or shifts in the dendritic-to-somal ratios of PKC in the hippocampus were associated with age-related deficits in spatial memory.One of the important kinases in the context of LTP and LTM is PKMζ, an autonomously active atypical protein kinase C isoform.It is an important plasticity-related protein demonstrated to be a crucial player in the maintenance of LTP and memory and to take part in synaptic tagging and capture.A causal relationship between transcriptional and epigenetic regulation of PKMζ expression and the age-associated cognitive impairment has been suggested wherein, aged rats exhibit increased baseline level of methylated 
PKMζ DNA and decreased unmethylated PKMζ DNA in the prelimbic cortex compared with young and adult rats.In the dentate gyrus of aged primates, PKMζ-dependent GluA2 maintenance was shown to be impaired that correlated with memory deficits.Interestingly, polymorphisms in another important memory-related protein KIBRA, which stabilizes PKMζ, was reported to modulate age-related decline in episodic and spatial memory.The nervous system is highly sensitive to oxidative stress.Compromised antioxidant defense mechanisms with aging leads to increased levels of reactive oxygen species which have been proposed to contribute to the age-related deficits in LTP.A progressive imbalance between intracellular ROS concentrations and antioxidant defense accompanies brain aging).Age-associated increase in the protein carbonyl levels and changes in proteasome activities has been demonstrated for various brain regions including the hippocampus.Decrease in the levels of antioxidant molecules, changes in key antioxidant enzymes, decrease in plasma cysteine concentration, dysregulated metal-ion homeostasis and mitochondrial damage are some key factors contributing to the oxidative stress in aging), which in turn affect synaptic plasticity in the hippocampus).Aged animals show deficits in LTP, which are accompanied by increases in pro- inflammatory cytokines, Hippocampal interlukin-1beta levels rise in aging as a result of increased neuroinflammation and stress.Increased IL-1β robustly downregulates the levels of BDNF mRNA and protein.The maturation of BDNF is particularly affected.Further, CD200, a protein that is preferentially expressed on neurons and which acts to inhibit microglia activation, is reported to be downregulated both at gene and protein levels in the aging hippocampus.Elevated glucocorticoid levels are another observation in aging.Hippocampal corticosterone levels are locally increased by the action of the enzyme 11beta-hydroxysteroid dehydrogenase type 1.Expression of 11beta-HSD1 gene and its protein levels are significantly elevated in the normal aged hippocampus and aged 11b-HSD1−/− knockout mice do not show deficits in hippocampal learning and memory.Stress and high levels of glucocorticoids affect LTP, memory consolidation and retrieval).Growing evidences implicate proteasome-mediated protein degradation in both early and late phases of LTP and memory), suggesting that a balance in the protein synthesis and degradation is involved in the consolidation and persistence of long-term memories.Ubiquitin-proteasome system was originally proposed to function in removing inhibitory constraints on the synapse strengthening following an activity, for instance, by degrading the regulatory subunits of PKA are degraded by UPS upon LTP induction or CREB repressors.Induction of long-term facilitation in Aplysia was shown to result in ubiquitination and degradation of a CREB repressor CREB1b.Similarly, ATF4, the mammalian orthologue of CREB1b was shown to be degraded following induction of long-term synaptic plasticity in the rodent hippocampus.The UPS also affects the turnover of many postsynaptic proteins and thus contributes to activity-dependent remodeling of the synapse structure and post-synaptic density composition.It modulates the levels of translational activators such as eIF4E and eEF1A, and translational repressors such as paip2 and 4E-BP.UPS regulates the abundance of glutamate receptors by the APC and SCFFbx2, PSD remodeling by targeted degradation of PSD scaffolds AKAP79/150, GKAP, Shank and 
PSD-95.UPS degrades SPAR, a postsynaptic actin regulatory protein, to bring about changes in spine structure.Protein degradation by UPS has been suggested to take part in synaptic tagging and heterosynaptic stabilization of L-LTP.PKMζ, an important plasticity-related protein, is targeted for proteasomal degradation if not stabilized.Studies indicate that alterations in proteasome activity may occur during, and possibly contribute to, the aging process.Generally, an age-dependent decrease in the activity of UPS is suggested.An age-dependent attenuation of 26S proteasome assembly, activity and abundance was reported in Drosophila.However, the role of UPS activity in aging and neurodegenerative disorders remains contradicting; as Ciechanover and Brundin have rightly pointed ‘sometimes the chicken and sometimes the egg’.A number of studies have shown that the activation of neuromodulatory inputs, especially dopaminergic or noradrenergic inputs, is crucially involved in the induction of the late phase of potentiation and in the consolidation of persistent forms of memory, Jay, Lisman et al.).Similar role of dopaminergic or noradrenergic systems is suggested from behavioral tagging experiments.A recent report has also shown that blockade of dopamine D1/D5 receptors during novel object recognition blocks the behavioral tagging mediated by it.Dopamine-dependent protein synthesis is proposed to be necessary for stabilizing and maintaining the short-lasting strengthening of synaptic connections resulting from encoding.An interesting finding in our studies with the hippocampal slices of aged rats is that the persistent form of slow-onset potentiation induced by bath application of dopamine D1/D5 receptor agonists in the adult rats is blocked in aged rats Fig. 
2B;.Intriguingly, Chowdhury and colleagues found that the lack of dopaminergic modulation with aging leads to associative memory impairments in humans.Dopaminergic mesolimbic system is indicated to be vulnerable to age-related impairments and aged rats display reduced concentrations of dopamine and noradrenalin in the hippocampus.Further, age-related deficits in dopaminergic modulation of memory have been demonstrated in rodents using Morris water maze task, Barnes maze task and the late-phase of the LTP induced by high-frequency stimulation.Dopamine functioning through D1/D5 receptors controls the maintenance of long-term memory storage by a late post-acquisition mechanism involving BDNF.It could be speculated that the deficit in dopaminergic D1/D5 receptor-mediated signaling in aging could affect transcription and translation of BDNF causing the memory deficits.Decrease in the effective cholinergic drive could be another contributor to these deficits.Immunohistochemical analysis in aged macaque monkeys shows a decline in the BDNF-immunoreactivity in the hippocampal neurons and the spatial memory performance in the senescent rats correlates with the hippocampal BDNF mRNA levels.In addition, BDNF can regulate intrinsic excitability of cortical neoruns and differentially modulate excitability of hippocampal output neurons.Given that excitability of neurons decreases with aging, it can be assumed that the impairment in STC observed in aged animals may also depend on the altered BDNF level.Indeed, BDNF and its receptor TrkB play an important role in establishing STC mechanisms.Dysregulation of epigenetic mechanisms is increasingly being implicated in the aging-related disruptions in synaptic plasticity and memory, Penner et al.These mechanisms include methylation of DNA and post-translational modifications of histones.DNA methylation, brought about by DNA methyltransferases and histone modification, such as reversible acetylation by histone acetyltranseferases and histone deacetylases serving to activate and repress the transcription respectively, dynamically regulate the expression of plasticity and memory-related genes.There are two ways in which epigenetic mechanisms could work: by activating the transcription of memory promoting genes by inhibiting the transcription of memory suppressor genes.Many of the histone modifiers also act as co-activators or co-repressors of transcription: for example, CREB-binding protein is a HAT and is also a co-activator of CRE-dependent transcription by CREB.Penner et al. 
have proposed that the dynamics of these epigenetic alterations is dysregulated in aging leading to memory impairments.The evidence implicating epigenetic modifiers in age-related memory impairments is growing.Aging is associated with a decrease in learning-induced activation of Dnmt3a2, an activity-dependent IEG, in the hippocampus.Dnmt3a2 is associated with transcriptionally active euchromatin.DNA methylation dynamics of the Arc gene is altered in the hippocampus of aged rats displaying memory impairments.A causal relationship between transcriptional and epigenetic regulation of PKM-zeta expression and the age-associated cognitive impairment has been suggested wherein aged rats exhibit increased baseline level of methylated PKMζ DNA and decreased unmethylated PKMζ DNA in the prelimbic cortex compared with young and adult rats.HDACs, which generally act as transcriptional repressors by deacetylating histones, have been proposed as ‘molecular brake-pads’ of memory formation.Supportive to this is a number of studies showing the enhancement of LTP and augmentation of memory by HDAC inhibitors.HDAC activity is reported to increase in aging.However, the specific function of each isoform of HDAC differs and so far HDAC2 and HDAC3 have been reported to have memory-suppressor function.Currently, there are lack of studies investigating the role of epigenetic modifiers in STC mechanisms.In our recent report, we have demonstrated a link between one of the HDAC isoforms and age-related deficits in STC using acute hippocampal slices of rats Fig. 3;.Our results indicate that aging is associated with an increased activity of HDAC3 in hippocampal CA1 region that results in deacetylation of many histone and non-histone targets, one of them being Nuclear Factor kappa-beta.NF-kB is a transcription factor expressed both in neurons and glia and plays a crucial role in the neuronal survival and plasticity.NF-kB activation upon LTP induction leads to its nuclear translocation where it results in the expression of plasticity-related genes such as Zif268, Bdnf, c-Fos, Camkii and transthyretin, Freudenthal et al., Meffert et al., Salles et al., Snow et al.).Active HDAC3 can deacetylate p65 component of NF-kB and promote its export from nucleus, thus preventing NF-kB-mediated transcription of plasticity genes that could take part in STC processes.This appears to be a potential mechanism in aging as we showed increased levels of active NF-kB when inhibiting HDAC3 using a specific inhibitor RGFP966 which also resulted in rescue of deficits in synaptic plasticity and STC.Though not tested in our study, HDAC3 inhibition-mediated rescue could also involve relieving of repression on CBP- and myocyte enhancer factor 2-mediated transcription.Zinc is an important trace metal required for the proper functioning of the brain.Hippocampus is one of the brain regions with the highest concentration of chelatable zinc, found within the synaptic vesicles.In a significant proportion of hippocampal glutamatergic synapses, zinc is co-released along with glutamate, upon which it acts as a modulator of the function of many receptors and also as a second messenger in the intracellular compartment.Homeostasis of zinc levels is critical for brain function and the alterations in the homeostasis is implicated in neurological diseases and in normal aging.The deficiency of zinc in aging and the consequent effects on brain function have been well acknowledged.However, suggestions have been made that excessive zinc-mediated signaling could also 
be a contributing factor for the age-related cognitive deficits.Biological stress has been suggested to result in excess intracellular zinc signaling in hippocampus.We have recently demonstrated increased zinc levels in the hippocampal slices of aged Wistar rats using a cell-permeable fluorescent zinc probe FluoZin-3™-AM Fig. 4;.This increase in zinc levels correlated with the deficits in synaptic plasticity and STC in CA1 region since regulating the zinc levels with a cell-permeable zinc chelator, TPEN, rescued the deficits.It has been observed before that the denervation of cholinergic fibers in rat hippocampus leads to increased chelatable zinc levels in mossy fiber synapses.It will be interesting to see whether the decrease in dopaminergic and adrenergic fibers in aging contributes to increased zinc levels.Levels of certain zinc transporters are known to be altered in advanced aging, which could also be a factor behind increased zinc levels.Systemic zinc deficiency is a consistent finding in aging).However, zinc deficiency assessed using blood samples may not reliably indicate brain zinc status.Additionally, systemic zinc deficiency has been reported to increase brain zinc retention in rodents by suppressing zinc transporter ZnT1 levels.On the other hand, rats raised on increased dietary zinc show cognitive and memory deficits and high levels of zinc in the brain.Further, a zinc-enriched diet leads to spatial memory deficits in wild-type mice and worsens the deficits in transgenic AD models.Weak zinc chelators and pro-chelators have been suggested as a potential therapeutic strategy in studies of AD animal models and human AD patients.Further investigations are needed to establish the link between increased hippocampal zinc levels and the age-associated memory deficits and how dietary zinc supplementation affects the hippocampal zinc levels and plasticity.Synaptic tagging and capture processes provide a mechanism for the association of information over extended time-frames and between cerebral hemispheres.Similar mechanisms demonstrated in freely moving, behaving animals and also in humans make STC an attractive cellular process underlying association of memories.We have presented here an overview of alterations in the STC processes in aging and the possible contributing mechanisms as an attempt to provide a connecting link between the STC alterations and age-associated memory deficits.Although the direct evidence in this context is slim, we have made an effort to provide an outline in the hope of conveying the significance and the need for further investigations.We also stress on the alterations in multiple cellular mechanisms in aging.Compelling evidence demonstrates that activity-dependent, persistent synaptic modifications play crucial roles in learning and long-term memory formation and the alterations in these mechanisms could underlie memory deficits observed in aging.This calls for continued efforts towards a detailed understanding of the molecular basis of various forms of synaptic plasticity and their alterations in normal aging.Such knowledge will further our understanding and impact the development of preventive as well as therapeutic strategies to treat age-related cognitive decline. | Aging is accompanied by a general decline in the physiological functions of the body with the deteriorating organ systems. Brain is no exception to this and deficits in cognitive functions are quite common in advanced aging. 
Though a variety of age-related alterations are observed in structure and function throughout the brain, certain regions show selective vulnerability. The medial temporal lobe, especially the hippocampus, is one such preferentially vulnerable region and is a crucial structure involved in learning and long-term memory functions. Forms of hippocampal synaptic plasticity, such as long-term potentiation (LTP) and depression (LTD), are candidate cellular correlates of learning and memory, and alterations in these properties have been well documented in aging. A related phenomenon called synaptic tagging and capture (STC) has been proposed as a mechanism for cellular memory consolidation and to account for the temporal association of memories. Mounting evidence from behavioral settings suggests that STC could be a physiological phenomenon. In this article, we review the recent data concerning STC and provide a framework for how alterations in STC-related mechanisms could contribute to age-associated memory impairments. The extent of impairment in learning and memory functions demands an understanding of age-associated memory deficits at a fundamental level, given their impact on everyday tasks and thereby on quality of life. Such an understanding is also crucial for designing interventions and preventive measures for successful brain aging. |
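The temporal logic of the STC model summarized above (a protein-synthesis-independent tag with a finite lifetime at the weakly stimulated input, and capture of plasticity-related products only if they arrive before the tag decays) can be sketched as a toy calculation. This is purely a conceptual aid under assumed parameters; the half-life, synthesis delay and capture threshold below are arbitrary placeholders rather than measured values.

```python
import math

TAG_HALF_LIFE_MIN = 60.0   # arbitrary placeholder for the tag lifetime
CAPTURE_THRESHOLD = 0.2    # arbitrary minimum tag strength needed to capture PRPs

def tag_strength(t_since_weak_stim_min: float) -> float:
    """Exponentially decaying synaptic tag set by weak (E-LTP-inducing) stimulation."""
    return math.exp(-math.log(2) * t_since_weak_stim_min / TAG_HALF_LIFE_MIN)

def is_potentiation_persistent(weak_stim_t: float, strong_stim_t: float,
                               prp_synthesis_delay_min: float = 30.0) -> bool:
    """E-LTP at the weakly stimulated input becomes persistent only if PRPs,
    triggered by the strong input, arrive while the tag is still set."""
    prp_arrival_t = strong_stim_t + prp_synthesis_delay_min
    remaining_tag = tag_strength(abs(prp_arrival_t - weak_stim_t))
    return remaining_tag >= CAPTURE_THRESHOLD

# Strong tetanization 30 min after the weak input: the tag is still set -> capture.
print(is_potentiation_persistent(weak_stim_t=0.0, strong_stim_t=30.0))   # True
# Strong tetanization 4 h later: the tag has decayed -> no capture, E-LTP decays.
print(is_potentiation_persistent(weak_stim_t=0.0, strong_stim_t=240.0))  # False
```

In this caricature, pairing the weak input with a strong tetanus of an overlapping input within the tag's lifetime converts E-LTP into a persistent form, whereas pairing outside that window does not; this is the associativity that the aged CA1 preparations discussed above fail to show.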
528 | Station of the fetal head at complete cervical dilation impacts duration of second stage of labor | The second stage of labor encompasses the events between complete cervical dilation and delivery of the fetus.Management of this second stage is commonly based on its duration.Studies have reported inconsistent results on the impact of epidural analgesia on duration of first stage of labor.However, most studies agree that EA can increase duration of second stage.In 2003 the American College of Obstetricians and Gynecologists 6 defined prolonged second stage of labor as > 2 h without and > 3 h with EA in nulliparous women, and > 1 hour without and > 2 h with EA in parous women.The optimal duration and management of the second stage of labor is still being debated. ,Zhang et al challenged existing knowledge by stating a longer duration of first and second stage of labor than has been previously accepted.They found the 95th % of duration of second stage of labor to be 3.6 and 2.8 h in nulliparous women with and without EA, respectively, and suggested that the 95th % is more useful in the assessment of normal progression of second stage of labor. ,These and other reports led the ACOG/Society for Maternal- Fetal Medicine to publish new labor management guidelines , which accepted an additional hour of duration of the second stage of labor in both nulliparous and parous women before diagnosing arrest.The new accepted duration was even longer when EA is administered.However, station of the fetal head at complete cervical dilation and the rate of fetal descent are not included in the new guidelines in regard to normal duration of second stage of labor or the definitions of prolonged second stage of labor.The objective of this study was to examine the association between station of the fetal head at complete cervical dilation and duration of second stage of labor, as well as prolonged second stage of labor, without or with the use of EA.We conducted a population based retrospective cohort study of women who gave birth from January 1, 2011 to December 31, 2013 in the north and middle parts of Troms, Norway.Of the 4 545 women who gave birth during this period, we included 3311 women with a singleton pregnancy, gestational age ≥ 370 weeks, and cephalic presentation who reached the second stage of labor, and had valid information on station of the fetal head at complete cervical dilation and duration of first and second stages of labor.Data were retrospectively transferred from the electronic medical birth record into a case-report-form, which was validated manually against all source information by one of the authors and a medical student.Source information comprised the electronic medical birth record, medical notes during pregnancy and delivery, partographs, the “antenatal fact sheet”, the anesthesia report form and the personal health form provided by the women.The primary outcome was duration of second stage of labor, defined as min from complete cervical dilation to expulsion of the fetus.The main exposure was station of the fetal head at complete cervical dilation, which categorized in three groups; at the pelvic floor, beneath the ischial spines, but above the pelvic floor, and at or above the ischial spines.Our institution practiced a -3 to +3 scale for assessment of station of the fetal head.Cervical dilation and station of the fetal head was assessed by digital vaginal examination, which was performed at admission by the midwife or the obstetrician in charge.Regular contractions and 
cervical dilation > 3-4 cm defined the start of labor.For women with a cervical dilation of > 4 cm at admission, the midwife estimated the start of active labor based on information from the woman and vaginal examination.The time variable “onset of labor” was validated against available source information during data entry.All inductions of labor were carried out at the maternity department at the University Hospital of Northern Norway.Specialist consultants assessed the indications for labor induction.Based upon cervical ripening, the method of induction was either a cervical ripening agent administered vaginally or artificial rupture of membranes followed by an oxytocin regimen administered intravenously.It is established knowledge that duration of second stage of labor varies by parity ; therefore all analyses on this duration were stratified by parity.In order to compare our results with internationally accepted definitions the outcome variable “prolonged second stage of labor” was transformed into a binary categorical variable: duration ≥ 3 h with EA and ≥ 2 h without EA for nulliparous women and duration ≥ 2 h with EA and ≥ 1 hour without EA for parous women.Further, we estimated the 50th, 90th and 95th % of duration of second stage of labor.The time variable “first stage of labor” was dichotomized based on the 90th % and stratified by parity.Pre-pregnancy body mass index was defined as weight divided by the square of the body height and categorized according to the World Health Organization’s classification.Chi-square test for independence was used for categorical variables to explore the relationship between maternal and labor characteristics and station of the fetal head and prolonged second stage of labor.The Mann Whitney U test was used to examine duration of second stage of labor by parity, and the Kruskal-Wallis test was used to compare median durations of first and second stage between groups.The distribution of duration of the second stage of labor by category of station of fetal head was displayed by survival curves, censoring women with cesarean delivery.Furthermore, binary logistic regression was performed to predict the odds of having prolonged second stage of labor based on station of the fetal head, after adjusting for possible confounding factors.Parity and use of EA were included in the definition of prolonged second stage of labor, and thus we did not adjust for them.Oxytocin was considered a mediating factor and therefore was not included in the analyses.Statistical analyses were performed using IBM SPSS statistics version 24.0.P-value < 5 % was considered statistically significant.The Regional Committee for Medical and Health Research Ethics and the Patient Ombudsman, University Hospital of North Norway, Tromsø, approved the study protocol.Of the 3311 women included in the analysis 42 % were nulliparous and 58 % were parous.The maternal characteristics age and parity; and the labor characteristics gestational age, onset of labor, use of EA, prolonged first stage of labor and fetal birth weight differed significantly by categories of station of the fetal head at complete cervical dilation, whereas pre-pregnancy body mass index and onset of labor did not.EA was administered to 32.3 % of nulliparous and 11.1 % of parous women during the first stage of labor.Median duration of the first stage of labor was 249 min in nulliparous and 150 min in parous women not receiving EA, and this duration nearly doubled in both nulliparous and parous women receiving EA.The station 
of the fetal head at complete cervical dilation was diagnosed at/above the ischial spines in 37.1 % of nulliparous and 38.8 % of parous women who received EA versus 24.6 % and 17.1 %, respectively, among women who did not.Median duration of the second stage of labor in nulliparous and parous women was 71 min and 14 min, and the 90th % were 192 min and 68 min, respectively.Receiving EA in the first stage of labor was associated with a longer second stage of labor, with a median duration that was was 32 min and 17 min longer in nulliparous and parous women, respectively.Pairwise analysis on categories of station of fetal head found significantly longer durations of second stage of labor among both nulliparous and parous women who received EA when compared to those who did not.In addition, we observed a consistent pattern of longer duration of second stage of labor by increasing distance from the pelvic floor at full cervical dilation for both parity classes and subsets of EA use.This pattern is graphically illustrated by survival curves for duration of second stage of labor.In total, 470 women had a prolonged second stage of labor; 5.3 % when the fetal head was at the pelvic floor, 36.2 % when it was beneath the ischial spines, but above the pelvic floor, and 58.5 % when the fetal head was at/above the ischial spines at complete cervical dilation.Women with prolonged second stage of labor were older, more often nulliparous and had a higher gestational age.In addition, they more often had prolonged first stage of labor and fetal birthweight > 4000 g. Station of the fetal head at complete cervical dilation was significantly associated with prolonged second stage of labor.The adjusted odds ratio for prolonged second stage of labor was 13.1 times higher when the fetal head was beneath the ischial spines, but above the pelvic floor, and 32.9 times higher when the fetal head was at/above the ischial spines compared to when the head was at the pelvic floor.Long duration of first stage of labor, fetal birthweight > 4000 g, and maternal age independently predicted prolonged second stage of labor, whereas gestational age and onset of labor were not associated with this outcome.The strength of the association between categories of station of fetal head and prolonged second stage of labor varied less than 4% across all investigated confounders.In addition, the impact of station of the fetal head on prolonged second stage of labor was evenly distributed across categories of possible predictors.We found a strong association between station of the fetal head at full cervical dilation and duration of second stage of labor in both nulliparous and parous women.We observed a consistent pattern of increasing duration of second stage of labor with increasing distance from the pelvic floor for both parity classes and parity-stratified subsets of EA use.Clinically this association is reasonable, however few studies have systematically assessed and documented this relationship.This information can be helpful for health care providers when presenting expectations for labor progress during the second stage of labor, and for encouraging laboring women to endure a time-demanding delivery.New ACOG/SMFM recommendations 14 from 2014 state that pushing can continue for 3 h without progress in fetal descent or rotation in nulliparous women and 2 h in multiparous women prior to diagnosing labor arrest, and this limit is extended for an additional hour when EA is provided as long as progress is documented.However, our study 
found that station of the fetal head had a major impact on the duration of second stage of labor.Clinical assessment of station of the fetal head when reaching the second stage of labor, and documentation of EA use, can help clinicians understand the large variations in the duration curves of the second stage of labor, especially when supplemental ultrasound examination are not available to determine station of the fetal head. ,Furthermore, continuous fetal monitoring must confirm neonatal safety .Where we found 91 %, Graseck et al. reported that 95 % of all women had a station of the fetal head at or below the ischial spines at full cervical dilation.More than 50 years ago, Friedman reported that “the higher the station at the onset of the deceleration phase, the more protracted the labor in the deceleration phase and second stage is likely to be”.Piper et al reported that station and position of the fetal head at complete cervical dilation were two of the many factors that influence duration of second stage of labor.Kimmich et al reported that EA may decelerate fetal descent in the active phase of labor.In our study, a higher proportion of women who received EA had a recorded station of the fetal head at/above the ischial spines at the start of the second stage of labor, which could be related to a higher station when EA was administered.Women with slow labor progress may be more likely to ask for EA, and thus, by indication, be more likely to have prolonged first stage labor.Indeed, women with prolonged first stage of labor, the proportion with prolonged second stage nearly doubled, which is in agreement with Nelson et al who reported that the length of the second stage of labor in nulliparous women increased significantly with increasing length of the first stage of labor.Several retrospective studies , have reported an association between EA use and longer duration of second stage of labor.Our estimates of the 95th % duration of second stage of labor in nulliparous women who did not receive EA and those who received EA are very similar to what Cheng et al reported in a US study comprising 42 000 women.Later Zhang et al reported shorter 95th % for duration of second stage of labor for both nulliparous women without EA and with EA.In the latter study only 1/3 of the study population was eligible for duration analysis and only deliveries with normal neonatal outcomes were included.Consistent results in line with our results have been reported for parous women and use of EA.Duration of first stage of labor, maternal age, fetal birthweight and gestational age were independent predictors of prolonged second stage of labor without any confounding effects on the association between station of the fetal head and duration of second stage of labor.Despite adjustment analysis, residual confounding might be present, for example by rotation of the occiput posterior position of the fetal head when passing through the pelvis.We had data for position at time of delivery, and occiput posterior position at delivery was associated with prolonged second stage, but this variable had minimal confounding effect on the station estimates.We did not include position in the model as we lacked information on position of the fetal head through the passage of the pelvis.One study showed that two-thirds of occiput positions when reaching the ischial spines will rotate to an anterior position at the time of delivery. 
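The adjusted odds ratios discussed above come from a binary logistic regression of prolonged second stage of labor on station of the fetal head with covariate adjustment, as described in the methods. A minimal sketch of such a model is shown below, assuming a hypothetical record-level data frame with the column names used here (duration_2nd_min, station, nulliparous, epidural, age, gest_age_wk, birthweight_g, prolonged_1st); it uses the statsmodels formula interface and is illustrative rather than a reproduction of the study's analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed, hypothetical table: one row per delivery with columns
#   duration_2nd_min, station ('pelvic_floor', 'below_spines', 'at_above_spines'),
#   nulliparous (0/1), epidural (0/1), age, gest_age_wk, birthweight_g, prolonged_1st (0/1).

def prolonged_second_stage(row) -> int:
    """Binary outcome using the parity- and EA-specific duration thresholds (hours)."""
    if row["nulliparous"]:
        limit_h = 3 if row["epidural"] else 2
    else:
        limit_h = 2 if row["epidural"] else 1
    return int(row["duration_2nd_min"] >= limit_h * 60)

def fit_adjusted_model(df: pd.DataFrame):
    df = df.copy()
    df["prolonged_2nd"] = df.apply(prolonged_second_stage, axis=1)
    # Station at the pelvic floor is the reference category; adjustment variables
    # follow the predictors named in the text (maternal age, gestational age,
    # birthweight > 4000 g, prolonged first stage). Parity and EA enter through
    # the outcome definition and are therefore not added as covariates.
    model = smf.logit(
        "prolonged_2nd ~ C(station, Treatment(reference='pelvic_floor'))"
        " + age + gest_age_wk + I(birthweight_g > 4000) + prolonged_1st",
        data=df,
    ).fit(disp=0)
    return np.exp(model.params), np.exp(model.conf_int())  # adjusted ORs and 95% CIs
```

Exponentiating the fitted coefficients and their confidence limits yields adjusted odds ratios for each station category relative to the pelvic floor, in the same form as the estimates reported above.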
The strengths of our study include the long-term utilization of an established electronic medical birth record system, a steady study population, committed employees and validated outcome data. The extensive literature review we performed confirms that we included the most common confounders when analyzing station of the fetal head as a primary exposure for duration of second stage of labor. Limitations of our study are the retrospective study design and that we did not consider rate of descent. Further, the definitions of onset of first and second stages of labor will influence duration curves. The assessment of station of the fetal head and cervical dilation was done by vaginal exploration when indicated, not at determined intervals; thus the definition of onset of the second stage can, in some cases, be somewhat arbitrary. This subjective examination method is not very reliable and may lead to intra- and inter-observer biases, as demonstrated in a birth simulator study. Tutschek et al found that intrapartum ultrasound examinations were more reproducible in the assessment of labor progression than digital vaginal palpation. Furthermore, selection of the study sample, clinical practice and lack of standardized protocols on prolonged second stage of labor may also contribute to variations in duration estimates. These issues may affect the generalizability of our study results, but they mimic the real-world scenario in the delivery room. We found that station of the fetal head at complete cervical dilation had a significant impact on duration of second stage of labor and on the risk of prolonged second stage of labor. Assessment of station and position of the fetal head must be considered important factors in the clinical examination of laboring women to anticipate remaining time to delivery and the likelihood of achieving vaginal delivery. Changing the guidelines for the management of the second stage of labor exclusively based on the duration of second stage may be an oversimplification of the complex process of labor. FES designed the study. EL did data collection. EL/FES ran consistency analyses, cleaned data, and analyzed data. EL was lead author. EL/FES interpreted the results, evaluated literature, and agreed upon the final manuscript for submission. The study has received funding from the Northern Norway Regional Health Authority. | Objective: To examine the association between station of the fetal head at complete cervical dilation and duration of second stage of labor, as well as prolonged second stage of labor, without and with the use of epidural analgesia (EA). Study design: We conducted a population-based retrospective cohort study of 3311 women with a singleton pregnancy, gestational age ≥ 37+0 weeks, and cephalic presentation. Station of the fetal head at complete cervical dilation was categorized as at the pelvic floor, beneath the ischial spines, but above the pelvic floor, and at or above the ischial spines. In logistic regression analysis, we defined prolonged second stage of labor as > 2 h without and > 3 h with EA in nulliparous women, and > 1 h and > 2 h, respectively, in parous women. Results: Survival curves demonstrated longer durations of second stage of labor in nulliparous women and women with EA in each category of station of fetal head.
The adjusted odds of prolonged second stage of labor were 13.1 (95% confidence interval (CI): 8.5-20.1) times higher when the fetal head was beneath the ischial spines, but above the pelvic floor, and 32.9 (95% CI: 21.5-50.2) times higher when the fetal head was at or above the ischial spines, compared to at the pelvic floor. Conclusion: Station of the fetal head at complete cervical dilation was significantly associated with duration of second stage of labor.
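The adjusted odds ratios quoted in this summary are the kind of estimate a standard multivariable logistic regression produces. The sketch below is a hedged illustration of that step, not the authors' code: the data file, variable names and the exact covariate set (first-stage duration, maternal age, birthweight, gestational age, as named in the text) are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("labor_cohort.csv")  # hypothetical analysis file

# Outcome: prolonged second stage, using the cut-offs given in the summary
# (>2 h / >3 h without/with EA in nulliparous women; >1 h / >2 h in parous women).
limit_h = np.where(df["parous"] == 0,
                   np.where(df["ea"] == 1, 3.0, 2.0),
                   np.where(df["ea"] == 1, 2.0, 1.0))
df["prolonged"] = (df["duration_min"] / 60.0 > limit_h).astype(int)

# Station at complete cervical dilation with the pelvic floor as reference,
# adjusted for the covariates named in the text.
model = smf.logit(
    "prolonged ~ C(station, Treatment(reference='pelvic_floor'))"
    " + first_stage_h + maternal_age + birthweight_g + gest_age_wk",
    data=df,
).fit()

adjusted_or = pd.DataFrame({
    "adjusted_OR": np.exp(model.params),
    "ci_low": np.exp(model.conf_int()[0]),
    "ci_high": np.exp(model.conf_int()[1]),
})
print(adjusted_or.round(2))
```

Exponentiating the model coefficients and their confidence limits yields adjusted ORs with 95% CIs in the format reported above.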
529 | Multivariate analysis of trace elemental data obtained from blood serum of breast cancer patients using SRXRF | Cancer of the breast affects millions of women worldwide and stands in second position among the leading causes of mortality in women.Numerous efforts have been made to develop new diagnostic methods in order to reduce the number of deaths by this disease.However, being a multifactorial disease, associating a particular mechanism with pathogenesis of breast cancer has become a challenge.Therefore, in order to improve the early detection of breast cancer, simultaneous screening of a patient specimen for multiple biomarkers is indispensable.It has been become well established that owing to increased urbanization and industrialization and consequent contamination of the environment, human beings are constantly being exposed to the toxic effects of certain metals through food, air, water, soil and also by the use of consumer products .Factors such as heavy smoking, long menstrual history, older age, inherited mutations, type 2 diabetes and long term exposure to estrogens are correlated with increased breast cancer risk .Among these known risk factors, elevated levels of estrogens is considered to be the main risk factor .It has been amply demonstrated in a few earlier studies that certain metals like Cd, Cu, Fe, Zn, Co, Cr, Pb, Al, Hg, Sn, As and Ni activate estrogen receptors and thereby induce the expression of estrogen target genes and the proliferation of breast cancer cells .Deciphering the molecular mechanisms that are involved in metal-induced breast carcinogenesis might help in identifying suitable metal chelating agents and thereby adopting viable therapeutic approaches.Identifying the imbalance of essential elements in breast cancer patients with respect to healthy individuals can serve as a vital biomarker for early diagnosis of malignancy since these elements play a role in many biochemical processes such as protein synthesis, immune function, antioxidant defense and inhibition of cell proliferation .Last two decades of research on breast cancer has witnessed extensive work on establishing a link between alterations in elemental concentrations and the pathogenesis of breast cancer.Recently, some of the research groups have carried out studies on blood serum samples of breast cancer patients in order to find the association between the role played by trace elements and breast cancer and to understand the kind of mechanisms involved in carcinogenesis.Ding et al. determined 15 trace elements in serum of breast cancer patients and found significantly higher levels of Cd, Mg, Cu, Co and Li and significantly lower levels of Mn, Al, Fe and Ti when compared to their matched controls.In a study, Adeoti et al. reported that the mean concentrations of Cu and Cu-Zn ratio were significantly higher in the breast cancer bearing group compared to the controls.Martin et al. observed elevated serum Ca levels in untreated postmenopausal breast cancer subjects.In another study on serum and tissue trace elements in breast cancer patients by Kuo et al. 
, it was found that the serum of breast cancer patients had depressed levels of Zn and Se when compared to control subjects while malignant tissue samples contained elevated levels of Cu, Zn, Se and Fe.Moreover, they reported highest levels of Cu and Cu/Fe ratio and Cu/Zn ratio in both the serum and tissue samples of patients with advanced stage of malignancy.In our earlier works , diminished blood sera levels of Ti, V, Cr, Mn, Co, Ni, Zn, As, Se, Br, Rb and Sr and elevated levels of Fe and Cu were observed in breast cancer group when compared to control group whereas in breast cancer afflicted tissues, elevated levels of Cl, K, Ca, Ti, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Br, Rb and Sr were observed with respect to normal breast tissues.Multivariate analyses of obtained trace elemental data in biological fluids and tissues of cancer patients has been reported by several authors .Discrimination analysis of normal and malignant tissues of breast, colon and lung was reported by Drake and Sky-Peck in which they obtained 85% accuracy in classification based on trace elemental concentrations in paired malignant and normal tissues.However, meagre studies on multivariate statistical analysis of biological fluids of breast cancer patients have been reported till date.Synchrotron Radiation based X-ray Fluorescence is an excellent and comparatively simple analytical tool, which is commonly used for quantification of inorganic elements present in different types of specimens .The excellent brightness of the radiation source, multi-elemental investigation ability, non-destructive nature, less data acquisition time and high sensitivity for trace elements makes this technique suitable for analysis of elements present in a wide variety of archaeological, geological, environmental and biological samples.When compared to conventional XRF having less intensity of photon beam, poor detection limits and energy tunability depending on X-ray tube, SRXRF offers the advantages of high intensity beam, superior detection limits and wide range of energy tunability.The aim of this work was to determine the serum elemental profile of breast cancer patients and healthy subjects by employing SRXRF technique and further to perform multivariate statistical evaluation of trace elemental data obtained from the two studied groups.IBM SPSS Statistics v24 software was used to carry out the statistical analysis.Forty healthy female volunteers who did not use any medicines and the same number of histologically proven breast cancer patients were taken as study subjects in the present work.The second group comprised of newly diagnosed breast cancer patients prior to any treatment such as chemotherapy, radiationtherapy or surgery.Present research work was approved by Ethics Committee of Mahatma Gandhi Cancer Hospital and Research Institute, Visakhapatnam, India.A signed informed consent was obtained from each of the study subjects following full description of study.Whole blood specimens of breast cancer group were collected from MGCH&RI, Visakhapatnam, India.Five to seven milliliters of whole blood samples drawn from each subject was gently accumulated in distinct trace metal analysis vacutainer tubes.Centrifugation was done at 3500 rpm for 9–16 min and then the separated serum was carefully pipetted out into storage vials.These vials were kept at −40 °C in a deep freezer until further analysis.Known quantity of yttrium was added to each serum specimen as an internal standard.The rest part of sample preparation can be found in one of 
our earlier works .The present experiments were performed on micro-probe XRF beamline-16 of Indus-2, a 3rd generation synchrotron radiation source operating at Raja Ramanna Centre for Advanced Technology, Indore, India.The primary excited photon flux measured by positioning an ionization chamber just ahead of the sample showed an average flux of about 1012 ph/s.The studied targets placed in air were irradiated at an angle of 450 using monochromatic photon energy of 17.8 keV generated from beamline-16 of Indus-2.The irradiation energy was just above the Kβ of yttrium which was used as an internal standard.The SRXRF measurements were performed by focusing the beam on 3 × 3 mm2 area of the sample using a live time of 500 s.The emitted characteristic X-rays were detected by Silicon Drifted Detector with an energy resolution of 138 eV at 5.9 keV.The photograph of the current SRXRF experimental set-up is shown in Fig. 1.Fig. 2 depicts the typical SRXRF spectra obtained for trace elements in serum of healthy subjects and breast cancer patients.The obtained SRXRF data was processed by utilizing non-linear curve fitting procedure implemented in the routine batch-fitting tool of PyMCA .The validity and reliability of the experimental method were verified by analyzing International Atomic Energy Agency reference material-animal blood in the same experimental conditions as that of the samples.Pearson’s correlation analysis and discrimination analysis were performed on the elemental data obtained from serum of breast cancer patients and healthy subjects.Significant differences in elemental concentrations between the two studied groups were determined by using Mann–Whitney test.All afore mentioned statistical analyses were carried out by using IBM SPSS Statistics v24 software.Synchrotron based XRF technique was employed for analysis of trace elements in serum of breast cancer patients and healthy subjects.A total of sixteen trace elements were identified in the two studied groups and their mean concentrations and allied standard deviations are shown in Table 1.The analyzed SRXRF spectrum of IAEA reference material-animal blood is depicted in Fig. 3 and the measured elemental concentrations are provided in Table 2 along with the certified values.It is concluded that the measured values for reference material are in consonance with the certified values.The level of significance was calculated for each element to identify the significant differences between the two studied groups.The obtained data shown in Table 1 indicate that the concentrations of K, Ca, Fe, Cu, As and Pb are elevated while Ti, V, Cr, Mn, Co, Ni, Zn, Se, Br and Rb are depressed in serum of breast cancer group when compared to healthy subjects.This change however was significant only for Ti, Cr, Mn, Fe, Co, Cu, Zn, Se, Br and Pb.Iron plays an essential role in several biochemical mechanisms and also takes part in carcinogenesis .Pinnix et al. emphasized the significance of Fe related proteins ferroportin and hepcidin in the deregulation of Fe homeostasis in cells of breast cancer.They attributed enhanced Fe levels in cultured breast cancer cells to overexpression of ferritin, which is a consequence of elevated levels of hepcidin in tumor cells.In a few other studies too, elevated serum ferritin levels have been reported in breast cancer patients when compared to healthy controls .Marques et al. 
have correlated breast cancer progression with elevated Fe levels. Some in vitro and in vivo studies have revealed cell cycle arrest and inhibition of breast tumor cell proliferation in the presence of Fe chelators. Excess levels of Fe are also recognized as generators of reactive oxygen species (ROS), which in turn are associated with the progression of carcinogenesis. The elevated levels of Fe reported in serum of breast cancer patients when compared to healthy subjects in the present work support the afore-mentioned studies. Copper is an essential trace element needed for cell growth and it is a cofactor of a wide range of enzymes. Oxidative stress as a result of elevated Cu arises via two different mechanisms. The first mechanism involves ROS formation by Cu ions (cupric, Cu(II), and cuprous, Cu(I)) during Fenton-like oxidation and reduction reactions. In the presence of the superoxide anion radical, Cu(II) is converted into Cu(I), which consequently generates reactive hydroxyl radicals by reacting with hydrogen peroxide. The second process by which elevated Cu causes oxidative stress is by causing a significant decrease in the levels of the antioxidant glutathione (GSH). Depleted levels of GSH increase the catalytic activity of Cu and consequently generate elevated levels of ROS, enhancing their cytotoxic effects. This correlation between increased Cu toxicity and depleted GSH accounts for the fact that maintaining physiological levels of Cu is very important to balance the antioxidant activities of GSH. ROS-induced DNA damage leads to carcinogenesis. The significantly elevated levels of Cu in blood serum of breast cancer patients observed in this work support these speculations and are in good agreement with the results obtained in earlier works. Zinc is an essential trace element in biological fluids and plays a pivotal role in many cellular processes. Zinc, as a component of several enzymes, is required for the synthesis of DNA and RNA. The stimulation of apoptosis in malignant cells and inhibition of cell growth by Zn have been reported in earlier studies. Several authors have implicated Zn deficiency in causing a variety of cancers via various biological mechanisms. Al-Saran et al. studied the response of cultured human breast cancer cells to depletion and supplementation of Zn. Enhanced cancer cell growth was observed in a Zn-deficient environment due to oxidative stress-induced DNA damage. Additionally, Zn supplementation was found to regulate proliferation of breast cancer cells through upregulation of tumor suppressor genes. In their study on the effect of marginal Zn deficiency on mammary glands of mice, Bostanci et al. reported microenvironmental changes associated with oxidative stress-induced DNA damage. Dizaji et al. studied the status of Zn in gastrointestinal cancers and suggested Zn deficiency to be a primary risk factor for digestive cancers. The association between Zn and cancer has been demonstrated in some epidemiological studies. The present findings of low levels of Zn in breast cancer patients are in consonance with the above studies. Selenium is essential for living organisms and is known to exhibit anticancer effects in breast cancer cells. High exposure to Se has been shown to reduce breast cancer risk in an epidemiological study by Cai et al
.Protection from different cancers , inhibition of tumor cell migration and invasion , and prevention of cell death by decreasing oxidative stress are some of the positive effects shown by Se compounds as reported in several studies.The observed depleted serum levels of Se in cancer group of the present work supports the above hypothesis on Se as an antitumor agent.In the current study, when compared with healthy subjects, serum of breast cancer patients had relatively high levels of Pb.Lead intoxication induces cellular damage mediated by the formation of ROS .Reacting with proteins and inhibiting the actions of Ca are among the other key mechanisms by which Pb causes toxicity .Lead interferes with essential metallic cations by binding to several sulfhydryl and amide group of enzymes, consequently inhibiting their enzymatic activity and altering their structural configuration .The concentration of Pb was found to be elevated in scalp hair of breast cancer patients .In a study by Kaba et al. , higher levels of Pb were observed in serum of prostate cancer patients with respect to healthy subjects.Qayyum and Shah reported significantly elevated levels of Pb in nails, scalp hair and blood of oral cancer patients when compared to healthy controls.Noticeably enhanced activities of antioxidant enzymes like superoxide dismutase and glutathione peroxidase were observed in erythrocytes of workers exposed to Pb when compared to the non-exposed category .The high levels of Pb observed in cancer patients of this work are in consonance with these studies.Pearson’s correlation coefficients among the elements in serum of healthy subjects and breast cancer group were calculated to know the correlation between pairs of elements and these results are provided in Table 3.In the case of healthy subjects, significant and strong positive correlations were noticed for Ca-K, Cr-Ti, Mn-Ca, Se-Ni, Br-Cr and Rb-Br, while no strong negative correlations were noticed as shown in lower triangular matrix of Table 3.In the case of breast cancer patients it can be seen from Table 3 that significantly strong positive correlations exist between Ca-K, Cr-V, Zn-K, Br-Ni, Rb-K, Rb-Ca, and Rb-Br, while strong negative correlations were noticed for V-Ti, Co-Fe, and Br-Ti.This data obtained through correlation analysis might be used to carry out studies at the cellular level to check whether the appropriate supply of positively and negatively correlated elements would restore the physiological levels of essential elements.Discrimination analysis was used to classify blood serum on the basis of elemental concentration with the obtained result being an indication of a healthy or a cancer state.This was done by using IBM SPSS Statistics v24 software.The main aim of using multivariate discrimination analysis in the present study was to classify the type of serum, to explore the connection between distributions of elemental profile and build a predictive model of group membership based on elemental concentrations.Based on the linear combination of original variables a discrimination analysis was generated.Table 4 shows the degree of success of the classification between the two studied groups.A sixteen elemental discrimination analysis consisting of K, Ca, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Br, Rb and Pb correctly classified 39 of 40 breast cancer patients and 40 of 40 healthy subjects resulting in an overall classification accuracy of 98.8%.The original classification results of breast cancer patients within predicted group 
membership showed 97.5% accuracy. In a study, Drake and Sky-Peck performed discrimination analysis of trace elements in normal and cancerous tissues of breast, colon and lung and obtained an overall classification accuracy of 98% in breast tissue with the inclusion of nine trace elements. Da Silva et al. and Ng et al. also carried out discrimination analysis in different types of breast tissues to classify the possible predicting group. Silva et al. obtained overall tissue classifications of 65% and 87% for benign and malignant tissues, respectively; these classifications were lower than in the present study due to the inclusion of a smaller number of trace elements. The obtained overall classification of 98.8% in the present study is in good agreement with the results of Drake and Sky-Peck. The serum elemental content in healthy subjects and the breast cancer group was investigated by employing the SRXRF analytical technique. The results unveiled significantly elevated levels of Fe, Cu and Pb and significantly depressed levels of Ti, Cr, Mn, Co, Zn, Se and Br in serum of breast cancer patients with respect to healthy subjects. Excess levels of Fe, Cu and Pb observed in serum of breast cancer patients with respect to healthy subjects might possibly have led to cellular damage through the formation of ROS. The data obtained through correlation analysis might be used to adopt a viable therapeutic approach via supplementation of appropriate metal-based drugs. The high predictive accuracy obtained through discrimination analysis of the obtained data in the current study suggests that discrimination analysis can be used as a prospective diagnostic tool when applied to specific trace elements in blood serum of breast cancer patients. Further studies with the inclusion of a larger number of serum samples are required to elucidate the present conclusions. The authors declare that they have no competing interests. This work was supported by the University Grants Commission, Government of India, New Delhi, India through a UGC Major Research Project dated 22/03/2013. | Synchrotron radiation based X-ray fluorescence (SRXRF) technique was utilized for the determination of the serum elemental profile in a breast cancer patient group and a healthy subject group. The SRXRF experiments were performed on microprobe beamline-16, Indus-2 synchrotron radiation source at Raja Ramanna Centre for Advanced Technology (RRCAT), Indore, India. The accuracy and reliability of the experimental method were validated by using International Atomic Energy Agency (IAEA) reference material-animal blood (A-13). Sixteen elements were identified and their concentrations were measured in the serum of the two studied groups. The analysis of results showed significantly elevated levels of Fe (p < 0.000005), Cu (p < 0.0005) and Pb (p < 0.00001) and diminished levels of Ti (p < 0.0005), Cr (p < 0.05), Mn (p < 0.000005), Co (p < 0.001), Zn (p < 0.000005), Se (p < 0.000005) and Br (p < 0.05) in the serum of breast cancer patients with respect to healthy subjects. Correlation analysis of the obtained data revealed significantly different correlation patterns for some elements in the two studied groups. With the help of discrimination analysis, the differences in serum elemental profile between the two studied groups could be identified with an overall accuracy of 98.8%. It is anticipated that the pathological mechanisms affected by the perceived changes in concentrations of certain trace elements might have led to the carcinogenic process.
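The group comparisons and the discrimination analysis above were run in IBM SPSS; no analysis code was published. As a rough, hedged equivalent, the sketch below assumes the serum concentrations are available as a samples-by-elements table (the file and column names are hypothetical) and pairs a per-element Mann–Whitney test with a linear discriminant classifier. Leave-one-out cross-validation is used here in place of SPSS's resubstitution-based classification table, so the accuracies are analogous rather than identical.

```python
import pandas as pd
from scipy.stats import mannwhitneyu
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

ELEMENTS = ["K", "Ca", "Ti", "V", "Cr", "Mn", "Fe", "Co",
            "Ni", "Cu", "Zn", "As", "Se", "Br", "Rb", "Pb"]

# Hypothetical table: one row per serum sample, 16 element concentrations
# plus a 'group' column ("cancer" or "control").
df = pd.read_csv("serum_elements.csv")
cancer = df[df["group"] == "cancer"]
control = df[df["group"] == "control"]

# Per-element Mann-Whitney U test between the two groups
for el in ELEMENTS:
    stat, p = mannwhitneyu(cancer[el], control[el], alternative="two-sided")
    flag = "*" if p < 0.05 else ""
    print(f"{el:>2}: U = {stat:8.1f}, p = {p:.4g} {flag}")

# Discrimination analysis: classify each serum sample as cancer or control
# from all 16 elements using LDA with leave-one-out cross-validation.
X = df[ELEMENTS].to_numpy()
y = (df["group"] == "cancer").astype(int).to_numpy()
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                      cv=LeaveOneOut()).mean()
print(f"Leave-one-out classification accuracy: {acc:.1%}")
```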
530 | Modelling cellular signal communication mediated by phosphorylation dependent interaction with 14-3-3 proteins | Cellular adaption and decision-making rely on the communication between multiple signalling pathways.Post-translational protein modification on multiple sites is one important mechanism for signal cross communication.To execute a response specific to such multisite modifications, it often has to be “read” by specialized domains in the receiving signal transduction protein.Among the many specialized cellular signal transduction molecules that have been discovered during recent decades, the family of 14-3-3 proteins occupies a remarkably ubiquitous role as downstream effectors of phosphorylation events.Essentially being soluble dimers of single phospho-Ser/Thr binding domains, the 14-3-3 proteins are reported to bind hundreds of different cellular proteins, although there are also examples of binding to non-phosphorylated target proteins.Upon binding to their phospho-recognition site, they are reported to affect the target protein by modulating its activity , interaction with other molecules , intracellular localization or stability .Dimeric 14-3-3s are required for binding to many targets, and several TPs like the PKCε, Cdc25B, c-Raf and Foxo4 require two sites to be phosphorylated for high affinity binding of 14-3-3 .The term “gatekeeper site” refers to the primary role of one phospho-site in determining 14-3-3 binding.A secondary phosphorylation site, which in some cases can be more divergent from the consensus sequence, can further contribute to increased affinity binding or induction of structural changes in the TP.The latter is referred to as the “molecular anvil” hypothesis .Multi-phosphorylation events have also been reported to negatively regulate binding of 14-3-3 proteins, by phosphorylation of TPs at a site close to the 14-3-3 interaction site, thereby preventing complex-formation .Context dependent signalling mechanisms can act at the level of signalling pathways or at their downstream targets.Examples exist for the action of 14-3-3 proteins at both levels .We wanted to investigate in more detail how 14-3-3 proteins may influence signal transduction, particularly in multi-pathway communication executed through phosphorylation at multiple sites.A mathematical modelling approach was chosen for generality, but the modelling was based on previously reported mechanisms of interaction between 14-3-3 and TPs and experimentally derived affinity constants.We first investigated 14-3-3 binding to TP downstream of one signalling pathway and its regulation by phosphorylation kinetics and binding affinities.Next, we modelled inhibitory signal communication for 14-3-3–TP interactions where the TPs in addition had a phosphorylation site that inhibited 14-3-3 binding, a mechanism reported for several TPs.The capability of dual phosphorylation site recognition of 14-3-3 proteins has been suggested to make them functional logic AND-gates or coincidence detectors .Based on the nature of their signal output, we defined three functional classes of dual site 14-3-3 TPs.The influence of phosphorylation rate kinetics of TPs and the binding affinities of 14-3-3 to different phosphorylated states on the signalling response of the different classes was then investigated.In particular, we report on conditions that gave optimal synergistic cross-talk between the two signalling pathways, mediated by 14-3-3 interactions.These conditions varied substantially between the three classes of 14-3-3 
TPs.Our findings provide new insights into the regulation of 14-3-3 target proteins and open up for new strategies for therapeutic modification of 14-3-3 regulated processes.We refer to the Supplemental text for details on the modelling approach.The reactions were implemented in the simulation software Copasi .We used the LSODA algorithm for numerical integration using an absolute tolerance of 10−12.Several hundred binding partners have been reported for the 14-3-3 proteins.Most of the available 14-3-3 proteins in a cell can therefore be expected to be bound to different target proteins.To tackle this in our models we classified the bulk cellular 14-3-3 TPs as high and low affinity binders, and used a buffered protein interaction modelling approach for all the calculations.Starting with the simple case where only one signalling pathway targets the TP 14-3-3 binding site, we show how different signalling strengths and Kd-values of 14-3-3 binding can modulate outputs such as complex formation and TP phosphorylation.This simple model would apply to many 14-3-3 TPs.Multiple signalling inputs occur when several kinases or phosphatases target the same phosphorylation site.As such, additional input signals will change the rR-value, leading to altered response.Alternatively, signalling pathways may modulate the affinity of 14-3-3 binding, e.g., by expression of different 14-3-3 isoforms, modification of 14-3-3s or by additional modification of their TPs .The latter mechanism has been shown for the proteins Cdc25B, RGS18, Rap1GAP2 and Bad, where phosphorylation-mediated inhibition of 14-3-3 binding by direct phosphorylation of the TP close to the 14-3-3 binding site has been reported .This provides a mechanism for inhibitory signalling communication, and we investigated this mechanism more closely where one signalling pathway downstream of signal 1 controls phosphorylation of site 1 on TP, necessary for 14-3-3 binding.We modelled the situation where a strong inhibitory phosphorylation was included downstream of signal 2, that targeted a site 2 on TP and rendered it incapable of binding 14-3-3.As the proteins reported to be regulated by this mechanism have the inhibitory phosphorylation site placed adjacent to the 14-3-3 binding site, we assumed that site 2 was unavailable for phosphorylation in the complex between 14-3-3 and pS1-TP.Expectedly, for high 14-3-3 binding affinities only modest inhibition was obtained even at phosphorylation conditions favouring high phosphorylation stoichiometry on the inhibitory site.This low inhibition occurred as most of the TP resided in a complex with 14-3-3 unavailable for inhibitory input.This is shown in Fig. 
S4B, where increasing S1 generated more pS1-TP:14-3-3 complex that was unresponsive to S2 signal input.A lower affinity opened for more sensitive pathway communication.In comparison, the influence of 14-3-3 proteins on site 2 phosphorylation was reciprocal for the two situations.For both TPs, the strength of signalling communication depended on the basal signalling conditions of the inhibitory signal and the signalling strength on site 1.Thus, a cell exposed to basal stimuli of 1 nM S2 is much more sensitive to a 10-fold increase in inhibitory signal than one experiencing a basal S2 level of 0.1 nM.We suggest that for high affinity 14-3-3 binding TPs, this mechanism for negative regulation would only be efficient if both pathways could be modified in conjunction.This may occur automatically as the two phosphorylation sites in general are adjacent to each other.Hence, it is expected that phosphorylation of one site would also affect the phosphorylation or dephosphorylation of the other site.However, to our knowledge the possible interdependence has not been investigated for any of the reported TPs that are regulated by this mechanism.We wanted to investigate the ability of 14-3-3 proteins to mediate positive signalling cross-talk through their dual phospho-site binding capability.Several reported 14-3-3 binding proteins such as PKCε, c-Raf, AANAT, Foxo4 and Bad have reported gatekeeper sites, that contribute the most to 14-3-3 binding, with secondary sites that further increase the affinity of the complex .We modelled a TP where the binding of 14-3-3 was facilitated by phosphorylation on two sites.Due to the symmetry of this model, we considered site 1 to be a gatekeeper site, and binding to TP phosphorylated on both site 1 and 2 was at least as strong as that for site 1.Similar to the case with single phospho-site TPs, we considered different output types for phospho-TP:14-3-3 complexes: Class 1, the type of phospho-TP:14-3-3 complexes where binding to any of the two sites was sufficient to mediate the biological effect; class 2, where the functional output of 14-3-3 association was increased phosphorylation of one of the sites .Finally, we defined class 3 as the type of TP where 14-3-3 binding to doubly phosphorylated TP provided the alterations in bioactivity, similar to the anvil type of complex reported for AANAT and Cdc25B .We modelled first the case with modest affinity for 14-3-3 binding to TP when only one site was phosphorylated, whereas high affinity binding was included for the double-phosphorylated TP.We also used relatively low kinase activity relative to phosphatase, r1 = r2 = 0.1).This was expected to provide little output when only one signalling pathway was activated, but much higher output values when both signalling pathways were activated.The steady state outputs of the three different classes of 14-3-3 TP were then calculated for values of signal 1 and signal 2 varying between 0.1 nM and 1 μM.For both class 1 and 2, a low response was found when only one signal was activated, whereas much higher activation was found when both signalling pathways where stimulated.As expected, the formed pS1pS2-TP:14-3-3 complex had extremely low response to single-pathway stimulations, whereas in general the three values of the output types were quite similar.From the surface plots it was possible to calculate the synergy obtained by dual pathway stimulation compared to that obtained by the two pathways alone.We chose to calculate the ratio of the output for when both S1 and S2 are 
stimulated to that of the sum of outputs obtained with single pathway stimulation.We refer to this as the synergy ratio, and rS will be >1 if higher activation is obtained than could be expected from the combined effect of both pathways.The rS for total phospho-TP:14-3-3 complex and total TP site 2 phosphorylation was very similar, whereas that for the anvil type was extremely high due to very low levels of 14-3-3 complex with dual-phosphorylated TP when only one signalling pathway was activated.The similar behaviour of the three output classes led us to investigate how changes in the value of Kd1 and r2 affected their signalling response.First, lowering the Kd1-value to 10 nM increased 14-3-3 binding in response to S1 as expected, whereas the AND-response was not enhanced accordingly.This led to a strong decrease in the rS of total phospho-TP:14-3-3, whereas the change in TP site-2 phosphorylation was much lower.We next investigated the effect of decreasing the activity constant of the protein phosphatase acting on site 2.This gave a more robust increase in total phospho-TP:14-3-3 complex and of TP site 2 phosphorylation upon stimulation with both S1 and S2.However, only the synergy for total phospho-TP:14-3-3 complex response was increased, as TP site 2 phosphorylation in response to S2 alone also increased considerably.The two outputs therefore varied in their response to changes in different parameters.The synergy ratio for the class 3 output type varied little between the mentioned parameters.The results above show that both phosphorylation kinetics and binding affinities affect the response strength and synergy between the signalling pathways.They also suggest that there are differences between the three types of functional classes of 14-3-3 effector mechanisms.We therefore performed a comprehensive analysis of these three output classes and how their response changed with different values of r1 and r2 and Kd1, Kd2, Kd1,2, which is within the typical range of Kd-values reported for 14-3-3 complexes.We compared the responses to basal and high levels of S1 and S2 and calculated the synergy ratio for each parameter sets, as this would be an important factor when considering communication between signalling pathways.We found that for TPs where any of the three phospho-TP:14-3-3 complexes provide functional outputs, the synergetic signalling response was critically dependent on lower affinity binding to monophosphorylated TP compared to that for dual-phosphorylated TP.Increased affinity of 14-3-3 binding to pS2-TP greatly decreased the opportunity for pathway synergies, as complex formation of 14-3-3 to singly phosphorylated TP competed more strongly with that of binding to pS1pS2-TP.It was interesting to note the narrow range of values for r1 and r2 that allowed potent pathway synergies and how these ranges shift upon changes in affinities.In particular, we noted that the synergies for total phospho-TP:14-3-3 complex and TP site 2 amplification complex type had maxima at quite different ranges of r1 and r2, for Kd-values where they both showed synergies.Thus, whereas pS2-TP phosphorylation in general showed maximal synergy at low r2 values and medium r1-values, the total phospho-TP:14-3-3 complex had maximal synergy at medium to low r1 values and medium r2 values.The synergy of class 2 TPs was also much more robust against values of Kd1 and Kd2 approaching that of Kd1,2.In fact, decreasing both Kd1 and Kd2 retained much of the synergy and led to a peculiar r2-shift where maximal 
synergies were now observed at high r2-values.Going from a flat non-synergistic response curve at high Kd2 and Kd1 = Kd1,2, this shift occurred as Kd2 decreased.At high site 2 phosphorylating conditions 14-3-3 complex formation could then be shifted away from pS1-TP, which does not contribute to the functional output of the class 2 TPs.As observed above, the synergy of pS1pS2-TP:14-3-3 formation was high for all sets of Kd-values, but was higher at lower values of r1 and r2.Increasing the affinity for pS1-TP led to higher dependency of low r1-values, whereas decreasing both Kd1 and Kd2 led to a narrowing of high synergy peak giving a saddle along the line of r1 = r2.It should be noted that the extreme synergy values observed for very low rn-values are probably less relevant biologically as very low phosphorylation levels of low abundance proteins will be subject to stochastic variation, in particular in compartments confined to smaller volumes.Thus, the less spectacular, but still very strong synergy levels are probably more biologically feasible.Synergistic interactions between signalling pathways provide opportunity for context-dependent responses, but also for filtering noise.Although several known TPs have increased binding affinity for 14-3-3 upon dual phosphorylation, this increase was inadequate to guarantee signalling synergism.In particular, to promote synergistic output of class 1 14-3-3 TPs, binding to singly phosphorylated TP should occur with relatively low affinity, particularly for the secondary 14-3-3 binding site.In addition, the gatekeeper site should operate at low kinase/phosphatase ratios.The proapoptotic protein Bad is as an example of a class 1 TP.The protein complex with 14-3-3 provides a functional output by localising Bad in the cytosol, away from the outer mitochondrial membrane.Phosphorylation at Ser136 and Ser112 cooperate to regulate its interaction with 14-3-3 proteins .Synergistic signalling would be of interest to inhibit Bad sequestration in cancer or to increase its 14-3-3 binding for neuro- or cardioprotection.For Bad the Kd-values for binding to Ser136 and Ser112, although not quantified in isolation, seem to fulfil the criteria for synergistic interaction to occur.Thus, Ser136 seems to play a primary role in controlling 14-3-3 binding, whereas Ser112 alone shows weaker binding and functions to further increase the affinity for 14-3-3 binding .Additional requirement for synergy between the two pathways is low r1 and r2 ∼ 1.Furthermore, as discussed above, the strength of inhibitory cross-talk between pathways depended on the affinity of 14-3-3 binding.Thus, the inhibitory JNK site reported in Bad is predicted to differentially modulate the interaction with 14-3-3 depending on the affinity, i.e., if Bad is phosphorylated on Ser112 and/or Ser136.These additional requirements may explain some of the conflicting observations regarding these sites .Still, considerate sequence difference between human and murine Bad, multiple isoforms, additional phosphorylation sites and other types of modifications are all complicating factors for understanding the interaction of Bad with 14-3-3s.Numerous compounds are now available for pharmacological intervention of 14-3-3 protein interactions that can stabilise or destabilise complex-formation .For more simple interactions involving one pathway TPs, the binding of 14-3-3 increases the robustness for perturbations of the upstream signalling pathway.Efficient signal attenuation therefore necessitates both decreased 
affinity for 14-3-3 and inhibition of kinase/signalling input.For inhibitory cross talk, it may readily be amplified or attenuated by compounds that stabilise or destabilise the pS1-TP:14-3-3 complex, respectively.For synergistic signal communication the differences in parameter robustness and optimality could be used for selectivity among the TP classes or to compensate for off-target effects.Class 3 type of TP showed highly robust signalling output to all parameters, but can be inhibited by inhibition of both binding affinity and signalling input.For TPs of class 1 and 2 targeted by the same signalling pathways, a convenient strategy could be to decrease Kd2 and r1 to inhibit class 1 or to decrease Kd1 and increase Kd2 for class 2 TP.We have investigated the quantitative requirements of signalling processes involving 14-3-3 proteins.For inhibitory signalling by phosphorylation of TPs, efficient communication depends both on basal signalling status, as well as the 14-3-3 binding affinity.Similarly, robust signalling synergy was only observed for a limited range of affinities and phosphorylation kinetics, suggesting that it may only be observed for moderate signal strengths.Different classes of 14-3-3 TPs also responded synergistically to dual signalling inputs within very different ranges of parameters.The kinetic properties of various 14-3-3 TPs and their phosphorylation are still largely uncharacterized, especially in cells.However, for some well-studied TPs like Bad, measurements suggest that synergetic signalling interaction is facilitated in vivo.Our findings also suggest novel strategies for intervention.Thus, selective modulation of different classes of 14-3-3 binding proteins can be achieved by combining modulation of kinase activity with compounds that directly target the binding of 14-3-3 proteins to their TPs . | The 14-3-3 proteins are important effectors of Ser/Thr phosphorylation in eukaryotic cells. Using mathematical modelling we investigated the roles of these proteins as effectors in signalling pathways that involve multi-phosphorylation events. We defined optimal conditions for positive and negative cross-talk. Particularly, synergistic signal interaction was evident at very different sets of binding affinities and phosphorylation kinetics. We identified three classes of 14-3-3 targets that all have two binding sites, but displayed synergistic interaction between converging signalling pathways for different ranges of parameter values. Consequently, these protein targets will respond differently to interventions that affect 14-3-3 binding affinities or phosphorylation kinetics. © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved. |
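The simulations behind these conclusions were built in Copasi with a buffered 14-3-3 interaction model and experimentally derived affinity constants; that model is not reproduced here. The fragment below is only a stripped-down sketch of the underlying idea: the four phosphoforms of a dual-site target protein (TP) and their 14-3-3 complexes are written as mass-action ODEs, integrated to steady state, and used to compute the synergy ratio rS for the total phospho-TP:14-3-3 complex (the class 1 output). All rate constants, concentrations and simplifications (complexed TP protected from kinases and phosphatases, no competition from other 14-3-3 targets) are illustrative assumptions, not the published parameter set.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the published set).  r_n = k_n * S_n / p_n.
k1, k2 = 1.0, 1.0            # kinase activities per nM of signal, 1/(nM*s)
p1, p2 = 10.0, 10.0          # phosphatase rates, 1/s
kon = 0.1                    # 14-3-3 association rate, 1/(nM*s)
Kd1, Kd2, Kd12 = 5000.0, 5000.0, 50.0   # dissociation constants, nM
TP_total, F_total = 100.0, 500.0        # total TP and 14-3-3, nM

def rhs(t, y, S1, S2):
    # Free TP phosphoforms T_ij (i = site 1, j = site 2) and their 14-3-3 complexes.
    T00, T10, T01, T11, C10, C01, C11 = y
    F = F_total - C10 - C01 - C11        # free 14-3-3 by conservation
    dT00 = -(k1*S1 + k2*S2)*T00 + p1*T10 + p2*T01
    dT10 = k1*S1*T00 - (p1 + k2*S2)*T10 + p2*T11 - kon*F*T10 + kon*Kd1*C10
    dT01 = k2*S2*T00 - (p2 + k1*S1)*T01 + p1*T11 - kon*F*T01 + kon*Kd2*C01
    dT11 = k2*S2*T10 + k1*S1*T01 - (p1 + p2)*T11 - kon*F*T11 + kon*Kd12*C11
    dC10 = kon*F*T10 - kon*Kd1*C10       # complexed TP is treated as protected
    dC01 = kon*F*T01 - kon*Kd2*C01       # from kinases and phosphatases
    dC11 = kon*F*T11 - kon*Kd12*C11
    return [dT00, dT10, dT01, dT11, dC10, dC01, dC11]

def total_complex(S1, S2):
    """Steady-state total phospho-TP:14-3-3 complex (class 1 output), in nM."""
    y0 = [TP_total, 0, 0, 0, 0, 0, 0]
    sol = solve_ivp(rhs, (0.0, 1000.0), y0, args=(S1, S2), method="LSODA")
    C10, C01, C11 = sol.y[4:, -1]
    return C10 + C01 + C11

basal, high = 0.1, 10.0      # signal concentrations, nM (illustrative)
both = total_complex(high, high)
single = total_complex(high, basal) + total_complex(basal, high)
print(f"rS = output(S1+S2) / [output(S1) + output(S2)] = {both / single:.2f}")
```

With weak binding to the singly phosphorylated forms (Kd1 and Kd2 well above the 14-3-3 pool) and tight binding to the doubly phosphorylated form, the computed rS comes out well above 1, qualitatively reproducing the AND-gate behaviour discussed above; scanning r1, r2 and the Kd values in the same way maps where that synergy is lost.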
531 | Proteome-wide dataset generated by iTRAQ-3DLCMS/MS technique for studying the role of FerB protein in oxidative stress in Paracoccus denitrificans | Two strains of P. denitrificans were used in the study: Pd1222 and Pd20021.These strains were cultivated in 0.25 ml bottles filled with 45 ml of aerobic growth medium which was composed of 9 mM Na2HPO4·2H2O, 33 mM KH2PO4, 50 mM NH4Cl, 1 mM MgSO4·7H2O, 50 mM succinate and 0.033 mM ferric citrate.Rifampicin was added to medium for wt strain growth, rifampicin and kanamycin were added to medium for FerB− strain growth.The following culture conditions were grown: wt, wt with the addition of 25 µM MV, FerB−, FerB− with the addition of 25 µM MV.Each of these culture conditions was cultivated in two biological replicates to gain 8 independently grown bacteria cultures.All cultures were cultivated aerobically at 30 °C from initial optical density of 0.1 till optical density of 0.6.The cells were then harvested by centrifugation for 5 min at 5000×g, washed once with 0.05 M NaH2PO4·2H2O and were stored at −80 °C.100 µl of lysis buffer containing 4%SDS, 50 mM NaHPO4 and 0.1 M DTT were added to each bacterial pellet.The suspension was homogenized by needle sonication and then heated for 5 min at 99 °C.The homogenates were then centrifuged at 16,000×g for 20 min at 4 °C.The supernatants were precipitated overnight with 7.5 volumes of acetone at −20 °C and then centrifuged at 16,000×g for 20 min at 0 °C.Protein pellets were vacuum-dried in speedvac for 5 min and dissolved in 100 µl of SEC mobile phase composed of 10% methanol, 50 mM KH2PO4, 10 mM Tris 50 mM ammonium acetate, 0.3 M NaCl and 6 M guanidine hydrochloride for 45 min at RT, being vortexed every 15 min and then centrifuged at 16,000×g for 20 min at 15 °C.The protein concentration in solubilized protein extract was determined by RC-DC Protein Assay.Solubilized protein extract containing 1 mg of protein was injected onto the SEC column accommodated in Agilent Infinity 1260 LC system using the flow rate 0.2 ml/min at 30 °C.The signal was monitored at 280 nm by diode array detector.The isocratic elution took 85 min and the fractions were collected in 4 chromatographic segments as follows: 1st segment was collected from 25.0 min to 38.0 min, 2nd segment from 38.0 min to 44.0 min, 3rd segment from 44.0 min to 51.5 min, 4th segment from 51.5 min to 70.0 min.The protein content in the segments was determined by RC-DC Protein Assay with two modifications: the sample volume was 50 µl and the first precipitation step was performed by adding a double volume of Reagent I. 
and II.Trypsin digestion was performed using filter aided sample preparation protocol with several modifications: The aliquots of the four segments from the SEC fractionation containing 100 µg of protein were added onto Vivacon 500 ultrafiltration spin columns.The columns were centrifuged at 14,000×g for 45 min at 20 °C.200 µl of 8 M urea in 0.5 M triethylammonium bicarbonate were added onto the columns followed by centrifugation at 14,000×g for 15 min at 20 °C.Subsequently, 100 µl of 8 M urea in 0.5 M TEAB and 10 μl of 50 mM tris-2-carboxyethyl phosphine were added onto the columns.The samples were reduced for 1 h at 37 °C, the centrifugation at 14,000×g for 15 min at 20 °C then followed.The alkylation was performed by the addition of 5 μl of 200 mM methylmethanthiosulfonate with 100 µl of 8 M urea in 0.5 M TEAB.The samples were then mixed in thermomixer at 600 rpm for 1 min at 25 °C, incubated for 10 min at RT without mixing and centrifuged at 14,000×g for 15 min at 20 °C.100 µl of 0.5 M TEAB were added onto the columns followed by centrifugation at 14,000×g for 20 min at 20 °C.The digestion was performed by the addition of 3.33 µl of 1 µg/µl trypsin followed by incubation for 12 h at 37 °C.The digests were collected by centrifugation at 14,000×g for 15 min at 20 °C and vacuum-dried to the final volume of 26 µl.After the digestion, the iTRAQ 8-plex labeling was performed.After adjusting pH to 7.5 by addition of 5 µl of 0.5 M TEAB, pH 8.5, four sets of iTRAQ 8-plex labels 113−121 were then added to the samples and incubated for 2 h at RT.The samples in each 8-plex were then mixed and vacuum-dried to the volume of 10 µl and stored at −80 °C.The HILIC-Kinetex column accommodated in Agilent Infinity 1260 LC system was used.Mobile phase was composed of 100% ACN, mobile phase of water and mobile phase of 50 mM ammonium formate.20 µl of mobile phase were added to the sample and a sonication was performed using ultrasonic bath for 2 min.Then, 20 µl of mobile phase and 5 µl mobile phase were added and after further 2 min sonication the sample was centrifuged at 16,000×g at 20 °C for 20 min.The sample injection volume was 40 µl and the separation method was set follows: 5 min isocratic 0% B, 7 min gradient to 20% B, 23 min gradient to 34% B, 5 min gradient to 50% B, 5 min isocratic 50% B, 0.5 min gradient to 0% B and for 4.5 min isocratic 0% B; 10% mobile phase C was kept all the time.The flow rate was 0.2 ml/min, column temperature was 30 °C and the signal was monitored at 280 nm.7–13 fractions were collected per each HILIC run.Each fraction was vacuum-dried and stored at −80 °C.Fractions collected within first 20 min of HILIC run were further cleaned by SCX chromatography to remove unreacted iTRAQ labels as described below.The HILIC fractions collected within first 20 min of HILIC run were reconstituted in 100 μl of mobile phase A and sonicated using ultrasonic bath for 2 min.The SCX cartridge supplied as a part of ICAT kit was inserted into Agilent Infinity 1260 LC system.The separation method was set as follows: 3 min 0% B at 0.5 ml/min, 2 min isocratic 0% B, 2 min isocratic 35% B, 2 min isocratic 100% B, 2 min isocratic 0% B, 2 min isocratic 100% B, 3 min isocratic 0% B.The eluent collected during the elution step only was vacuum-dried, reconstituted in 200 μl of 0.1% formic acid, desalted on C-18 column as previously described, vacuum-dried and stored at −80 °C.All LC–MS/MS analyses were performed by nanoscale reversed phase liquid chromatography coupled on-line to Orbitrap Elite mass 
spectrometer.Individual HILIC fractions were re-dissolved in 0.1% FA and loaded onto trap column filled with sorbent X-Bridge BEH 130C18.The peptides were eluted at 300 nl/min onto Acclaim Pepmap100 C18 analytical column.Mobile phase A was composed of 0.1% FA and phase B was composed of ACN:methanol:2,2,2-trifluoroethanol containing 0.1% FA.Gradient conditions were: 1% mobile phase B, 1–11% B, 11–27% B, 27–50% B, 50–95% B and 95% B held for 15 min.Equilibration of trap and analytical column was performed before loading of sample into sample loop.Analytical column was coupled on-line to Nanospray Flex Ion Source.MS data were collected by data-dependent strategy selecting 15 precursors from MS scan.50,000 ions for max.200 ms with isolation window of 1.3 m/z were accumulated for MS/MS spectra acquisition in Orbitrap.“Higher energy collisional dissociation”; 40% relative collision energy was used for gaining of precursor fragments and iTRAQ reporter ions.MS/MS spectra were measured with resolution 15,000 at m/z 400.Dynamic precursor exclusion was allowed for 45 s after each MS/MS spectrum measurement.Two or three LC–MS/MS analyses were done for selected samples with sufficient sample amount and relatively high complexity.The second and the third analysis was performed with exclusion of m/z masses already assigned to peptide from target database in the previous LC–MS/MS analyses of the same sample.Mass tolerance for m/z exclusion was set to 10 ppm and retention time window to 3 min.Exclusion lists for the repeated analyses were generated using Proteome Discoverer – see Supplementary file 2 for details.Protein identification and quantification in the iTRAQ experiment was performed with MaxQuant 1.3.0.5. using Andromeda database search algorithm.The data analysis parameters were: Spectrum properties filter: Peptide mass range: 800–7000 Da.Peak filters: S/N=3.Input data: P. denitrificans protein database downloaded from http://www.uniprot.org with 5019 protein sequences, enzyme name: Trypsin, max.missed cleavage sites 2, taxonomy: P. denitrificans, strain Pd1222.Decoy database search: True.Peptide FDR 0.01.Protein FDR 0.01.Tolerances: 20 ppm/6 ppm precursor mass tolerance and 20 mDa fragment mass tolerance.Modifications: Dynamic: oxidation, succinylation.Static: iTRAQ 8-plex, methylthio.The statistical analysis of the proteomic data was performed with Perseus 1.3.0.4.Proteins identified by search against decoy database, commonly occurring contaminants and proteins identified only by a modification site were removed prior to statistical analysis.The data were log 2-transformed, missing values were replaced by normal distribution and inverse logarithm of the log 2-transformed fold changes was calculated.The resulting fold changes were considered as significant if higher than 1.50 or lower than 0.67.Moreover, data were statistically analyzed via two-sample t-test when effect of MV or ferB gene mutation was evaluated; protein level changes with p<0.05 were considered as statistically significant.Statistical analysis was not possible in the case of evaluation of proteins induced by MV specifically in FerB− or in wt strain because of low number of observations. | 3DLC protein- and peptide-fractionation technique combined with iTRAQ-peptide labeling and Orbitrap mass spectrometry was employed to quantitate Paracoccus dentirificans total proteome with maximal coverage. This resulted in identification of 24,948 peptides representing 2627 proteins (FDR<0.01) in P. 
denitrificans wild type and ferB mutant strains grown in the presence or absence of methyl viologen as an oxidative stressor. The data were generated for assessment of the role of the FerB protein in oxidative stress, as published by Pernikářová et al.: Proteomic responses to a methyl viologen-induced oxidative stress in the wild type and FerB mutant strains of P. denitrificans, J. Proteomics 2015;125:68-75. The dataset is supplied in the article.
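The Perseus workflow summarised above (log2 transformation, replacement of missing values from a normal distribution, two-sample t-tests, and the 1.50/0.67 fold-change cut-offs) can be approximated outside Perseus. The fragment below is a minimal sketch of that post-processing, assuming a hypothetical protein-level reporter-intensity export with columns wt_1, wt_2, wt_MV_1 and wt_MV_2; the file name, column layout and the down-shift/width factors used for imputation are illustrative, not values taken from the original analysis.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Hypothetical protein-level reporter-intensity table exported from MaxQuant:
# one row per protein, columns wt_1, wt_2, wt_MV_1, wt_MV_2 (iTRAQ channels).
df = pd.read_csv("proteinGroups_reporter.csv", index_col="protein")
log2 = np.log2(df.replace(0, np.nan))

# Impute missing values from a down-shifted, narrowed normal distribution
# (Perseus-style; the 1.8 / 0.3 factors are illustrative defaults).
for col in log2.columns:
    vals = log2[col]
    mu, sd = vals.mean(), vals.std()
    n_missing = vals.isna().sum()
    log2.loc[vals.isna(), col] = rng.normal(mu - 1.8 * sd, 0.3 * sd, n_missing)

wt = log2[["wt_1", "wt_2"]]
mv = log2[["wt_MV_1", "wt_MV_2"]]

# Effect of methyl viologen on the wild type: fold change and two-sample t-test
log2_fc = mv.mean(axis=1) - wt.mean(axis=1)
fc = 2.0 ** log2_fc
t, p = ttest_ind(mv, wt, axis=1)

result = pd.DataFrame({"fold_change": fc, "p_value": p})
regulated = result[((result.fold_change > 1.50) | (result.fold_change < 0.67))
                   & (result.p_value < 0.05)]
print(f"{len(regulated)} proteins pass the 1.50/0.67 fold-change and p<0.05 cut-offs")
```

For the contrasts that lacked enough observations for a t-test (the MV-induced proteins specific to one strain, as noted in the text), only the fold-change thresholds would apply.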
532 | Phenology of honeybush (Cyclopia genistoides and C. subternata) genotypes | Phenology is the study of the timing of plant growth phases that are biologically important, including the causes of timing and interrelation between growth phases of the same or different species.Phenological research assists in the monitoring of plant developmental stages during the growing season, understanding species interactions, and making comparable observations."Apart from phenology being important in agricultural practices, these growth phases may determine a species' ability to establish and persist in favourable and avoid unfavourable climatic conditions, thus shaping their distribution.Furthermore, phenology helps differentiate populations from different altitudes and latitudes, which may lead to variations in the morphological traits of widely distributed plant species, thus indicating survival ability and resource acquisition in heterogeneous and variable environments.Species with wider geographical ranges can therefore exhibit a larger intraspecific variation in morphology, physiology, phenology, and growth rate.In most cases, the timing of each phenological phase is regulated by mechanisms which act to ensure that each phase occurs in suitable conditions in its own period, although there is some interdependence between them.Phenological studies may include observation, recording, and interpretation of the timing of plants life history events.Presently, information on the phenology of Cyclopia is scarce.Cyclopia, also known as “honeybush”, is a largely unstudied leguminous genus of South African herbal teas restricted to the megadiverse Cape Floristic Region.Many Cyclopia species flower in spring, although some species e.g. C. sessiflora flower in winter.However, to date, no research has been conducted on the comparative phenology between and within Cyclopia species.Schröder et al. specified that plant phenophases can display inter-annual variability and large spatial differences attuned to environmental cues, individual characteristics such as genes, age, soil conditions, water supply, diseases, and competition.Phenological studies in ecosystems such as the Mediterranean climate of the Fynbos biome where Cyclopia naturally grows are important in order to identify phenological patterns within different areas.This paper assesses the phenophases of two commercially important Cyclopia species, namely, C. genistoides and C. subternata.The objective of the study was to determine the phenophases and the differences between and within these two Cyclopia genotypes in the Agriculture Research Council Infruitec-Nietvoorbij genebank collection."An understanding of the phenology of the two species will thus enhance farmers' ability to plan management practices in relation to the events occurring in the plant in order to schedule for pollination, irrigation, fertilisation, crop protection, harvesting, and other cultural practices at optimum times.In addition, this will assist in identifying suitable areas for commercial seed production purposes.The study was carried out on 2-year-old plants, at Elsenburg Research Farm, situated in the Stellenbosch area of the Western Cape.Twenty-two C. subternata genotypes were replicated 60 times in a completely randomised block design, while 15 C. 
genistoides genotypes were replicated 90 times.However, mortality of cuttings resulted in a usable study population of approximately 651 and 156 individuals, respectively.Clonal genotypes were selected and identified based on the area of collection.C. genistoides genotypes were prefixed “G” denoting the species name, whereas C. subternata genotypes were prefixed “S”.The subsequent alphabetical letter represented the site at which the genotype was initially collected before being rooted.Cyclopia genistoides, genotypes were collected from Gouriqau near Gouritsmond, Koksrivier near Pearly Beach and Toekomst near Bredasdorp.Cyclopia subternata, genotypes were collected from Tolbos near Napier, Kanetberg near Barrydale, Groendal near Loutewater, and Haarlem.Rooted cuttings of 12 months old were planted during 2011 in a sandy loam soil at a spacing for both species of 1 × 1 m.Both species where irrigated using drip irrigation in the dry summer seasons.No fertilisation or disease and pest control was applied since Cyclopia species are most frequently organically grown, and weeds were manually kept at minimal levels.Five randomly selected plants of each genotype were sampled.Where the number of plants was limited, other plants were sampled as in the case of STB101, GG3, GG34, GK8, and GG9.To distinguish between plants and ensure correct data recording, sampled plants were marked with chevron tape.Data comprised weekly observation of plants using visual estimates of phenophases.Observations were scored on a 0–100 scale with 10 increments.The criterion applied in all the phenophases was a threshold value of 10% per individual plant modified from Pudas et al., where a 10% budding and 90% flowering meant that 10% of the buds on the individual plant had developed into flowers by the observation date.Data were collected in 2013 and 2014.The different stages within a phenophase were not recorded."Data from local weather stations' were used to describe the climate near the observed species, for both years and also that of the provenances where clones were initially collected.Four phenophases were observed: budding, flowering, fruiting, and seed dispersal.The budding phase was determined when buds visually appeared.Bud set was followed by the bud development phase characterised by swelling of buds before bud maturation and where the bud was completely opened with petals still clustered together.The time to start of budding was not calculated because the study started after buds had already formed, especially in early genotypes.However, the duration of budding from first day of observation until first flowers appeared could be quantified.Flowering was defined as the stage when the reproductive parts were visible between unfolded or open flower parts.This stage marked the period in which pollinators would be attracted, leading to pollen dispersal and thus pollination.In Cyclopia, the flower opening period is terminated by petal withering or abscission, which coincides with formation of pods.This marks the start of the fruiting phase which is characterised by the remnants of the stigma and style still being attached to the young pod, which then increases in size, length, and width.Thereafter, the developing pod swells as a result of differential growth of different tissues.At the end of this stage, the pod matures by changing in colour from grey to brown.This signals the beginning of seed dispersal so as to avoid pod splitting and dispersal of seeds.Harvesting was defined as the stage when the fruit reached 
maximum size and changed to a reddish-brown colour, and continued until all pods were eventually harvested.An analysis of variance (ANOVA) was carried out to determine the genotypic variation in the occurrence of phenophases according to the month of observation, the mean time to the start of each phenophase, and the duration of each phenophase: flowering, fruiting, and seed dispersal.Fisher's least significant difference (LSD) test was used to compare group means for the interaction between species genotype and month of observation of the observed phenophases.The test was conducted at a nominal significance level of p = 0.05.All analyses were carried out in the statistical software package SAS 9.2.The PROC GLM function in this software was used to perform the ANOVA as well as the Fisher's LSD test.The standardised residuals were tested for normality using the Shapiro–Wilk test (a minimal sketch of this analysis workflow is given after this entry).The time of the start and duration of each phenophase are shown in Tables 2 and 3 for the two species.Time to start was calculated as the number of days from the first day of field observation to the start of each phenophase for each genotype plant.Phenophase duration was calculated as the number of days from the start of each phenophase up until the last day of observation for that phase.Grouping of genotypes according to phenophase pattern in Table 4 was based on ANOVA analysis of time to the start of each phenophase per genotype.The data used here were grouped according to the following criteria: 1) early—genotypes with shorter phenophases, 2) intermediate—genotypes with transitional development, and 3) late—genotypes with longer phenophases.Mean budding percentage of C. genistoides genotypes was significantly higher during July and August compared to September and October.However, budding of GG31, GK1, GK2, GK3, GK4, GK5, GK6, GK7, and GT2 was significantly higher in September compared to other genotypes.Budding remained significantly higher from July to September for GK1, GK2, and GK4 compared to all other genotypes.Mean flowering percentage of C. genistoides was concentrated between September and October, peaking in September compared to October.Flowering was significantly higher in September for GG3, GG53, GT1, and GG9, while it was delayed to October for GK2, GK1, GK4, GG31, and GT2, as depicted in Fig. 5B.Fruiting in C. genistoides started in September and extended to November, peaking in October, when values were significantly higher than those recorded in September and November.Fruiting, however, peaked in September for GK8, GG9, GG3, GT1, and GG34, while peaking in November for GK1, GK2, GK4, and GT2.Seed release from pods in C. genistoides started in November and ended in December when all genotypes had been harvested.Approximately 43.5% of C. genistoides genotypes were harvested in November compared to 100.0% in December.Seed dispersal significantly peaked in November for GG9, GG3, GK8, GT1, and GG34 compared to all other genotypes.Mean budding percentage of C. subternata peaked in July relative to August and September.In July, budding of 94.0–100.0% was observed in 19 of the 22 C. subternata genotypes.Budding of SKB4 and STB101 significantly peaked in August, while in SKB3, SKB15, SKB18, STB102, and STB103, budding was extended from July to August.Mean flowering of C. subternata genotypes peaked in September compared to August and October.Flowering significantly peaked in August only for SGD6, SKB13, SKB11, STB1, SGD9, SKB6, and SHL2 compared to all other genotypes.A significantly higher number of pods in C.
subternata were observed in October than in September and November.However, fruiting of 9 of the 22 genotypes peaked in September: SGD6, SKB11, SKB13, STB1, SKB6, SHL2, SGD7, SGD1, and SKB14.Fruiting of SGD6 did not differ between September and October, although SGD6 was significantly higher than all other genotypes.Seed release from pods in C. subternata commenced in October and increased to 79.2% by November, and by December all genotypes had been harvested.Seed dispersal started in October for SGD6, SHL2, SKB13, SGD1, and SHL3, peaking in November compared to all other genotypes.Mean budding of C. genistoides genotypes lasted 46.4 days.Budding was significantly shorter in four genotypes, GG9, GK8, GT1, and GG3, compared to other genotypes.However, there were no significant differences in budding of GG3 and GT1 relative to GG53 and GG34, while budding was significantly extended in GK4, GK1, GK2, GT2, GG31, and GK3.The mean time to start of flowering in C. genistoides was 51.2 days, although mean flowering duration lasted approximately 13.2 days.The mean time to start of flowering was shorter, at 35.0 days, for GG9, GK8, GG3, and GT1 compared to GT2.Although start of flowering was significantly delayed for GT2, it did not differ significantly from GK2.Mean flowering duration lasted 10–15 days for a majority of the genotypes.However, GG53 had a significantly longer duration compared to the other genotypes, although GG53 did not differ significantly from GK7.Fruiting in C. genistoides started after 61.5 days and lasted 45.6 days.Fruiting started earlier in GG9, GK8, GG3, and GT1.However, start of fruiting for GT1 was not significantly different from GG34 and GG53.Start of fruiting in GK2 was delayed to 78.0 days, although it did not differ significantly from GK4, GK1, GT2, GG31, GK5, and GK6.Compared to the fruiting duration of 51.0–56.0 days in GG9, GG3, and GK8, fruiting was significantly shorter in GK2, GK5, GK7, GK3, GT2, GG31, and GK4.No significant differences were, however, observed between fruiting of GG3, GG9, GT1, GG53, and GK1.Seed release from pods in C. genistoides started after 110.0 days and lasted 4.3 days.Seed dispersal started earlier for GG9, GK8, GG3, GT1, GG53, and GG34 as shown in Table 2.Seed release was delayed to 126.0 days in GK1, although it did not differ significantly from GT2, GK2, and GK4.Duration of seed dispersal was shorter for GG9, GG34, GT2, GK3, and GK1, being approximately 1.0 day, compared to GG31.Although seed dispersal was extended for GG31, it did not differ significantly from GG53, GK4, GK8, GG3, GK2, GT1, GK6, GK7, and GK5.The budding duration for C. subternata genotypes lasted 32.5 days.Budding was shorter for SGD6 compared to other genotypes.Budding was significantly extended in STB101 compared to other genotypes.Mean budding of all other genotypes was generally between 30.0 and 35.0 days, although significant differences between genotypes were observed.Generally, flowering in C. subternata started after 25.9 days and lasted 24.3 days.However, flowering was earlier for SGD6 compared to other genotypes.In contrast, start of flowering was delayed in STB101, STB102, SKB7, SKB4, SKB5, and SKB15.Duration of flowering was shorter in STB101 compared to the other thirteen genotypes, apart from SKB7, SKB4, SKB18, STB102, SKB14, SKB3, SKB5, and SKB15.Flowering duration was extended in SKB13 and SGD6 compared to other genotypes.The mean time to start and duration of fruiting of C.
subternata genotypes were 46.1 and 52.5 days, respectively.Fruiting was earlier for 13 of the 22 genotypes studied.Start of fruiting was significantly delayed in STB101, STB102, and SKB5 compared to other genotypes.Start of fruiting for the remaining six genotypes, namely SGD2, SHL3, SKB7, SKB3, SKB4, and SKB15, was 46.8–49.8 days.Fruiting duration was shorter for STB101 and STB102, at 41.0 days, compared to the other eighteen genotypes, but did not differ significantly from SGD6 and SGD3.Fruiting was extended in SGD7, SKB11, SGD1, and STB1, with that of the remaining twelve genotypes between 50.0 and 56.0 days.Seed release from pods in C. subternata genotypes started after 96.3 days and lasted 8.9 days.However, seed dispersal started earlier for SGD6 and SKB13 as indicated in Table 3.Start of seed dispersal was, however, delayed in SGD2, SKB3, STB101, and SKB4.Time to start of seed dispersal in SKB13 did not differ significantly from the remaining 15 genotypes, which ranged between 93.0 and 99.2 days.Seed release duration in STB101 differed significantly from the other sixteen genotypes, but did not differ significantly from SGD2, SKB3, SKB4, SKB18, and STB102.The phenophases of the Cyclopia genotypes were determined using the mean start time from the first field observation day.Using the observational qualitative analysis, phenology of the Cyclopia species can be categorised into three groups: early, intermediate and late genotypes, with start time and duration of each phenophase as summarised in Table 4.Mean start time for each genotype phenophase was determined for flowering, fruiting, and seed dispersal.Start time for budding phenology was, however, not determined because buds were already visible on the individual plants when the study was initiated.Generally, the studied phenophases of the Cyclopia species (budding, flowering, fruiting, and seed dispersal) peaked in the months of July, September, October, and December, respectively.However, the peak of budding in C. genistoides extended to August.In C. subternata, flowering, fruiting, and seed dispersal started in the 1st week of August, 1st week of September, and 4th week of October, extending to the 1st week of November, mid-November and mid-December, respectively.In contrast, flowering, fruiting, and seed dispersal in C. genistoides started in the last week of August, mid-September, and 1st week of November, extending to the end of October, 1st week of November, and last week of December, respectively.Interpretation of the variation in Cyclopia phenology among and within species poses a challenge since these species, and the factors known to affect their phenology, have not been reported prior to this study.However, a number of factors influence the phenology of species, thus varying the time to start and duration of each phenophase among and within plant species.Among the many factors, the response of plants to the chilling and heat units required to break bud dormancy influences the timing and duration of phenophases, representing a critical ecological and evolutionary trade-off between survival and growth.Therefore, variation in the timing of reproductive phases due to differences in species life history will influence the intensity of the reproductive phase.Accordingly, the mean time to start of flowering for C. subternata was shorter than for C. genistoides, as it was for fruiting and seed dispersal.However, the duration of flowering, fruiting, and seed dispersal was shorter in C. genistoides compared to C.
subternata.If the response to cold and warm temperatures is ascertained to cause variation in the timing and duration of phenophases in the Cyclopia species, the chilling requirement of C. subternata to release bud dormancy could be much smaller, thus requiring a longer period of warming and in turn lengthening the phenophases' duration.This probably allows enough time for biomass accumulation for fruit and seed set compared to C. genistoides.In accord with the study findings, phenology, although it started earlier, was longer in C. subternata than in C. genistoides.As a result, in species and/or genotypes where phenophases are shorter, the return on investment is expected to be faster than in plants where phenophases are delayed.On-going studies have also indicated that C. subternata has higher flowering, fruit set, and seed production than C. genistoides.Furthermore, factors such as water content, soil type and nutrients are also significant in varying phenology.Soil-stored nutrients especially have varying effects on the phenology of plant species, as reported in Betula pendula Roth.Therefore, a change in nutrient availability is expected to cause a response in plant growth.It is difficult to correlate phenology with water content and soil nutrients in this study since the two species were grown without additions of artificial nutrients and the water content was not quantified, since plants were irrigated during periods of rainfall scarcity.However, these species may possess conservative mechanisms that allow them to withstand water shortages and scarcity of soil nutrients, a trait mostly associated with species growing in the Mediterranean climate, although this varies for the different species.Potentially, the fertiliser requirements of the two species when cultivated, and their ability to accrue and utilise resources stored in the soil or added as nutrients, may differ, influencing phenology differently and thus affecting input costs and seed yield when commercially cultivated.Among the Cyclopia species genotypes, phenophases were prolonged in genotypes such as SKB5, SKB3, SKB15, GG31, GK2, GK1, GK4, and GK6; others had clearly defined peaks, i.e. GK3, GK7, GK5, GT2, SGD9, SGD7, SGD1, and STB103; others occurred earlier, i.e.
GT1, GG9, GG3, GK8, GG53, GG34, SGD6, SKB13, SKB11, SHL2, and STB1; and, lastly, phenophases were delayed in GK2, GT2, GK4, GK1, STB102, SKB15, and SKB5.Variability between the Cyclopia genotypes could be an indication of high inter-annual variability in phenology due to geographical location, which influences the length of the growing season owing to a rise in spring air temperature, which amongst other factors controls the timing of phenophases in plants.Individual plants therefore have phenophase timing that is attuned to the varying conditions they experience in their range.This assists in reducing competition for pollinators and other resources in order to maintain co-existence in diverse plant communities.In natural stands, co-existence of different plant species may lead to hybridisation, which creates genetic diversity and thus the formation of new species or increased genetic variation within species.However, when hybridisation arises as a result of human disturbances or habitat fragmentation, it will result in a compromise in species’ biodiversity, causing extinction of closely related species, especially when their distribution is restricted.In cases where related species occur at different altitudes over relatively short distances, it may be possible that hybrid zones can be formed between them, which serve as pathways for interspecific gene flow.Under such circumstances, analysing hybrid zones assists in understanding the genetic basis of adaptation to conditions at different altitudes, and the maintenance of species divergence in the face of gene flow.The effect of geographical location on phenology of the Cyclopia species was not ascertained since all cuttings from the different provenances were planted within an identical environment, curtailing environmental variation.Therefore, phenology differences could potentially be due to the geographical locations at which the cuttings were initially collected, in accordance with Vitasse et al.
and Azad.The altitude of Elsenburg, where the species genotypes were monitored for their phenophases, differed from that of the areas where clonal material was initially collected before being rooted and grown at Elsenburg.Therefore, cultivation of Cyclopia genotypes from Haarlem, Groendal, and Kanetberg may thus be expected to advance their phenology at the Elsenburg site compared to those from Tolbos, Koksrivier, Toekomst, and Gouriqua, due to altitudinal differences between the sites.In accordance, phenophases were advanced in SGD6, SKB13, and SHL2 compared to STB101, GK1, GT2, and GG31.However, variability was also observed for genotypes from the same geographical area, such as STB1 and STB103, or GT1 and GT2, which could likely be genetically controlled within species.Other factors apart from genetics may potentially be influencing phenophase variability, since in other genotypes such as GG3 and GG9, GK1 and GK2, and SKB3 and SKB4, no differences were observed.Therefore, this could likely be a result of reduced environmental variation when different clones from different origins are grown under the same environment.During the year 2013, observation and recording of phenology data were prolonged compared to 2014.This could be ascribed to the fact that in 2013, the period of cold and winter precipitation was extended, which could have lengthened the bud development phase and thus the growing season.The average maximum and minimum temperatures and rainfall recorded indicated a slight temperature increase and rainfall diminution in 2014 compared to 2013, which may explain why flowering in 2013 was delayed, beginning during the 35th week compared to the 32nd week in 2014 in the case of SGD6.Evidently, seed dispersal of pods of both species was delayed in 2013 compared to 2014.Seed dispersal commenced in the 47th and 49th weeks in 2013 compared to the 44th and 45th weeks in 2014, respectively.The acceleration of phenophases by approximately three weeks in 2014 compared to 2013 echoes Fenner, who stated that the triggering of a phenophase response is effected by the ability of the plant to detect environmental cues.Cultivation of Cyclopia species in areas, or during years, with warmer conditions accompanied by low rainfall may thus be expected to accelerate and shorten their reproductive development.In contrast, in areas or years when conditions are colder and rainfall is higher, the start dates of phenophases may be extended.Consequently, the prevailing weather conditions of the environment where Cyclopia species are cultivated may play a significant role in altering phenology and thus seed yield, time of harvest, and harvest strategies, since genotypes will vary in their maturing of seed.Differences in phenological pattern in Cyclopia due to climatic differences will thus have implications for their natural fecundity, since plant seed set is regulated by the timing of switches between vegetative and reproductive phases.Early and late flowering individuals may thus be more prone to seed predation than mid-season individuals due to the saturation effect at the peak of abundance, as suggested by Fenner.Therefore, the influence of phenology on fecundity as a result of climatic differences is expected to result in unpredictable selective pressures favouring early, average, or late individuals in different years, hence variations in their pollinator or seed predator abundance.The current scale used for observation could have also caused
variations in the length of each phenophase between and within species, since no observations were recorded during adverse weather conditions.Therefore, the observational intervals could be modified for future studies to two or three observations per week, or daily, in order to reflect variations in the phenophases.Furthermore, coding systems such as the Biologische Bundesanstalt, Bundessortenamt and CHemical Industry (BBCH) scale, as well as physiologically based mathematical models with a statistical approach, can be used to accurately predict phenophases in Cyclopia plants.Plants monitored for phenology studies were also of different sizes and ages, which could have contributed to differences in timing of phenophases.Noticeably, the seed dispersal duration of STB101, GG9, GG34, GT2, and GK1 was 0.0 days.This could have been due to a lack of sample plants, small plant size, or a combination of both, leaving little scope for variation.The majority of these genotypes were characterised by small plant size and low height, with very few, short inflorescences.Having fewer and smaller plants with few, short inflorescences could have allowed harvesting of pods to be completed between observational days, hence the zero value.In contrast, genotypes with sufficient sampling and larger plants with numerous inflorescences extended harvesting, causing variation probably due to varying ripening stages of pods per inflorescence per plant.Therefore, use of individual plants with similar growth form, size, and age, and improved management practices, may be expected to enhance outcomes of this study.However, the challenge of achieving the latter remains high, not only due to insufficient study sites and plant material, but as a direct result of the huge variations in germination currently observed between genotypes, thereby limiting the availability of large numbers of rooted plant material of similar age.Furthermore, natural disasters such as fires occur almost annually, destroying or partially destroying Cyclopia plantations.Preliminary findings indicated differences in phenology between and within species and that time to start of phenophases is inversely proportional to phenophase duration.This is the first practical study that confirms that flowering in C. subternata and C.
genistoides occurs between September and October, although differences within species are inevitable.Studies are needed to determine the exact start of budding in order to refine the peak period and duration of budding since, in this study, budding was already initiated by the first field observation day.Certain phenophases such as bud development and fruiting were observed to have longer developmental phases, while the flowering and seed dispersal phenophases were shorter.However, prevailing weather conditions could play a significant role in altering seed dispersal since it appears that very hot and dry conditions accelerate pod and seed maturation.These findings serve as a platform for investigating factors affecting the reproductive phases, for morphological and physiological studies in Cyclopia species, and to assist farmers and researchers, especially breeders, in timing crop requirements and management practices.The study provides a future opportunity to study the potential relation of phenology to root length, fruit size, fecundity, geographical location, climatic conditions, nutrients, irrigation, photoperiod, chilling units, plant stand characteristics, plant size and age, pollination/pollinator mechanisms, and photosynthetic activity, among other factors which affect timing and/or switches in plant species phenology.Future observations over a number of years and across sites on some of these factors could yield better results and understanding of the phenological dynamics of this South African herbal tea genus, as could the use of more reliable phenological data and models developed for larger geographical areas rather than for a single location. | No information is available on the phenological phases between and within genotypes of Cyclopia (honeybush). This information is important to understand the timing of plant development and growth, species co-existence, and growth dynamics of the genus. The study spanning 2 years determined the monthly genotypic variation, the time of the start of a growth phase, and duration of budding, flowering, fruiting, and seed dispersal in C. subternata and C. genistoides. The results indicated differences in phenology between and within species and that the initiation period of a phenological phase among individuals is inversely proportional to the duration of that phenophase. Budding, flowering, fruiting, and seed dispersal peaked in the months of July, September, October, and December, respectively. Compared to C. genistoides, C. subternata genotypes had a shorter time (days) from start of observation to start of flowering (25.9 versus 51.2), fruiting (46.1 versus 61.5), and seed dispersal (96.3 versus 110.0). However, the duration (days) of flowering (13.2 versus 24.3), fruiting (45.6 versus 52.2), and seed dispersal (4.3 versus 8.9) was shorter in C. genistoides. Using observational qualitative analysis, phenology of these Cyclopia species was categorised into three groups: early, intermediate, and late. The findings serve as a platform for investigating factors affecting the reproductive phases, morphology, and physiology in these and other Cyclopia species. It will assist farmers and researchers to time crop requirements and management practices, thus having practical implications in the cultivation of the species. |
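The statistical workflow described in entry 532 above (a genotype-by-month ANOVA run with PROC GLM in SAS 9.2, Fisher's LSD comparisons at p = 0.05, and a Shapiro–Wilk test on the standardised residuals) could be reproduced in open-source tools. The sketch below is one minimal way to do so in Python; the file name phenology.csv and the column names genotype, month and budding_pct are illustrative assumptions, not part of the original study, and the LSD step is the usual unprotected pairwise t-test on the model's residual mean square.

```python
# Hypothetical re-implementation of the genotype-by-month ANOVA and Fisher's LSD
# comparisons described above (the original analysis used SAS 9.2 PROC GLM).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

# Illustrative input: one row per plant per month, with the scored phenophase (0-100%).
df = pd.read_csv("phenology.csv")  # columns assumed: genotype, month, budding_pct

# Two-way ANOVA with interaction, analogous to PROC GLM.
model = smf.ols("budding_pct ~ C(genotype) * C(month)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Shapiro-Wilk test on the standardised residuals, as in the original analysis.
print(stats.shapiro(model.resid / model.resid.std()))

# Unprotected pairwise t-test on the residual mean square = Fisher's LSD at alpha = 0.05.
def fisher_lsd(data, group_col, value_col, g1, g2, mse, df_resid, alpha=0.05):
    m1, n1 = data.loc[data[group_col] == g1, value_col].agg(["mean", "count"])
    m2, n2 = data.loc[data[group_col] == g2, value_col].agg(["mean", "count"])
    se = (mse * (1.0 / n1 + 1.0 / n2)) ** 0.5
    lsd = stats.t.ppf(1 - alpha / 2, df_resid) * se
    return abs(m1 - m2) > lsd  # True if the two group means differ significantly

mse, df_resid = model.mse_resid, model.df_resid
print(fisher_lsd(df, "genotype", "budding_pct", "SGD6", "STB101", mse, df_resid))
```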
533 | Computational and experimental data on electrostatic density and stacking tendency of asymmetric cyanine 5 dyes | The dataset of this data article provides information on the electron densities and stacking interactions of various cyanine 5 dyes.Figs. 1–3 show the different molecular structures, electron densities and stacking behavior, respectively.The evaluated fluorophores are depicted in Fig. 1.These structural analogues were analyzed to determine whether molecular substituents on the fluorophore influence their utility as an imaging label.Spartan modeling software was used to calculate the most stable conformation, i.e., lowest potential energy, and electron distribution for all synthesized dyes.Calculations were performed on the dyes including their counterions Cl− or K+ to allow for comparison at neutral overall charges.Although the K+ ion is not included in the electron density cloud calculated by Spartan, its electrostatic effect on the sulfonate was included.The stacking tendency of compounds 2–11 in DMSO, PBS and H2O was evaluated by measuring the dyes’ absorbance at various concentrations.Previous reports describe the stacking of methylene blue and this dye is therefore not included in this Data in Brief.The absorption spectra were assessed at concentrations ranging from 0.3 to 100.0 µM as the stacking characteristic of a dye is concentration-dependent.A hypsochromic or bathochromic shift is to be expected when dyes aggregate by H- or J-stacking, respectively.In contrast to previous reports using symmetrical Cy7 analogues, the asymmetrical Cy5 analogues revealed no or only a minor stacking tendency for compounds 2–11.It is interesting to note that for compounds 2–11 the height of the shoulder peak is found to be constant at around 0.07 arbitrary units (AU) in DMSO and H2O, and around 0.08 AU in PBS.The characteristic shoulder peak originates from vibronic coupling, i.e., intramolecular electronic transitions.Although the absorption efficiency, i.e., the molar extinction coefficient, varies upon structurally altering the dye's molecular structure, the persistent height of the shoulder peak indicates that vibronic coupling was not affected by these alterations.Synthesis of compounds 2–11 was performed according to previous reports.Absorbance spectra were measured with the Ultrospec 3000 UV–Visible spectrophotometer according to previously described methods.Stock solutions of the dyes were prepared in DMSO-d6 containing ethylene carbonate and were stored at 4°C before further use as previously described.Electrostatic potential mapping of the dyes was performed in Spartan ’16 using the semi-empirical PM6 method in the gas phase on the N-methylamide form of the fluorophores.The DMSO-d6 stock solutions of the dyes were diluted to 100 µM in either DMSO, H2O or PBS.Subsequently, dilutions were made from these 100 µM solutions.Further dilution in the same solvent allowed for a final concentration range of 100.0, 50.0, 25.0, 12.5, 5.0, 2.5, 1.3, 0.6, and 0.3 µM.To keep the signal below 1.5 AU, absorption spectra of different concentrations were measured using different cuvettes: for concentrations ≤5.0 µM, 1 mL disposable plastic cuvettes were used; for concentrations ≥12.5 and ≤50.0 µM, quartz cuvettes were used; and for 100.0 µM concentrations, two glass microscopy slides separated by a PET plastic spacer were used.Spectra were measured at t = 10 min after preparation and normalized for cuvette path length and concentration (a minimal sketch of this normalization is given after this entry).The research was
supported by the European Research Council under the European Union's Seventh Framework Programme FP7/2007–2013, and a Netherlands Organization for Scientific Research STW-VIDI grant. | Far-red dyes such as cyanine 5 (Cy5) are gaining interest in (bio)medical diagnostics as they have promising features in terms of stability and brightness. Here, the electrostatic density and stacking tendency in different solvents of nine systematically altered asymmetrical Cy5 dyes are reported. In addition to this, the influence of molecular alterations on the vibronic coupling was reported. The data presented supplement the recent study “The influence of systematic structure alterations on the photophysical properties and conjugation characteristics of asymmetric cyanine 5 dyes” (Spa et al., 2018). |
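Entry 533 above normalises each absorption spectrum for cuvette path length and dye concentration before looking for concentration-dependent (H- or J-stacking) peak shifts. The snippet below is a minimal sketch of that Beer–Lambert bookkeeping; the file name, the column-per-concentration layout and the specific path lengths assigned to each cuvette type are placeholders, since the entry does not state them all.

```python
# Illustrative normalisation of a dilution series of absorption spectra
# (Beer-Lambert law: A = epsilon * c * l, hence epsilon = A / (c * l)).
import pandas as pd

# Assumed input: a wide table indexed by wavelength, one column per concentration in uM.
spectra = pd.read_csv("cy5_dilution_series.csv", index_col="wavelength_nm")

def path_length_cm(conc_um):
    """Cuvette path length per concentration range; the values here are placeholders."""
    if conc_um <= 5.0:
        return 1.0    # 1 mL disposable plastic cuvette (assumed 1 cm)
    if conc_um <= 50.0:
        return 0.1    # quartz cuvette (assumed 1 mm)
    return 0.012      # two glass slides with a PET spacer (assumed ~120 um)

for col in spectra.columns:
    conc_m = float(col) * 1e-6                                   # uM -> mol/L
    epsilon = spectra[col] / (conc_m * path_length_cm(float(col)))
    # A systematic blue (hypsochromic) or red (bathochromic) drift of the peak with
    # increasing concentration would point to H- or J-stacking, respectively.
    print(f"{col} uM: peak {epsilon.idxmax()} nm, eps_max {epsilon.max():.3g} L/(mol*cm)")
```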
534 | Structure of the UspA1 protein fragment from Moraxella catarrhalis responsible for C3d binding | Moraxella catarrhalis is a gram-negative bacterium that infects humans exclusively.Up to 80% of children under 2 years old carry M. catarrhalis: this rate drops to 10% for older children and to 5% for healthy adults, and increases again in the elderly.Although for many years it was considered to be only a commensal, M. catarrhalis is now classed as a pathogen.After Streptococcus pneumoniae and Haemophilus influenzae, it is the third most common pathogen causing acute otitis media in children.In adults with chronic obstructive pulmonary disease, M. catarrhalis induces not only upper but also lower respiratory tract infections, causing infections as severe as septicaemia, meningitis or endocarditis in immunocompromised patients, and pneumonia in the elderly.To cause infections, M. catarrhalis expresses different adhesion macromolecules that act as virulence factors in key aspects of bacterial pathogenesis.The most important ones are outer membrane proteins such as M. catarrhalis adherence protein, protein CD, M. catarrhalis filamentous Hag-like proteins, M. catarrhalis immunoglobulin D binding protein/hemagglutinin, and ubiquitous surface proteins.The “ubiquitous surface protein” family consists of three proteins: UspA1, UspA2, and UspA2H.UspA2H is a hybrid of the first two; it contains a UspA1-like N-terminal domain and a UspA2-like C-terminal domain.M. catarrhalis attaches to epithelial cells via UspA1, which binds carcinoembryonic antigen-related cell adhesion molecule 1, and as a consequence suppresses the human inflammatory response.UspA1 also binds the extracellular matrix proteins laminin and fibronectin, whereas UspA2 binds preferentially to laminin, fibronectin, and vitronectin.Another important function associated with UspA proteins is serum resistance.Both UspA1 and UspA2/A2H have been proposed to bind the C3d domain of C3, inhibiting both the classical and alternative pathways of the complement cascade.Furthermore, UspA1 and UspA2 appear to bind to the complement inhibitor C4b binding protein in a dose-dependent manner.Finally, UspA proteins block generation of the anaphylatoxin C3a, which may result in decreased inflammatory reactions.This last would be consistent with binding C3d.UspA proteins belong to the trimeric autotransporter adhesin (TAA) family.TAAs are anchored in the bacterial outer membrane by a 12-stranded β-barrel comprising four strands from each monomer, from where the passenger domain is exposed to the extracellular environment.The passenger domain consists of an N-terminal β-strand head domain followed by the neck domain and a coiled-coil stalk.Although no full-length structure of UspA1 is available, there are structures of three UspA1 fragments.Two structures together give a fragment comprising UspA142–366, containing the head, the neck, and 33 amino acids of the stalk domain.The head domain consists of 14-to-16 residue repeats placed parallel to each other forming a trimeric left-handed parallel β-roll, first identified in YadA.The neck is a positively charged region of the UspA1 structure forming large loops; it belongs to the long neck type as found in SadA or BpaA.The structure of part of the stalk of UspA1 has been solved.It is supposed to bind CEACAM1.It reveals a continuous left-handed trimeric coiled-coil stalk with, as expected, an underwound periodicity of 3.5 residues per turn, characteristic of TAA proteins.Taken together, currently available crystallographic
structures of the UspA1 molecule cover 464 out of 821 amino acids, not much more than 50%.So far, no high-resolution structure of UspA1 in complex with its ligands is available.However, based on small-angle X-ray scattering (SAXS), molecular modelling and mutagenesis studies, models of UspA1-ligand complexes have been proposed.The CEACAM1 binding site appears to be within the segment 578–597 of the stalk domain containing His584, which is the only potentially charged residue in that region surrounded by hydrophobic residues.Mutagenesis and binding studies strongly indicated that Ala568, 588 and 509, Leu583, and Met586 are crucial residues involved in CEACAM1 binding.In addition, SAXS measurements suggested that UspA1 bends upon CEACAM1 binding.Similarly, SAXS and binding data suggested that fibronectin binds at the base of the β-roll head domain and causes bending of UspA1 at the interaction site.Finally, the Riesbeck group proposed, based on ELISA binding assays between C3d and truncated UspA1 fragments, that C3d binds the coiled-coil stalk in the 299–452 region.In this study, we present the crystallographic structure of UspA1299–452, which contains the putative C3d binding site.We performed multiple binding studies between recombinant C3d and UspA1299–452, but were not able to demonstrate saturation binding; the Kd appears to be 140.5 ± 8.4 μM, which is not consistent with physiological concentrations of C3.We amplified UspA1299–452 using the UspA1 gene of Moraxella catarrhalis strain ATCC 25238 as a template and the following primers: forward GCCGCATATGAAAACTGGTAATGGTACTGTATCT containing an NdeI restriction site, and reverse GGCGAAGCTTGCTGCCGCGCGGCACCAGATCAATGAGGCGACCGCTTA containing a thrombin cleavage site and a HindIII restriction site.The PCR product was digested with NdeI and HindIII restriction enzymes and ligated into the pET22b plasmid using T4 ligase.For C3d, we used a construct previously received by our laboratory and recloned it into the pET22b vector by restriction-free cloning using the following primers: forward: CTTTAAGAAGGAGATATACATATGCATCATCATCATCATCACAGCAGCGGCGAAAACCTGTATTTTCAGAGCGA and reverse: TCGGGCTTTGTTAGCAGCCGGATCTCAGCGGCTGGGCAGTTGGAGGGACAC.Secondly, we reversed the E1153A point mutation using the primers: forward: GTTCTCATCTCGCTGCAGGAAGCTAAAGATATTTGCGAG and reverse: CTCGCAAATATCTTTAGCTTCCTGCAGCGAGATGAGAAC.The last step was to remove the free Cys1010, changing it to Ala using the primers: forward GACCCCCTCGGGCGCGGGGGAACAGAAC, and reverse GTTCTGTTCCCCCGCGCCCGAGGGGGTC.Point mutations were introduced by QuickChange® Site-Directed Mutagenesis.For expression, positive plasmids carrying C3d or UspA constructs were transformed into BL21 E.
coli chemically competent cells, plated on LB-agar plates supplemented with 50 μg/ml ampicillin and incubated overnight at 37 °C.For large-scale expression of UspA1299–452, clones from the plate were inoculated into 5 ml LB media supplemented with 100 μg/ml ampicillin, grown O/N at 24 °C, diluted into 500 ml of fresh LB with 100 μg/ml ampicillin and then incubated at 37 °C, 220 rpm shaking.Protein production was induced with 1 mM isopropyl β-d-1-thiogalactopyranoside.For large-scale expression of C3d, clones from the plate were inoculated into 5 ml LB media supplemented with 100 μg/ml ampicillin and grown O/N at 18 °C.They were then diluted into 50 ml of fresh LB with 100 μg/ml ampicillin and incubated at 24 °C, 220 rpm shaking for 4 h, followed by another dilution into 500 ml of fresh LB with 100 μg/ml ampicillin, grown at 24 °C until they reached OD = 0.8 and protein production was induced with 1 mM IPTG.For both proteins, expression was continued for 4 h, after which bacteria were collected by centrifugation for 20 min at 5000×g at 4 °C.The supernatant was discarded and pellets resuspended, for UspA1299–452 in 10 ml of buffer A, and for C3d in 10 ml of buffer B.Proteins were either purified directly or cells were flash-frozen in liquid nitrogen and stored at −80 °C for later use.Cells were disrupted using the Emusiflex-C3 for 10 min at 1500 psi, then centrifuged for 45 min at 18000×g at 4 °C; the supernatant was transferred to a 50 ml Falcon tube and incubated with 2 ml of NiNTA beads for 30 min with gentle shaking and loaded onto a 20 ml gravity flow column.Beads were washed with 10 column volumes of buffer A or buffer B.For UspA1299–452, there was an additional washing step with 5 CV of 20 mM NaxHxPO4, pH 8.0, 500 mM NaCl, 50 mM imidazole.The protein of interest was then eluted with 3 CV of 20 mM NaxHxPO4, pH 8.0, 250 mM imidazole and either 300 mM or 500 mM NaCl.Elutions containing the protein of interest were pooled and loaded onto Superdex 200 size exclusion chromatography column with 1 × PBS for C3d purification and 20 mM HEPES, pH 8.0, 500 mM NaCl for UspA1299–452 purification.Fractions containing the protein of interest were concentrated with Amicon® Ultra 4 ml concentration filter with a molecular mass cutoff of 10 kDa, flash-frozen in liquid nitrogen and stored at −80 °C.The binding of C3d to UspA1299–452 was studied by thermophoresis using a Monolith NT.115.Before the experiment, both proteins were exchanged into 0.5 × PBS, which was used as the reaction buffer during the entire experiment.Preparation, labelling, dilutions and initial measurements were performed according to the manufacturer’s instructions.The concentration of fluorescently labelled UspA1299–452 was kept constant at 50 nM and the C3d was titrated from 23 nM to 750 μM.Measurements were performed with 70% LED and 40% MST power.Sample preparation and measurements were repeated three times for statistical relevance.The difference in normalized fluorescence was plotted against concentration of unlabelled C3d and the Kd calculated using equations provided in the software for data analysis from thermophoretic measurements.Final graph was prepared using Prism program.For crystallization trials, UspA1299–452 and C3d were concentrated to 2.4 mg/ml and 10.7 mg/ml respectively using Amicon® Ultra 4 ml concentration filters with a molecular mass cut-off of 10 kDa; as UspA1 is a trimer, proteins were mixed in 1 to 3 molar ratio and crystallization drops of 200 nl were set up in 96-well MRC crystallization plates using a 
mosquito LCP®.Helsinki Random I and II screens, our local modifications of the classic sparse matrix screens, yielded initial hits from conditions: HRI: 30% MPD, 0.1 M Na-Cacodylate, pH 6.5, 0.2 M Mg-Acetate; and 18% PEG8000, 0.1 M Na-Cacodylate, pH 6.5, 0.2 M Zn-Acetate; HRII: 3.4 M Hexanediol, 0.1 M Tris, pH 8.5, 0.02 M MgCl2.Grid screens prepared manually around the initial hits were used to optimise crystal growth and diffraction.For final optimization, hanging drops were set up manually using the following grid screen: 0.5–4.0 M 1,6-Hexanediol, 0.1 M Tris-HCl, pH 8.5, 20 mM MgCl2.The 2.9 M 1,6-Hexanediol present in the well solution also served as a cryoprotectant when flash freezing crystals in liquid nitrogen.Data were collected on the ADSC Quantum Q315r detector at beamline ID14-4 at the European Synchrotron Research Facility in Grenoble, France.Images were processed to 2 Å resolution in space group P21 using XDS, the quality of the data assessed with phenix.xtriage and data anisotropy analysed using the UCLA-DOE Diffraction Anisotropy Server.Diffraction data were reprocessed to 2.5 Å based on F/σ values for each crystal direction obtained from anisotropy analysis.The structure was solved by molecular replacement using Molrep in the CCP4 package with the structure of UspA1165–366 as a model.Model building was done in Coot followed by refinements in Refmac5 and phenix.refine.Finally, structure quality was assessed using the MolProbity webserver and the Phenix software package.The structure was analysed using the daTAA server for TAA structure analysis, and HBPlot to analyse secondary and tertiary protein structure.The coiled-coil characteristics were evaluated using programs TWISTER and SOCKET.A model of full-length UspA1 was built using CCBuilder.Figures of UspA structures were prepared using PyMol.After data processing with XDS we truncated the processed data to 2.0 Å resolution, based on CC values exceeding 50%.However, after solving the structure and starting refinement we noticed that Rfree remained around 30–35%, which was unexpectedly high.One possible explanation was that C3d was still missing from the model, as it was present in the crystallization solution.However, we did not see any unbuilt density that would indicate that UspA1299–452 crystallized in complex with C3d, and the Vm was 3.54 Å3 Da−1, which was not consistent with the presence of another molecule.The second option was data anisotropy, which was confirmed by the UCLA-DOE LAB Diffraction Anisotropy Server.Plots of F/σ for the three reciprocal space axes a*, b*, and c* against resolution give the maximum resolution for which F/σ exceeds 3 in each direction.For our dataset, the resolution limits along a* and b* were 2.0 Å, but only 2.5 Å along c*.As a next step the server performs ellipsoidal truncation, anisotropy scaling and applies negative isotropic B-factor correction.The automatically generated corrected structure factors, however, resulted in overall completeness lower than 90%; 35% between 2.10 and 2.05 Å, 68% between 2.29 and 2.22 Å.We therefore chose to cut our data manually to 2.5 Å.Using these data we solved the structure and refined it to acceptable R factors for the resolution: Rfree = 27.20% and Rwork = 21.22%.The asymmetric unit of the crystal contained one trimer of UspA1299–452, as is typical of a trimeric autotransporter adhesin.It consists of part of the neck domain and the stalk.The neck domain of UspA1299–452 is a long neck containing, towards its end, the characteristic DAVN motif that 
mediates the transition from the left-handed parallel β-roll headgroup to the α-helical stalk.The neck ends with the typical QL sequence: Leu337, the last residue of the neck, is also the first residue of the stalk domain that follows.The stalk domain is a left-handed coiled-coil built mostly from heptad repeats with the typical arrangement of repeating hydrophobic amino acids separated by polar residues in the seven-residue abcdefg pattern hpphppp.There are, however, two disruptions from the heptad pattern: after Gly396 and Gly442 in the d position, there is a Leu that occurs structurally at position a, instead of the canonical polar amino acid at position e to complete the abcdefg heptad pattern.In addition to the Leu, another three amino acids are inserted in these sites, making them an 11-residue pattern.This changes the periodicity of the coiled-coil from heptad to hendecad, which is consistent with the daTAA server predictions.Typically, hydrophobic residues in positions a and d form the core of the coiled-coil.In our structure the majority of the residues in position a are hydrophobic: 16 out of 17 amino acids are either Ile, Leu or Val, with one amino acid in position a being polar, Gln425.On the other hand, 9 of 17 of the d positions are occupied by Leu, 2 of 17 by Gly where the heptad pattern changes, while the other six positions are occupied by the polar amino acids Asn, Gln407, and His428.The side chains of Asn347, 375, 382 and 414 in the d positions face the central core of the trimer and the amide nitrogens of their side chains coordinate chloride ions, forming characteristic N@d layers; the distances of those interactions range from 2.90 to 3.61 Å, as expected.The Cl− coordinated by the amide nitrogens of Asn414, in contrast to the Cl− coordinated by the other Asn, has only 50% occupancy, as there was strong negative density after refinement with 100% occupancy.Asn347 forms a VxxNxxx pattern, whereas Asn375, 382 and 414 form an IxxNxxx pattern.Gln407 and Gln425, in the d and a positions respectively, coordinate water molecules in the centre of the trimeric core.Gln407 interacts with the water molecule through Nε2 in chain A and through Oε1 in chains B and C.This mixed orientation of the Gln407 side chains is caused by additional interactions of Nε2 from chains B and C with the carbonyl oxygen of Leu404 from a neighbouring chain.Hydrogen bonds formed by these interactions are 2.6–2.8 Å in length, whereas Oε1 of Gln407 in chain A and Leu404 from chain B are 3.2 Å apart.In the case of Gln425, the geometry of the interactions is different.The side chains of Gln425 are arranged clockwise, which is in contrast to the anti-clockwise arrangement of Gln407.The side chains of all three Gln425 are oriented with Nε2 towards the trimer core, where they coordinate a water molecule.This orientation is additionally stabilized by second interactions of Gln425 Nε2 with the carbonyl oxygen of Leu421 from the neighbouring chain.Finally, His428 coordinates a water molecule through Nδ1.The core-facing orientation of His428 is stabilized by interactions of Nε2 with Oγ of the following Ser429 from the neighbouring chain.The solved UspA1299–452 is a parallel, left-handed, 3-stranded coiled-coil.The angles between the helices are between 5.6 and 6.9°, as calculated using SOCKET.There are 74 type 4 ‘knobs into holes’ interactions with packing angles between 37.8 and 74.4°.Angles were calculated between the Cα-Cβ bond vector of the knob residue and the Cα-Cα vector between the two residues that form the sides of the hole.TWISTER showed that the average α-helix rise per residue is 1.51 Å.It appears
that the long neck at the N-terminus, with the characteristic 120° rotation of the monomers, keeps the coiled-coil together, whereas the truncated C-terminus has no structural motif keeping the three chains coiled, which would be supplied by the C-terminal β-barrel anchor in the full-length protein.For that reason, we checked the parameters of the coiled-coil to make sure it is not distorted.The radii of the coiled-coil and of the helices vary from 5.91 to 6.64 Å and from 2.20 to 2.38 Å, respectively.The variations of the coiled-coil radius are the result of many factors.The transition from the neck to the stalk domain causes the biggest drop in the coiled-coil radius, from 6.64 Å to 5.91 Å, at Asn347, which forms tight interactions by coordination of the Cl− ion.Disruptions of the heptad pattern are another apparent cause of increases in the coiled-coil radius.The N@d layers appear to keep the coiled-coil tight, as can be seen in local radius minima.The Crick angle describes the orientation of a residue in relation to the coiled-coil axis, whereas its shift expresses the difference between the angular values of the amino acid in the given structure and in the model structure.To calculate the Crick angle shift we first calculated ideal angles for positions a – g using the GCN4 leucine zipper core mutant as a model structure.We then subtracted those values from the angles calculated for our structure for each position (a small numeric sketch of this calculation is given after this entry).The Crick angle Δ varies between +3.6° and −8.9° and deviates significantly in two regions of the UspA1 structure due to the change in periodicity to hendecads after Gly396 and Gly442, where the pattern is not abcdefgabcdefg, but abcdabcdefg.The deviation starts with a gradual increase in Δϕ from Ile386 in position a through the whole heptad to Gly396 in position d of the following heptad, for which Δϕ reaches a maximum before returning to average values for the following residue, Leu397 in position a.We observe the same pattern for another region of UspA1299–452, which starts with Ile432 and reaches a maximum at Gly442.Previously solved structures of UspA1 did not show any periodicity other than the heptad pattern.For the binding and crystallization studies, we introduced two point changes into C3d: reversal of the E1153A mutation and C1010A.The first one was to revert to the wild-type C3d sequence and the second removed the free Cys residue; E1153 and C1010 form the active thioester payload of C3 and we preferred to have the charge of the glutamate but not the potential for aggregation of an unpaired cysteine.The UspA1 fragment was the same as that previously designed and reported to bind C3d by Hallström et al.Our attempts to crystallize C3d in complex with UspA1299–452, which was previously reported to bind C3d, were unsuccessful.Moreover, in contrast to previous reports, we were not able to demonstrate binding between C3d and UspA1299–452 using ELISA or biolayer interferometry, and the two proteins ran separately on size-exclusion chromatography and blue native gel, suggesting that the binding is weak.This is also consistent with the fact that the crystals, grown at a 1:3 molar ratio of UspA1:C3d and with a C3d concentration of 10.7 mg/ml, contained no C3d at all.We decided to investigate binding in solution using thermophoresis.This method does not require immobilization of either of the proteins to measure binding, so there would be no interference from surface effects.Recombinantly produced and purified C3d was concentrated to 750 μM, which was the highest we could achieve without aggregation.UspA1299–452 was concentrated to 200 nM, as
needed for the labelling.In the thermophoresis experiment, we titrated 50 nM labelled UspA1299–452 with increasing concentrations of unlabelled C3d from 22.9 nM up to 750 μM and measured the fluorescence.We could not reach full binding saturation even with the maximum concentration of C3d.Changes in fluorescence were normalized, plotted against C3d concentration, and a binding constant of 140.5 ± 8.4 μM was calculated (a minimal sketch of such a fit is given after this entry).The UspA1299–452 we solved in this study is the fourth high-resolution structure of part of UspA1.Two of the structures solved previously, UspA142–345 and UspA1153–366, also contain the neck domain and the beginning of the stalk.Our structure, however, contains a longer fragment of the stalk, which encompasses the putative C3d binding site.The fourth structure, UspA1527–665, is of a fragment of the stalk closer to the C-terminal membrane anchor.As expected for a TAA, we observed polar amino acids in positions a and d.This is unlike canonical coiled-coils, where the amino acids occupying positions a and d build the core of the trimer and are always hydrophobic.In the UspA1153–366 and UspA1527–665 structures, Asn residues occupy d positions with their side chains facing the core of the trimer and in each case coordinating a chloride ion.UspA1527–665 has eight N@d layers, and additionally three His in either the a or d positions.Two of the His coordinate phosphate ions and the other a water molecule.This is similar to our structure, where His428 in position d coordinates a water molecule.This histidine is part of the most frequent heptad motif QxxHxxx in TAAs with a hydrophilic core.Many other TAA structures have N@d layers in their coiled-coil stalks, such as EibD, SadA, and AtaA.In the above cases, Asn side chains in N@d layers coordinate Cl− ions.In the UspA1299–452 structure that we report here, four Asn residues occupy d positions.The first three form standard N@d layers, coordinating Cl−.In the case of Asn414, however, the evidence for Cl− coordination is not that obvious.First of all, the density is weak and is consistent with approximately 50% occupancy: using 100% occupancy results in strong negative density.Furthermore, the concentration of Cl− ions in the solution is at least 200 mM, so half-occupancy implies a binding constant of 200 mM.On the other hand, when modelling a water molecule at 100% occupancy in place of Cl− we satisfy the density map, but the orientation of the Asn side chains remains as for Cl− coordination.Moreover, coordination distances ranging from 2.94 to 3.61 Å would argue against coordination of a water molecule.In both cases the B factors (56.00 and 59.91 Å2 for Cl− and water, respectively) are lower than those of the coordinating Asn residues.Finally, the Rwork and Rfree factors for the structure with Cl− are 21.22% and 27.20%, respectively, and are very similar to those for the structure with a water molecule instead.The resolution of the structure, 2.5 Å, does not allow us to resolve this issue.Following the rule of N@d layers and the lack of known exceptions to it, we decided to refine the structure with Asn414 coordinating a Cl− ion at half-occupancy.Nonetheless, it is worth pointing out that this binding site is not fully occupied at physiological concentrations of chloride ion, which are typically around 100 mM in blood plasma.In addition, we found two Gln facing the core of the trimer, one in position a and another in position d, each coordinating a water molecule.In none of the published UspA1 structures has Gln been reported to occupy core positions.However, in the light of all the TAA structures and predictions,
Gln is one of the most common polar amino acids found in positions a or d.Furthermore, as the UspA1299–452 solved in this study was previously reported to bind complement protein C3, and in particular its cleavage product C3d, we performed a series of binding experiments between UspA1299–452 and C3d.We were not able to reproduce the ELISA results using their approach, though, of course, not their reagents.We were also unable to demonstrate complex formation between UspA1299–452 and C3d by size exclusion chromatography or biolayer interferometry.ELISA and biolayer interferometry methods require, however, surface immobilization of one of the proteins, which could result in a geometrically unfavourable orientation of the molecule where access to the binding site might be limited or completely blocked.This would lead to underestimation of the binding affinity of the two proteins.There might also be other surface effects, such as unfavourable interactions of the ligand protein with the surface.If the binding was weak, such effects might make it unobservable.Thermophoresis, we reasoned, is performed in solution and so allows both proteins to interact freely with each other.Despite this we were not able to obtain full saturation of the binding at the maximal possible C3d concentration of 750 μM.Curve fitting of the binding data nonetheless allowed us to calculate a Kd of 140.5 ± 8.4 μM.This is about twenty times higher than the physiological concentrations of C3 in the serum, which range from 4.3 to 8.5 μM.The Kd is also ten times weaker than that measured between full-length UspA1 and C3met by Nordström et al.What might explain this discrepancy?One possibility is that Nordström et al. performed their measurements using the full-length UspA1 passenger domain, and even though later Hallström et al. narrowed down the C3 and UspA1 interactions to C3d and UspA1299–452, they were only able to show it by ELISA.Secondly, some of their experiments were performed on whole bacterial cells expressing UspA with serum or serum-purified C3d, not on a biochemically pure system.Our in vitro experiments measure for the first time the interaction of C3d with UspA1299–452 without any confounding factors.We therefore suggest that additional factors may be important in UspA1-C3d interactions.Other parts of the UspA1 passenger domain might also be involved in interactions with C3d, or fragments of the C3 molecule that are cleaved off during the generation of C3d may take part in stabilizing the interactions.In addition, other molecules on the bacterial surface or present in serum could enhance binding of those two molecules.Studies of the interaction of full-length UspA1 with C3d should, however, now be possible with the new generation of electron microscopes.Coordinates and structure factors of the UspA1299–452 crystal structure were deposited with the Protein Data Bank in Europe; accession code 6QP4. | The gram-negative bacterium Moraxella catarrhalis infects humans exclusively, causing various respiratory tract diseases, including acute otitis media in children, septicaemia or meningitis in adults, and pneumonia in the elderly. To do so, M. catarrhalis expresses virulence factors facilitating its entry and survival in the host. Among them are the ubiquitous surface proteins (Usps): A1, A2, and A2H, which all belong to the trimeric autotransporter adhesin family. They bind extracellular matrix molecules and inhibit the classical and alternative pathways of the complement cascade by recruiting complement regulators C3d and C4b binding protein.
Here, we report the 2.5 Å resolution X-ray structure of UspA1299–452, which previous work had suggested contained the canonical C3d binding site found in both UspA1 and UspA2. We show that this fragment of the passenger domain contains part of the long neck domain (residues 299–336) and a fragment of the stalk (residues 337–452). The coiled-coil stalk is left-handed, with 7 polar residues from each chain facing the core and coordinating chloride ions or water molecules. Despite the previous reports of tight binding in serum-based assays, we were not able to demonstrate binding between C3d and UspA1299–452 using ELISA or biolayer interferometry, and the two proteins run separately on size-exclusion chromatography. Microscale thermophoresis suggested that the dissociation constant was 140.5 ± 8.4 μM. We therefore suggest that full-length proteins or other additional factors are important in UspA1-C3d interactions. Other molecules on the bacterial surface or present in serum may enhance binding of those two molecules. |
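The microscale thermophoresis analysis in entry 534 above plots the change in normalised fluorescence against C3d concentration and extracts a Kd using the instrument software and Prism. The sketch below illustrates such a fit with a standard mass-action (quadratic) binding isotherm in Python; the synthetic two-fold dilution series and noise level are assumptions for demonstration only, not the study's measurements.

```python
# Minimal sketch of a Kd fit for an MST titration: 50 nM labelled UspA1(299-452)
# titrated with unlabelled C3d. The data generated below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

L_TOT = 50e-9  # labelled partner concentration (M), kept constant

def mst_signal(c3d_tot, kd, f_unbound, f_bound):
    """Quadratic (mass-action) binding isotherm scaled between two fluorescence plateaus."""
    b = c3d_tot + L_TOT + kd
    frac_bound = (b - np.sqrt(b**2 - 4.0 * c3d_tot * L_TOT)) / (2.0 * L_TOT)
    return f_unbound + (f_bound - f_unbound) * frac_bound

# Placeholder titration: 16-point, two-fold dilution series starting at 750 uM.
rng = np.random.default_rng(0)
c3d = 750e-6 / 2 ** np.arange(16)
fnorm = mst_signal(c3d, 140e-6, 0.0, 1.0) + rng.normal(0, 0.02, c3d.size)

popt, pcov = curve_fit(mst_signal, c3d, fnorm, p0=[1e-4, 0.0, 1.0], maxfev=10000)
kd, kd_err = popt[0], np.sqrt(np.diag(pcov))[0]
print(f"Kd = {kd * 1e6:.1f} +/- {kd_err * 1e6:.1f} uM")
```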
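The Crick-angle-shift calculation in entry 534 above amounts to subtracting, for each heptad position a–g, reference angles derived from the GCN4 leucine-zipper core mutant from the per-residue angles reported by TWISTER. The snippet below sketches that subtraction; the input file names and column layout are assumptions, as the actual TWISTER output format is not reproduced in the entry.

```python
# Sketch of the Crick-angle-shift calculation: per-residue angles of the solved structure
# minus ideal per-position (a-g) angles derived from a reference coiled coil (here GCN4).
import pandas as pd

def wrap_deg(x):
    """Wrap an angular difference into the interval (-180, 180] degrees."""
    return (x + 180.0) % 360.0 - 180.0

# Assumed inputs: tables with columns 'position' (a-g) and 'crick_angle' (degrees).
uspa1 = pd.read_csv("uspa1_twister.csv")   # one row per residue of the solved structure
ref = pd.read_csv("gcn4_twister.csv")      # reference structure

# Ideal angle for each heptad position = mean over the reference structure.
ideal = ref.groupby("position")["crick_angle"].mean()

uspa1["crick_shift"] = wrap_deg(uspa1["crick_angle"] - uspa1["position"].map(ideal))
print(uspa1.groupby("position")["crick_shift"].describe())
```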
535 | White matter hyperintensities are seen only in GRN mutation carriers in the GENFI cohort | Frontotemporal dementia is an umbrella term used to denote a group of neurodegenerative disorders affecting principally the frontal and temporal lobes.It is a highly heritable disorder with approximately a third of cases being caused by mutations in predominantly three genes: progranulin, microtubule associated protein tau and chromosome 9 open reading frame 72.Whilst the clinical features of GRN-, MAPT- and C9orf72-associated FTD largely overlap, the underlying molecular processes leading to that phenotypic endpoint are fundamentally different.To date, most antemortem studies of familial FTD have focused on changes in gray matter, but some types of FTD are known to be associated with white matter pathology.Such changes may be seen by magnetic resonance imaging e.g. cerebral white matter hyperintensities.These WMH are usually identified on T2, FLAIR or PD-weighted MRI and reflect an abnormal tissue fat/water ratio.They are commonly seen in healthy aging and more extensively in patients with neuroinflammatory disorders or small vessel cerebrovascular disease."Research into the link between WMH and other forms of dementia has been ongoing for some time with lesions seen particularly in those with vascular dementia as well as being commonly associated with Alzheimer's disease.In such cases of dementia lesions are generally felt to represent ischaemic damage.WMH are less commonly seen in FTD but recent small studies have reported their presence in some cases, particularly in those with GRN mutations.However, a detailed investigation in a large cohort has yet to be performed.Many studies of WMH in dementia use visual rating scales, manual or semi-automated segmentation methods.However, for large cohorts time consuming operator-dependent segmentation becomes unfeasible.Automated segmentation methods have therefore been developed for extracting WMH from either FLAIR or T2-weighted images.We have previously developed a methodology for automatically segmenting WMH through modelling of unexpected observations in MR images.In order to investigate the presence of WMH in FTD further we used this methodology on data from the Genetic FTD Initiative which investigates symptomatic and at-risk members of families with mutations in GRN, MAPT and C9orf72.The first phase of the GENFI multicentre cohort study comprised 13 research centres across Europe and Canada.Local ethics committees gave approval for the study at each site and all participants gave written informed consent prior to enrolment.Between January 2012 and April 2015 365 participants were recruited into GENFI, of whom 190 underwent both 3D T1 and 3D T2 acquisitions on a 3T MRI scanner.Ten scans did not pass quality control and so 180 were used for the final analysis.Four scanner types were used with protocols designed at the outset of the study to minimise discrepancies between scanners.Of the 180 participants included in the study, 43 were symptomatic and 61 were presymptomatic mutation carriers with a further 76 participants found to be mutation negative non-carriers.Demographics for the cohort are described in Table 1.Whilst FLAIR images have become the standard for the study of WMH due to good separation between CSF and lesion signal, T2-weighted images may also be used, although are more challenging to segment due to the proximity in signal signature between lesions and CSF.In order to segment the WMH in the GENFI dataset our previously described 
algorithm was adapted to address the specific challenges that arise in the use of T2-images.In this model an adaptive hierarchical three-level multivariate Gaussian mixture model using both T1 and T2 weighted images is used to model both normal and unexpected signal observations.At a first level, inliers and outliers are segmented and a priori information on their location is progressively introduced by smoothed maps of typicality measures.At the second level, anatomical information is introduced through statistical atlases so as to model the different biological tissues for both the inlier and outlier parts of the model.The appropriate number of Gaussian components necessary to model the different tissues and their parameters are presented at the third level of the hierarchy.In this study, the anatomical atlases are obtained as a result of a label fusion algorithm: Gaussian component parameters are optimised via an expectation-maximisation algorithm that incorporates contextual constraints with the application of a Markov Random Field.In order to determine the number of Gaussian components required to model the data, splitting and merging operations are tested at the third level of the hierarchy.To ensure a balance between model accuracy and fit, the Bayesian Information Criterion is used to assess if a model change should be accepted or not.The list of model changes to test is determined each time the model complexity evolves and the algorithm stops once all of these changes have been tested and rejected.Once the data model has been determined and optimised, it can be used to segment WMH.The characteristics of the healthy appearing WM are used as a reference to select the voxels classified as outliers that could be considered as lesion.More specifically, the probabilistic maps of outliers are multiplied voxelwise by a weight wn that depends on yn, the intensity at voxel n, and on μIWM and σIWM, the mean and standard deviation of the white matter inliers.The anatomical tissue segmentation result is then combined with a brain parcellation to expunge the probabilistic map of voxels whose intensities reflect partial volume effects between the main tissues and the ventricular lining.In particular, the ventricular segmentation is morphologically dilated and voxels classified as lesion removed from this area.In determining WMH location, biases may be introduced if non-linear registrations are applied to images with lesions or if absolute distance to the ventricular lining that does not account for atrophy is chosen to differentiate between lesion locations.Additionally, in the case where few lesions are present, voxelwise analyses may prove prone to noise.Therefore, to study the lesion distribution in the brain, a patient-specific location scheme was applied dividing the white matter into regions reflecting their distance to the ventricular surface and lobes.To separate lobar regions, a parcellation of the gray matter was used to divide it into frontal, parietal, occipital and temporal lobes for the left and the right hemisphere.Euclidean distance from these defined lobes was then used to separate the WM. Basal ganglia were considered as a separate region and the infratentorial region excluded from the analysis.In order to avoid using absolute values that would be biased by brain atrophy, normalised distance maps between ventricular surface and cortical sheet were computed using the solution to the Laplace equation as described by Yezzi et al.
and discretised into four equidistant layers as suggested by Kim et al. with layer 1 being nearest to the ventricle and layer 4 being juxtacortical.Ultimately the white matter domain was separated into 36 zones.To visualise the zonal separation, a bullseye plot was used in which the regions were encoded by the angular position and the layers were given by the radial position, with the distance to the ventricular surface increasing with the distance from the plot centre.The zonal characteristics such as the proportion of the zone affected by WMH can then be colour-encoded in the plot.With this representation, complex 3D information was thus summarized and gathered into a planar systematic infographic.Fig. 1 presents an illustration of the zonal separation and the lesion segmentation for a case of a symptomatic subject with a GRN mutation.Stata v14 was used for all statistical analyses.Due to the skewness of the data, both global and zonal volumetric WMH values were log-transformed before analysis.For the global volumes analysis, linear regression was performed considering WMH volume as the dependent variable and adjusting for age, gender, total intracranial volume, years from expected symptom onset and scanner type.A different adjustment was allowed for their mutation status and genetic group .A similar analysis was also performed for the different lobes and layers.In order to exclude any confounding influence of cardiovascular risk factors, the effects of hypertension, hypercholesterolaemia and diabetes mellitus on WMH volumes were assessed separately for each mutation group.None showed any significant association and were therefore not further included as covariates in the model.For the location analysis, the zonal standardised log-volumes were used as dependent variables and corrected separately for age, gender, TIV, scanner type and again a different adjustment was allowed for mutation status and genetic group.A further analysis was performed on the distribution of lesion intensities, which can be expressed as a standardised intensity or Z-score to the normal appearing WM tissue and allows groups to be distinguished.Although T2-weighted MR acquisition does not provide a quantitative measurement of the damage to the WM, such an intensity analysis provides an estimate of the appearance of the lesions i.e. how similar the hyperintensities of different lesions are.In order to avoid including participants that present with enlarged perivascular spaces as signal hyperintensities in the WM only participants with at least 0.5 mL of WMH were selected for this analysis.WMH volumes are shown in Table 1.Age was significantly associated with WMH volumes as was TIV.Mean adjusted back-transformed results with confidence intervals and p-values for group comparisons are shown in Table 2: symptomatic GRN subjects had a significantly higher mean global WMH volume than presymptomatic GRN cases and noncarriers, as well as more than the symptomatic MAPT and C9orf72 groups.By contrast, no significant difference was observed between the presymptomatic GRN cases and the noncarriers.Furthermore, no significant differences were seen between the symptomatic C9orf72 or MAPT groups and noncarriers.For visualisation purposes, the beeswarm plot of the WMH volumes for the different groups is presented in Fig. 
2 along with a colour-coded representation of the effect size when comparing the different groups.The presymptomatic groups include a heterogeneous population including some participants near to the mean age of onset in the family and others more distant.In order to evaluate the change in WMH with disease progression, disjoint regression models were used to analyse the relationship between WMH volumes and years from expected age of onset in each mutation group, correcting for gender, scanner type and TIV.An association between WMH volume and years from expected age of onset was only significant in the GRN group .Beeswarm plots of the raw volumes of lesion and effect size calculated for the mean of log-transformed volumes adjusted for gender, age, TIV, scanner type are presented in Fig. 3 for the four layers and Fig. 4 for the different regions.Significant differences were seen in the symptomatic GRN group compared with noncarriers in the three layers nearest the ventricle with no significant differences in the most juxtacortical layer: layer 1: p = 0.0209; layer 2: p = 0.0005; layer 3: p = 0.0099; layer 4: p = 0.3223.Significant differences were also seen in the symptomatic GRN carriers compared with the symptomatic MAPT and C9orf72 carriers: layer 1: p = 0.0457, 0.1345; layer 2: p = 0.0539, p = 0.0147; layer 3: p = 0.1314, 0.0094; layer 4: p = 0.3882, p = 0.0248.Lastly, significant differences were also seen between the symptomatic and presymptomatic GRN carriers for layers 2 and 3: layer 1: p = 0.0691; layer 2: p = 0.0102; layer 3: p = 0.0348; layer 4: p = 0.4404.No significant differences were seen between other groups.For the regions, significant differences between the symptomatic GRN group and noncarriers were seen in the frontal lobe and the occipital lobe as well a smaller difference in the parietal region with no significant difference observed in the temporal lobe.Significant differences were also observed between the symptomatic GRN carriers and the symptomatic MAPT and C9orf72 carriers in the frontal and occipital lobes: frontal: p = 0.0170, 0.0040; occipital: p = 0.0203, 0.0196.Furthermore, symptomatic GRN carriers had significantly more WMH than presymptomatic carriers in the frontal and occipital lobes: p = 0.0184, 0.0105.Lastly, both MAPT and GRN symptomatic carriers presented significantly more WMH in the parietal lobe than the symptomatic C9orf72 carriers: p = 0.0117, 0.0337.None of the differences were significant in the temporal lobe.In the basal ganglia, significantly less WMH were detected in the symptomatic GRN and MAPT cases compared with noncarriers, p = 0.0002) and with symptomatic C9orf72 carriers, p = 0.0055).A symptomatic difference was also seen between the GRN symptomatic and presymptomatic carriers with other group comparisons nonsignificant.The location analyses are summarized in the bullseye plots in Fig. 5 which encode the mean adjusted standardised value for each zone.For the 97 subjects whose WMH volumes were higher than 0.5 mL, histograms of lesion intensities standardised with respect to the normal appearing white matter as obtained by the segmentation model were averaged for each subgroup.Fig. 
6 shows the corresponding histogram bar plots.The distribution of lesion intensities in the symptomatic GRN group was narrower than for the other groups suggesting greater consistency of signal intensity in the lesions within this group.The interquartile range of the lesion intensity Z-scores was used to quantitatively assess this observation: the boxplot of IQR distributions across groups is presented in Fig. 7 along with the effect sizes calculated for the adjusted mean corrected for lesion load and scanner type.Through the use of a lesion segmentation algorithm adapted to segment WMH from 3D T1- and T2- weighted scans this study extends previous work on WMH in genetic FTD to show their presence in a symptomatic GRN mutation group only, and not in presymptomatic participants nor in those with MAPT or C9orf72 mutations.Furthermore, within the GRN group there was an association of increased WMH volume with disease progression.Differences were seen in the symptomatic GRN group in all three analyses suggesting global and local differences in WMH as well as differences in their appearance.At the regional level, the frontal lobe was particularly affected by WMH, consistent with previous findings.However, we also found significant differences in the occipital lobe, and to a lesser extent in the parietal lobe.WMH were found in the three most central layers, i.e. closest to the ventricles.It may well be that this periventricular distribution represents a particular pathogenetic feature of GRN mutations: progranulin deficiency has been shown to be associated with blood-brain barrier dysfunction and increased permeability, which have been shown to be associated with periventricular lesions.However, as well as the load and location pattern, the intensity outlierness analysis showed that the appearance of the lesions were also different in the symptomatic GRN cases from the other groups.The small number of lesions seen in other groups with increasing age are likely to represent small vessel cerebrovascular disease, and therefore the different appearance of the lesions in the GRN group may be representative of a non-ischaemic origin.Progranulin has been shown to play a key role in regulating wound repair and inflammation, including affecting tumor necrosis factor alpha signalling, and progranulin deficiency is known to promote neuroinflammation: therefore it may be that the lesions seen in these patients are inflammatory in nature.Further evidence for active neuroinflammation in GRN carriers comes from studies of knockout GRN mouse models which show increased microglial activation and increased levels of pro-inflammatory cytokines, and of blood cytokine levels in human GRN mutation carriers which display elevated levels of TNF-α and IL-6.Of note, fewer hyperintensities were observed in the basal ganglia for both the symptomatic MAPT and GRN groups compared with the other groups.The algorithm cannot differentiate between WMH and enlarged perivascular spaces and in the basal ganglia hyperintensities are likely to correspond to the latter rather than true WMH.It is unclear why fewer are seen in these two groups although this may be related to underlying atrophy of the basal ganglia which tends to be seen in the GRN and MAPT groups to a greater extent than in the C9orf72 group.It is less likely to represent a feature of the underlying molecular processes although progranulin is known to promote angiogenesis and so progranulin deficiency may potentially be associated with fewer perivascular spaces.From a 
technical perspective, the use of this algorithm has advantages over other methods of analysis of WMH location.The application of a patient-specific systematic location scheme prevents the WMH location pattern analysis from suffering from any biases due to registration error or atrophy.Furthermore, the adapted algorithm for T2-weighted images avoids the inclusion of elements at the border between normal tissue that share a common intensity signature with WMH on T2 images.Inherent to the use of T2-weighted images, it must however be noted that in this WMH segmentation, enlarged perivascular spaces are not distinguished from white matter lesions.An additional limitation of the study lies in the fact that no information is given on how individual lesions span multiple zones separated in layers or in lobes.Further investigation of individual lesions both in terms of extent and appearance through texture analysis would be of interest.The main strength of this study derives from the large cohort available which enables comparisons not only between mutation carriers and non-carriers but also between genetic groups and those at different stages of the disease process.Clinically, the presence of WMH is an important sign of a potential GRN mutation in a patient with familial FTD, whilst from a clinical trial point of view, it may be that measurement of WMH load will be a useful biomarker, particularly in trials targeting anti-inflammatory measures.In order to investigate this further, the longitudinal change in load, location and appearance of WMH, and their relationship to other neuroimaging measures of atrophy or neuropsychological measures of disease severity and progression, in GRN carriers will be an important subject for future study.Correlation of WMH burden with blood and CSF biomarkers of disease intensity or progression in FTD, or with markers of inflammatory processes, would also be helpful to investigate.In addition, post-mortem studies of patients with GRN mutations will also be important to understand what the WMH represent histopathologically, and their relationship, if any, to markers of demyelination, neuronal loss, neuroinflammation, small vessel disease, and TDP-43 pathology. | Genetic frontotemporal dementia is most commonly caused by mutations in the progranulin (GRN), microtubule-associated protein tau (MAPT) and chromosome 9 open reading frame 72 (C9orf72) genes. Previous small studies have reported the presence of cerebral white matter hyperintensities (WMH) in genetic FTD but this has not been systematically studied across the different mutations. In this study WMH were assessed in 180 participants from the Genetic FTD Initiative (GENFI) with 3D T1- and T2-weighed magnetic resonance images: 43 symptomatic (7 GRN, 13 MAPT and 23 C9orf72), 61 presymptomatic mutation carriers (25 GRN, 8 MAPT and 28 C9orf72) and 76 mutation negative non-carrier family members. An automatic detection and quantification algorithm was developed for determining load, location and appearance of WMH. Significant differences were seen only in the symptomatic GRN group compared with the other groups with no differences in the MAPT or C9orf72 groups: increased global load of WMH was seen, with WMH located in the frontal and occipital lobes more so than the parietal lobes, and nearer to the ventricles rather than juxtacortical. 
Although no differences were seen in the presymptomatic group as a whole, in the GRN cohort only there was an association of increased WMH volume with expected years from symptom onset. The appearance of the WMH was also different in the GRN group compared with the other groups, with the lesions in the GRN group being more similar to each other. The presence of WMH in those with progranulin deficiency may be related to the known role of progranulin in neuroinflammation, although other roles are also proposed including an effect on blood-brain barrier permeability and the cerebral vasculature. Future studies will be useful to investigate the longitudinal evolution of WMH and their potential use as a biomarker as well as post-mortem studies investigating the histopathological nature of the lesions. |
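A minimal illustrative sketch (Python/NumPy) of the patient-specific location scheme described in the entry above: given a normalised ventricle-to-cortex distance map (assumed to be precomputed, e.g. from the Laplace-equation solution), a lobar region labelling and a probabilistic WMH segmentation, the white matter is binned into four equidistant layers and the proportion of each (region, layer) zone occupied by WMH is accumulated, as would feed a bullseye-style summary. This is not code from the GENFI pipeline; all array and function names are hypothetical.

```python
import numpy as np

def zonal_wmh_load(norm_dist, region_labels, wmh_prob, wm_mask, n_layers=4):
    """Summarise WMH load per (region, layer) zone.

    norm_dist     : float array in [0, 1]; 0 at the ventricular surface, 1 at the cortex
                    (assumed precomputed, e.g. via a Laplace-equation distance map).
    region_labels : int array; 1..n_regions for lobar regions/basal ganglia, 0 elsewhere.
    wmh_prob      : float array in [0, 1]; probabilistic lesion segmentation.
    wm_mask       : bool array; white matter domain included in the analysis.
    Returns an (n_regions, n_layers) array with the proportion of each zone
    affected by WMH.
    """
    # Bin the normalised distance into equidistant layers:
    # layer 0 is periventricular, layer n_layers-1 is juxtacortical.
    layers = np.clip((norm_dist * n_layers).astype(int), 0, n_layers - 1)

    n_regions = int(region_labels.max())
    load = np.zeros((n_regions, n_layers))
    for r in range(1, n_regions + 1):
        for l in range(n_layers):
            zone = wm_mask & (region_labels == r) & (layers == l)
            if zone.any():
                # proportion of the zone occupied by lesion
                load[r - 1, l] = wmh_prob[zone].sum() / zone.sum()
    return load

# Toy example on random volumes (stand-ins for real segmentations).
rng = np.random.default_rng(0)
shape = (32, 32, 32)
norm_dist = rng.random(shape)
region_labels = rng.integers(0, 10, shape)        # 9 regions: 8 lobar + basal ganglia
wmh_prob = (rng.random(shape) > 0.98).astype(float)
wm_mask = region_labels > 0
print(zonal_wmh_load(norm_dist, region_labels, wmh_prob, wm_mask).shape)  # (9, 4)
```

With 9 regions (four lobes per hemisphere plus the basal ganglia) and 4 layers, the output grid corresponds to the 36 zones described in the methods.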
536 | Validating production of PET radionuclides in solid and liquid targets: Comparing Geant4 predictions with FLUKA and measurements | Radioisotopes play a crucial role in the diagnosis and treatment of cancer.Numerous isotope-producing nuclear reactors are due to end their operation within a few years.As a result, proton-induced reactions have attracted significant interest from the scientific community after cyclotrons proved to be a feasible alternative to reactor produced radioisotopes.Currently cyclotrons can be used to produce radioisotopes for imaging techniques such as positron emission tomography and single photon emission computed tomography.The irradiated target can be in solid, liquid or gaseous form and may be required to satisfy strict design constraints.For example, a target may have material composition restrictions to achieve a desired specific activity, proton energy constraints to avoid unwanted isotope production, or may need to survive several hours of proton irradiation without any thermal issues.As a result, cyclotron targets and materials can be very expensive.Monte Carlo simulations can be used to assess the expected yield and to optimize target design and materials to maximize yield of the isotope of interest without increasing the production of contaminants.The success in using MC for yield assessment depends strongly on the cross section data used for the simulation.Despite a large number of experiments carried out with proton activation, the data available are often inconsistent and at times data from different experiments conflict with each other.In this work, the MC package Geant4 has been used to simulate the yields of the following PET isotopes: 13N, 18F, 44Sc, 52Mn, 55Co 61Cu, 68Ga, 86Y, 89Zr and 94Tc.The results have been compared to our previous work with the MC package FLUKA and with experiments.Different physics models in Geant4 have been tested to find the best approximator of isotopic yield to experiments.Previous results for 13N, 18F, and 68Ga have been published and are repeated here for completeness.The experimental details have been described and, where appropriate, referenced by Infantino et al.Some details are repeated here for the convenience of the reader.The selector has four positions, allowing eight different targets to be installed at a time.Two target assemblies were simulated in Geant4.Figs. 
1 and 2 illustrate the liquid and solid target assemblies respectively with each component labeled numerically.The proton beam enters the assembly though the baffle and collimator rings.The beam is then collimated further with a four quadrant conical collimator contained within an insulator flange.Each quadrant of the collimator is capable of measuring beam current separately and the four readings can be used to deduce the position of the proton beam.The beam then enters the target assembly through a 25 µm thick aluminium foil, which separates the cyclotron vacuum from the target assembly.Due to the power deposition in the foil, helium cooling is applied to the foil in the helium window.The liquid target is a closed volume of 0.9 ml capacity, with 8 mm depth and 12 mm diameter.The liquid target is separated from the helium cooling by a HAVAR foil.HAVAR is a cobalt based metal alloy with high tensile strength.It is composed of 42.5% cobalt, 13.0% nickel, 20.0% chromium, 2.0% molybdenum, 0.2% carbon, 0.04% beryllium, 1.6% manganese, 2.8% tungsten and remainder iron, see Hamilton Precision Metals.The target body is composed of standard niobium.Target loading and unloading is performed using an automated loading system, see Hoehr et al.In the solid target assembly, the foil target is in the place of the HAVAR with helium jets for cooling on both sides and.Due to the use of thin foils, the proton beam traverses through the target and is finally stopped by the water cooled aluminium block which acts as the beam dump.The geometries were modelled as accurately as possible by using dimensions from technical drawings.The nuclear and chemical properties of the liquid and solid target materials are listed in Table 1.After irradiation, isotopic yield measurements were performed using gamma-ray spectrometry analysis or ionization chamber measurements.All measured yields were decay-corrected to the end of bombardment.When multiple irradiations took place for the same isotope, the yield was normalized to the beam current prior to calculating the average saturation yield.The error in the yield is dominated by the standard deviation of the different irradiations.For more details see Infantino et al. and references therein.Geant4 is an all particle Monte Carlo toolkit designed for simulating particle interactions from 100 TeV down to a few eV.Geant4 is implemented in C++ and has great flexibility and expandability and thus is used in various applications such as space research, Large Hadron Collider experiments, medical physics or microdosimetry applications.When calculating yield ratios the experimental and MC uncertainties have been added in quadrature.The solid and liquid targets have been represented in Geant4 using two geometries as shown in Figs. 
1 and 2.Geant4 provides a wide range of simple solid geometries that can be used.More complex geometries such as the conical collimator can be generated by combining existing shapes with Boolean operators such as G4UnionSolid and G4SubtractionSolid.The target materials have been divided into two categories: liquid target, containing water solution of salts, and solid targets.While it is possible to use the natural isotopic composition of elements from the NIST database, user defined isotopic compositions were used in order to match material definitions in the FLUKA model in Infantino et al.For solid and liquid targets, the mass fractions were calculated for each element and used in the definition of materials.Geant4 provides multiple physics models, each applicable for different particle interactions at different energy levels.In this study, in order to model proton inelastic hadronic interactions in the relevant energy range, three physics lists were considered: Bertini Intranuclear Cascade High Precision model, Binary Intranuclear Cascade High Precision model and Binary Intranuclear Cascade All High Precision model.In Geant4 QGSP-BERT-HP and QGSP-BIC-HP are well established physics list for low energy application but were not developed for predicting radionuclide production.From the three physics lists investigated, QGSP-BIC-AllHP proved to be the best approximator for our investigation.The QGSP-BERT-HP list failed to calculate any yield for 13N and 61Cu.QGSP-BIC-HP did not calculate any 13N yield.Due to these limitations, mainly results from the QGSP-BIC-AllHP physics list are being discussed in this paper.QGSP-BIC-AllHP is a new data-driven all particle, high precision physics model that uses TALYS-based Evaluated Nuclear Data Library.TENDL is based on experimental and calculated results of TALYS nuclear model code to produce a nuclear data library for Alpha, Deuteron, 3He, Proton, Neutron and Triton for energies below 200 MeV.The proton sub-library contains cross sections of about 2800 isotopes.This model has been validated against experimental data.In this work TENDL 2015 cross sections were used with Geant4 10.1 for energies below 200 MeV.To describe electromagnetic interactions, electromagnetic options 1, 2 and 3 were tested.Electromagnetic option 1 proved to produce comparable results with the benefit of reduced computation time.The production thresholds were set at 1 mm for all particles inside the target volume.As the saturation yield is a function of the nuclear reaction cross section in the energy range between the beam entering the target to the beam exiting the target or being stopped, see Infantino et al., comparing the area under the cross section in this energy range between experimental and TENDL cross sections is a good measure of the expected yield difference.Experimental Nuclear Reaction Data cross sections were taken from EXFOR.For reactions with multiple available sources, selections were performed taking into account error margins and the number of data points available for the energy range concerned.The source of experimental reaction cross sections for every isotope investigated are listed in Table 2 under the reference column.After selecting appropriate cross sections, a curve was fitted through the cross sections, and the area under the curve was calculated for both the EXFOR and the TENDL cross sections.Comparisons between TENDL and EXFOR cross section areas are shown in Table 2.FLUKA is a Fortran-based general purpose Monte Carlo code used to investigate 
particle transport and interaction with matter.It is applicable at energies from low energy to TeV energy levels such as shielding, target design, calorimetry, hadron therapy, neutrino physics, cosmic rays, etc.FLUKA is jointly developed by the European Organization for Nuclear Research and the Italian Institute for Nuclear Physics.The FLUKA MC package version 2011.2b.6 was used for the isotope production at the medical cyclotron.Isotope production in FLUKA is handled inside the software package.Materials were defined by the user to match with experimental details.For more details, see Infantino et al.During experiments or routine isotope production, there are losses in the transfer system and in vials prior to measurement for liquid targets and dissolved solid targets respectively.Also MC codes do not take into account complex thermal and fluid dynamics of the liquid target.Thus a factor of 2 seems to be an acceptable limit for the ratio of saturation yield.The comparison between Geant4 and experiment for the isotopes 18F, 44Sc, 52Mn, 55Co, 61Cu, 68Ga, 89Zr, and 94Tc fulfills this criteria.Only for 13N and 86Y is the ratio between Geant4 and experiment larger than 2, and none is smaller than 0.5.Overall in this section, Geant4 is less than a factor of two away from the experimental yield.It is also closer to the experimental yield than FLUKA for five isotopes, while FLUKA is closer for three isotopes.In general the comparison of the EXFOR cross sections with the TENDL cross sections used in Geant4 are within 25% except for 61Cu.While the yield of 18F is under-calculated by a factor of 0.53 using Geant4, FLUKA over-estimates it by a factor of 1.66.The database takes into account multiple sources to provide a single unified table of cross sections that has been used.The 18O18F reaction has multiple resonances between 2 and 10 MeV with each experiment reporting slightly different peaks.This does not take into account the efficiency of the liquid target system."For 52Mn, 55Co and 61Cu Geant4 performed better than FLUKA with ratios of 1.1, 0.7 and 0.6 against FLUKA's 4.62, 0.3 and 3.13 respectively.For these solid targets FLUKA appears to be less reliable than Geant4, with all yield ratios outside acceptable limits.For these three isotopes, the yield ratios of Geant4 to experimental values correlate very well to the cross section ratios between TENDL and EXFOR.For 52Mn the yield ratio is 1.1 while the cross section ratio is 0.93, for 55Co yield and cross section ratios are 0.7 and 0.88 respectively.61Cu has a yield ratio and a cross section ratio of 0.6 and 0.55 respectively.The excellent level of agreement between the yield ratio and cross section ratio indicates that while the Geant4 yield might be different from experiments, it is a consequence of mismatching TENDL and EXFOR cross sections."For 68Ga FLUKA performed better with a ratio of 1.03 against Geant4's 0.84.The cross section of the 68Zn68Ga reaction currently has significant discrepancies, hence multiple sources were taken and a spline fit was used to make comparisons.The TENDL library underestimates yields over the concerned energy range compared to fitted EXFOR with a ratio of 0.75.Geant4 over-calculates the yield of 86Y by a factor of 2.5 whereas the FLUKA yield ratio was 0.9.The yield for 89Zr was calculated more accurately using FLUKA than Geant4, the respective yield ratios are 0.87 and 0.69 respectively."Both MC codes under-estimate the yield, with Geant4's performance disagreeing with theoretical expectations.The TENDL cross 
section is higher than most EXFOR tabulated cross sections.This indicates that Geant4 should calculate a yield higher than experiments, however, the yield from both MC codes is lower than that of experiments.At this moment no explanation has been found why the MC results challenge the cross sections available."Due to Geant4 and FLUKA's inability to calculate metastable isotopes, 94 mTc is presented as the sum of metastable and ground state.For this isotope, Geant4 calculates the yield with a factor of 1.7 whereas FLUKA ratio is 1.53 and the cross section ratio is 1.16.Both MC are able to calculate accurately 94Tc yield, with FLUKA performing slightly better.For these isotopes the deviation from the experiment is larger than a factor of two.For 13N and 44Sc the deviation in the Geant4 simulation is smaller than for FLUKA, while only for 86Y is the deviation for Geant4 larger than for FLUKA.The EXFOR cross section area is the same as the TENDL cross section area, except for 13N which has a very large ratio of 2.34.13N yield was overestimated by a factor of 2.72 compared to a factor of 5.9 from FLUKA.The yield between Geant4 and experiments of 13N results are not comparable at TR13 energy levels as TENDL does not account for the resonance at 7.9 MeV for the natO13N reaction.For energies above 8.5 MeV, TENDL cross section vastly over-estimate the yield and has large disagreements with EXFOR.This is illustrated by Fig. 3a.This phenomenon was also observed when 13N was created inside a PMMA target under proton therapy conditions in Amin et al."For 44Sc FLUKA performed worse with a ratio of 2.35 against Geant4's 2.1.When comparing with experimental cross sections, the ratio for 44Sc is 1.05.While the agreement between ratios is acceptable, EXFOR lacks sufficient good quality cross sections for the reaction of interest at these low energy levels.The contribution of 44 mSc to the total production of 44Sc in MC calculations is negligible for TR13 energy ranges.Geant4 overestimates the yield of 86Y by a factor of 2.5 whereas the FLUKA yield ratio is a very good 0.9.The yield of 86Y has been represented here as the sum of metastable and ground states of 86Y.As a result, a minor overestimation from Geant4 is expected when comparing simulated yields with experimental yields.The fitted tabulated cross sections for 86Sr86Y reaction were provided by the IAEA.The fit was performed using data from Roesch et al. and Levkovsky et al., where the former had significant error bars contributing to a slightly inaccurate smoothing of the fit.Compared to EXFOR, the TENDL data had a marginally larger overall yield in the energy ranges relevant to this work.Despite the discrepancy between the MC codes, the ratio of Geant4 to experimental yields agrees well with the ratio of cross sections for 86Y, as shown is Table 2. | The Monte Carlo toolkit Geant4 is used to simulate the production of a number of positron emitting radionuclides: 13N, 18F, 44Sc, 52Mn, 55Co 61Cu, 68Ga, 86Y, 89Zr and 94Tc, which have been produced using a 13 MeV medical cyclotron. The results are compared to previous simulations with the Monte Carlo code FLUKA and experimental measurements. The comparison shows variable degrees of agreement for different isotopes. The mean absolute deviation of Monte Carlo results from experiments was 1.4±1.6 for FLUKA and 0.7±0.5 for Geant4 using TENDL cross sections with QGSP-BIC-AllHP physics. 
Both agree well within the large error, which is due to the uncertainties present in both experimentally determined and theoretical reaction cross sections. Overall, Geant4 has been confirmed as a tool to simulate radionuclide production at low proton energy. |
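A short illustrative sketch (Python/NumPy, not code from the study above) of two calculations implied by the methods of the preceding entry: decay-correcting a measured activity back to the end of bombardment (EOB) and normalising it to a saturation yield per unit beam current, and comparing the area under a TENDL cross-section curve with a fitted EXFOR curve over the in-target energy range. The decay-correction and saturation relations used here are the standard ones (A_EOB = A·e^(λΔt); Y_sat = A_EOB / (I·(1 − e^(−λ·t_irr)))); the irradiation parameters and cross-section arrays are invented placeholders.

```python
import numpy as np

def saturation_yield(a_measured_mbq, t_wait_s, t_irr_s, half_life_s, current_ua):
    """Decay-correct a measured activity to EOB and normalise it to the
    saturation yield per unit beam current (MBq/uA)."""
    lam = np.log(2) / half_life_s
    a_eob = a_measured_mbq * np.exp(lam * t_wait_s)        # decay correction to EOB
    return a_eob / (current_ua * (1.0 - np.exp(-lam * t_irr_s)))

def cross_section_area_ratio(energy_mev, sigma_tendl_mb, sigma_exfor_mb, e_in, e_out):
    """Ratio of the areas under two cross-section curves between the beam
    entrance and exit energies of the target (trapezoidal integration)."""
    mask = (energy_mev >= e_out) & (energy_mev <= e_in)
    area_tendl = np.trapz(sigma_tendl_mb[mask], energy_mev[mask])
    area_exfor = np.trapz(sigma_exfor_mb[mask], energy_mev[mask])
    return area_tendl / area_exfor

# Hypothetical 18F irradiation measured 30 min after EOB (half-life 109.77 min).
print(saturation_yield(a_measured_mbq=5000, t_wait_s=1800, t_irr_s=3600,
                       half_life_s=6586.2, current_ua=10))

# Hypothetical cross-section curves over a TR13-like energy range (MeV, mb).
energy = np.linspace(1, 13, 121)
sigma_tendl = np.interp(energy, [1, 5, 8, 13], [0, 80, 300, 120])
sigma_exfor = np.interp(energy, [1, 5, 8, 13], [0, 90, 280, 130])
print(cross_section_area_ratio(energy, sigma_tendl, sigma_exfor, e_in=13, e_out=4))
```

The area ratio plays the same role as the TENDL/EXFOR comparison in Table 2: it estimates how much of a Monte Carlo yield discrepancy can be attributed to the cross-section library rather than to the transport model.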
537 | Hypercrosslinked polyHIPEs as precursors to designable, hierarchically porous carbon foams | Porous materials with high surface areas are of significant scientific interest due to the potential for a large degree of surface interactions throughout the bulk of the structures.Carbons are one of the most prolifically studied families of porous materials owing to their chemical inertness, relatively low costs and high surface areas.Porous carbons are desirable for a huge range of applications including as adsorbents for water contaminants , materials for gas separation and as electrodes in energy storage devices, such as Li-ion batteries and supercapacitors .Many routes to high surface area carbonaceous materials have been devised, including the carbonization of natural materials, such as coconut husk , and of synthetic materials, such as porous polymers .More difficult to achieve, however, is carbon materials with designable structures and/or tuneable pore-sizes.A number of different routes have successfully achieved this, including the pyrolysis of carbides and synthetic organic polymers , and templating methods such as ice-templating and the use of hard, inorganic templates .Emulsion-templated polymers present another attractive route to carbon foams with tailored porosity as good control over the size of the templating emulsion droplets is possible and the templates are easily removed post-polymerization."High internal phase emulsions are defined as an emulsion consisting of a droplet phase that makes up greater than 74.05% of the emulsion's total volume, the maximum volume occupiable by uniform spheres .By polymerization of the continuous but minority phase of a HIPE, followed by the subsequent removal of the internal templating-phase, free-standing macroporous polymers, denoted as polyHIPEs, are produced .PolyHIPEs have great potential as precursors for tailorable carbon foams, not only because of their tunable porosity and monolithic nature, but also as they are formed from liquid precursors and can, therefore, be moulded into virtually any shape.By the carbonization of polyHIPEs, emulsion templated carbon foams, so called ‘carboHIPEs’, can be produced .A wide range of polyHIPE carbon-precursors have been reported thus far, such as tannins , lignin , resorcinol-formaldehyde and polyacrylonitrile .Generally, some of the more prolifically studied polyHIPE systems are those consisting of styrene-co-divinylbenzene copolymers , however in order to form carboHIPEs from these materials additional stabilization is required due to their low thermal stabilities.Successful attempts of stabilization have included sulfonation , or the inclusion of benzylchloride groups for crosslinking using a Friedel-Crafts method , leading to Davankov-type hypercrosslinked polyHIPEs.However, sulfonation is not ideal due to the requirement of strong sulfuric acid, and in the case of polyHIPEs containing benzylchloride moieties, new crosslinks were only able to form between benzylchloride components, meaning that the degree of crosslinking is dependent on the concentration of benzylchloride.Recently, a method using Friedel-Crafts chemistry to ‘knit’ together aromatic monomers using an external crosslinker was developed to produce high surface area microporous polymers .Due to the simple requirement of aromaticity, this has proven to be a robust method when producing many hypercrosslinked materials, such as heteroatom containing networks or Davankov-like materials via the knitting together of linear polystyrene 
chains .Very recently, the heteroatom derived hypercrosslinked polymers were employed as carbon precursors, producing hyperporous carbons with surface areas up to 4334 m2/g after activation .Due to the presence of many aromatic groups, this simple Friedel-Crafts alkylation has been used to produce high surface area hypercrosslinked polyHIPEs .This hypercrosslinking method did not affect the emulsion-templated macroporosity of polyHIPEs and brought about increased surface areas due to the introduction of micropores, an effect which could be brought under control by varying the concentration of DVB in the polymers.Herein, polyHIPEs and their subsequent hypercrosslinked analogues are used as carbon foam precursors in order to demonstrate that by employing an external crosslinker, hypercrosslinking provides the addition stabilization needed for the production of hierarchically porous high surface area carbons upon carbonization.The hypercrosslinked polyHIPEs gave higher surface areas in comparison with those containing benzylchloride components due to the less-selective hypercrosslinking process, which does not require the presence of Cl groups, eliminating the need for VBC mediated crosslinking.It is demonstrated that, in contrast to polyHIPEs, the hypercrosslinked equivalents gave good char yields and retained their emulsion-templated macroporosity upon carbonization.Varying the ST:DVB ratio in the initial polyHIPEs has a systematic effect on the surface area, microporosity and char yield of the subsequent carboHIPEs, allowing for carboHIPEs with tailorable porosity not only on the macro- but also on the meso- and micro-scale.Stryrene, divinylbenzene, azobisisobutyronitrile, calcium chloride dihydrate, dimethoxymethane, iron chloride and 1,2-dichloroethane were all purchased from Sigma Aldrich.Hypermer 2296 was kindly provided by Croda and methanol was purchased from Fisher Scientific.All products were used as received.The ratios of styrene to divinylbenzene were determined volumetrically.For example, in a typical HIPE containing a styrene:divinylbenzene ratio of 9:1HIPE), styrene and divinylbenzene were mixed with Hypermer 2296 in a 50 mL free-standing polypropylene centrifuge tube before the mixture was gently shaken for 5 min by hand.Azobisisobutyronitrile was then added and the mixture gently stirred using a vortex mixture before an aqueous CaCl2 solution was added slowly over 10 min while still stirring using the vortex mixer at the same speed setting.After the addition of the aqueous phase was complete, the resulting emulsion was stirred more vigorously for a further 5 min before the continuous, minority emulsion phase of the resulting HIPE was polymerized at 70 °C for 24 h in a convection oven.The resulting polyHIPEs were then removed from the Falcon tube and washed three times in an ethanol bath for at least 2 h at a time, under gentle stirring.The structures were patted dry with blue roll before being dried thoroughly in a vacuum oven at 110 °C overnight to remove any residual solvent.The hypercrosslinking procedure was adapted from a previous report .In a typical example, a polyHIPE was cut into small pieces and swollen in 1,2-dichloroethane for 12 h.The sample was then degassed under Ar for 20 min before the external crosslinker dimethoxymethane was added to the mixture, followed by the FeCl3 catalyst.While still under Ar atmosphere, the mixture was heated to 75 °C for 24 h while stirring slowly.The reaction mixture was allowed to cool to room temperature before filtering, after 
which the polyHIPE was washed with methanol until the filtrate was clear.The polyHIPE was then washed further by Soxhlet extraction in methanol for 24 h. Finally, the materials were dried overnight in a vacuum oven at 110 °C.Samples of both native polyHIPEs and hypercrosslinked polyHIPEs were initially weighed before being placed in a chamber furnace fitted with an air tight Inconel metal retort.Samples were degassed in the furnace at room temperature for 30 min under a constant flow of N2.After this initial degassing stage the samples remained under a constant flow of N2 and were heated to 800 °C at a ramp rate of 2 °C/min.Samples were held at 800 °C for 30 min before being allowed to cool to room temperature.After carbonization the samples were collected and weighed again in order to determine the char yield of the materials.The N2 isotherms of samples were measured using a porosity analyser at −196 °C.Each sample was degassed under vacuum at 120 °C overnight and then further degassed for 4 h in-situ at 120 °C prior to measurement.Surface areas were calculated using the Brunauer-Emmett-Teller method .The total volume of pores was calculated from the volume of N2 adsorbed at P/P0 = 0.97, while the micropore volume was determined using the t-plot method.Scanning electron microscope images were taken on a variable pressure SEM.All samples were attached securely using carbon tape to Al stubs.The samples were sputtered with a 10 nm layer of gold prior to imaging.The average pore diameter of the polyHIPEs was determined using the image analysis software, ImageJ .Skeletal density ρs, was measured on powdered samples using He displacement pycnometry with fill pressure 19.5 psig.At least 10 measurements were taken and the average value recorded.The envelope densities ρe, of both polyHIPEs and hypercrosslinked polyHIPEs were measured using an envelope density analyser.The envelope density of carbonized samples was measured via a volume displacement method.The percentage porosity P of all samples was estimated from the envelope and skeletal densities as P = (1 − ρe/ρs) × 100%.The degree of shrinkage for all samples was also determined using a volume displacement method.Raman spectra were collected on a Renishaw inVia micro-Raman spectrometer with 532 nm DPSS diode in backscattered geometry, with relative laser spot diameters 0.8 μm with a spatial resolution of ca.
1 μm.Raman spectra were processed using WiRE software and were background subtracted and normalized with respect to the G mode.In order to measure electrical conductivity carboHIPEs were placed between two flat headed probes and the resistivity measured by impedance spectroscopy using a potentiostat.Samples were painted using silver dag at either end in order to minimise contact resistance between the sample surface and the probes.The conductivity was calculated using the resistance measured in the linear region at frequency 100 Hz through the sample, the monolith cross-sectional area and the monolith length.A series of HIPEs was created using an aqueous internal phase and a continuous phase composed of varying styrene to divinylbenzene ratios including 2, 10, 20, 50 and 100 vol% DVB.PolyHIPEs consisting of 100% styrene were omitted due to their lack of stability in the absence of the DVB crosslinker.As an example of how the samples will be denoted throughout the report, a sample containing 10 vol% DVB will be referred to as polyHIPE, with the numbers in subscript referring to the vol% ratio of styrene to divinylbenzene.The hypercrosslinked equivalent will be referred to as HCLHIPE and carbonized hypercrosslinked samples will be denoted as carboHIPE.Hypermer 2296 was used as a surfactant for all systems and the internal phase comprised 80 vol% of the overall emulsion, in order to create similar porosity in all polyHIPEs.After polymerization of the continuous phase, the aqueous internal phase was removed, yielding freestanding polyHIPEs in all cases.Similar emulsion-templated macropores, with average diameters in the range of 3.9–5.2 μm, were observed by SEM in all polyHIPEs, with the exception of polyHIPE, in which the macropores do not appear as uniform, presumably due to the low degree of crosslinking.All samples show open-cell structures with pore throats clearly visible between macropores, as expected for polyHIPEs templated from standard surfactant-stabilized HIPEs.The BET surface areas SABET, of the polyHIPEs are all ≤ 10 m2/g, consistent with previous reports .When comparing the N2 uptake isotherms, isotherms change from IUPAC classification type III to type IV with increasing DVB concentration.The mesoporosity was attributed to the surfactant behaving as a precipitant porogen during the polymerization of HIPEs containing high concentrations of DVB, leading to the formation of mesopores in the polyHIPE, resulting in increased surface areas .All polyHIPEs were hypercrosslinked by Friedel-Crafts alkylation after swelling in 1,2-DCE.The degree of swelling was limited by the DVB content of the polyHIPEs; the higher the DVB content, the less the polyHIPE was able to swell due to the more inherently crosslinked structure .After hypercrosslinking, the swollen polyHIPEs, or HCLHIPEs collapsed upon the removal of the solvent, however they are not able to collapse back to a non-porous state due to the newly formed hypercrosslinks, introducing considerable strain in the network .This was demonstrated well when comparing the volume of the polyHIPEs and their resulting HCLHIPEs, as well as comparing the volume of all resulting HCLHIPEs, The most swellable polyHIPE, polyHIPE, containing the least DVB, had the highest SABET of 921 m2/g after hypercrosslinking.The surface areas of the HCLHIPEs then decreased with increasing DVB content, due to the reduced swellability of the polyHIPEs.The decrease in SABET with increasing DVB content was reflected in the micropore volume MPV, of the samples, which 
decreased from 0.331 to 0.060 cm3/g for HCLHIPE and HCLHIPE, respectively.This decrease in microporosity is easily observed in the N2 isotherms of the HCLHIPEs, which showed a decrease in N2 uptake at very low relative pressures, with increasing DVB content.Pore size distributions obtained from gas sorption analysis, confirm that all hypercrosslinked polyHIPEs displayed a predominantly microporous structure.The emulsion-templated macroporous structures appeared unchanged after hypercrosslinking, retaining similar average macropore diameters .Both polyHIPEs and HCLHIPEs were carbonized at 800 °C in N2 atmosphere.As the Friedel-Crafts catalyst for hypercrosslinking, FeCl3, has previously been used as an activating agent to promote graphitization during carbonization , the amount of FeCl3 remaining after washing of the HCLHIPEs was investigated.X-ray photoelectron spectroscopy was performed on both a hypercrosslinked polyHIPE and the subsequent carboHIPE produced after carbonization and showed no Fe present in either sample, as displayed in the survey spectra, suggesting it was successfully removed during washing.Upon carbonization, polyHIPEs with a DVB content of less than 50 vol% were completely destroyed, while polyHIPEs and polyHIPEs gave very low char yields of 1 and 7%, respectively.It is also worth noting that the emulsion-templated macroporosity of both polyHIPE and polyHIPE were destroyed upon heating, demonstrating the polyHIPEs reduced thermal stability.Contrary to standard polyHIPEs, all HCLHIPEs survived carbonization with good char yields in the range of 29–44%.Interestingly, the char yield decreased with increasing DVB content, hence with decreasing surface areas, suggesting that the hypercrosslinked structures resulting from the more swellable polyHIPEs possessed a greater temperature tolerance during carbonization, resulting in higher char yields."It is also of note that, although all materials decreased in volume upon carbonization, HCLHIPEs containing less DVB had greater dimensional stabilities, producing carboHIPEs with a volume of 58% the original polyHIPE, whereas carboHIPEs were only 23% of the original polyHIPE's volume, showing increased shrinkage, in good agreement with the reduced char yields.All carboHIPEs produced from hypercrosslinked precursors shrunkHIPE and polyHIPE in Fig. 
2a and b, respectively), reflecting the decreased average emulsion-templated macropore diameters of the carboHIPEs.However, regardless of char yield or degree of shrinkage, all carboHIPEs retained their emulsion-templated porosity.Raman analysis was performed on carboHIPEs in order to probe the degree of graphitization within the samples.All carboHIPEs showed a characteristic broad D and G mode, indicative of a disordered graphitic structure after carbonization, with an intensity ratio of D to G mode of approximately 0.9 in all cases.Additionally, the subtle emergence of the 2D mode for carboHIPE samples which contained higher initial DVB content suggest that these samples were more crystalline.Successful carbonization was confirmed by elemental analysis, which measured much improved C:H ratios in carboHIPEs of up to 160:1 compared to the maximum of 13:1 in HCLHIPEs.In order to determine what effect the ratio of ST:DVB has on the porous properties of the resulting carboHIPEs, gas sorption analysis was performed.All carboHIPEs showed steep N2 uptake at low relative pressure, indicative of microporosity.However, in contrast to the HCLHIPEs, the MPV of the carboHIPEs increased with increasing DVB content in the precursors.CarboHIPEs containing the least DVB, carboHIPE and carboHIPE, showed the largest reduction in micropore volume from their hypercrosslinked equivalents, indicating significant collapse of the micropores.The carboHIPE had a very similar MPV to its precursor, while both carboHIPE and carboHIPE displayed increased micropore volumes.A comparison of the pore size distributions of all carboHIPEs, which displayed a sharp peak at pore widths between 5 and 8 Å in all materials, illustrates well the increase in MPV with increasing DVB content.With the exception of carboHIPE and carboHIPEs, all surface areas of the carboHIPEs decreased in comparison to their hypercrosslinked precursors.Upon carbonization, the stabilization of the micropores in HCLHIPEs containing more DVB may be a result of the more rigid crosslinks provided by the aromatic group within DVB.It is well reported that DVB can be used as a robust carbon precursor and, therefore, is expected to carbonise well to produce micropores.The inherently crosslinked structure, coupled with further hypercrosslinking via Friedel-Crafts technique, provides a stable framework for the retention of both the micro- and macropores upon carbonization.To try to further investigate the additional stability provided by the DVB, the degradation temperatures of various HCLHIPEs during carbonization was monitored using thermogravimetric analysis.The TGA curves all displayed similar temperature profiles, with the char yields showing the same trend as during carbonization in the furnace.Crucially, the extrapolated onset temperature, the initial decomposition temperature of the HCLHIPEs, increases from 375.9 °C to 383.5 °C and 397.2 °C in the heating profiles of HCLHIPE, HCLHIPE and HCLHIPE, respectively.Therefore, with increasing DVB content, the more rigid crosslinks in the polyHIPEs provide some stability upon heating, delaying To.Coupled with a greater mass loss, or lower char yield, of the HCLHIPE, the higher To leads to a more dramatic degradation during carbonization.Rapid degradation produces a large amount of volatiles, such as CO, CO2 and H2O, in a shorter space of time.During carbonization, the release of a higher concentration of small gas molecules leads to increased physical activation of the HCLHIPEs, possibly leading to higher surface 
areas in carboHIPEs produced from HCLHIPEs containing more DVB.Lastly, the electrical conductivities of each of the carboHIPEs were investigated.All carboHIPEs displayed excellent electrical conductivities within the range of 288–434 S/m.There appears to be no obvious correlation between the conductivities and the structure of the polyHIPE precursors, suggesting that all carboHIPEs displayed a similar conductivity regardless of the ratio of ST:DVB in the initial polymer.These values are comparable with some of the leading electrical conductivities reported for HIPE-derived carbon foams in the literature, including those produced from Kraft black liquor and furifal and phloroglucinol-based carboHIPEs , produced at 1350 and 950 °C, respectively.Owing to their promising and consistent electrical properties, coupled with their tailorable porous properties and good char yields, these carboHIPEs may find applications in the field of energy storage as binderless electrodes in supercapacitors.A simple route to carbon foams, or carboHIPEs, with tailorable porosity from polyHIPE precursors was demonstrated.The char yield, degree of microporosity and the surface area of carbon foams could be tailored by varying the ST:DVB ratio in the continuous phase of the emulsion template from which the polyHIPE precursors were produced.By hypercrosslinking the polyHIPEs using Friedel-Crafts alkylation, microporous structures could be created within the polyHIPEs, leading to hypercrosslinked materials with high surface areas and increased thermal stability.Upon carbonization, non-hypercrosslinked polyHIPEs were destroyed, demonstrating that the dramatically increased thermal stability of HCLHIPEs, permitting the production of carboHIPEs with char yields of up to 44 wt% of the macroporous precursor.Hypercrosslinked samples containing more DVB allowed for a better retention, and even increases, of micropore volume upon carbonization, suggesting that some stabilization was provided by the more inherently crosslinked poly.However, when the concentration of DVB in the HCLHIPE precursor is low, the microporosity is largely destroyed during carbonization.This work presents a novel route to pourable carboHIPE precursors, with the ability to control a number of properties including char yield, micropore volume and BET surface area, all of which can be varied simply by varying the ST:DVB ratio in the HIPE template for the macroporous polyHIPE precursors. | Hierarchically porous carbon foams were produced by carbonization of hypercrosslinked polymerized high internal phase water-in-styrene/divinylbenzene emulsions (HIPEs). The hypercrosslinking of these poly(ST-co-DVB)HIPEs was achieved using a dimethoxymethane external crosslinker to ‘knit’ together aromatic groups within the polymers using Friedel-Crafts alkylation. By varying the amount of divinylbenzene (DVB) in the HIPE templates and subsequent polymers, the BET surface area and micropore volume of the hypercrosslinked analogues can be varied systematically, allowing for the production of carbon foams, or ‘carboHIPEs’, with varied surface areas, micropore volumes and pore-size distributions. The carboHIPEs retain the emulsion-templated macropores of the original polyHIPE, display excellent electrical conductivities and have surface areas of up to 417 m2/g, all the while eliminating the need for inorganic templates. The use of emulsion templates allows for pourable, mouldable precursors to designable carbonaceous materials. |
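A small helper sketch (Python, illustrative only) for the bulk characterisation quantities used in the entry above: char yield from the sample mass before and after carbonisation, percentage porosity from the envelope and skeletal densities (P = (1 − ρe/ρs) × 100%, the standard relation implied in the methods), volume shrinkage on carbonisation, and electrical conductivity from a two-probe resistance measurement (σ = L / (R·A)). Function names and the numerical inputs are hypothetical, not values from the study.

```python
def char_yield_pct(mass_before_g: float, mass_after_g: float) -> float:
    """Char yield: mass retained after carbonisation, in % of the precursor mass."""
    return 100.0 * mass_after_g / mass_before_g

def porosity_pct(envelope_density: float, skeletal_density: float) -> float:
    """Percentage porosity P = (1 - rho_e / rho_s) x 100%."""
    return 100.0 * (1.0 - envelope_density / skeletal_density)

def shrinkage_pct(volume_precursor_cm3: float, volume_carbon_cm3: float) -> float:
    """Volume lost on carbonisation, in % of the precursor volume."""
    return 100.0 * (1.0 - volume_carbon_cm3 / volume_precursor_cm3)

def conductivity_s_per_m(resistance_ohm: float, length_m: float, area_m2: float) -> float:
    """Electrical conductivity from a two-probe resistance measurement."""
    return length_m / (resistance_ohm * area_m2)

# Hypothetical HCLHIPE monolith: 1.20 g / 2.9 cm3 before, 0.45 g / 1.1 cm3 after.
print(char_yield_pct(1.20, 0.45))              # ~37.5 % char yield
print(porosity_pct(0.25, 1.45))                # ~82.8 % porosity (g/cm3 inputs)
print(shrinkage_pct(2.9, 1.1))                 # ~62 % volume shrinkage
# Hypothetical carboHIPE monolith: 1 cm long, 0.5 cm2 cross-section, R = 0.57 ohm.
print(conductivity_s_per_m(0.57, 0.01, 0.5e-4))  # ~351 S/m
```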
538 | Stable integration of the Mrx1-roGFP2 biosensor to monitor dynamic changes of the mycothiol redox potential in Corynebacterium glutamicum | The Gram-positive soil bacterium Corynebacterium glutamicum is the most important industrial platform bacterium that produces millions of tons of L-glutamate and L-lysine every year as well as other value-added products .In addition, C. glutamicum serves as model bacterium for the related pathogens Corynebacterium diphtheriae and Corynebacterium jeikeium .In its natural soil habitat and during industrial production, C. glutamicum is exposed to reactive oxygen species, such as hydrogen peroxide which is generated as consequence of the aerobic lifestyle .The low molecular weight thiol mycothiol functions as glutathione surrogate in detoxification of ROS and other thiol-reactive compounds in all actinomycetes, including C. glutamicum and mycobacteria to maintain the reduced state of the cytoplasm .Thus, MSH-deficient mutants are sensitive to various thiol-reactive compounds, although the secreted histidine-derivative ergothioneine also functions as alternative LMW thiol .MSH is a thiol-cofactor for many redox enzymes and is oxidized to mycothiol disulfide under oxidative stress.The NADPH-dependent mycothiol disulfide reductase catalyzes the reduction of MSSM back to MSH to maintain the highly reducing MSH redox potential .Overexpression of Mtr has been shown to increase the fitness, stress tolerance and MSH/MSSM ratio during exposure to ROS, antibiotics and alkylating agents in C. glutamicum .Under hypochloric acid stress, MSH functions in protein S-mycothiolations as discovered in C. glutamicum, C. diphtheriae and Mycobacterium smegmatis .In C. glutamicum, 25 S-mycothiolated proteins were identified under HOCl stress that include the peroxiredoxins and methionine sulfoxide reductases as antioxidant enzymes that were inhibited by S-mycothiolation .The regeneration of their antioxidant activities required the mycoredoxin-1/MSH/Mtr redox pathway, but could be also coupled to the thioredoxin/ thioredoxin reductase pathway which both operate in de-mycothiolation .Detailed biochemical studies on the redox-regulation of antioxidant and metabolic enzymes showed that both, the Mrx1 and Trx pathways function in de-mycothiolation at different kinetics.Mrx1 was much faster in regeneration of GapDH and Mpx activities during recovery from oxidative stress compared to the Trx pathway .The enzymes for MSH biosynthesis and the Trx/TrxR systems are under control of the alternative extracytoplasmic function sigma factor SigH which is sequestered by its cognate redox-sensitive anti sigma factor RshA in non-stressed cells .RshA is oxidized under disulfide stress leading to structural changes and relief of SigH to initiate transcription of the large SigH disulfide stress regulon .In addition, the LysR-type transcriptional repressor OxyR plays a major role in the peroxide response in C. 
glutamicum which controls genes encoding antioxidant enzymes for H2O2 detoxification and iron homeostasis, such as the catalase, two miniferritins, the Suf machinery and ferrochelatase .Thus, SigH and OxyR can be regarded as main regulatory systems for the defense under disulfide and oxidative stress to maintain the redox balance in actinomycetes.The standard thiol-redox potential of MSH was previously determined with biophysical methods as E0′ of − 230 mV which is close to that of glutathione .However, Mrx1 was also recently fused to redox-sensitive green fluorescent protein to construct a genetically encoded Mrx1-roGFP2 redox biosensor for dynamic measurement of EMSH changes inside mycobacterial cells.EMSH values of ~-300 mV were calculated using the Mrx1-roGFP2 biosensor in mycobacteria that were much lower compared to values obtained with biophysical methods .This Mrx1-roGFP2 biosensor was successfully applied for dynamic EMSH measurements in the pathogen Mycobacterium tuberculosis.Using Mrx1-roGFP2, EMSH changes were studied in drug-resistant Mtb isolates, during intracellular replication and persistence in the acidic phagosomes of macrophages .Mrx1-roGFP2 was also applied as tool in drug research to screen for ROS-generating anti-tuberculosis drugs or to reveal the mode of action of combination therapies based on EMSH changes .The Mtb population exhibited redox heterogeneity of EMSH during infection inside macrophages which was dependent on sub-vacuolar compartments and the cytoplasmic acidification controlled by WhiB3 .Thus, application of the Mrx1-roGFP2 biosensor provided novel insights into redox changes of Mtb.However, Mrx1-roGFP2 has not been applied in the industrial platform bacterium C. glutamicum.In this work, we designed a genetically encoded Mrx1-roGFP2 biosensor that was genomically integrated and expressed in C. glutamicum.The biosensor was successfully applied to measure dynamic EMSH changes during the growth, under oxidative stress and in various mutant backgrounds to study the impact of antioxidant systems and their major regulators under basal and oxidative stress conditions.Our results revealed a highly reducing basal EMSH of ~-296 mV that is maintained throughout the growth of C. glutamicum.H2O2 stress had only little effect on EMSH changes in the wild type due to its H2O2 resistance, which was dependent on the catalase KatA supporting its major role for H2O2 detoxification.Confocal imaging further confirmed equal Mrx1-roGFP2 fluorescence in all cells indicating that the biosensor strain is well suited for industrial application to quantify EMSH changes in C. glutamicum at the single cell level.Bacterial strains, plasmids and primers are listed in Tables S1 and S2.For cloning and genetic manipulation, Escherichia coli was cultivated in Luria Bertani medium at 37 °C.The C. glutamicum ATCC13032 wild type as well as the ΔmshC, Δmtr, ΔoxyR, ΔsigH, ΔkatA, Δmpx, Δtpx and Δmpx tpx mutant strains were used in this study for expression of the Mrx1-roGFP2 biosensor which are described in Table S1.All C. glutamicum strains were cultivated in heart infusion medium at 30 °C overnight under vigorous agitation.The overnight culture was inoculated in CGC minimal medium supplemented with 1% glucose to an optical density at 500 nm of 3.0 and grown until OD500 of 8.0 for stress exposure as described .C. glutamicum mutants were cultivated in the presence of the antibiotics nalidixic acid and kanamycin.The mrx1 gene was amplified from chromosomal DNA of C. 
glutamicum ATCC13032 by PCR using the primer pair Cgmrx1-roGFP2-NdeI-FOR and pQE60-Cgmrx1-roGFP2-SpeI-REV.The PCR product was digested with NdeI and SpeI and cloned into plasmid pET11b-brx-roGFP2 to exchange the brx sequence by mrx1 with generation of plasmid pET11b-mrx1-roGFP2.The correct sequence was confirmed by PCR and DNA sequencing.The E. coli BL21 plysS expression strain containing the plasmid pET11b-mrx1-roGFP2 was grown in 1 l LB medium until OD600 of 0.6 at 37 °C, followed by induction with 1 mM IPTG for 16 h at 25 °C.Recombinant His6-tagged Mrx1-roGFP2 protein was purified using His Trap™ HP Ni-NTA columns and the ÄKTA purifier liquid chromatography system according to the instructions of the manufacturer.The purified protein was dialyzed against 10 mM Tris-HCl, 100 mM NaCl and 30% glycerol and stored at − 80 °C.Purity of the protein was analyzed after sodium dodecyl sulfate-polyacrylamide gel electrophoresis and Coomassie brilliant blue staining.The vector pK18mobsacB was used to create marker-free deletions in C. glutamicum.The gene-SOEing method of Horton was used to construct pK18mobsacB derivatives to perform allelic exchange of the katA and mtr genes in the chromosome of C. glutamicum ATCC13032 using the primers listed in Table S2.The constructs include the katA and mtr genes with flanking regions and internal deletions.The pK18mobsacB derivatives were sub-cloned in E. coli JM109 and transformed into C. glutamicum ATCC13032.The pK18mobsacB::Δtpx plasmid containing the tpx flanking regions was constructed previously and transformed into the C. glutamicum Δmpx mutant.The gene replacement in the chromosome of C. glutamicum ATCC13032 resulted in ΔkatA and Δmtr single deletion mutants and the gene replacement of tpx in the chromosome of C. glutamicum Δmpx resulted in the C. glutamicum Δmpx tpx double deletion mutant.The deletions were confirmed by PCR using the primers in Table S2.For construction of the genomically integrated Mrx1-roGFP2 biosensor, a 237 bp fragment of mrx1 was fused to roGFP2 containing a 30-amino acid linker6 under control of the strong Ptuf promoter of the C. glutamicum tuf gene encoding the translation elongation factor EF-Tu.The Ptuf-Mrx1-roGFP2 fusion was codon-optimized, synthesized with flanking MunI and XhoI restriction sites and sub-cloned into PUC-SP by Bio Basic resulting in PUC-SP::Ptuf-mrx1-roGFP2.For genomic integration of the biosensor into the cg1121-cg1122 intergenic region of C. glutamicum, the vector pK18mobsacB-cg1121-cg1122 was used , kindly provided by Julia Frunzke, Forschungszentrum Jülich.The vector was PCR amplified with primers pk18_MunI and pk18_XhoI to swap the restrictions sites.After digestion of the pk18mobsacB-cg1121-cg1122 PCR product and the PUC-SP::Ptuf-mrx1-roGFP2 plasmid with MunI and XhoI, both digestion products were ligated to obtain pK18mobsacB-cg1121-cg1121-Ptuf-mrx1-roGFP2.The resulting plasmid was sequenced with biosensor_seq_primer_1 and biosensor_seq_primer_2.Transfer of the plasmid into C. glutamicum strains was performed by electroporation and screening for double homologous recombination events using the conditional lethal effect of the sacB gene as described .Correct integration of Ptuf-mrx1-roGFP2 into the cg1121-cg1122 intergenic region was verified by colony PCR using 2 primer pairs.The Mrx1-roGFP2 biosensor was further cloned into the E. 
coli-C.glutamicum shuttle vector pEKEx2 for ectopic expression of Mrx1-roGFP2 under the IPTG-inducible tac promoter.The mrx1-roGFP2 fusion was amplified from plasmid pET11b-mrx1-roGFP2 using primer pair pEKEx2-Cgmrx1-BamHI-For and pEKEx2-roGFP2-KpnI-Rev.The PCR product and plasmid pEKEx2 were digested with BamHI and KpnI, followed by ligation to generate plasmid pEKEx2-mrx1-roGFP2.The resulting plasmid was cloned in E. coli, sequenced and electroporated into C. glutamicum.Induction of the C. glutamicum strain expressing pEKEx2-encoded Mrx1-roGFP2 was performed with 1 mM IPTG.The purified Mrx1-roGFP2 protein was reduced with 10 mM dithiothreitol for 20 min, desalted with Micro-Bio spin columns, and diluted to a final concentration of 1 µM in 100 mM potassium phosphate buffer, pH 7.0.The oxidation degree of the biosensor was determined by calibration to fully reduced and oxidized probes which were generated by treatment of the probes with 10 mM DTT and 5 mM diamide for 5 min, respectively .The thiol disulfides and oxidants were injected into the microplate wells 60 s after the start of measurements.Emission was measured at 510 nm after excitation at 400 and 488 nm using the CLARIOstar microplate reader with the Control software version 5.20 R5.Gain setting was adjusted for each excitation maximum.The data were analyzed using the MARS software version 3.10 and exported to Excel.Each in vitro measurement was performed in triplicate.C. glutamicum wild type and mutant strains expressing stably integrated Mrx1-roGFP2 were grown overnight in HI medium and inoculated into CGC medium with 1% glucose to a starting OD500 of 3.0.For stress experiments, the strains were cultivated for 8 h until they have reached an OD500 of 14–16.Cells were harvested by centrifugation, washed twice with CGC minimal medium, adjusted to an OD500 of 40 in CGC medium and transferred to the microplate reader.Aliquots were treated for 15 min with 10 mM DTT and 20 mM cumene hydroperoxide for fully reduced and oxidized controls, respectively.Injection of the oxidants was performed 5 min after the start of microplate reader measurements.The values of I400sample and I488sample are the observed fluorescence excitation intensities at 400 and 488 nm, respectively.The values of I400red, I488red, I400ox and I488ox represent the fluorescence intensities of fully reduced and oxidized controls, respectively.C. 
glutamicum wild type expressing Mrx1-roGFP2 was grown in HI medium for 48 h, exposed to 80 mM H2O2 for different times and washed in potassium phosphate buffer, pH 7.0.Cells were blocked with 10 mM NEM, and imaged using a LSM 780 confocal laser-scanning microscope with a 63 × /1.4 NA Plan-Apochromat oil objective controlled by the Zen 2012 software.Fluorescence excitation was performed at 405 and 488 nm with laser power adjustment to 15% and 25%, respectively.For both excitation wavelengths, emission was collected between 491 and 580 nm.Fully reduced and oxidized controls were prepared with 10 mM DTT and 10 mM diamide, respectively.Images were analyzed by the Zen 2 software and Fiji/ImageJ .Fluorescent intensities were measured after excitation at 405 and 488 nm and the images false-colored in red and green, respectively.Auto-fluorescence was recorded and subtracted.Quantification of the OxD and EMSH values was performed based on the 405/488 nm excitation ratio of mean fluorescence intensities as described .Previous studies have revealed a specific response of the Mrx1-roGFP2 biosensor to MSSM in vitro, which was based on a fusion of mycobacterial Mrx1 to roGFP2 .Here we aimed to engineer a related Mrx1-roGFP2 biosensor for the MSH-producing industrially important bacterium C. glutamicum.Mrx1 of C. glutamicum exhibits a similar redox-active CxxC motif and shares 46.8% and 42.1% sequence identity with Mrx1 homologs of M. tuberculosis H37Rv and M. smegmatis mc2155, respectively .The principle of the Mrx1-roGFP2 biosensor to measure intrabacterial EMSH changes was shown previously .MSSM reacts with Mrx1 to form S-mycothiolated Mrx1, followed by the transfer of the MSH moiety to roGFP2 which rearranges to the roGFP2 disulfide resulting in ratiometric changes of the 400/488 excitation ratio .Mrx1 of C. glutamicum was fused to roGFP2 and first purified as His-tagged Mrx1-roGFP2 protein to verify the specific Mrx1-roGFP2 biosensor response to MSSM in vitro.In addition, Mrx1-roGFP2 was integrated into the genome of C. glutamicum wild type in the intergenic region between cg1121-cg1122 and placed under control of the strong Ptuf promoter using the pK18mobsacB-int plasmid as constructed previously .First, the Mrx1-roGFP2 biosensor response of the purified biosensor and of the stably integrated Mrx1-roGFP2 fusion were compared under fully reduced and fully oxidized conditions.The Mrx1-roGFP2 biosensor fluorescence excitation spectra were similar under in vitro and in vivo conditions exhibiting the same excitation maxima at 400 and 488 nm for fully reduced and oxidized probes.Thus, the Mrx1-roGFP2 probe is well suited to monitor dynamic EMSH changes during the growth and under oxidative stress in C. glutamicum.In addition, it was verified that purified Mrx1-roGFP2 reacts very fast and most strongly to low levels of 100 µM MSSM, although weaker responses were also observed with bacillithiol disulfide and glutathione disulfide which are, however, not physiologically relevant for C. 
glutamium.Furthermore, we assessed the direct response of Mrx1-roGFP2 and unfused roGFP2 to the oxidants H2O2 and NaOCl to compare the sensitivities of the probes for direct oxidation.This was important since a previous study showed a high sensitivity of fused Grx-roGFP2 and roGFP2-Orp1 to 10-fold molar excess of 2 µM NaOCl .In our in vitro experiments, the Mrx1-roGFP2 and roGFP2 probes did not respond to 100 µM H2O2 as in previous studies.Only 1–5 mM H2O2 lead to a direct oxidation of both probes with a faster response of the Mrx1-roGFP2 fusion.Both probes were rapidly oxidized by 10–40 µM NaOCl in vitro, and again Mrx1-roGFP2 was more sensitive to thiol-oxidation by NaOCl compared to unfused roGFP2.The rapid oxidation of roGFP2 and fused roGFP2 biosensors to low levels of HOCl is in agreement with previous studies and was also observed using the Brx-roGFP2 biosensor in S. aureus .The higher sensitivity of fused roGFP2 biosensors to NaOCl indicates that the redox active Cys residues of Brx or Mrx1 are more susceptible for thiol-oxidation compared to the thiols of roGFP2.In conclusion, our Mrx1-roGFP2 probe is highly specific to low levels of MSSM.The response of Mrx1-roGFP2 to higher levels of 1 mM H2O2 in vitro are not expected to occur inside C. glutamicum cells due to its known H2O2 resistance mediated by the highly efficient catalase.Next, we applied the genomically expressed Mrx1-roGFP2 biosensor to monitor the perturbations of basal level EMSH along the growth curve in various C. glutamicum mutant backgrounds, which had deletions of major antioxidant systems and redox-sensing regulators.The oxidation degree was calculated in C. glutamicum wild type and mutants during the 5–12 h time points representing the log phase and transition to stationary phase in defined CGC medium.The biosensor oxidation of each C. glutamicum sample was normalized between 0 and 1 based on the fully reduced and oxidized controls.It is interesting to note, that C. glutamicum wild type cells maintained a highly reducing and stable EMSH of ~-296 mV with little fluctuations during the log and stationary phase.Thus, this basal level EMSH of C. glutamicum is very similar to that measured in M. smegmatis previously .In agreement with previous studies of bacillithiol- and GSH-deficient mutants, the absence of MSH resulted in constitutive oxidation of the Mrx1-roGFP2 biosensor in the mshC mutant.This indicates an impaired redox state in the mshC mutant and the importance of MSH as major LMW thiol to maintain the redox balance in C. glutamicum.We hypothesize that increased levels of ROS may lead to constitutive biosensor oxidation in the MSH-deficient mutant since the mshC mutant had a H2O2-sensitive phenotype in previous studies .The high MSH/MSSM redox balance is maintained by the NADPH-dependent mycothiol disulfide reductase Mtr which reduces MSSM back to MSH .The importance of Mtr to maintain a reduced EMSH was also supported by our biosensor measurements which revealed an oxidative shift in EMSH to −280.2 mV in the mtr mutant during all growth phases.The alternative ECF sigma factor SigH controls a large disulfide stress regulon mainly involved in the redox homeostasis, including genes for thioredoxins and thioredoxin reductases, mycoredoxin-1 and genes for MSH biosynthesis and recycling .The C. 
glutamicum sigH mutant showed an increased sensitivity to ROS and NaOCl stress .Mrx1-roGFP2 biosensor measurements confirmed a slightly more oxidized EMSH of − 286 mV in the sigH mutant supporting the regulatory role of SigH for the redox balance.However, the oxidative EMSH shift was lower in the sigH mutant compared to the mtr mutant.In conclusion, our Mrx1-roGFP2 biosensor results document the important role of MSH, Mtr and SigH to maintain the redox homeostasis in C. glutamicum during the growth.In addition to MSH, C. glutamicum encodes many antioxidant enzymes that are involved in H2O2 detoxification and confer strong resistance of C. glutamicum to millimolar levels of H2O2.The H2O2 scavenging systems in C. glutamicum are the major vegetative catalase and the peroxiredoxins.The catalase is highly efficient for detoxification at high H2O2 levels while Tpx and Mpx are more involved in reduction of physiological low levels of H2O2 generated during the aerobic growth .In C. glutamicum, expression of katA is induced by H2O2 and controlled by the redox-sensing OxyR repressor which is inactivated under H2O2 stress .Thus, the oxyR mutant exhibits increased H2O2 resistance due to constitutive derepression of katA .Here, we were interested in the contribution of OxyR, and the antioxidant enzymes KatA, Tpx and Mpx to maintain the reduced basal level EMSH in C. glutamicum.In all mutants with deletions of oxyR, katA, tpx and mpx, the basal level of EMSH was still highly reducing and comparable to the wild type during different growth phases.Thus, we can conclude that the major antioxidant enzymes for H2O2 detoxification do not contribute to the reduced basal EMSH level in C. glutamicum during aerobic growth.These results further point to the main roles of these H2O2 scavenging systems under conditions of oxidative stress to recover the reduced state of EMSH which was investigated in the next section.Next, we were interested to determine the kinetics of Mrx1-roGFP2 biosensor oxidation in C. glutamicum under H2O2 and NaOCl stress and the recovery of reduced EMSH.C. glutamicum can survive even 100 mM H2O2 without killing effect which depends on the very efficient catalase KatA .In accordance with the H2O2 resistant phenotype, the Mrx1-roGFP2 biosensor did not respond to 10 mM H2O2 in C. glutamicum wild type cells and was only weakly oxidized by 40 mM H2O2.C. glutamicum cells were able to recover the reduced EMSH within 40–60 min after H2O2 treatment.Importantly, even 100 mM H2O2 did not further enhance the biosensor oxidation degree, indicating highly efficient antioxidant systems.In contrast, C. glutamicum was more sensitive to sub-lethal doses of NaOCl stress and showed a moderate biosensor oxidation by 0.5–1 mM NaOCl, while 1.5 mM NaOCl resulted in the fully oxidation of the probe.Moreover, cells were unable to regenerate the reduced basal level of EMSH within 80 min after NaOCl exposure, which could be only restored with 10 mM DTT.Since H2O2 is the more physiological oxidant in C. 
glutamicum, we studied the biosensor response under 40 mM H2O2 stress in the various mutants deficient for MSH and Mtr, antioxidant enzymes and redox regulators.The sigH mutant showed an increased basal level of EMSH of ~-286 mV as noted earlier, but a similar oxidation increase with 40 mM H2O2 and recovery of the reduced state after 40 min compared to the wild type.The similar kinetics of biosensor oxidation and regeneration in wild type and sigH mutant cells may indicate that MSH is not directly involved in H2O2 detoxification.In contrast, the oxyR mutant showed a lower H2O2 response than the wild type, but required the same time of 40 min for recovery of the reduced state of EMSH.The derepression of katA in the oxyR mutant is most likely responsible for the lower biosensor oxidation under H2O2 stress .This hypothesis was supported by the very fast response of katA mutant cells to 40 mM H2O2 stress, resulting in fully oxidation of the biosensor due to the lack of H2O2 detoxification in the absence of KatA.Exposure of katA mutant cells to 40 mM H2O2 might cause enhanced oxidation of MSH to MSSM leading to full biosensor oxidation with no recovery of the reduced state.In contrast, kinetic biosensor measurements under H2O2 stress revealed only slightly increased oxidation in the tpx mutant while the mpx mutant showed the same oxidation increase like the wild type.However, the H2O2 response of the mpx tpx mutant was similar compared to the wild type, indicating that Tpx and Mpx do not contribute significantly to H2O2 detoxification during exposure to high levels of 40 mM H2O2 stress, while KatA plays the major role.The small oxidation increase in the tpx mutant might indicate additional roles of Tpx for detoxification of low levels of H2O2 as found in previous studies .Altogether, our studies on the kinetics of the Mrx1-roGFP2 biosensor response under H2O2 stress support that KatA plays the most important role in H2O2 detoxification in C. glutamicum.To correlate increased biosensor responses under H2O2 stress to peroxide sensitive phenotypes, we compared the growth of the wild type and mutants after exposure to 80 mM H2O2.Exposure of the wild type to 80 mM H2O2 did not significantly affect the growth rate indicating the high level of H2O2 resistance in C. glutamicum.Of all mutants, only the katA mutant was significantly impaired in growth under non-stress conditions and lysed after exposure to 80 mM H2O2.In contrast, deletions of sigH, oxyR, tpx and mpx did not significantly affect the growth under control and H2O2 stress conditions.However, we observed a slightly decreased growth rate of the mpx tpx mutant in response to 80 mM H2O2 stress supporting the residual contribution of thiol-dependent peroxiredoxins in the peroxide stress response.Overall, the growth curves are in agreement with the biosensor measurements indicating the major role of KatA for detoxification of high levels of H2O2 and the recovery of cells from oxidative stress.To verify the biosensor response under H2O2 stress in C. glutamicum at the single cell level, we quantified the 405/488 nm fluorescence excitation ratio in C. glutamicum cells expressing stably integrated Mrx1-roGFP2 using confocal laser scanning microscopy.For control, we used fully reduced and oxidized C. 
glutamicum cells treated with DTT and diamide, respectively.In the confocal microscope, most cells exhibited similar fluorescence intensities at the 405 and 488 nm excitation maxima, respectively, indicating that the Mrx1-roGFP2 biosensor was equally expressed in 99% of cells.Fully reduced and untreated C. glutamicum control cells exhibited a bright fluorescence intensity at the 488 nm excitation maximum which was false-colored in green, while the 405 nm excitation maximum was low and false-colored in red.In agreement with the microplate reader results, the basal EMSH was highly reducing and calculated as −307 mV for the single cell population.Treatment of cells with 80 mM H2O2 for 20 min resulted in a decreased fluorescence intensity at the 488 nm excitation maximum and a slightly increased signal at the 405 nm excitation maximum, causing an oxidative shift of EMSH.Specifically, the EMSH of control cells was increased to −263 mV after 20 min H2O2 treatment.The recovery phase could be also monitored at the single cell level after 40 and 60 min of H2O2 stress, as revealed by the regeneration of reduced EMSH of −271 mV and −293 mV, respectively.The oxidative EMSH shift after H2O2 treatment and the recovery of reduced EMSH were comparable between the microplate reader measurements and confocal imaging.This confirms the reliability of biosensor measurements at both single cell level and for a greater cell population using the microplate reader.Here, we have successfully designed the first genome-integrated Mrx1-roGFP2 biosensor that was applied in the industrial platform bacterium C. glutamicum which is of high biotechnological importance.During aerobic respiration and under industrial production processes, C. glutamicum is frequently exposed to ROS, such as H2O2.Thus, C. glutamicum is equipped with several antioxidant systems, including MSH and the enzymatic ROS-scavengers KatA, Mpx and Tpx.Moreover, Mpx and Tpx are dependent on the MSH cofactor required for recycling during recovery from oxidative stress .The kinetics of H2O2 detoxification has been studied for catalases and peroxiredoxins in many different bacteria.However, the roles of many H2O2 detoxification enzymes are unknown and many seem to be redundant and not essential .There is also a knowledge gap to which extent the H2O2 detoxification enzymes contribute to the reduced redox balance under aerobic growth conditions and under oxidative stress.Thus, we applied this stably integrated Mrx1-roGFP2 biosensor to measure dynamic EMSH changes to study the impact of antioxidant systems and their major regulators under basal conditions and ROS exposure.The basal EMSH was highly reducing with ~-296 mV during the exponential growth and stationary phase in C. 
glutamicum wild type, but also remained reduced in the katA, mpx and tpx mutants.In contrast, the probe was strongly oxidized in mshC and mtr mutants indicating the major role of MSH for the overall redox homeostasis under aerobic growth conditions.While the enzymatic ROS scavengers KatA, Mpx and Tpx did not contribute to the reduced basal level of EMSH during the growth, the catalase KatA was essential for efficient H2O2 detoxification and the recovery of the reduced EMSH under H2O2 stress.In contrast, the MSH-dependent peroxiredoxins Tpx and Mpx did not play a significant role in the H2O2 defense and recovery from stress, which was evident in the tpx mpx double mutant.These results were supported by growth phenotype analyses, revealing the strongest H2O2-sensitive growth phenotype for the katA mutant, while the growth of the mpx tpx double mutant was only slightly affected under H2O2 stress.These biosensor and phenotype results clearly support the major role of the catalase KatA for H2O2 detoxification.Since expression of katA is controlled by the OxyR repressor, we observed an even lower H2O2 response of the oxyR mutant, due to the constitutive derepression of katA as determined previously .In contrast, the sigH mutant showed an enhanced basal EMSH during aerobic growth, since SigH controls enzymes for MSH biosynthesis and recycling which contribute to reduced EMSH .However, the sigH mutant was not impaired in its Mrx1-roGFP2 response to H2O2, since H2O2 detoxification is the role of KatA.Thus, we have identified unique roles of SigH and Mtr to control the basal EMSH level, while OxyR and KatA play the major role in the recovery of reduced EMSH under oxidative stress.In previous work, the kinetics of H2O2 detoxification by catalases and peroxiredoxins had been measured using the unfused roGFP2 biosensor in the Gram-negative bacterium Salmonella Typhimurium .The deletion of catalases strongly affected the detoxification efficiency of H2O2, while mutations in peroxidases had only a minor effect on the H2O2 detoxifying power.These results are consistent with our data and previous results in E. coli, which showed that catalases are the main H2O2 scavenging enzymes at higher H2O2 concentrations, while peroxidases are more efficient at lower H2O2 doses .The reason for the lower efficiency of H2O2 detoxification by peroxidases might be due to low NADH levels under oxidative stress that are not sufficient for recycling of oxidized peroxidases under high H2O2 levels .Overall, these data are in agreement with our Mrx1-roGFP2 measurements in the katA, tpx and mpx mutants in C. glutamicum.However, C. glutamicum differs from E. coli by its strong level of H2O2 resistance, since C. glutamicum is able to grow with 100 mM H2O2 and the biosensor did not respond to 10 mM H2O2.In contrast, 1–5 mM H2O2 resulted in a maximal roGFP2 biosensor response with different detoxification kinetics in E. coli .Since the high H2O2 resistance and detoxification power were attributed to the catalases, it will be interesting to analyze the differences between activities and structures of the catalases of C. glutamicum and E. coli.Of note, due to its remarkably high catalase activity, KatA of C.
glutamicum is even commercially applied at Merck.However, the structural features of KatA that are responsible for its high catalase activity are unknown.While our biosensor results confirmed the strong H2O2 detoxification power of the catalase KatA , the roles of the peroxiredoxins Mpx and Tpx in H2O2 detoxification are less clear in C. glutamicum.Both Tpx and Mpx were previously identified as S-mycothiolated proteins in the proteome of NaOCl-exposed C. glutamicum cells .S-mycothiolation inhibited Tpx and Mpx activities during H2O2 detoxification in vitro, which could be restored by the Trx and Mrx1 pathways .Moreover, Tpx displayed a gradual response to increasing H2O2 levels and was active as a Trx-dependent peroxiredoxin to detoxify low doses of H2O2, while high levels of H2O2 resulted in overoxidation of Tpx .Overoxidation of Tpx caused oligomerization to activate the chaperone function of Tpx.Since mpx and katA are both induced under H2O2 stress, they were suggested to compensate for the inactivation of Tpx for detoxification of high doses of H2O2.Previous analyses showed that the katA and mpx mutants are more sensitive to 100–150 mM H2O2 .In our analyses, the mpx mutant was not more sensitive to 80 mM H2O2 and displayed the same H2O2 response as the wild type, while the katA mutant showed a strong H2O2 sensitivity and responded strongly to H2O2 in the biosensor measurements.Thus, our biosensor and phenotype results clearly support the major role of KatA in the detoxification of high doses of H2O2 in vivo.Finally, confocal imaging further confirmed that the genomically expressed Mrx1-roGFP2 biosensor shows equal fluorescence in the majority of cells, indicating that the biosensor strain is suited for industrial application to quantify EMSH changes in C.
glutamicum at the single cell level or under production processes.Previous Mrx1-roGFP2 biosensor applications involved plasmid-based systems which can result in different fluorescence intensities within the cellular population due to different copy numbers.Moreover, plasmids can be lost under long term experiments when the selection pressure is decreased due to degradation or inactivation of the antibiotics.We also compared the fluorescence intensities of the plasmid-based expression of Mrx1-roGFP2 using the IPTG-inducible pEKEx2 plasmid with the stably integrated Mrx1-roGFP2 strain in this work.Using confocal imaging, the plasmid-based Mrx1-roGFP2 biosensor strain showed only roGFP2 fluorescence in < 20% of cells, while the genomically expressed biosensor was equally expressed and fluorescent in 99% of cells.The integration of the Mrx1-roGFP2 biosensor was performed into the cg1121–1122 intergenic region and the biosensor was expressed from the strong Ptuf promoter using the pK18mobsacB construct designed previously for an Lrp-biosensor to measure L-valine production .Previous live cell imaging using microfluidic chips revealed that only 1% of cells with the Lrp-biosensor were non-fluorescent due to cell lysis or dormancy .Thus, expression of roGFP2 fusions from strong constitutive promoters should circumvent the problem of low roGFP2 fluorescence intensity after genomic integration.The advantage and utility of a stably integrated Grx1-roGFP2 biosensor has been also recently demonstrated in the malaria parasite Plasmodium falciparum which can circumvent low transfection frequency of plasmid-based roGFP2 fusions .Moreover, quantifications using the microplate reader are more reliable, less time-consuming and reproducible with strains expressing genomic biosensors compared to measurements using confocal microscopy .Thus, stably integrated redox biosensors should be the method of the choice for future applications of roGFP2 fusions to monitor redox changes in a greater cellular population.In conclusion, in this study we designed a novel Mrx1-roGFP2 biosensor to monitor dynamic EMSH changes in C. glutamicum during the growth, under oxidative stress and in mutants with defects in redox-signaling and H2O2 detoxification.This probe revealed the impact of Mtr and SigH to maintain highly reducing EMSH throughout the growth and the main role of KatA and OxyR for efficient H2O2 detoxification and the regeneration of the redox balance.This probe is now available for application in engineered production strains to monitor the impact of industrial production of amino acids on the cellular redox state.In addition, the effect of genome-wide mutations on EMSH changes can be followed in C. glutamicum in real-time during the growth, under oxidative stress and at the single cell level. | Mycothiol (MSH) functions as major low molecular weight (LMW) thiol in the industrially important Corynebacterium glutamicum. In this study, we genomically integrated an Mrx1-roGFP2 biosensor in C. glutamicum to measure dynamic changes of the MSH redox potential (E MSH ) during the growth and under oxidative stress. C. glutamicum maintains a highly reducing intrabacterial E MSH throughout the growth curve with basal E MSH levels of ~− 296 mV. Consistent with its H 2 O 2 resistant phenotype, C. glutamicum responds only weakly to 40 mM H 2 O 2 , but is rapidly oxidized by low doses of NaOCl. 
We further monitored basal E MSH changes and the H 2 O 2 response in various mutants which are compromised in redox-signaling of ROS (OxyR, SigH) and in the antioxidant defense (MSH, Mtr, KatA, Mpx, Tpx). While the probe was constitutively oxidized in the mshC and mtr mutants, a smaller oxidative shift in basal E MSH was observed in the sigH mutant. The catalase KatA was confirmed as major H 2 O 2 detoxification enzyme required for fast biosensor re-equilibration upon return to non-stress conditions. In contrast, the peroxiredoxins Mpx and Tpx had only little impact on E MSH and H 2 O 2 detoxification. Further live imaging experiments using confocal laser scanning microscopy revealed the stable biosensor expression and fluorescence at the single cell level. In conclusion, the stably expressed Mrx1-roGFP2 biosensor was successfully applied to monitor dynamic E MSH changes in C. glutamicum during the growth, under oxidative stress and in different mutants revealing the impact of Mtr and SigH for the basal level E MSH and the role of OxyR and KatA for efficient H 2 O 2 detoxification under oxidative stress. |
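Record 538 above defines the fluorescence excitation intensities of the sample (I400sample, I488sample) and of the fully reduced and oxidized controls (I400red, I488red, I400ox, I488ox), but the cleaned text does not show the equation that combines them, nor the constants used to convert the oxidation degree (OxD) into EMSH. The Python sketch below is a minimal, hedged illustration of the standard ratiometric roGFP2 workflow that these definitions point to: OxD from the six intensities, then the Nernst equation for the two-electron roGFP2 couple. The roGFP2 midpoint potential of −280 mV and the temperature of 303 K are my assumptions, not values stated in the record, and the intensities are invented placeholders.

```python
# Hedged sketch of the ratiometric biosensor calculation referenced in record 538.
# The OxD expression is the standard roGFP2 formula; E0_ROGFP2 and the temperature
# are assumed constants, not taken from the record.
import math

F = 96485.0          # Faraday constant, C/mol
R_GAS = 8.314        # gas constant, J/(mol K)
E0_ROGFP2 = -0.280   # assumed roGFP2 midpoint potential at pH 7.0, in volts

def oxd(i400_s, i488_s, i400_red, i488_red, i400_ox, i488_ox):
    """Degree of biosensor oxidation (0 = fully reduced, 1 = fully oxidized)."""
    num = i400_s * i488_red - i400_red * i488_s
    den = (i400_s * i488_red - i400_s * i488_ox
           + i400_ox * i488_s - i400_red * i488_s)
    return num / den

def e_msh(oxd_value, temp_k=303.0):
    """Redox potential (V) via the Nernst equation for the 2-electron roGFP2 couple.
    Because Mrx1 equilibrates roGFP2 with the MSH/MSSM pool, this is read as EMSH."""
    return E0_ROGFP2 - (R_GAS * temp_k / (2 * F)) * math.log((1 - oxd_value) / oxd_value)

if __name__ == "__main__":
    # Hypothetical microplate-reader intensities (arbitrary units): sample plus
    # fully reduced (DTT-treated) and fully oxidized (CHP/diamide-treated) controls.
    d = oxd(i400_s=410, i488_s=1480,
            i400_red=300, i488_red=1600,
            i400_ox=900, i488_ox=700)
    print(f"OxD  = {d:.2f}")
    print(f"EMSH = {e_msh(d) * 1000:.0f} mV")  # ~ -300 mV for a mostly reduced probe
```

A quick sanity check: an OxD of about 0.2 yields roughly −298 mV with these constants, which is consistent with the basal EMSH of ~−296 mV reported in the record for C. glutamicum wild type.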
539 | A time-saving method for sealing Purdue Improved Crop Storage (PICS) bags | The Purdue Improved Crop Storage program grew out of an earlier project funded by the USAID Bean/Cowpea Collaborative Research Support Program in 1987 to address post-harvest losses of cowpea grain on smallholder farms in West Africa.In 2007, the PICS triple-bagging technology was promoted in ten countries in West and Central Africa.The PICS bag consists of two, high-density polyethylene liners fitted inside a third woven polypropylene bag.When the bag is filled with grain and sealed, metabolic activities of living organisms inside the bag deplete the available oxygen, and the oxygen reaches low levels within a few days.The low oxygen levels suppress the development, reproduction, and the survival of insects and pathogens.The PICS bags have been evaluated and shown to be effective for storage of a wide range of crops including rice, wheat, maize, sorghum, groundnut, sunflower seeds, pigeonpea, beans, and mungbean.The PICS technology was disseminated to smallholder farmers in West and Central Africa since 2007; and by 2012, nearly 50% of the cowpea-stored on-farm in that region was stored using PICS bags or other hermetic containers.Presently, the PICS program is active in more than 25 countries in Africa and has expanded into several countries in Asia including Nepal, India, and Afghanistan.PICS technology was developed to address postharvest grain losses on smallholder farms, but overtime it has attracted the interest of large-scale users including farmers’ groups, international development relief programs, government food security agencies, and grain traders.PICS bags used by small-scale farmers and filled with grain have conventionally been sealed-using the twist-tie method.This involves twisting the lip of each layer individually, folding the lip over, and tying with a cord.While simple, the twist-tie method requires substantial effort and is time-consuming.If not done right, it may damage the inner plastic liners.The time and effort required for the twist-tie method are one of the constraints to adoption of PICS bags among potential larger-scale users, some of which may use thousands of bags.Hence, it would be useful to find a simpler and faster alternative to the conventional twist-tie closure.In the present study, we developed and evaluated alternative methods of closing PICS bag and evaluated them by estimating the average time taken to close the bags, and assessing the effect of each tying method on oxygen depletion rates and grain quality.We developed three new methods of closing bags, each involved either folding without twisting or rolling the plastic liners.The closing methods were: 1) Inner liner Rolled - the inner plastic liner was rolled onto itself and the second liner folded and tied; 2) Folded together- both liners were folded together and tied; 3) Folded Separately- both liners were folded and tied separately.The three methods described above were compared with the conventional method of bags closure, 4) Twist-tied method.In this conventional procedure currently recommended, both the inner and second plastic liners are twisted and tied separately.The twist-tie method may stress the plastic liners when the bag is used multiple times.In all of the above alternatives, the outer woven bags were twist-tied to provide firm support to the bagging system.Wear and tear on the woven bags is of less concern.Experiment 1: To determine the time taken to close bags utilizing different methods, we 
prepared 50 kg capacity PICS bags and filled them with 35 or 50 kg maize grain."The maize variety used in the experiment was yellow maize grain purchased from the Wax Seed Co.The 35 or 50 kg filled bags represent real field bag usage where farmers partially or fully fill PICS bags.Two sets of eight people were selected to seal the 35 or 50 kg bags using the four methods.The two sets of people were selected in order to increase the number of scores as the skills and abilities might vary among people.The 35 and 50 kg bags were closed during separate weeks for better handling of the experiment.The order of sealing the bags using the four methods was randomized using a random sequence generator.The 35 kg bags were sealed by the first group of eight people using four methods every day over six days, while the 50 kg bags were closed by the second group of eight people using four methods every day over four days.The time taken by each person to tie the bags was recorded for each sealing method.Experiment 2: To assess the effect of the four sealing methods on the performance of the PICS bags, we monitored internal oxygen levels for 90 days in the bags containing maize grain artificially infested with the maize weevil, Sitophilus zeamais Motschulsky, one of the most important cosmopolitan pest of stored maize.Preparation of infestation grain: Infested grain was prepared by rearing a population of S. zeamais in eight woven polypropylene bags filled with approximately 25 kg maize grain.Four bags were prepared by placing 15 mixed-sex S. zeamais adults in each bag to develop low-infested grain for a period of approximately three months.The remaining woven bags were infested with approximately 300 adult S. zeamais per bag to develop high-infested grain.Use of the infested grain ensured that all developmental stages of S. 
zeamais were present in the grain.On the first day of the experiment, six samples of 335 g each were taken from each of the four low-infested bags.Similar samples were also drawn from the high-infested bags.The number of dead and live adults were counted as a measure of the baseline infestation for each group.Experiment setup: For the low-infestation study, twelve, 50 kg capacity PICS bags were filled with about 45 kg of clean maize grain that had been kept in a freezer for at least 15 days to kill any field-related infestation and contamination.Then, approximately 5 kg of low-infested maize grain was mixed thoroughly in each bag and closed using either TT, IR, FT or FS method; with each treatment replicated three times.Similarly, for the high-infestation study, each bag received 5 kg high-infested maize grain and then closed using one of the four sealing methods.The low and high-infestation studies were initiated in separate weeks for data collection convenience.Uninfested controls were filled with 50 kg of clean maize grain and closed using the four methods."The bags were stored in Purdue University's insect quarantine room for three months.The temperature and percent relative humidity of the room during the experimental period were recorded every twelve hours using USB data loggers.Monitoring of oxygen levels inside the bags: The oxygen levels inside the PICS bags were monitored using the Oxysense 5250i® oxygen reader device.The Oxysense system consists of two components: fluorescent yellow Oxydots, which are placed inside the hermetic storage system, and an ultraviolet light pen which is directed onto the Oxydots from outside the container to measure the oxygen levels inside the bag.Prior to filling bags with maize grain, we attached the Oxydots to the bottom of Petri dishes and glued the Petri dishes to the inner liner of the bags.A small area of the outer woven bag was cut away so that the Oxydots were visible through the inner liners for reading.We placed two Oxydots in each bag, one at the front side at about one-third the height of the bag; another was placed at about the two third level of the bag on the opposite side of the bag.The oxygen content in all PICS bags was measured daily during the first week, twice a week over the next five weeks, and once a week thereafter.The mean oxygen level taken from the two different Oxydots was recorded as the internal oxygen content of the bags.The internal temperature and r.h. of the infested and control bags were recorded every twelve hours for 90 d by placing USB data logger inside each bag.iii) Germination: the germination test was conducted for the maize grain stored in PICS bags for 90 d. 
Two samples, 50 each, of undamaged grains were taken from each bag.Each set of samples were immersed in a 5% bleach solution for two minutes and washed with clean water.Then each set of seeds was wrapped in wet paper towels and placed inside small plastic containers.The plastic containers were stored in a dark location for one week in a room set at 26 ± 2 °C, 40% r.h., after which the seed samples were scored for germination.The grain was recorded as germinated if at least a part of the radical was observed breaking through the shell.Statistical analyses were conducted with the General Linear Models Procedure of the Statistical Analysis System.The data for the time taken to seal the bags were subjected to two-way Analysis of Variance to determine the significance of grain filling size and sealing methods.The average internal oxygen levels among the bags were compared between the degree of bag fill, sealing methods and treatment using three-way ANOVA.The data for the number of live and dead adults before and during the experiments were subjected to one-way ANOVA to compare the effects of sealing methods on population development.The relative grain damage data were subjected to two-way ANOVA to measure the main effects of infestation levels and bag sealing methods.The grain germination count data were converted to percentage values, which were transformed to angular values before subjecting the data to one-way ANOVA to compare germination rate among sealing methods within low or high infestation study."The means between sealing methods were separated using Tukey's HSD procedure.Differences among means were considered significant at α = 0.05."The temperature and r.h. recorded through data loggers kept within PICS bags were compared against ambient temperature and r.h. 
using Pearson’s correlation.The two-way ANOVA showed that the average time to close the bag was significantly affected by the quantity of grain in the bags (partially or fully filled) and by the bag sealing method.There was no interaction between the quantity of grain in the bag and sealing methods.Subsequent one-way ANOVA for each sealing method between 35 kg and 50 kg bags showed that the sealing time was significantly different only for FT and FS.Additionally, the bag closing time was significantly different among the different sealing methods for both 35 kg and 50 kg bags.For the 35 kg and 50 kg filled bags, the average bag sealing times were: FT: 51.55 and 46.78 sec, respectively; IR: 61.95 and 57.87 sec, respectively; TT: 76.91 and 72.5 sec, respectively; and FS: 83.31 and 73.25 sec, respectively.When the data for 35 and 50 kg bags were combined, the time for closing the bags was significantly different among sealing methods.The most time-efficient methods in descending order were FT, IR, TT, and FS.The average oxygen levels inside PICS bags were significantly different between infestation levels, and between infested grain and controls, but not among sealing methods.There were no interactions between infestation level and sealing method.The data showed that the average oxygen levels reached <2% within 5 d of sealing for high-infested bags, while they reached <10% only after 10 d of sealing for the low-infested bags, but never reached 2%.Oxygen levels for controls of both high and low-infestation levels did not differ from the initial oxygen levels and were not significantly different after 90 d.The oxygen levels among the control bags remained between 19.61 ± 0.13 and 20.72 ± 0.13% during the entire period of study.For the low-infested treatment, the average internal oxygen level inside the PICS bags was significantly different among sealing methods.The average oxygen level for TT was not significantly different from FT and IR, but significantly lower compared to FS.For the high-infested bags, oxygen levels showed that there was no significant difference among the sealing methods.The average oxygen levels for each sealing method in high-infested bags dropped to 1.41 ± 0.11% at some point during the course of the study.The number of live and dead adults of S. zeamais was determined for each bag after 90 d of storage.The one-way ANOVA showed that all of the sealing methods significantly suppressed insect population development in both low and high infested bags.For the estimated percent relative damage, no significant difference was observed between the infestation levels and the sealing methods.Subsequent one-way ANOVA within infestation levels showed no difference among sealing methods at low or high infestation.The germination of maize grain was not significantly different among treatments in low and high-infested maize.In addition, maize germination was not affected by the sealing method within both controls and treatments for low and high-infested maize.The germination rates ranged between 70 and 95% among the sealing methods for both low and high-infested bags, thereby producing high standard errors.The internal temperatures of the bags were strongly and positively correlated with room temperature for all sealing methods.The Pearson’s correlation values, P, for TT, IR, FT, and FS were 0.991, 0.990, 0.991, and 0.992, respectively.However, we found a very weak correlation between r.h. for the room and the sealed bags.The Pearson’s correlation values, P, between r.h.
for room and TT, IR, FT, and FS methods were −0.099, −0.021, −0.051, and −0.055, respectively.Our results showed that the FT and the IR methods reduced the bag sealing time by 34% and 20%, respectively.Both FS and TT methods took a bit longer time because they required tyings the two plastic liners.No significant difference was observed between FS and TT regarding the average time to close the bags.Sealing a 50 kg capacity bag filled with maize took less time compared to bags filled with only 35 kg maize for all tying methods.However, the sealing was only significantly different between the 50 kg and 35 kg bags with the FT and FS methods.There might be several reasons for the reduced time to close 50 kg maize bags compared to 35 kg bags.Since the 35 kg capacity was not filled to the top, there was a larger lip of the plastic liners that needed to be folded or twisted to close the bags and this requires more time.Additionally, the large lips required extra time to force out all the trapped air from the bags before sealing the liners.Typically, large-scale and commercial farmers in developing nations fill the PICS bags to their capacity and do not repeatedly open and close them to remove grain, unlike small-scale farmers.Therefore, both the FT and IR methods could greatly benefit the large-scale farmers and traders by reducing the time needed to seal the bags.All of the alternative methods tested for sealing the bags maintained the low oxygen levels similar to the conventional twist-tie method over the extended storage period.We found that the average internal oxygen levels reached <2% within 5 d of sealing for only high-infested bags.This is due to the high population of S. zeamais in highly-infested bags that accounted for the much faster consumption of oxygen.This extended hypoxic condition inside the bags not only killed the existing immature and adults of S. zeamais but also further suppressed population increase.However, we noted that after a few days of reaching the lowest oxygen levels, the oxygen readings began to rise slowly.Because triple-layer plastic liners are not perfectly impermeable to oxygen, we speculate that the atmospheric oxygen began to leak into the system after insects were dead and hence slowly raising the oxygen level inside the bag.Previous studies have documented similar trends.Nevertheless, at this point nearly all the insects inside the bag are dead, and insect population growth has been arrested, so these small and slow increases in oxygen do not lead to increase in the numbers of insects.No differences in relative damage were observed in grain stored in PICS bags tied with each of the four methods.We found no or at most one S. zeamais adult per kg sample at 90 d after storage.All four methods were effective at suppressing insect development.This may be due to the cessation of the feeding activities by S. zeamais when the oxygen levels have begun to drop after sealing of the bags.Our finding is consistent with previous studies that show PICS bags can severely restrict the flow of oxygen into the bags and reduce the insect population growth and survival.Additionally, we observed no significant difference in relative damage to maize grain between the low and highly-infested maize bags.This may be due to a quick drop in oxygen level that reached <2% levels within 5 d for highly-infested bags; hence S. 
zeamais stopped feeding much earlier in high-infested bags compared to low-infested bags.The estimate of the relative damage is based on the dry weight of the grain.This measure not only takes into consideration how many grains were damaged, but it also considers the severity of the damage.The assumed short duration of feeding in highly-infested bags might have resulted in the minimal feeding damage similar to that seen in the low-infested bags.For all closing methods, the germination rates of maize grain stored in PICS bags was not different compared to the baseline value.This is in agreement with the previous studies which stated that PICS bags do not compromise the germination rate of the stored seeds.The internal temperature in PICS bags had a strong positive correlation with external temperature.Additionally, the r.h. inside the bags remains stable for all closing methods throughout the study period.Williams et al. also observed stable r.h. in the PICS bags storing maize grain.The buffering effect of PICS bags against external factors, especially r.h. is beneficial, particularly in the regions where the external humidity fluctuates greatly during the year.Such changes may affect grain quality and impact seed viability.Overall, our study provides evidence that folding and tying both liners together is effective at reducing the time to seal a PICS bag and can serve as an alternative method to the conventional twist-tie technique.The time-saving fold-tie method may attract large-scale users of PICS bags including commercial grain traders, development and government food security agencies, and thus expand the use of PICS technology into new markets.Our study further confirms that PICS bags control of S. zeamais while maintaining seed germination. | Purdue Improved Crop Storage (PICS) bags were designed to reduce grain storage losses on smallholder farms. The bag consists of three layers: two high-density polyethylene liners fitted inside a woven polypropylene bag. Recently, farmer groups, development relief programs, and government food security agencies have shown interest in PICS bags for large-scale use. PICS bags are conventionally closed by a twist-tie (TT) method, which involves twisting, folding, and tying the lip of each layer individually with a cord. This is not only time and labor intensive, but also may affect the integrity of the liners. We evaluated three new bag closure methods: i) inner liner rolled onto itself and middle liner fold-tied (IR), ii) both liners folded together and tied (FT), and iii) both liners folded and tied separately (FS), along with the conventional twist tie (TT) method. The time to close partially or fully filled 50 kg-capacity PICS bags filled with maize grain was assessed. Results showed that FT was the most time-saving method, reducing bag sealing time by >34% versus the usual TT method. The average internal oxygen levels reached <2% within a week in bags containing grain highly infested with Sitophilus zeamais, while it remained >5% levels for less-infested bags. In both cases, insect population growth was suppressed. Oxygen depletion rates among tying methods remained the same regardless of the closure method used. When large numbers of bags need to be closed, the time-saving FT method is a good alternative PICS sealing method over the conventional twist-tie approach. |
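Record 539 above describes its statistics only in prose: germination counts are converted to percentages, angular (arcsine square-root) transformed, compared by one-way ANOVA, and means separated by Tukey's HSD at alpha = 0.05, with Pearson correlations used for the temperature and r.h. logger data. The Python sketch below illustrates that pipeline on invented germination counts for the four sealing methods; the data, group sizes and the use of scipy/statsmodels (the study itself used SAS GLM) are my assumptions, not part of the record.

```python
# Hedged sketch of the statistical workflow described in record 539.
# Germination counts are invented placeholders (out of 50 seeds per replicate);
# scipy/statsmodels stands in for the SAS procedures named in the record.
import numpy as np
from scipy.stats import f_oneway, pearsonr
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical germinated-seed counts per 50-seed replicate for each sealing method.
counts = {
    "TT": [44, 41, 46],
    "IR": [43, 45, 40],
    "FT": [42, 44, 43],
    "FS": [39, 45, 42],
}

def angular(count, n=50):
    """Arcsine square-root (angular) transform of a germination proportion, in degrees."""
    return np.degrees(np.arcsin(np.sqrt(count / n)))

transformed = {k: [angular(c) for c in v] for k, v in counts.items()}

# One-way ANOVA on the transformed values across the four sealing methods.
f_stat, p_val = f_oneway(*transformed.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")

# Tukey's HSD separation of means at alpha = 0.05.
values = np.concatenate(list(transformed.values()))
groups = np.repeat(list(transformed.keys()), [len(v) for v in transformed.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

# Pearson correlation between hypothetical bag-internal and room temperatures (degC).
room = [24.1, 25.0, 26.2, 25.5, 24.8, 26.0]
bag = [24.3, 25.1, 26.0, 25.6, 25.0, 26.1]
r, p = pearsonr(room, bag)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")
```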
540 | Influence of weld thermal cycle and post weld heat treatment on the microstructure of MarBN steel | MarBN steel, based around the general composition of 9Cr-3W-3Co-VNbBN, is a recently developed material and is a promising candidate for the replacement of the more conventional 9–12% Cr steels for the applications of hot section components including tubes, pipes and headers within thermal powerplant.Compared to the more conventional materials, including the Grade 91 and Grade 92 steels, MarBN steel has demonstrated a superior creep strength and improved oxidation resistance .This is achieved by an increased content of solid solution elements in combination with a balanced content of minor elements to provide an additional precipitation strengthening effect .The addition of B at an increased level in combination with a balanced content of N further enhances the performance of MarBN steel upon long-term creep exposure.This is due to an effectively stabilised precipitate structure of M23C6 carbides, as a result of the addition of B, and a well-dispersed distribution of nano-scale MX carbonitride particles formed upon the addition of N at an optimised level .The manufacturing of components made from MarBN steel commonly involves welding processes to achieve appropriate joints within the system.Although components made from MarBN steels have been successively fabricated using a variety of welding processes , weld joints commonly introduce relatively vulnerable areas within structures.Similar to other 9–12% Cr steels, the welds fabricated with MarBN steel as the parent metal demonstrate a deteriorated creep resistance as compared to the bulk material in a high temperature and low stress regime .Within other 9–12%Cr steels, creep failure in these welds typically occurs in the region close to the boundary between the heat affected zone and the parent metal , which is termed as ‘Type IV’ failure .Extensive research activities have therefore been conducted on 9–12% Cr steel welds to understand the link between Type IV failure and local microstructural differences within the HAZ.The presence of Type IV failure was initially linked to a particularly soft region formed during weld thermal cycles with an inter-critical peak temperature .However, experimental observations from other existing studies are not in full agreement with this, since the failure locations of the welds showing Type IV failure behaviour are not completely aligned with the softest region in the HAZ .The HAZ region with a refined microstructure has been linked to the rupture location through Type IV failure in other existing studies .However, there have been experimental observations showing that the failure does not always occur in the region with the most refined microstructure .Type IV failure has also been linked with the region showing a microstructure that does not have an optimised creep resistance .However, due to a lack of systematic description of the microstructural distribution in the HAZ of complicated multi-pass welds, the critical regions corresponding to the location of Type IV failure has not yet been clearly indicated in the HAZ of the 9–12% Cr steel welds, including those in MarBN-type steels.The microstructure in the HAZ is controlled by the local thermal gradient imparted by the welding process.Simplified models have been established in previous work to describe the microstructure in the HAZ of low alloy ferritic/bainitic steels .Typically, the HAZ has been divided into several regions relating to the local 
peak temperature reached during the welding process. The microstructure in the HAZ is conventionally divided into four regions: Coarse Grain, Fine Grain, Inter-Critical and Over-Tempered regions. However, the existing definition of the HAZ does not seem satisfactory when defining the critical regions that are susceptible to creep damage in the HAZ of welds within 9–12 wt% Cr steels. This is because the existing definition of the HAZ currently lacks a systematic description of the key microstructural factors that are linked with the formation of creep damage. For instance, grain size can play a major role in controlling creep resistance, with a larger grain size increasing the creep resistance. However, the distribution and quantity of M23C6 carbides and fine MX-type carbonitrides in the martensitic matrix may also significantly affect the stability of a variety of boundaries under creep exposure. In recent work on a single-pass weld fabricated within a parent metal of Grade 92 steel, the microstructure in the HAZ was systematically studied to link the microstructures produced to the thermal history in the HAZ. The zones within the microstructure of the HAZ were therefore classified, based on the gradient of peak temperature, as Completely Transformed, Partially Transformed and Over-Tempered regions. The PT-HAZ regions that are exposed to a weld thermal cycle with an inter-critical peak temperature were further indicated as the region most susceptible to creep damage. In other recent work that compared the microstructure in the HAZ between welds fabricated on MarBN and Grade 92 parent metals, a similar microstructural gradient was observed in both welds, except that the region showing a fine-grained structure was absent in the MarBN weld, in combination with a more refined precipitate structure. However, as a post-mortem analysis focused on the final resultant microstructure after Post Weld Heat Treatment (PWHT), this work did not explicitly establish the link between microstructure and heat input during welding. In addition, there is a lack of systematic assessment of the formation and distribution of secondary precipitate particles both during welding and PWHT, which is of significant importance for the prediction of the creep performance of weld joints. In the present study, the properties and microstructures of simulated HAZ structures within MarBN steels produced using dilatometer-based simulations have been systematically examined to provide an explicit description of the microstructural gradient within the as-welded microstructure as a function of the local peak temperature experienced. The influence of PWHT on the physically simulated as-welded microstructure has been further studied to thoroughly describe the resultant HAZ microstructure after PWHT. This further contributes to a more accurate identification of the critical HAZ regions and will assist in determining which of these may be particularly susceptible to creep damage. The investigation was conducted using a MarBN steel, known as IBN1, with the composition shown in Table 1. The material was sectioned from an ingot in the as-cast condition and subsequently underwent a normalisation process at 1473 K for 3 h and a tempering process at 1053 K for 3 h, followed by air cooling. Cylindrical specimens were then machined with dimensions of 5 mm in diameter and 10 mm in length. The thermal cycles applied to simulate the HAZ temperatures experienced during this study were based on the experimentally measured heat 
cycles and established simulations used in existing studies .In this study, a heating rate of 100 K/s was applied in combination with a dwell time of 2 s at the required peak temperature.Although it is accepted that heating rates may exceed this during real welding operations, the heating rate applied here is the maximum achievable in the equipment used.The cooling phase comprised three different stages with the rate ranging from 60 K/s to 8 K/s as shown in Fig. 1.The transformation temperatures, Ac1 and Ac3 were measured from the dilation behaviour of the specimen as a function of temperature.These were measured within the dilatometer by determining the temperatures at which the dimensional change of the specimen started to deviate from a linear relationship with the variation of temperature as described in .These transformation temperatures were determined using the weld simulation heating rate of 100 K/s to ensure that transformation temperatures were as expected for the high heating rates used in the applied weld simulation.Based on measurements from six individual specimens, the average Ac1 and Ac3 phase transformation temperatures were determined to be: 1211 ± 15 K and 1342 ± 15 K, respectively.Based on these transformation temperatures, four peak temperatures of primary interest were defined as listed in Table 2.In some parts of this study, an additional Tp of 1473 K was applied in order to explore the limits of microstructural features or to provide an intermediate step between 1373 K and 1573 K.After experiencing these simulated heat cycles, half of the specimens were further heat treated at 1033 K for a duration of 2 h to simulate a Post Weld Heat Treatment procedure.Hardness testing was performed using a Struers® Durascan® 70 hardness testing system equipped with a Vickers indenter.The testing method used in the current research is consistent with the methodology of hardness testing as detailed in previous research conducted on a similar 9% Cr steel .As a consequence, hardness indents were produced at an applied weight of 0.2 kg and a dwell of 10 s.A total of 100 indents were produced on each specimen with an inter-spacing distance of 0.1 mm to assess hardness with sufficient statistical significance.Specimens for microstructural examination were prepared using conventional metallographic preparation methods finished by a chemo-mechanical polishing procedure using a 0.02 μm colloidal silica suspension.An FEI Nova Nanolab 600 dual beam Focused Ion Beam/Field Emission Gun Scanning Electron Microscope-SEM was used for Electron Backscatter Diffraction analysis and ion beam induced Secondary Electron imaging.EBSD mapping was conducted at an accelerating voltage of 20 kV with a step size of 0.1 μm.These maps were used to study the grain microstructures with the location of high angle PAGBs highlighted by filtering the collected data to show just the boundaries with 15–50° misorientation.Ion beam induced SE micrographs were collected to reveal the location and sizes of secondary phase particles with the ion beam operated at an accelerating voltage of 30 kV and a nominal beam current of 50 pA.In-situ XeF2 gas etching was used to enhance the contrast differential between particles and the matrix.Precipitate particles were quantified using a grey scale segmentation methodology using ImageJ.Backscattered Electron imaging was also conducted to characterise Laves phase particles after PWHT using the dual-beam FIB/FEG SEM.Due to the abundance of heavy elements, Laves phase particles were 
revealed as bright particles with a distinctively higher brightness than the matrix. Chemical and crystallographic analyses of secondary precipitate particles were carried out using transmission electron microscopy (TEM) on membrane specimens prepared using a carbon extraction replication technique. Precipitate particles were chemically identified using Energy Dispersive X-ray (EDX) spectroscopy in an FEI Tecnai F20 TEM equipped with an Oxford Instruments X-Max 80N TLE EDX system. Fig. 2 shows EBSD maps of the prior austenite grain boundaries from a range of different peak temperatures. A coarse PAG structure was present within the original, as-received parent material and also within the material after a simulated weld thermal cycle with Tp < Ac1 (Fig. 2). Fine austenite grains started to form along the pre-existing PAGBs after the thermal cycle in which Tp reached ∼1273 K (Fig. 2). With increasing Tp, up to a Tp of ∼1373 K, the newly-formed austenite grains were seen to have grown. When Tp reached an intermediate temperature of ∼1473 K, a more homogeneous microstructure consisting of equiaxed austenite grains was present in the simulated HAZ microstructure. A greater number of equiaxed fine grains with a grain size of less than 60 μm can be seen when Tp = ∼1473 K than when Tp = ∼1573 K, where the grain size is ∼100 μm. The martensitic substructure was further studied by EBSD analysis conducted at a step size of 0.1 μm (Fig. 3). The martensitic substructure in the region exposed to a Tp of ∼1573 K is composed of lath-like martensitic blocks with a similar size to those in the region exposed to a Tp of ∼1148 K. The martensitic structure in the fine austenite grains after experiencing a Tp of ∼1273 K or ∼1373 K is much denser and composed of martensitic sub-grains less than 10 μm in size (Fig. 3). The martensitic substructure in the region exposed to a peak temperature of ∼1148 K shows no significant variation from the original substructure in the unaffected parent metal. The distribution of carbides within the matrix material after application of the simulated HAZ thermal cycle and PWHT was characterised using ion induced SE imaging (Fig. 4). This technique has been proved to provide reliable information on the distribution of precipitate particles, particularly the M23C6 carbides. Here it can be seen that the number density of M23C6 carbides varies after simulated thermal cycles with different peak temperatures. The carbides were completely dissolved when Tp » Ac3 at ∼1573 K, and were only partially dissolved when Tp was between Ac1 and ∼1573 K. After PWHT, precipitates were formed and preferentially distributed on the martensitic substructure boundaries in the resulting microstructure of samples of all Tp temperatures. The population of carbides after the simulated thermal cycles and PWHT was quantitatively measured as presented in Fig. 5. The data show that the distribution of grain boundary carbides varies significantly after experiencing different Tp within the weld thermal cycles. However, although the number density of carbides after a Tp of 1273 K appears to be slightly lower, the standard deviation of the dataset suggests that the number of carbide particles after PWHT does not vary significantly, regardless of the Tp experienced within the simulated thermal cycles. The size distribution of carbides in each of the simulated HAZ regions after PWHT was analysed and is summarised in Fig. 5, with the peak carbide size in each specimen indicated by vertical lines. There is a higher number of carbides with a size of less than 0.1 μm in the specimens which experienced Tp = ∼1373 K and ∼1273 K than in the specimens which experienced Tp = ∼1148 K. This can be attributed to the presence of un-dissolved or partially dissolved precipitates after the weld simulations, which mitigated the formation of new precipitates during the applied PWHT stage. Fine precipitates in the microstructure were analysed using TEM. Fig. 6 shows a BF-STEM micrograph of the precipitates on a carbon extraction replica obtained after weld simulation with a peak temperature of ∼1148 K, together with representative EDX spectra obtained from a selection of precipitate particles. The precipitate particles within the extraction replica have a range of sizes and shapes, as shown in Fig. 6. The particles with a dark appearance are typically 0.1–0.2 μm in length with an elongated shape. These particles were identified as being enriched in Cr, with a chemical composition close to that expected for M23C6 carbides. The size of these precipitates is also close to that of the particles observed by ion induced SE imaging, as shown in Fig. 5. In addition, smaller particles measuring 20–80 nm in Feret diameter were also observed in the same specimen. These precipitates were identified as being enriched in Nb and V, with a chemical composition close to that expected for MX carbonitrides. The chemistry of the MX carbonitride particles in all specimens was investigated using EDX after both the weld simulations and PWHT, with the results summarised in Fig. 7. The MX carbonitrides were found to co-exist as either Nb-rich or V-rich MX after a simulated HAZ thermal cycle with a Tp of ∼1148 K, whereas the Nb-rich MX particles became the more dominant type of precipitate after thermal cycles with a Tp of ∼1273 K and ∼1373 K (Fig. 7). The pre-existing MX precipitates from the parent metal were completely dissolved after the thermal cycle with a Tp of ∼1573 K, so are not included here. MX precipitates were also analysed after PWHT. The chemistry of the MX precipitates does not vary significantly between samples of differing Tp after PWHT, with the majority being the Nb-rich type of carbonitride in combination with the V-rich precipitates at a minor level. Laves phase particles were not observed in the microstructure of the original parent metal, nor after the weld simulations. However, Laves phase was observed to form during the PWHT applied here. Fig. 8 is a collection of BSE micrographs showing the Laves phase in the microstructure after thermal cycles with different peak temperatures followed by the PWHT. A lower number of Laves phase particles was observed after the thermal cycle with a Tp of ∼1573 K compared to the specimens which experienced lower peak temperatures. This can be linked with the relatively coarser substructure formed after weld simulation when Tp » Ac3. However, the distribution of the Laves phase particles is also denser after a thermal cycle with a Tp of ∼1148 K, even though the martensitic substructure in this specimen is similar to the microstructure formed after weld simulation with a Tp of ∼1573 K. This is probably due to a less homogeneous distribution of the elements related to the formation of Laves phase after weld simulations with a lower Tp of ∼1148 K. The variation of hardness after the simulated weld thermal cycles and PWHT was further studied using Vickers hardness testing, Fig. 
9.The hardness of specimens exposed to weld thermal cycle simulations increased as Tp increased.This can be explained by the presence of newly formed un-tempered martensite after weld simulations with a Tp of > Ac1 .This is also consistent with the observation from other 9% Cr steel welds that the microstructure with a higher hardness value is presented in the region close to the weld line .The weld simulation with a Tp < Ac1 at ∼1148 K also leads to a slightly lower hardness than the original parent metal due to a tempering effect, which is also similar to the existing observations from Grade 92 steel .The hardness in the specimens which experienced a weld simulation with a high Tp has been shown to significantly decrease during PWHT, whereas the hardness in the specimens which experienced a lower Tp did not significantly decrease after PWHT.This can be linked to a decrease in dislocation density and the precipitation of carbides during PWHT.It has been reported that the dislocation structure in newly-formed martensite is significantly altered during PWHT, which leads to a lower density network of dislocations and hence a lower hardness .The decrease in hardness after PWHT can also be linked with the formation of the M23C6 carbides and Laves phase particles.This is likely to reduce the concentrations of solid solution elements in the matrix .The significant decrease in hardness during PWHT of the specimens which experienced a higher Tp, also leads to a similar hardness level as the specimens which experienced a lower Tp.This suggests that the hardness gradient in the HAZ of a real weld is likely to be minimised after PWHT if conducted at appropriate temperatures.The microstructure in the simulated weld specimens can be linked with the microstructure in the HAZ of real welds.It is known that the Tp of a weld thermal cycle is decreased with increasing distance from the weld fusion line within the HAZ of real welds, with the ultimate value of Tp close to 1500 °C in the area adjacent to the weld fusion line .The Tp of the thermal cycles adopted for physical weld simulation in this study is also within the range determined by both Finite Element modelling and experimental measurement .Using these assumptions, it is therefore possible to summarise the variation in microstructure within the HAZ in MarBN steels as a function of approximate distance from the fusion line as shown in Fig. 10.On this diagram, M23C6 particles are shown to be present on PAGBs of the structure only.It should be noted that, as well as these locations, M23C6 particles are also observed to be present on lath boundaries within the martensite structure of the material as shown in Fig. 
4. An important observation is that the PAG structure varies significantly with the Tp experienced during the weld simulation. Starting from a PAG structure with a grain size of >300 μm within the parent metal, refined PAGs start to form along the pre-existing PAG boundaries once Tp exceeds Ac1. As Tp increases to the range between Ac1 and Ac3, the PAG structure is a mixture of refined PAGs along the pre-existing PAG boundaries and retained coarse PAGs from the parent metal. Such a PAG structure remains until Tp reaches ∼1473 K, above which the original PAG structure is completely replaced by an equiaxed PAG structure with a grain size of less than 100 μm. This set of observations is consistent with previous work in which the HAZ microstructure of B-containing steels was studied as a function of peak temperature, and in which a combination of diffusional and displacive reactions was suggested to be responsible for the appearance of newly formed, fine PAGs at the original PAGBs within a coarse-grained matrix at intermediate peak temperatures. To confirm the transformation process of the alloy studied here, further work would be required. The resultant PAG structure after welding is not significantly changed by PWHT. Compared to previous observations from the HAZ of Grade 92 steel welds, the general trend of the PAG structure in the HAZ of IBN1 is similar. However, in the HAZ of Grade 92 steel, a duplex grain structure composed of refined PAGs on the pre-existing PAG boundaries and the residual domain of PAGs from the parent metal was only observed when Tp was in the inter-critical range between the Ac1 and Ac3 temperatures, whereas a similar PAG structure was also observed when Tp was >Ac3 in IBN1. This may be attributed to a higher dissolution temperature of the M23C6 carbides compared to Grade 92 steel. In Grade 92 steel, the M23C6 carbides are completely dissolved as Tp reaches 1373 K, whereas the M23C6 carbides in IBN1 are not completely dissolved until Tp » Ac3. The range of Tp over which the M23C6 carbides are not dissolved corresponds to the Tp range at which a duplex PAG structure is observed. This indicates that, within IBN1, the growth of the newly-formed PAGs is effectively hindered by the pinning effect of the un-dissolved M23C6 carbide particles. This is also consistent with findings from previous work on MarBN steel welds. The martensitic substructure is relatively coarser after a weld simulation with a higher Tp of ∼1573 K, whereas the substructure is much more refined at the lower Tp of ∼1373 K and ∼1273 K. This is attributed to the short time at high temperature and the high cooling rate experienced within the thermal cycles used for the weld simulation. Both the PAG structure and the martensitic substructure formed during the weld simulations were preserved during PWHT, suggesting a good stability of the martensitic microstructure. Secondary precipitates from the original parent metal are dissolved due to the heat input during the weld simulations. Thermodynamic calculation was used to understand the thermal stability of the major phases in IBN1 under an equilibrium state, Fig. 
11.The Ac1 and the Ac3 temperatures measured prior to weld simulations were found to be significantly higher than the predicted equilibrium transformation temperature.This is, however, consistent with observations from previous studies of 9% Cr steels in which a high heating rate was adopted for measurement .The measured temperature for the dissolution of the M23C6 carbides during weld simulation) is also higher than the prediction from the equilibrium thermodynamic calculation.This is indeed related to the high heating rate applied during the thermal cycles and the short time period during which a high temperature was maintained.The dissolution of the pre-existing MX carbonitrides also shows similar behaviour.The Nb-rich MX precipitates are more stable than the M23C6 carbides and the V-rich MX in the high temperature regime.These precipitates were not completely dissolved until the Tp of the weld simulation reached 1573 K, whereas the relatively unstable V-rich MX precipitates appeared to be completely dissolved as Tp reached 1373 K.PWHT was conducted in the α-Fe phase regime at a temperature of 1033 K.As a result, the martensitic microstructure formed during the weld simulation was not significantly altered during PWHT.The temperature of the PWHT was also within the range that M23C6 carbides, the MX carbonitrides and Laves phase were predicted to be stable by the thermodynamic calculation.As a result, these phases were observed in the microstructure after PWHT.The M23C6 carbides formed uniformly on the martensitic boundaries during PWHT.This suggests that the overall microstructure is effectively stabilised by the precipitates distributed on substructure boundaries in all specimens regardless of Tp .The specimen that experienced a Tp of ∼1573 K has the highest number of precipitate particles between 0.1 and 0.2 μm in size, whereas a higher number fraction of the M23C6 carbides with a size of less than 0.1 μm was observed after weld simulation with a Tp of 1373 K and 1273 K.Such a trend of distribution for the M23C6 carbides was also observed in previous work on Grade 92 steel .A higher fraction of smaller M23C6 carbides can be attributed to an incomplete dissolution of pre-existing precipitates during the weld thermal cycle, which limits the carbide forming elements re-entering solution.In addition, the specimen which experienced a weld simulation with a Tp of 1573 K demonstrated an appropriate size distribution of the M23C6 carbides for an optimum precipitation strengthening effect with a size range of 0.1–0.2 μm .This suggests that the overall microstructural stability is more enhanced by an even distribution of M23C6 carbides in the specimen which experienced a Tp » Ac3, corresponding to the regions more adjacent to the weld line in weld HAZ.The presence of Laves phases was also observed after PWHT.Due to a relatively higher W content in MarBN steels, the tendency for Laves formation is higher than the conventional 9% Cr steels such as the Grade 91 and 92 steels.It is known that the fast growth of Laves phase is detrimental to the creep performance of 9% Cr steels as it rapidly consumes solid solution elements from the matrix .Although post-mortem analyses after creep testing was not conducted in this study, a higher growth rate of Laves phase particles during creep exposure is expected due to a higher W content in the MarBN steel .Therefore, although the Laves phase particles observed after PWHT are not excessively large, they may grow rapidly upon further exposure to applied stress 
at a high temperature during subsequent creep exposure.It was thus considered that the condition of PWHT should be further optimised to prevent the presence of Laves phases in the initial state before creep testing.Based on previous observations of 9% Cr steel welds that have failed in a Type IV manner, creep damage is preferentially accumulated in the HAZ regions where a refined grain structure is present in combination with an uneven distribution of M23C6 carbides located on PAGBs .Microstructural observations obtained from the current research suggest that a Tp between Ac1 and ∼1473 K results in an incomplete dissolution of M23C6 carbides and the formation of a refined martensitic substructure.In addition, Laves phase particles are observed to form after the initial PWHT in this steel, which may subsequently coarsen during service.Further work is required to determine the effect of welding on the HAZ structures and subsequent creep performance of cross-weld samples for these grades of MarBN to determine their susceptibility to Type IV failure.The thermal cycle applied to IBN1 has been utilised to simulate thermal history in different regions of the HAZ during practical welding processes.Based on microstructural observation, the expected variation in microstructure throughout the HAZ has been accurately determined.The martensitic microstructure after weld simulation has been shown to vary from an equiaxed grain structure to a duplex grain structure consisting of refined grains on the pre-existing PAG boundaries as a function of peak temperature experienced during weld simulations.Dissolution of the pre-existing secondary precipitates from the original parent metal during simulated weld thermal cycles was also observed.The observations also suggest that the un-dissolved M23C6 carbides hinder the re-austenitisation process and help sustain the duplex grain structure after weld simulation when Tp > Ac3.The Nb-rich MX carbonitrides are also more stable in the high temperature regions as compared to the V-rich MX precipitate.During PWHT, both the M23C6 carbides and the MX carbonitrides were re-precipitated, with the former evenly distributed on the martensitic boundaries.The specimen exposed to a weld simulation with a Tp = 1573 K demonstrated a higher fraction of M23C6 carbides within a size range of 0.1–0.2 μm for the optimisation of creep resistance.This suggests a higher creep strength in the region close to the weld line than in other parts of the HAZ of a real weld.Laves phase was also formed to a minor level during PWHT.This may lead to a compromised creep performance of the weld HAZ due to a fast growth of the Laves phase upon creep exposure.It was therefore recommended that the PWHT could be optimised to prevent the formation of Laves phase at the initial stage before creep.The hardness in the specimens which experienced a higher Tp during weld simulation decreased more significantly than the specimens exposed to a thermal cycle with a lower Tp.This suggests a minimised hardness gradient in the HAZ of real welds after PWHT if conducted in an appropriate condition.The raw/processed data required to reproduce these findings cannot be shared at this time as the data also forms part of an ongoing study. | Martensitic steels strengthened by Boron and Nitrogen additions (MarBN) were developed for high temperature/high stress service in power plant for periods of many years and are being considered as a promising candidate for the replacement of the more conventional Grade 91/92 steels. 
In the present study, extensive microstructural observation of physically simulated Heat Affected Zone (HAZ) MarBN material has been carried out after dilatometry simulations to link the variation in microstructure with weld thermal cycles. The microstructure in the MarBN HAZ has been observed to vary from a refined equiaxed morphology to a duplex microstructure consisting of refined grains distributed on the pre-existing Prior Austenite Grain Boundaries (PAGBs) as the peak temperature of the weld thermal cycle decreases. The temperature range corresponding to the formation of the duplex grain structure coincides with the temperature regime for the dissolution of the pre-existing M23C6 carbides. An even distribution of the M23C6 carbides within the martensitic substructure was also observed after Post Weld Heat Treatment (PWHT), which is beneficial for the creep performance of the weld HAZ. The MX precipitates are more resistant to thermal exposure and are not completely dissolved until the peak temperature reaches 1573 K (1300 °C). The Nb-rich MX precipitates are the predominant type observed both after weld simulations and PWHT. The hardness of material exposed to thermal cycles with different peak temperatures does not vary significantly after a PWHT conducted under appropriate conditions, which is likely to mitigate unfavourable local stress conditions within the HAZ. |
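The simulated HAZ thermal cycles described above (heating at 100 K/s to the peak temperature, a 2 s dwell, then three-stage cooling spanning 60 K/s down to 8 K/s) can be reproduced as simple time-temperature profiles. The sketch below is illustrative only: the heating rate, dwell time and cooling-rate range follow the text, but the temperature span assigned to each cooling stage and the start temperature are assumptions, since only the overall 60–8 K/s range is reported.

```python
import numpy as np

def simulated_haz_cycle(t_peak_K, t0_K=293.0, heating_rate=100.0, dwell_s=2.0,
                        cooling_stages=((60.0, 200.0), (25.0, 300.0), (8.0, None))):
    """Piecewise time-temperature profile for one simulated HAZ thermal cycle.

    heating_rate (100 K/s) and dwell_s (2 s) follow the study; the (rate, span)
    pairs for the three cooling stages are assumed values chosen to cover the
    reported 60 -> 8 K/s range, with the final stage running down to t0_K.
    """
    times, temps = [0.0], [t0_K]
    times.append(times[-1] + (t_peak_K - t0_K) / heating_rate)  # heating ramp
    temps.append(t_peak_K)
    times.append(times[-1] + dwell_s)                           # dwell at Tp
    temps.append(t_peak_K)
    T = t_peak_K
    for rate, span in cooling_stages:                           # staged cooling
        T_end = t0_K if span is None else max(t0_K, T - span)
        times.append(times[-1] + (T - T_end) / rate)
        temps.append(T_end)
        T = T_end
        if T <= t0_K:
            break
    return np.array(times), np.array(temps)

# Peak temperatures of primary interest in the study, plus the intermediate 1473 K
for tp in (1148.0, 1273.0, 1373.0, 1473.0, 1573.0):
    t, _ = simulated_haz_cycle(tp)
    print(f"Tp = {tp:.0f} K -> total simulated cycle time ~ {t[-1]:.0f} s")
```

Plotting the returned temperature against time reproduces the general shape of the cycle shown in Fig. 1, within the stated assumptions.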
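The grey-scale segmentation used to quantify precipitate populations was performed in ImageJ in the study; a comparable, stand-in workflow can be sketched with scikit-image as below. It assumes the particles appear brighter than the surrounding matrix (invert the comparison if the contrast is reversed), and the Otsu threshold and minimum-object size are illustrative choices rather than the authors' settings.

```python
import numpy as np
from skimage import filters, measure, morphology

def quantify_precipitates(gray_img, pixel_size_um, min_area_px=5):
    """Return (number density per um^2, equivalent diameters in um) of
    particles segmented from a grey-scale micrograph (2-D numpy array)."""
    mask = gray_img > filters.threshold_otsu(gray_img)                  # global grey-level threshold
    mask = morphology.remove_small_objects(mask, min_size=min_area_px)  # discard noise pixels
    regions = measure.regionprops(measure.label(mask))
    diam_um = np.array([np.sqrt(4.0 * r.area / np.pi) * pixel_size_um for r in regions])
    field_area_um2 = gray_img.size * pixel_size_um ** 2
    return len(regions) / field_area_um2, diam_um
```

A histogram of the returned diameters then gives a size distribution comparable in spirit to that reported in Fig. 5.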
541 | Teacher educators’ approaches to teaching and connections with their perceptions of the closeness of their research and teaching | Teacher educators are agents who educate and help student teachers to develop knowledge and competence in the teaching profession.Furthermore, they also act as models that student teachers can observe and imitate.Therefore, it is important to explore how teacher educators approach teaching to facilitate student teachers becoming qualified future teachers.Approaches to teaching are complex combinations of teaching intentions and strategies that teacher educators employ when teaching student teachers.Two broad approaches to teaching have been identified: the student-focused approach to teaching and the teacher-focused approach to teaching.Teachers combine these two approaches in different ways according to different teaching contexts and disciplines."Moreover, teachers' approaches to teaching influence students' approaches to learning. "Concerning the situation in teacher education, student teachers' beliefs concerning teaching have been formed during their previous learning and school experiences before entering teacher education programmes.These beliefs influence the ways in which student teachers perceive teaching and their facilitation of pupil learning.Thus, the challenge for both teacher educators and student teachers is that student teachers need to consider their teaching beliefs and ways to teach, which are resistant to change once formed.Teacher educators’ support for student teachers in terms of their teaching is significant in this process.Besides the teaching task, teacher educators are involved in productive research activities.Being active in research is important for teacher educators working in the academic university context, firstly, because high quality research projects and publications are used as evaluation criteria for teacher educators to receive funding and promotion.Secondly, research-active teachers have the competence to teach student teachers skills to conduct research."However, there are concerns that teacher educators' participation in research work would conflict with their involvement in teaching, especially when their research and teaching focus on different issues and are separate activities.It is argued that an integrated relationship between research and teaching is warranted for teacher educators to improve their teaching and to be able to work effectively.Thus, the aim of the present study is to clarify how teacher educators perceive the different approaches to teaching and combine the different approaches in a Chinese teacher education context.Further discussion based on teacher educators’ dual roles as teachers and researchers, focuses on how closely they consider their research work and teaching are related in the academic university context.The final argument is on whether their perceptions concerning the closeness of their research and teaching are related to the way they approach their teaching.Approaches to teaching are defined as strategies that teachers adopt for teaching and the intentions underlying these strategies.With 24 university physical science teachers, Trigwell et al. 
identified the information transmission/teacher-focused (ITTF) approach to teaching and the conceptual change/student-focused (CCSF) approach to teaching. In the ITTF approach, teaching is seen as transmitting knowledge from teachers to students, and it is less connected with students' deep approach to learning, while in the CCSF approach, teaching is seen as helping students to develop their own understanding of knowledge. There are arguments concerning whether the two approaches can be combined. Trigwell, Prosser, and Ginns argued that transmission elements of the teacher-focused approach to teaching can be included in the student-focused approach. Thus, the student-focused approach to teaching can be seen as a more sophisticated and complete approach than the teacher-focused approach. After exploring the teaching of 97 Finnish university teachers from a wide variety of disciplines, Postareff et al. concluded that only a minority of teachers adopt either a purely teacher-focused or a purely student-focused approach, i.e., a theoretically consonant approach to teaching. Most teachers adopt a combination of both approaches, resulting in a theoretically dissonant approach to teaching. Further studies showed that dissonance in teachers' approaches to teaching is typically related to a development process in which teachers move from using more teacher-focused approaches towards adopting student-focused approaches to teaching. It takes time to develop teaching that is consistently student-focused. Exploring what kinds of approaches exist in teachers' teaching is the first step in the development of their approaches to teaching. This is also helpful in designing pedagogical training that fits teachers with particular approaches. Furthermore, understanding teachers' approaches to teaching is closely related to the improvement of their students' approaches to learning. In a quantitative study, Prosser et al. found that a dissonant approach to teaching is associated with lower quality learning among students, while a consonant approach to teaching is related to the improvement of students' learning. Approaches to teaching are contextually dependent and may vary across different teaching contexts. Moreover, previous research has identified disciplinary differences in approaches to teaching. It has been revealed that university teachers in hard disciplines tend to apply a teacher-focused approach to teaching, while teachers of soft disciplines more commonly adopt a student-focused approach. However, other studies indicate that discipline may not always be an influencing factor in differences in teachers' approaches to teaching. The complicated findings on approaches to teaching require further exploration of the issue in specific fields. In teacher education, studies focusing on teacher educators' teaching show that teacher educators experience a tension between “telling and growth.” They struggle between teaching through the transmission of propositional knowledge and teaching through the creation of a learning context to develop students' understanding of knowledge. 
"Teaching by “telling” might satisfy teacher educators' need to transfer information to student teachers, but it does not necessarily satisfy student teachers' learning needs.Student teachers prefer both teacher-centred features of teacher direction and student-centred features of cooperative learning and knowledge construction."The difficult task for teacher educators is that they need to understand their student teachers' various learning needs and further adapt their teaching to the preferred learning approaches of the students to create a productive learning environment.Student teachers need to be challenged during their pedagogy learning because they need to reconsider their existing knowledge through experience, and teacher educators should engage them in such reflection."Thus, teacher educators' teaching work is not simply offering student teachers the technical and instrumental knowledge of teaching.It also stresses teacher educators’ support for student teachers to realise the interaction between teaching theory and their own practice, and further, to question their teaching beliefs and practices and involve themselves in a continuous professional development.To make this happen, teacher educators need to apply a variety of educational strategies and approaches to their teaching in a student-focused way, one which encourages student teachers to adopt a deep approach to learning.Besides the disciplinary contexts mentioned above, it should be realised that cultural influence is complex, and basic concepts of Western educational cogitation need to be reconsidered in the Chinese context.“The paradox of the Chinese learner” indicates that while memorisation is normally seen as a rote learning method, Chinese learners utilise memorisation as a means to develop their own understanding of the content, which indicates a deep learning approach.This paradox among Chinese learners shows that the surface and deep approaches to learning are culturally and contextually dependent."Thus, it is reasonable to consider that the approaches to teaching are also influenced by the specific teaching and learning context, and a similar “paradox” might exist in Chinese teachers' teaching as well.In other words, approaches to teaching could be conceived differently in the Chinese educational context.The teacher-focused approach to teaching is common and dominant among university teachers in China.Contradictorily, it has also been shown that Chinese university teachers have a stronger belief in student-focused than teacher-focused teaching."It is argued that Chinese university teachers' approaches to teaching are developing towards the student-focused.In the discipline of teacher education in China, teaching has been criticised for its teacher-focused and outdated teaching methods and poor effects on developing students’ cognitive creativity and independent learning.Thus, in response to the reform of teacher education since the 1990s to educate future teachers with student-focused views of teaching, teacher educators themselves have been required to change their approaches to teaching from the traditional teacher-focused “transmission teaching” to student-focused teaching."Several studies have shown that teachers' approaches to teaching vary between different contexts.However, most studies have investigated approaches to teaching in western countries, and few of them have focused on teachers in teacher education.Empirical research exploring teaching in the context of teacher education of China is needed, and 
may shed light on how Chinese teacher educators perceive their approaches to teaching.Both the student- and teacher-focused approaches to teaching have been found in the university teaching context in China in previous studies, however with contradictory conclusions concerning which one is more often applied.Furthermore, no previous studies have explored whether the dissonance exists in Chinese university teachers’ approaches to teaching.Meanwhile, as mentioned above, the teaching in teacher education of China, which is in the midst of change, may be related to the adoption of the teacher-focused or dissonant approaches to teaching among teacher educators.In summary, all the findings in previous studies mentioned above provide an initial background to explore the consonant and dissonant approaches to teaching of Chinese teacher educators in the present study."There has been an increasing amount of research on the research-teaching nexus and the significance of the nexus for teachers' teaching. "Research could enhance teachers' teaching effectiveness.For example, as research-active academics, teachers provide more up-to-date knowledge to students."Based on the interrelated nexus between research and teaching and the positive influence of the research-teaching nexus on teachers' work and students' learning, many studies have further discussed the various forms in which teachers implement and strengthen the nexus in practice.For instance, teachers could use the research they work on as information to be transmitted to the students, while they could also use their research as a structural element in the learning process to shape the learning activities carried out by the students."In the latter situation, teachers' research is incorporated with their teaching in a deeper way and serves as a mode of teaching, students are involved in teachers' research and participate in the process of knowledge creation with the teachers.Teachers might relate research with teaching either in an information-transmission way, or in a way that supports deep student engagement.The different forms teachers apply reflect their conceptions of knowledge, research and teaching.A recent study showed that the more teachers consider their teaching as student-focused, the more important they value the role of research in their own teaching.However, research and teaching could be independent of each other, or even have a negative nexus.The conflict between research and teaching could be caused by the limited time and energy of teachers to put in their work."Furthermore, the university context, policy and supporting strategies for teachers also influence teachers' perceptions and practice over their research-teaching nexus. 
"For example, research has been highly valued since it is more related to teachers' academic careers and promotion, though teaching is also an important part of teachers' academic work.Teachers may have to prioritise research work over their teaching tasks.It is important for teachers to keep a balance between research and teaching and to enhance the research-teaching nexus.Teachers’ experience of the nexus varies from a weak relationship to an integrated one.A close and mutually enriching relationship between research and teaching would be helpful for reducing the tension between research and teaching.There has been debate about teacher educators’ role as researchers, how their research influences their teaching and is of benefit to the educational activities in schools and universities in general.Teacher educators not only work as teachers, their research work enhances their academic role as professionals in the field.Teacher educators conduct research in teacher education.Their research could focus on themes such as teaching and learning in school contexts, subject-related research and the pedagogy of teacher education, and it is indicated that their research is mainly qualitative in methodology.Teacher educators have also been encouraged to conduct practitioner research to focus on the educational practice of their own, because it is argued that they can improve their practice directly by conducting research on it.Furthermore, teacher educators teach research to their students to provide them with a research-orientation towards their work, understanding of the relevance of the theoretical knowledge in practice, and to develop their pedagogical thinking.Teacher educators are responsible for teaching student teachers about the academic work of a researcher.Meanwhile, teacher educators engage in research-based teaching, meaning that they organise their teaching around inquiry-oriented activities and make educational decisions based on research-based thinking and the competence achieved through research work.In the context of teacher education in China, the educational reform launched in the late 1990s has emphasised the importance of teacher educators regarding their roles both as teachers and as researchers.Though Chinese teacher educators showed a passion for teaching, in line with university teachers in other countries, they experienced conflict between teaching and conducting research.Chinese teacher educators are required to have a PhD, indicating that they have had years of training to conduct research and in academic writing, and they are encouraged to engage in a range of academic activities concerning research, such as academic publications, conferences, and seminars, to publish their research results.However, sufficient support for them to engage in research work is scarce.Although Chinese teacher educators perceived that their research should be related to their teaching practice, they experienced difficulties in bridging research and teaching.As teachers, teacher educators need to be excellent in teaching and consider it to be an important task.Furthermore, the demanding work in teacher education requires them to be reflective and inquiring and to be able to interpret and analyse the educational issues, which indicates that research is a necessary task for teacher educators and it should not be separated from their teaching.Previous studies from a range of international contexts have stressed the positive influence that arises from teacher educators’ research on their 
teaching.However, it is unclear whether their perceptions of the nexus between their research and teaching are related to the way they approach teaching."The aim of this study was to gain a better understanding of teacher educators' perceptions of their approaches to teaching and how their approaches to teaching were related to their perceptions of the closeness of their research and teaching.Firstly, by applying the revised version of Approaches to Teaching Inventory in the Chinese teacher-education teaching context, the present study examined how teacher educators reported applying approaches to teaching in their everyday practice.Secondly, it investigated the connections between teacher educators’ approaches to teaching and their perceptions of how closely their research was related to their teaching.The specific research questions and hypotheses were:What approaches to teaching did the teacher educators report adopting?,It was hypothesised that among Chinese teacher educators, who are in the midst of the changing pedagogical environment, both the teacher-focused approach to teaching and dissonant approach to teaching, in which the teacher- and student-focused approaches are combined, could be identified."How were the teacher educators' approaches to teaching related to their perceptions of the closeness of their research and teaching?",It was hypothesised that the student-focused approach to teaching was more intensively associated with a close relation between research and teaching than the teacher-focused approach."Teacher education in China is mainly offered at teachers' universities1. "Teachers' universities generally provide teacher education study as four-year bachelor's programmes, three-year master's programmes, and some also have programmes for doctoral studies.The programmes are offered in different faculties and according to the different levels of education and subject knowledge student teachers receive, they become teachers at different levels and subject areas.The curricula may differ between institutions.Normally, they contain general education courses, such as political theories and foreign languages; professional education courses, like pedagogy, psychology, and 6–8 weeks of teaching practice; and subject matter courses.Teachers are required to pass a qualification examination to obtain a teaching certificate in higher education institutions to teach at universities.Since the late 1990s, Chinese teacher education has undergone reform, aiming at launching teacher education curricula as being learner-centred and practice-oriented."Meanwhile, teachers' universities are required to conduct more research.Furthermore, academic publications in prestige journals are seen as one evaluation criterion of the institutions."The present study included two teachers' universities from the northeast part of China. 
"One is a key national teachers' university and is affiliated with the Ministry of Education, China.This means that the university represents the high quality of teacher education in the country."The other university is under the supervision of the regional education administration and in the same province as the first one, and it is a key provincial teachers' university.Nine faculties from the two universities were involved, including the Faculties of Education, Arts, Chemistry and Biological Sciences."The programmes included in the study were at the bachelor's and master's levels.A total of 115 teacher educators participated in this study.The mean age of the participants was 39 years.Of these, 49 were male and 63 were female.Three participants did not report their gender."Among the participants, only one had a bachelor's degree, 35 held a master's degree and 79 possessed a doctoral degree.This meant that almost all of them had experienced professional training in conducting research.Most of the participants mentioned having a formal teaching certification, of which 105 had the teaching certificate in higher education institutions.Furthermore, 111 participants reported their teaching experience, which varied from one to 33 years.Twenty teacher educators had five years or less of teaching experience.Concerning their workload, 62 teacher educators perceived that teaching occupied 50% or more of their total work time, and that research, administration and other tasks accounted for the rest of their workload.Thirty-eight thought conversely that research occupied 50% or more of their total work.Eighty-six teacher educators had participated in pedagogical training that ranged from three days to 12 months.Sixty-one teacher educators had received training of one month or less, twenty-six did not participate in any courses on university pedagogy, and three did not report their situation in terms of pedagogical training.The data were collected in 2015.The inventory was sent to the participants in either electronic or paper form, according to their preference."All the participants were informed about the nature and aims of the study in the inventory's instructions. 
"Participants' submission of the inventory was voluntary and considered as being their informed consent to participate in the study.All personal information was de-identified.Since teacher educators may adopt different approaches to teaching in different course contexts, the respondents were asked to select a specific subject or course and think of a typical teaching situation while answering the inventory.The Approaches to Teaching Inventory and its revised version have been widely used to investigate approaches to teaching in a range of countries, including China.In our study, the revised version was applied since it was an improved version compared with the previous one and the increased number of items from eight to 11 per scale extended the range of the scales.The inventory was translated into Chinese by a Chinese researcher.Each item was compared with the original English item to make sure that the translated item corresponded with the original one.After this, the Chinese version was back-translated into English by two scholars who were fluent in both English and Chinese.The original English inventory and the back-translated version were similar, including only a few word variations that did not change the core meaning of the items.The items were slightly revised based on these differences.ATI-R includes 22 items.The items were measured on a 5-point Likert scale.Two items were developed by the authors to explore the participants’ perceptions of the closeness of their research and teaching.The first item asked them about how much they thought their research was related to their teaching.The answer was measured on a 5-point Likert scale.The participants self-estimated the relationship between their research and teaching and chose one from the given choices.In addition, the respondents were asked to estimate the extent to which they consider themselves to be researchers and/or teachers.They provided responses as percentages, ranging from “0% as a teacher and 100% as a researcher” to “100% as a teacher and 0% as a researcher.,The second item was developed to understand how teacher educators viewed their roles in their professional career concerning their two main tasks as conducting research and teaching.Firstly, exploratory factor analyses were conducted to assess the factor structure of the inventory by using the SPSS Statistics 23.0.On the basis of the factor analysis, two scales as the teacher- and student-focused approaches to teaching were created."Afterwards, Cronbach's alphas of the scales were calculated for reliability analyses.Secondly, the means of the two scales were calculated by the mean of the items.Thirdly, two-step cluster analysis was conducted to classify teacher educators into different groups according to their scores on the two scales of approaches to teaching.Finally, a three-cluster solution was selected.At the end, one-way ANOVA was used to analyse how the three clusters differed from each other in terms of how they perceived the closeness of their research and teaching, and their roles as researchers and teachers.The first aim of the study was to reveal what kind of approaches to teaching the teacher educators reported adopting.Though ATI and ATI-R are seen as valid and reliable instruments to explore teachers’ approaches to teaching, Prosser and Trigwell note that the approaches to teaching are contextually dependent, thus it is important to explore the validity and reliability of the instrument when applied in a new cultural context.Considering the cultural 
dependency of teaching and learning and the “Chinese paradox” mentioned above, the authors of the present study decided to conduct exploratory factor analyses to examine the factor structure of the 22 items from ATI-R in the present research context.There are different opinions about how large a sample should be when conducting factor analysis and the recommendations are diverse.De Winter, Dodou, and Wieringa recommended that with well-defined factors, exploratory factor analysis with a small sample size can produce reliable solutions.Thus, the authors considered that the sample size of the present study was appropriate for exploratory factor analysis.Firstly, the items of the ATI-R were included in the exploratory factor analysis with principal axis factoring and promax rotation in a two-factor solution.It is suggested that factors with eigenvalues of more than 1.0 should be retained, however, this criterion is not accurate, and too many factors are retained by using this method.The scree test was used in the study, and the plot of eigenvalues showed that the curve flattened out after a break point of three, thus two factors were retained with the cumulative variance extracted of 36.63%.This means that the two factors together accounted for 36.63% of the total variance.Next, exploratory factor analyses with three and four factors were conducted to ensure the appropriateness of the two-factor solution.The item loading tables of the two-, three-, and four-factor solutions were compared, and the two-factor solution revealed the cleanest factor structure.Furthermore, considering that our study was based on the two scales of approaches to teaching, and the two-factor solution was the most interpretable, the authors decided to use the two-factor solution in the following analyses.The cut-off for a loading of 0.32 to determine whether an item contributes towards the factor was referenced.The results showed that items 4, 15, 21, and 29, which are designed to measure the ITTF approach in the original inventory, loaded to the CCSF scale.In addition, item 20 loaded from the CCSF scale to the ITTF scale."On the CCSF scale, item 15 was left out because of its low communality, and the Cronbach's alpha of this scale increased when it was left out.On the ITTF scale, item 20 was left out for the same reason.Finally, the CCSF scale included 13 items and the ITTF scale included seven items."The three items from the original ITTF scale which were included in the CCSF scale are about teachers' presentation of information or facts to the students.Items varying from one scale to another indicated that the factor structure of the inventory slightly varied."The three items loading from the ITTF scale to CCSF scale confirmed the interpretation from the inventory's developers that the CCSF approach is more sophisticated and it could include elements from the ITTF approach.The meaning of teacher- and student-focused approaches to teaching changed in the present research context."The CCSF scale included items representing a student-focused teaching strategy and an intention of changing students' conceptions of knowledge.The ITTF scale included items representing a teacher-focused teaching strategy and an intention of information transmission.Meanwhile in the present study, the presentation of information for students was also an important element in the CCSF scale.After the analysis of the factor structure of the ATI-R, the participants’ mean scores and standard deviations on the student-focused scale and 
teacher-focused scale were calculated.We hypothesised that the teacher-focused approach to teaching and dissonant approach to teaching could be identified among teacher educators.To test this hypothesis, a two-step cluster analysis was used to divide the participants into clusters.A three-cluster solution was revealed with the lowest BIC coefficient of 136.49 and the largest ratio of distance measures of 2.15.The means and standard deviations of each cluster on the student- and teacher-focused approaches to teaching are shown in Table 2 and Fig. 1.Teacher educators in cluster 1 had similar scores on student- and teacher-focused approaches to teaching.Thus, they were identified as “Teacher educators with a vague approach to teaching.,These teacher educators did not show a clear preference towards either of the two approaches to teaching.Cluster 2 was labelled as “Teacher educators with a dissonant approach to teaching” because they scored not only highly on the student-focused scale, but also the highest on the teacher-focused scale among the three clusters.These teacher educators showed dissonance in their approaches to teaching since they applied both student- and teacher-focused approaches while teaching, though they had a preference towards the student-focused approach compared to the teacher-focused approach."During the teaching, these teacher educators stressed supporting the students' own learning activities and aimed at helping the students to restructure their knowledge.Meanwhile, they also focused on what they do and on delivering the fixed concepts in the textbook to the students to help them to pass the assessment.Teacher educators in cluster 3 scored the highest on the student-focused scale and the lowest on the teacher-focused scale.They clearly had a preference on the student-focused approach to teaching and thus they were named as “Teacher educators with a student-focused approach to teaching., "In adopting the consistently student-focused approach to teaching, these teacher educators showed similarities with cluster 2 in that they also intended to promote their students' conceptual development in understanding the subject, by encouraging the students’ active participation in learning activities.However, contrary to cluster 2, they did not stress the teacher-focused approach while teaching.Our first hypothesis was only partly true because a cluster of teacher educators with a strong teacher-focused approach to teaching did not emerge in the study.Instead, a group of teacher educators with a student-focused approach to teaching was identified.One-way ANOVA was applied to show further whether teacher educators in the three clusters differed in their approaches to teaching at a statistically significant level.The post-hoc comparisons of the three clusters on the student-focused approach to teaching scale showed that teacher educators with a vague approach to teaching had statistically significantly lower scores than teacher educators with a dissonant approach to teaching and teacher educators with a student-focused approach to teaching.Clusters 2 and 3 had almost the same scores.On the teacher-focused approach to teaching scale, Cluster 2 scored statistically significantly higher than Cluster 1 and 3.Furthermore, Cluster 1 scored statistically significantly higher than Cluster 3.Our second research question was to explore the relationship between teacher educators’ approaches to teaching and how close their research is related to their teaching.Participants were asked about how 
much they think their research is related to their teaching. More than half of them reported that their research and teaching were highly related or totally related. Around one third thought that their research and teaching were partly related. A minority considered that their research and teaching were loosely related, or that there was no link between them. Teacher educators' means and standard deviations on the closeness of their research and teaching were calculated for each of the three clusters. Afterwards, one-way ANOVA was applied to explore the differences between the three clusters in terms of how close they perceived the nexus between their research and teaching to be (F = 3.10, p = .049). Bonferroni's post-hoc test showed that teacher educators with a student-focused approach to teaching scored significantly higher than teacher educators with a vague approach to teaching on the item measuring how strongly teaching and research are related to each other. This means that the teacher educators demonstrating the student-focused approach to teaching emphasised the close relationship between teaching and research in their work. It could also be concluded that, compared with teacher educators with a vague approach to teaching, teacher educators who employed a consonant approach to teaching experienced a closer relationship between research and teaching. The partial η2 was 0.05, which is regarded as a small effect in terms of practical significance. However, no corresponding positive association between the closeness of research and teaching and the teacher-focused approach to teaching was found in the study. Concerning teacher educators' perceptions of their roles as teachers and researchers, 54 teacher educators perceived themselves more as teachers than as researchers, 30 teacher educators perceived themselves more as researchers than as teachers, and 29 teacher educators considered their roles as teachers and researchers to be equal. Thus, about half of the teacher educators considered themselves mainly as teachers in their professional field. One-way ANOVA was conducted to compare the three clusters concerning the extent to which they considered themselves to be teachers or researchers. However, no differences were found (F = 0.03, p = .966). This study explored teacher educators' perceptions of their approaches to teaching, and the connections between approaches to teaching and their perceptions of the closeness of their research and teaching. A previous study in which the ATI was used to explore approaches to teaching in the Chinese context revealed a four-factor structure model of the inventory. In our study, the exploratory factor analysis of the ATI-R revealed that three items, which concern teachers presenting information or facts to students, loaded from the ITTF scale onto the CCSF scale. This indicates that the factor structure of the inventory in the present study also differed from that reported in previous studies. Offering basic knowledge to the students seems to be related to the other characteristics of the student-focused approach to teaching, rather than to the teacher-focused approach. This result indicates that information transmission is an element that can be included in the student-focused approach to teaching. However, in the student-focused approach, information transmission is accompanied by student-activating teaching strategies. Teachers applying the student-focused approach to teaching focus on their students' understanding and construction of knowledge after the
presentation of information, not on the information transmission itself. Concerning the present study context, offering facts to students as an element of student-focused teaching can be explained by the research context and "the paradox of the Chinese learner". Contrary to the traditional view that memorising is related to a surface approach to learning and is separable from understanding, Chinese learners see memorising as a means leading to deep learning. Similarly, the present study suggests that Chinese teacher educators' intention to help students to understand deeply is intertwined with presenting information and facts to the students. Teacher educators who intend to help student teachers develop their own understanding of a subject may adopt a teaching strategy which focuses on offering the students basic facts and information first. The approaches to teaching that teacher educators reported applying differed from what we expected. We hypothesised that a cluster of teacher-focused teacher educators could be identified, because previous studies showed the existence of teacher-focused teaching in teacher education in the Chinese context. However, in line with the studies of Han et al. and Hu et al., teacher educators with a strong teacher-focused approach to teaching did not emerge in our study. One explanation could be that participation in our study was voluntary, and therefore the inventories might have been returned mainly by teacher educators who were interested in teaching and thus might be more willing to adopt a student-focused approach to teaching. Further, teacher educators who scored highly on the three items which loaded from the ITTF scale onto the CCSF scale in the inventory were now measured as student-focused in their teaching. If these items had remained in the ITTF scale, the results might have been different and might have shown more evidence of the existence of the teacher-focused approach. Disciplinary differences could also explain the absence of a teacher-focused approach to teaching. Leung et al. conducted their study in the construction engineering discipline and concluded that the teacher-focused approach to teaching is dominant among university teachers in China. However, the present study was conducted with teachers in teacher education, which is considered to be a soft discipline. Teachers in soft disciplines are more likely to employ a student-focused approach to teaching than teachers in hard disciplines. Furthermore, it has also been suggested that teacher educators use student-focused teaching strategies more often than university teachers in other disciplines, to support student teachers' active learning. Dissonance was revealed in teacher educators' approaches to teaching in the Chinese context. Teacher educators who reported the dissonant approach to teaching showed high scores on both the student- and teacher-focused approaches to teaching, and they leaned more towards the student-focused approach. A similar group of teachers has been identified in previous studies. Postareff et al.
applied the word "dissonant" in their study to describe teaching profiles in which university teachers' teaching is a combination of both student- and teacher-focused approaches and is thus theoretically dissonant. This group of teacher educators could be in a transition phase, developing from the teacher-focused towards the student-focused approach to teaching. On the one hand, the ongoing reform and changes in teacher education encourage teacher educators to teach with a focus on students' learning, and, with the increasing influence of Western pedagogic philosophy, such as developing students' own critical thinking, Chinese teacher educators increasingly tend to apply the student-focused approach to teaching; on the other hand, they still need to face the reality of their teaching context and the influence of traditional Chinese education. For example, the high student-teacher ratio makes it easier for teacher educators to complete their teaching tasks by applying teaching methods such as lecturing. This could also be the reason for the teacher educators who applied the vague approach to teaching, as they may be confused about how to approach their teaching in this complex teaching context. Our study showed a close relationship between teacher educators' student-focused approach to teaching and their perceptions of an intensive nexus between their research and teaching in teacher education. This is similar to the result of another study in a Chinese context showing that there is a systematic relationship between student-focused teaching and beliefs about the role of research in ideal teaching. Applying a student-focused approach to teaching means that teacher educators focus on students' understanding and construction of knowledge while teaching, and the results showed that these teachers considered that there was a close relationship between their research and teaching. How teacher educators perceive the research-teaching nexus is aligned with their perceptions of knowledge, research, teaching, and learning. The student-focused teacher educators are more likely to link their research work with their teaching in a student-focused way, as Brew has described in the "new model" of the relationship between research and teaching. In this model of the research-teaching nexus, knowledge is constructed in a social context, and teachers focus on students' engagement in research-based activities, which benefits student teachers' achievement of conceptual change and critical thinking. Engaging in research is also a learning process for teacher educators concerning their profession and teaching. These teacher educators may see research and teaching as mutually reinforcing. During the teaching process, research functions as a primer to trigger students' interests; interspersing research with teaching gives teacher educators more opportunities to interact with the student teachers. On the other hand, they may also try to improve their research with the ideas prompted by teaching. Furthermore, teacher educators with a student-focused approach are the ones who apply a coherent approach in their teaching, and they feel fewer negative emotions about improving their teaching. Thus, they are more willing to enrich their teaching by integrating research work and teaching. The study also explored teacher educators' perceptions of their two roles as teachers and researchers, and whether these perceptions varied among teacher educators with differing approaches to teaching. The results showed that
though teacher educators have different tendencies towards the student- and teacher-focused approaches to teaching, no difference was found in their views on their roles as teachers and researchers. The explanation could be that almost half of the participants in the study considered themselves more as teachers than researchers, and there was limited variation between the three clusters in their perceptions of their roles as teachers and researchers. However, the analysis revealed that teacher educators perceived their roles differently when they had different amounts of teaching and research workload and belonged to different universities. For all the teacher educators in the study, their perceptions of their roles were consistent with the work they engaged in. To be more precise, the more they focused on teaching, the more they perceived themselves as teachers. Correspondingly, the more they focused on conducting research, the more they perceived themselves as researchers. Furthermore, compared with the teacher educators in the key national teachers' university, the teacher educators in the regional teachers' university perceived themselves more as teachers and less as researchers. Teachers' research-teaching nexus is clearly influenced by the institution's policy and management strategies. The key national teachers' university is research-intensive and serves as a model of educational reform for other local universities. The positioning of the university and its development strategy stress the importance of conducting research, and the teachers of this university engage more in research. This could lead these teachers to perceive themselves more as researchers. In order to enhance teacher educators' research-teaching nexus, it is first vital to know how they define their roles and how they perceive the closeness between their research and teaching. Teacher educators in the study considered that their main task is to educate the future generation of teachers. Based on that, further discussion is needed about how research could benefit their teaching and be integrated into it. Research and teaching are not necessarily in conflict for teacher educators. However, researchers have been concerned that factors discouraging the research-teaching nexus may exist. Teacher educators are under pressure to become research-active and build their researcher identity, which may cause role conflict. In the context of Chinese teacher education, teacher educators have been encouraged to conduct research and publish academic papers for several years. In the present research context, however, many teacher educators perceived themselves more as teachers than as researchers. Thus, there might be a conflict between the institutions' requirements for teacher educators' engagement in research work and teachers' own preference for teaching. Furthermore, the research-teaching nexus may be seen as self-evident and automatic, and because of this, the need to reflect on and improve the research-teaching nexus might be ignored. Teacher educators in the present study perceived a close relationship between their research and teaching; nevertheless, this leads us to think in depth about how the research-teaching nexus is actually implemented in their practice. Firstly, the sample of the study may have been biased because the data were collected on a voluntary basis, and thus it is possible that those who were more interested in and concerned about their teaching returned the
questionnaire. Secondly, only two teachers' universities were involved in the study. A sample with broader representation of teachers' universities would strengthen the transferability of the results. Furthermore, the low correlations between the items of the ATI-R in the present study could influence the reliability and validity of the analysis, and indicate that a bigger sample size is required in future studies. Thirdly, although Chinese university teachers' approaches to teaching have been explored with the Approaches to Teaching Inventory, no similar "paradox" in teachers' teaching or variation in the factor structure of the inventory has been found. This might indicate that the findings of the present study are limited to the particular research context and participants. Further work is needed to ascertain the extent to which the results can be generalised. Meanwhile, the results concerning teacher educators' perceptions of the closeness of their research and teaching and of their researcher/teacher roles were limited and only present a general picture, because only two items were developed for these questions in the study. Furthermore, the results of the study were based on the participants' self-reports, and there was limited knowledge of how they conceptualised the issues investigated. Finally, the present study applied a quantitative methodology. Though the ATI-R showed good reliability and validity in the study, extra caution is needed when interpreting the results, especially when the questionnaire is applied in a new cultural context. Our study provides new insights into research on teacher educators' approaches to teaching, which has been scarce. However, more evidence is needed to enrich our knowledge in this area. Firstly, to get a better understanding of teacher educators with a dissonant approach to teaching, interviews and qualitative analyses could be added. Secondly, previous studies indicate that teachers' pedagogical training could influence their approaches to teaching. Unfortunately, in the present study, the limited variation in the short training durations prevented us from exploring the relationship between teacher educators' approaches to teaching and pedagogical training. Thirdly, research exploring student teachers' experiences and perceptions of their teacher educators' approaches to and practices of teaching in teacher education would be relevant. The ATI-R could be used to detect teachers' approaches to teaching and to monitor teachers' change and development in their approaches to teaching. However, it is important to acknowledge that teaching is dependent on the context, and therefore it is necessary to check the factor structure of the inventory (a minimal sketch of such a check is given below) and to consider the specific teaching context when applying the inventory. After gaining an insight into teachers' approaches to teaching, pedagogical training could be organised for them to further develop their teaching.
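The factor-structure check mentioned above can be illustrated with a short sketch. The code below is not the analysis used in this study: the item responses are simulated, the 22-item layout and the two-factor target with an oblique rotation are assumptions made for illustration, and the `factor_analyzer` package is assumed to be installed.

```python
# Minimal sketch of a factor-structure check for an ATI-R-style inventory.
# Assumptions (not from the study): `ati_items` is a respondents x items table of
# Likert-type scores, simulated here; the factor_analyzer package is installed.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
# Hypothetical stand-in for 115 respondents answering 22 inventory items (1-5 scale).
ati_items = pd.DataFrame(
    rng.integers(1, 6, size=(115, 22)),
    columns=[f"item_{i + 1}" for i in range(22)],
)

# Extract two factors (the student- and teacher-focused scales) with an oblique
# rotation, since the two approaches to teaching are allowed to correlate.
fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa.fit(ati_items)

loadings = pd.DataFrame(
    fa.loadings_, index=ati_items.columns, columns=["factor_1", "factor_2"]
)
# Items loading strongly (e.g. > 0.40) on an unexpected factor, such as an ITTF
# item loading on the student-focused factor, would flag the kind of
# cross-loading discussed above.
print(loadings.round(2))
```

With random data the loadings are of course meaningless; the point is only the shape of the check: fit the intended two-factor model and inspect whether each item loads where the instrument's design says it should.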
"The training programmes should be tailored to the target group's actual needs to enhance the impact.For example, in the present study, the student-focused teacher educators stressing information transmission is not necessarily a negative phenomenon."The important task is to improve these teachers' teaching skills on how to guide student teachers' active learning and processing of the information.Furthermore, teacher educators with a dissonant approach to teaching might be confused about their teaching and experience more negative emotions than teachers with a consonant approach to teaching.Thus, pedagogical training could be organised for them to develop their teaching to be more student-focused.The training programmes should guide all teacher educators to be reflective, not only on how they approach teaching, but also concerning how their particular approaches to teaching influence the way their students approach learning."As university academics, teaching and conducting research constitute most teacher educators' main workload. "However, conflicts may occur, for example as shown in the study, between teachers' self-positioning as more of teachers and the university's value-orientations for teachers to conduct more research.One suggestion to reduce the conflict would be to build a close relationship between research and teaching both at the institutional and individual level.At the institutional level, universities need to consider their developing strategies first and then provide the teachers with support to encourage their engagement in research and teaching.For example, proper workload should be arranged for teachers to keep a balance between these two tasks."Teacher educators' efforts to integrate research and teaching could be rewarded by the funding system.Furthermore, universities need to build a scholarly community that does not view teachers separately as the ones who teach and the ones who conduct research.Indeed, a community of practice could be built for teacher educators to increase their research capability, especially for those who consider themselves less as researchers.A culture of research and integrating research in teaching in teacher education needs to be encouraged.At the individual level, teacher educators need to consider the possibilities to link research and teaching in their everyday work.A deep reflection on their views of research and teaching is needed before they take actions to link research and teaching.Furthermore, teacher educators need to relate research with teaching in a student-focused way, such as involving students in research.Sufficient support from the university to give individual teachers more control over their own teaching and research would be important.Teacher educators then could decide in which specific ways to link their research and teaching appropriately to make the two tasks not to obstruct each other.For example, they could design the curriculum to involve more of the elements of research in teaching.Changes in teacher educators’ perceptions and actions of enhancing the relationship between research and teaching are pivotal, meanwhile challenging.All the endeavours to link research and teaching need to build on what teacher educators already have as strength, for instance, their perceptions of a student-focused approach to teaching.This work was supported by the Chinese Scholarship Council.No potential conflict of interest was reported by the authors. 
| This study explores teacher educators’ perceptions of their approaches to teaching and the closeness of their research and teaching. A total of 115 participants completed a questionnaire. The results showed that these teacher educators perceived information transmission as an element of the student-focused approach to teaching. Three clusters were identified which mirrored different kinds of combinations of the teacher- and student-focused approaches to teaching. The results further revealed that these clusters were related to how closely teacher educators considered their teaching and research to be related to each other. |
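The clustering-and-comparison workflow reported in this study can be approximated with standard open tools. The sketch below is not the authors' analysis (which used a two-step cluster analysis and Bonferroni post-hoc tests); the simulated scale scores, the Gaussian-mixture approximation with BIC-based model selection, and the Tukey post-hoc test are illustrative assumptions.

```python
# Sketch of a cluster-then-compare workflow like the one described above.
# Assumptions: `scores` holds one row per teacher educator with scale means for
# the student- and teacher-focused approaches (simulated here); the two-step
# cluster analysis is approximated by BIC-based Gaussian mixture modelling.
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.mixture import GaussianMixture
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
scores = pd.DataFrame({
    "student_focused": rng.normal(3.8, 0.5, 115),
    "teacher_focused": rng.normal(3.0, 0.6, 115),
})
X = scores[["student_focused", "teacher_focused"]].to_numpy()

# Choose the number of clusters by the lowest BIC, mirroring the two-step procedure.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(2, 6)}
best_k = min(bics, key=bics.get)
scores["cluster"] = GaussianMixture(n_components=best_k, random_state=0).fit_predict(X)

# One-way ANOVA across clusters on the student-focused scale.
groups = [g["student_focused"].to_numpy() for _, g in scores.groupby("cluster")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"k = {best_k}, student-focused scale: F = {f_stat:.2f}, p = {p_val:.3f}")

# Effect size: eta squared = SS_between / SS_total for a one-way design.
grand = scores["student_focused"].mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_total = ((scores["student_focused"] - grand) ** 2).sum()
print(f"eta squared = {ss_between / ss_total:.3f}")

# Post-hoc pairwise comparisons (Tukey HSD here; Bonferroni tests are analogous).
print(pairwise_tukeyhsd(scores["student_focused"], scores["cluster"]))
```

The same pattern (fit, compare clusters, report an effect size, then pairwise post-hoc tests) applies regardless of which clustering algorithm or correction method is chosen.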
542 | Glucose Metabolism and Oxygen Availability Govern Reactivation of the Latent Human Retrovirus HTLV-1 | HTLV-1 is a human retrovirus that primarily infects CD4+ T lymphocytes.It is a single-stranded positive-sense RNA virus that reverse transcribes its RNA genome and integrates the resulting double-stranded DNA copy of its genome into the host cellular chromatin, thereby establishing a persistent infection in the cells.It is estimated that 5–10 million people worldwide are HTLV-1 carriers.HTLV-1 infection is asymptomatic in most cases.In a subset of infected individuals, however, HTLV-1 infection progresses to either a CD4+ T cell malignancy known as adult T cell leukemia/lymphoma or to HTLV-1-associated myelopathy, a progressive inflammatory disease of the spinal cord.HTLV-1 infection is considered to be largely latent in infected individuals, because the viral structural RNA and protein products are usually undetectable in freshly isolated infected peripheral blood mononuclear cells.However, the presence of a sustained chronically activated cytotoxic T cell response against HTLV-1 antigens, in infected individuals, suggests that the immune system frequently encounters newly synthesized viral antigens within the body.Hence it is important to understand the mechanisms that regulate the latency, reactivation, and productive infection of HTLV-1 in vivo.This understanding might suggest strategies to reactivate the dormant virus and make it accessible to attack by the immune system and antiretroviral drugs.Previous work on HTLV-1 proviral latency and reactivation has largely focused on identifying the viral factors involved, such as the HTLV-1 integration site.Spontaneous reactivation of the provirus is associated with integration in transcriptionally active euchromatin, in opposite transcriptional orientation of the nearest host gene and within 100 bases of certain transcription factor binding sites.Recently, it has been shown that the HTLV-1 provirus binds the key host chromatin insulator binding transcription factor CTCF.The functional consequences of this binding are not yet clear; we hypothesize that CTCF binding regulates proviral transcription, local chromatin structure, and alternative splicing of mRNA.Different cell types are exposed to widely differing oxygen tension in vivo.While pulmonary alveolar cells experience oxygen concentrations of ∼15%, embryonic, neural, mesenchymal, and hematopoietic stem cells are exposed to profoundly hypoxic environments.Viruses have evolved different strategies either to counter the detrimental effects of oxygen variations or to exploit hypoxic cellular metabolism to their own advantage, thereby enabling them to infect different cell types and replicate or persist in the host.Circulating CD4+ T cells are of special interest as they encounter frequent changes in oxygen tension and extracellular fluid, and contact several different cell types.We set out to test the hypothesis that these extracellular stresses and changing microenvironment influence transcription of the HTLV-1 provirus.To this end we took two complementary approaches.First, in a “top-down” approach we studied the effects on HTLV-1 transcription of cellular stresses routinely encountered within the body such as physiological hypoxia and nutrient limitation.Second, in a “bottom-up” approach we identified specific epigenetic and transcriptional changes in the provirus in response to specific inhibitors of cellular metabolic and stress-response pathways that are known to play an 
important role in cellular adaptation to the external environment.To minimize artifacts due to in vitro selection and adaptation, we employed primary PBMCs isolated from HTLV-1-infected individuals to study the effects of various stresses and inhibitors on HTLV-1 reactivation from latency.Plus-strand HTLV-1 gene expression is typically undetectable in fresh PBMCs obtained from HTLV-1-infected individuals, but there is a strong spontaneous increase in the expression of Tax, the viral transactivator of plus-strand gene transcription, within a few hours of isolation.An increase in Tax transcription can also be observed in cultures of whole blood obtained from HTLV-1-infected individuals.These observations suggest that changes in the extracellular microenvironment have an important impact on HTLV-1 proviral transcription.The concentration of oxygen in venous blood ranges from 5% to 10%, and in air at sea level is ∼20%."Previous studies of gene expression in HTLV-1-infected patients' PBMCs have been carried out under ambient atmospheric conditions.However, lymphocytes spend most of their time in the highly hypoxic environment of the lymph circulation or solid lymphoid organs.To study the impact of physiological hypoxia on the integrated HTLV-1 virus in naturally infected cells, we cultured primary HTLV-1-infected PBMCs overnight either under physiologically relevant hypoxia or normoxia.RNA was then extracted and subjected to qPCR with primers specific for HTLV-1 tax, HTLV-1 sHBZ, and host cellular VEGF.There was a significant increase in plus-strand transcription in PBMCs cultured under physiological hypoxia when compared with PBMCs cultured under ambient oxygen conditions.No such change was seen in sHBZ transcription levels.As expected, there was also a significant upregulation in transcription of the VEGF gene, which is regulated by the hypoxia-induced transcription factor.Next, we tested whether the hypoxia-mediated induction of HTLV-1 plus-strand transcription is associated with a specific epigenetic signature in the provirus, by analyzing changes in histone methylation status.HTLV-1-infected frozen PBMCs were thawed and either fixed immediately or cultured overnight under either hypoxia or normoxia, and fixed subsequently at 17 hr.Chromatin from the respective samples was fragmented by sonication and subjected to chromatin immunoprecipitation, using antibodies directed against H3K4me3, which is associated with promoters and enhancers; H3K36me3, which is associated with actively transcribed gene bodies; and H3K27me3, a repressive epigenetic mark associated with heterochromatin.After overnight incubation, H3K4me3 was observed to be significantly enriched at the 5′-LTR junction, gag, pol, env, and vCTCF regions of the HTLV-1 provirus in cells cultured in 1% as well as 20% oxygen.By contrast, there was no significant change in H3K4me3 at the 3′-LTR junction.Similarly, levels of the H3K36me3 mark, which corresponds with proviral activation, were increased across all the regions of the HTLV-1 provirus under physiological hypoxia as well as normoxia.However, the increase observed in the gag and pol regions was not statistically significant, perhaps because these regions were highly enriched in the H3K36me3 mark even at time 0 in immediately fixed PBMCs.Although there was a slight increase in H3K27me3 levels in all the tested regions of the HTLV-1 provirus upon proviral activation under 1% or 20% oxygen, this increase was statistically significant only in the 3′-LTR junction region.However, the 
apparent change in H3K27me3 status was not associated with any perturbation in minus-strand HTLV-1 transcriptional activity in response to hypoxia, so was not investigated further.H3K4me3 and H3K36me3 are two of the most dynamic histone methylation modifications at the HTLV-1 provirus upon reactivation.However, their levels did not differ within our limits of detection in PBMCs cultured under hypoxia or normoxia.Many transcriptional responses to hypoxia in cells are mediated by the α,β-HIF transcription factors.Under normoxic conditions, HIF-1α subunits are efficiently hydroxylated by the HIF prolyl hydroxylases and are rapidly degraded by the ubiquitin-proteasome system.The three human PHDs belong to the Fe and 2-oxoglutarate-dependent oxygenase enzyme family, whose activities are dependent on molecular oxygen.Under hypoxic conditions, PHD activities are inhibited and α,β-HIF is stabilized, thus enabling it to orchestrate the cellular response to hypoxia through its role as a transcriptional activator.Because experimental work under hypoxic conditions imposes significant constraints, hypoxia mimics and HIF stabilizers have been developed to study hypoxia responses under normoxic conditions.Dimethyloxalylglycine, a competitive antagonist of 2-OG, is a broad-spectrum inhibitor of 2-OG oxygenases.In contrast, the recently developed molecule IOX2 is a much more selective inhibitor of the PHDs and stabilizes HIF without affecting the activities of most other 2-OG oxygenases.We studied the effects of DMOG and IOX2 on the transcriptional activity of the HTLV-1 provirus in PBMCs isolated from infected individuals.The PHD-selective inhibitor IOX2 had no effect on HTLV-1 plus-strand transcription.The PHD inhibition activity of IOX2 in cells was confirmed by a significant increase in transcription of the HIF-1α-inducible gene VEGF.Thus, the HIF-mediated transcriptional response probably does not play a direct role in the hypoxia-induced increase in HTLV-1 transcription.We observed a small but significant decrease in minus-strand sHBZ transcription on treatment with IOX2; the reasons for this effect are unknown, but could relate to altered levels of one of the many HIF target genes.These results imply that HTLV-1 transcription is not directly HIF regulated, but does involve a hypoxia-related mechanism.Compared with control cells, DMOG-treated infected PBMCs also showed a significant upregulation in positive control VEGF transcription, signifying HIF stabilization in DMOG-treated cells and consistent with the use of DMOG as an experimental mimic of hypoxia.However, DMOG exerted a strong and paradoxical effect on plus-strand HTLV-1 transcription.In contrast to hypoxia, which induced plus-strand transcription, DMOG caused a significant and strong inhibition of HTLV-1 plus-strand transcription.There was also an associated significant increase in minus-strand sHBZ transcription after DMOG treatment.To rule out a possible effect of inhibitor cytotoxicity, we carried out dose-response analyses for both DMOG and IOX2.The results confirmed the observations obtained above.The HTLV-1 Tax-mediated positive feedback loop governs plus-strand HTLV-1 transcription.Tax transactivates transcription of the plus-strand genes, including itself.HBZ is encoded by the minus strand and has been reported to inhibit Tax-mediated plus-strand transcription.To analyze further the opposing effects of DMOG on plus- and minus-strand HTLV-1 transcription, we investigated the impact of DMOG on HTLV-1 transcription at an earlier time 
point."After 2 hr of culture of HTLV-1-infected patients' PBMCs with either DMOG or DMSO, there was a strong inhibition of tax mRNA transcription with DMOG treatment when compared with control.However, there was no corresponding significant increase in sHBZ transcription at 2 hr, suggesting that the inhibitory effect of DMOG on the plus strand is independent of its effect on the minus strand.However, a direct or indirect involvement of DMOG on the HBZ protein pools at the 2-hr time point cannot be ruled out.Also, there was no increase in transcription of the HIF-inducible gene VEGF at 2 hr, suggesting that the stabilization of HIF-1α in response to DMOG treatment did not occur at this early time point in primary PBMCs.The 2-OG oxygenase enzyme family plays diverse roles in humans, including roles in collagen biosynthesis, fatty acid metabolism, hypoxic signaling, and nucleic acid and DNA repair and demethylation."We wished to investigate whether specific 2-OG oxygenases regulated the observed epigenetic changes associated with HTLV-1 transcription in primary infected patients' PBMCs.The 2-OG oxygenases with important transcriptional regulatory roles in host cells include the HIF PHDs, Jumonji C histone lysine demethylases, DNA cytosine hydroxylases, and the AlkB homolog nucleic acid demethylases, some, but not all, of which are HIF target genes.We had already ruled out the direct involvement of PHDs by employing the selective PHD inhibitor IOX2, which had no effect on plus-strand HTLV-1 transcription.To further investigate these results, we employed Methylstat and JIB-04, which are broad-spectrum JmjC KDM inhibitors.These inhibitors had no significant effects on HTLV-1 plus-strand transcription, consistent with the conclusion that the JmjC KDMs do not play a direct regulatory role in HTLV-1 plus-strand transcription.In agreement with a previous report, Methylstat treatment resulted in a significant reduction in VEGF mRNA transcription.In contrast, JIB-04 treatment significantly induced VEGF transcription.To our knowledge, there have been no prior studies examining the effect of JIB-04 on VEGF mRNA and angiogenesis.Furthermore, ChIP analysis showed no significant difference in either H3K4me3 or H3K36me3 when compared with control in the 5′-LTR and the gag region of the provirus.There was a significant increase in H3K4me3 at the 3′-LTR in DMOG-treated samples compared with control.This increase is consistent with the observed increase in sHBZ mRNA levels following DMOG treatment.There was no difference in the DNA methylation profile between DMOG-treated and DMSO-treated samples, indicating that members of the TET and AlkB homolog subfamilies of DNA demethylases/hydroxylases are unlikely to be directly involved in regulating the observed spontaneous HTLV-1 plus-strand transcription.We conclude that the epigenetic effector 2-OG oxygenases are not directly involved in HTLV-1 plus-strand transcription.2-OG, in addition to being a co-substrate for 2-OG oxygenases, is also an important intermediate in metabolism, specifically the tricarboxylic acid cycle, oxidative phosphorylation, and amino acid metabolism."To test the hypothesis that DMOG influences HTLV-1 transactivation through perturbation of these cellular metabolic pathways, we treated HTLV-1-infected patients' PBMCs overnight with either 0.5 mM DMOG in DMSO, or DMSO alone.The cells were lysed and the extracts subjected to ion-exchange liquid chromatography coupled directly to tandem mass spectrometry, to identify the metabolic pathways 
modulated by DMOG. Untargeted metabolite profiling measured 4,261 molecular species with a unique mass-to-charge ratio. The identified metabolites were sorted according to the maximum fold change in normalized abundance between DMSO-treated and DMOG-treated samples. Any change that was statistically significant and exceeded 1.3-fold was analyzed further. N-Oxalylglycine (NOG) was found in high abundance in DMOG-treated cells but not in DMSO controls. This observation is consistent with hydrolysis of the prodrug DMOG to NOG in cells by carboxylesterases. Endogenous metabolites that showed a statistically significant difference were sorted according to the metabolic pathways in which they participate. The results showed significant changes in six metabolic pathways. (1) Glycolysis: a significant reduction was observed in the levels of most of the measured glycolytic intermediates in response to DMOG treatment. In particular, intermediates toward the end of glycolysis, 2,3-diphosphoglycerate and 3-phosphoglycerate, were highly depleted in DMOG-treated cells. Glyceraldehyde-3-phosphate and α-D-glucose 6-phosphate were the only glycolytic intermediates that showed significantly higher levels in DMOG-treated cells compared with control. Pyruvate levels remained largely unchanged in response to DMOG treatment. There was a significant reduction in the levels of the glycolysis and TCA-cycle products NADH and ATP in response to DMOG treatment. (2) TCA cycle: being an analog of the TCA-cycle metabolite 2-OG, DMOG affects the TCA cycle; specifically, the levels of the TCA-cycle intermediates citrate, cis-aconitate, isocitrate, 2-OG, and malate were significantly lower in DMOG-treated HTLV-1-infected PBMCs than in control cells. Only succinate levels remained unchanged among the measured TCA-cycle metabolites in response to DMOG treatment. DMOG inhibits mitochondrial respiration independently of its hypoxia-mimic effect. (3) Pentose phosphate pathway: higher levels of glyceraldehyde-3-phosphate, α-D-glucose 6-phosphate, 6-phosphogluconate, ribulose 5-phosphate, and D-ribose, in combination with low levels of 1-deoxy-D-xylulose 5-phosphate, deoxyribose 5-phosphate, and sedoheptulose 1,7-bisphosphate, in DMOG-treated samples compared with control pointed toward a downstream inhibition of the pentose phosphate pathway in response to DMOG treatment. (4) Redox metabolism: the pentose phosphate pathway is responsible for synthesis of NADPH, a reducing equivalent in cell metabolism, e.g. in the conversion of oxidized glutathione disulfide to reduced glutathione to counteract cellular redox stress. The decrease in cellular NADPH levels, together with the associated decrease in glutathione levels in DMOG-treated cells, suggests that the DMOG-treated cells were under oxidative stress. This proposal is supported by the observed impairment of oxidative phosphorylation (above) and of the mitochondrial electron transport chain, both of which generate oxidative stress. (5) Purine and pyrimidine metabolism: while levels of the purine metabolites adenosine and guanosine monophosphate were significantly higher in DMOG-treated cells compared with controls, the corresponding nucleoside triphosphates were significantly depleted. All pyrimidine metabolites measured were depleted in DMOG-treated cells compared with control. dCTP, a precursor for DNA synthesis, was strongly depleted by DMOG treatment. (6) Amino acid, carbohydrate, and lipid metabolism: DMOG treatment resulted in a significant increase in L-tryptophan levels. This might be expected given the
strong concomitant reduction in tryptophan degradation products, quinolinic acid and 2-aminomuconic acid semialdehyde, observed in the DMOG-treated samples.Inhibition of glucose metabolism by DMOG might in turn inhibit many anabolic processes, because of the reduction in ATP and NADH.The observed significant decrease in the levels of N-acetylated metabolites could be due to the limited availability of acetyl-coenzyme A resulting from an inhibition of glycolysis and the TCA cycle.Thus most of the observed metabolic effects of DMOG, discussed above, are secondary to its perturbation of glycolysis, the TCA cycle, and oxidative phosphorylation.Hence we went on to target these pathways with specific inhibitors to study whether selective inhibition of certain enzymes could inhibit tax transcription in HTLV-1-infected primary PBMCs.To investigate the influence of glucose metabolism on HTLV-1 transcription, we tested the effects of the TCA-cycle inhibitor sodium arsenite and the glycolysis inhibitor iodoacetate and quantified their effects on HTLV-1 plus-strand transcription in primary PBMCs derived from HTLV-1-infected individuals.IAA treatment significantly inhibited HTLV-1 tax mRNA transcription, whereas arsenite caused no change in tax mRNA levels at the concentration tested.The stress-inducible enzyme heme oxygenase 1 served as the positive control for the biological activity of the drugs.IAA and arsenite also inhibited cellular VEGF mRNA levels.These results suggest that glucose metabolism influences reactivation of HTLV-1 from latency and reinforce the conclusion that HTLV-1 tax mRNA transcription is independent of HIF and intracellular oxidative stress.Neither IAA nor arsenite made an impact on minus-strand HTLV-1 transcription.As a further test of the involvement of glucose metabolism in HTLV-1 transactivation from latency, primary PBMCs from HTLV-1-infected individuals were cultured in RPMI medium with either 5.5 mM glucose with 10% fetal bovine serum or 0 mM glucose with 10% FBS in the presence or absence of the glycolysis inhibitor 2-deoxy-D-glucose and the TCA-cycle inducers sodium pyruvate or galactose.PBMCs from HTLV-1-infected individuals cultured overnight in RPMI with 0 mM glucose +10% FBS expressed significantly lower levels of HTLV-1 tax mRNA than did PBMCs cultured in RPMI with physiological glucose +10% FBS.Addition of the glycolysis inhibitor 2-DG to the 0-mM glucose medium led to a further significant suppression of HTLV-1 plus-strand transcription.Further addition of the TCA-cycle inducer sodium pyruvate did not rescue the inhibitory effect of 2-DG on HTLV-1 tax transcription.These observations are consistent with the data shown in Figure 5A; they indicate that glycolysis, but not the TCA cycle, plays an important role in HTLV-1 plus-strand transcription.Galactose, an inducer of the TCA cycle and oxidative phosphorylation, had no effect on HTLV-1 plus-strand transcription when added to 0-mM glucose medium, strengthening the conclusion that the TCA cycle is not involved in HTLV-1 reactivation from latency.Like IAA and arsenite, 2-DG treatment also resulted in a significant reduction in cellular VEGF mRNA levels.Inhibition of glycolysis by 2-DG also resulted in a significant decrease in the mRNA of lactate dehydrogenase A, a surrogate marker of the rate of glycolysis.There was no significant difference in sHBZ transcription upon either inhibition of glycolysis or TCA-cycle induction when compared with control.Next, we studied the effect of inhibition of the mitochondrial 
electron transport chain on HTLV-1 plus-strand transcription.PBMCs from HTLV-1-infected individuals were incubated overnight in the presence or absence of inhibitors of mitochondrial ETC complex II, complex I, complex III, and complex V.Four out of five mitochondrial ETC complex inhibitors tested caused a significant reduction in plus-strand HTLV-1 transcription: only oligomycin, an inhibitor of complex V, had no impact on HTLV-1 tax transcription.These inhibitors had varying effects on VEGF and LDHA mRNA levels.Also, there was no change in minus-strand sHBZ transcription in response to ETC-inhibitor treatment.These results suggest that the mode of action of the ETC inhibitors on plus-strand HTLV-1 transcription is an independent phenomenon and is not mediated through their perturbation of the glycolytic and hypoxia pathways in cells.CD4+ T lymphocytes, the primary reservoir of HTLV-1 in humans, are routinely exposed in vivo to alterations in the microenvironment such as changes in the oxygen tension or the concentration of glucose and other nutrients.We hypothesized that such changes influence the transcriptional activity of the HTLV-1 provirus, which is usually silent in freshly isolated PBMCs but undergoes strong spontaneous activation ex vivo.To minimize artifacts due to in vitro selection, we studied fresh PBMCs isolated from HTLV-1-infected individuals.HTLV-1-infected primary PBMCs cultured under conditions of physiological hypoxia showed a significant enhancement in HTLV-1 plus-strand transcription when compared with those cultured under ambient oxygen conditions.Hypoxia has been shown to have variable effects on viral expression within cells.For example, hypoxia induces lytic replication in human herpesvirus 8, whereas Tat-induced HIV-1 transcription is inhibited by hypoxia.Thus, viruses have developed different adaptive responses to varying oxygen levels within the body.By employing a specific HIF stabilizer, IOX2, we showed that the hypoxia-mediated effect on HTLV-1 transcription appeared to be independent of HIF.The effect of physiological hypoxia on HTLV-1 transcription suggests that the dynamics of HTLV-1 reactivation that we observe in freshly isolated PBMCs from venous blood differ from the dynamics in other compartments such as solid lymphoid tissue or bone marrow, where the oxygen tension is typically 1%–2%.Since hypoxia enhances plus-strand proviral transcription, the infected cells in these compartments might be more likely to support productive viral replication and spread.Consistent with this hypothesis, Yasunaga et al. 
reported higher levels of tax mRNA in the bone marrow than in other tissues. In contrast to the effect of hypoxia, the hypoxia mimic DMOG potently inhibited proviral plus-strand transcription in HTLV-1-infected primary PBMCs. We therefore investigated the HIF-independent effects of DMOG, either as an inhibitor of 2-OG oxygenases or as an inhibitor of glucose metabolism and the TCA cycle. Having ruled out the involvement of the epigenetic effector 2-OG oxygenases, including PHDs, JmjC lysine demethylases, and nucleic acid oxygenases, we studied the influence of glucose metabolism on HTLV-1 transcription. Mass spectrometry of DMOG-treated HTLV-1-infected primary PBMCs revealed that DMOG significantly inhibited all the metabolic pathways closely linked to glucose metabolism. It is known that DMOG inhibits the TCA cycle and mitochondrial respiration, but direct inhibition of glycolysis by DMOG has not previously been reported. Chemical inhibition of glycolysis by either iodoacetic acid or 2-DG significantly reduced plus-strand HTLV-1 transcription. However, neither an inhibitor nor inducers of the TCA cycle altered HTLV-1 transcription. We conclude that glycolysis regulates reactivation from latency of the integrated HTLV-1 provirus. Consistent with this effect, we saw a significant reduction in HTLV-1 plus-strand transcription in PBMCs cultured in glucose-free medium when compared with those cultured at a physiological glucose concentration. The glucose transporter GLUT-1 is a cellular receptor for HTLV-1 infection; expression of GLUT-1 is induced by hypoxia and other forms of cellular stress. Four out of five inhibitors of the mitochondrial ETC tested significantly reduced HTLV-1 transcription. Specifically, inhibitors of ETC complexes involved in electron transfer reduced HTLV-1 plus-strand transcription. However, inhibition of mitochondrial ATP synthase by oligomycin had no impact. It is unclear why oligomycin had no effect on HTLV-1 transcription at the tested concentration. The ETC inhibitors had variable effects on expression of the hypoxia-inducible gene VEGF and the glycolytic enzyme LDHA, suggesting that their effect on HTLV-1 transcription is independent of hypoxia and glycolysis. Ciminale and colleagues have shown that the p13 accessory protein of HTLV-1 modulates mitochondrial membrane potential by facilitating an inward K+ current, thereby increasing ETC activity and reactive oxygen species production, consistent with a link between mitochondrial function and the HTLV-1 life cycle. We propose that the strong inhibition of HTLV-1 caused by DMOG is due to inhibition of both glycolysis and the mitochondrial ETC. Thus, glycolysis and the mitochondrial ETC play an important role in regulating HTLV-1 plus-strand transcription, whereas the TCA cycle does not play a direct role. In the light of the recent observations that different subsets of T cells rely on different metabolic pathways for their energy needs, it is plausible that the TCA cycle contributes less than glycolysis to the metabolism of HTLV-1-infected CD4+ T cells. A typical HTLV-1-infected individual carries approximately 10^4 to 10^5 different HTLV-1-infected T cell clones, each with a unique proviral integration site. We conclude that both viral determinants and the host microenvironment determine the likelihood of spontaneous reactivation of an integrated HTLV-1 provirus from a latent state. Retroviruses such as HTLV-1 persist life-long in their host by integrating a copy of their genetic material into the host cellular genomic DNA. Although HTLV-1
infection is asymptomatic in most cases, in a subset of infected individuals it causes an aggressive hematological malignancy or a debilitating neuroinflammatory condition.Infection is considered largely latent due to the absence of viral RNA and proteins in fresh blood samples.However, when blood obtained from HTLV-1-infected individuals is cultured ex vivo, there is a spontaneous increase in plus-strand HTLV-1 transcription, which suggests that changes in the extracellular microenvironment play an important deterministic role in viral expression.Here, we identify two factors in the microenvironment that regulate HTLV-1 proviral latency and expression.First, we show that physiological hypoxia, as present in the lymphoid organs and bone marrow, enhances HTLV-1 transcription.Second, inhibition of glycolysis or the mitochondrial electron transport chain suppresses plus-strand HTLV-1 transcription.We conclude that both glucose metabolism and oxygen availability regulate HTLV-1 transcription.The significance of these results is twofold.First, the identification of two microenvironmental factors that regulate HTLV-1 expression constitutes a basic advance in the understanding of HTLV-1 persistence and pathogenesis.Second, targeting these pathways with currently available as well as novel therapies could complement existing antiretrovirals and improve treatment efficiency.Further information and requests for reagents may be directed to, and will be fulfilled by, the corresponding author Charles R.M. Bangham,A list of patient blood samples tested has been provided in File S1."All donors attended the National Centre for Human Retrovirology at Imperial College Healthcare NHS Trust, St Mary's Hospital, London and gave written informed consent in accordance with the Declaration of Helsinki to donate blood samples to the Communicable Diseases Research Tissue Bank which is approved by the UK National Research Ethics Service.Venous blood samples from HTLV-1-infected individuals attending the NCHR HTLV clinic were separated using Histopaque and PBMCs stored in liquid nitrogen.CD8+ T-cells were depleted from PBMCs with Dynabeads CD8, using the manufacturer’s instructions.Equal volumes of RPMI with L-Glutamine without Glucose and RPMI with L-Glutamine and 11mM Glucose were mixed to make RPMI with L-Glutamine and 5.5mM Glucose.This medium was used for cell culture with 10% FBS in all experiments, unless otherwise stated.A ‘Hypoxylab’ - hypoxia workstation and incubator was employed for all hypoxia-related experiments.All media and consumables used in hypoxic culture were conditioned to the target oxygen concentration before use.Primary PBMCs were manipulated and cultured in the HypoxyLab at the desired oxygen concentration for the indicated time.In situ dissolved oxygen in the culture media was measured using the integrated OxyLite™ oxygen sensor to accurately monitor oxygen availability in the cellular microenvironment.Multiple chemical inhibitors of different cellular pathways were employed in this study.A comprehensive list of all the compounds used along with the corresponding references is provided in Table S1.Stocks of each inhibitor were prepared in either DMSO or water, and diluted to their desired concentration in culture medium to study their effects on HTLV-1 transcription.RNA was extracted from cultured PBMCs using the RNeasy Plus Mini kit.cDNA was synthesised from the extracted RNA using the Transcriptor First Strand cDNA Synthesis kit by following the manufacturer instructions.An additional 
no-RT control was included for each cDNA sample synthesised. The kinetic PCR amplification was carried out using the Viia7 Real-time PCR system with gene-specific primers and Fast SYBR Green Master Mix. The list of primers used is given in Table S2. Up to 10 million cells were crosslinked with 1% formaldehyde for 10 min at room temperature. Fixed cells were quenched with 125 mM glycine and lysed with Cell lysis buffer. Subsequently, nuclei were pelleted by centrifugation at 2000 rpm for 5 minutes and subjected to lysis with 130 μl Nuclear lysis buffer. Nuclear lysates were sonicated using the Covaris S220 sonicator in a microTUBE, with the following Sonolab 7.2 program parameters: Peak Incident Power: 105 Watts; Duty Factor: 10%; Cycles per burst: 200; Treatment time: 480 sec; Distilled Water Temperature: 4-6°C. Sonicated lysates were subjected to immunoprecipitation using the following antibodies: ChIPAb+ Trimethyl-Histone H3 Lys 4, ChIPAb+ Trimethyl-Histone H3 Lys 36, ChIPAb+ Trimethyl-Histone H3 Lys 27, and the corresponding control IgG, overnight at 4°C in the presence of MagnaChIP A+G magnetic beads. A 10% IP input sample was collected separately as a reference for relative quantification. The bead-bound immunoprecipitated DNA was washed sequentially for 10 minutes each at 4°C with the following wash buffers: Low salt wash buffer, High salt wash buffer, and LiCl wash buffer (with 1 mM EDTA; 10 mM Tris-HCl pH 8.0), followed by two washes with TE buffer. The DNA was eluted from the beads with IP elution buffer. The eluted DNA was reverse cross-linked at 65°C overnight in the presence of 300 mM NaCl and thereafter subjected to proteinase K digestion at 45°C for 2 hours. The immunoprecipitated and input DNAs were purified using the QIAquick PCR Purification Kit. The DNA enrichment in ChIP samples was quantified using region-specific primers for the 5'-LTR junction, Gag, Pol, Env, vCTCF, Tax and 3'-LTR junction of the HTLV-1 provirus, with corresponding qPCR TaqMan probes. The kinetic PCR amplification was carried out using the Viia7 Real-time PCR system with TaqMan Gene Expression Master Mix. Genomic DNA from PBMCs was extracted using a QIAamp DNA Mini kit, following the manufacturer's instructions. The extracted DNA was sonicated using a Covaris S220 machine. MeDIP assays were carried out using the MethylCollector Ultra kit according to the manufacturer's protocol. Immunoprecipitated DNA was quantified by qPCR as described above for ChIP, using the same primers and probes. PBMCs isolated from HTLV-1-infected individuals were cultured overnight in the presence of 0.5 mM DMOG or DMSO. Briefly, pelleted cells were lysed with ice-cold 80% methanol. 0.2 ml of each lysate was then filtered using a 10 kDa molecular weight cut-off filter. The liquid which had passed through the filter was then placed in an autosampler vial and stored at -80°C. On the day of analysis the extract was allowed to warm to 4°C in the chilled autosampler and then analysed directly by LC/MS/MS. Metabolite analyses were performed using a Thermo Scientific ICS-5000+ ion chromatography system coupled directly to a Q-Exactive HF Hybrid Quadrupole-Orbitrap mass spectrometer with a HESI II electrospray ionisation source. The ICS-5000+ HPLC system incorporated an electrolytic anion generator which was programmed to produce an OH- gradient from 5-100 mM over 37 minutes. An inline electrolytic suppressor removed the OH- ions and cations from the post-column eluent prior to eluent delivery to the electrospray ion source of the MS system. A 10 μL partial loop injection was used for all analyses and
the chromatographic separation was performed using a Thermo Scientific Dionex IonPac AS11-HC 2 × 250 mm, 4 μm particle size column, with a Dionex IonPac AG11-HC 4 μm 2 × 50 mm guard column inline. The IC flow rate was 0.250 mL/min. The total run time was 37 minutes, and the hydroxide ion gradient was as follows: 0 min, 0 mM; 1 min, 0 mM; 15 min, 60 mM; 25 min, 100 mM; 30 min, 100 mM; 30.1 min, 0 mM; 37 min, 0 mM. Analysis was performed in negative ion mode using a scan range of m/z 80-900 and a resolution setting of 70,000. The tune file source parameters were set as follows: Sheath gas flow 60; Aux gas flow 20; Spray voltage 3.6; Capillary temperature 320; S-lens RF value 70; Heater temperature 450. The AGC target was set to 1e6 and the Max IT value was 250 ms. The column temperature was kept at 30°C throughout the experiment. Full scan data were acquired in continuum mode across the mass range m/z 60-900. Raw data were processed using Progenesis QI for small molecules. Briefly, this encompassed chromatographic peak alignment, isotope cluster recognition and compound identification. Identification of compounds in experimental samples was based on matching to an in-house library of authentic standards, using four measured parameters for each compound from the database. These were: accurate mass measurement based on the theoretical mass derived from the chemical formula, an experimental retention time window of 0.5 min, isotope pattern recognition, and matching with fragmentation patterns from an authentic standard where these were available from survey scans. All values in the database were obtained from the analysis of authentic standard compounds. Statistical analysis was performed using Progenesis QI and the EZinfo plugin for Progenesis QI developed by Umetrics. Supervised and unsupervised modelling of the data was performed. Volcano plots, S-plots and VIP values were extracted from OPLS-DA models to help identify highly significant compounds and potential biomarkers. p-values, %CV and fold-changes associated with the statistical comparison of experimental groups were calculated for each metabolite using Progenesis QI. These data were used to identify and evaluate potential metabolic biomarkers. The identified metabolites were sorted according to the maximum fold difference in normalized abundance between DMSO-treated and DMOG-treated samples. Any difference in metabolite abundance that was statistically significant and exceeded a 1.3-fold change was analysed further. The resultant metabolite list was mapped according to the metabolic pathways with which the metabolites were associated. The LinRegPCR-Ct method was used for relative quantification of target mRNA levels. LinRegPCR software was used to determine the baselines, threshold and mean efficiency (E) of the reaction to calculate the target mRNA quantity, where R0 = Threshold/(E^Ct). All values were normalized to their respective 18S rRNA levels, which served as the internal PCR control. DNA enrichment in ChIP samples was calculated as % Input = (E^ΔCt) × 10, where ΔCt = Ct(input) − Ct(sample); LinRegPCR software was used to determine the mean efficiency (E) of the reaction for each primer pair (a code sketch of these quantification calculations is given below). Two-tailed Student's t test, Wilcoxon matched-pairs signed rank test and one-way ANOVA with a post-test for linear trend were employed for statistical analysis of the data, as described in the corresponding figure legends. The raw metabolomics data with the identified metabolites are provided in File S2. Conceived and designed the experiments: A.K., C.C.T., C.J.S., and C.R.M.B.; Performed the experiments: A.K., M.M., and J.S.M.; Analyzed the data: A.K., J.S.M.,
Two-tailed Student's t-test, Wilcoxon matched-pairs signed-rank test and one-way ANOVA with a post-test for linear trend were employed for statistical analysis of the data, as described in the corresponding figure legends. The raw metabolomics data with the identified metabolites are provided in File S2. Conceived and designed the experiments: A.K., C.C.T., C.J.S., and C.R.M.B.; Performed the experiments: A.K., M.M., and J.S.M.; Analyzed the data: A.K., J.S.M., C.J.S., and C.R.M.B.; Contributed reagents/materials/analysis tools: C.C.T., C.J.S., and J.S.M.; Writing – original draft: A.K. and C.R.M.B.; Writing – review and editing: A.K., C.R.M.B., C.J.S., J.S.M., and G.P.T.; Recruited patients: G.P.T. | The human retrovirus HTLV-1 causes a hematological malignancy or neuroinflammatory disease in ∼10% of infected individuals. HTLV-1 primarily infects CD4+ T lymphocytes and persists as a provirus integrated in their genome. HTLV-1 appears transcriptionally latent in freshly isolated cells; however, the chronically active anti-HTLV-1 cytotoxic T cell response observed in infected individuals indicates frequent proviral expression in vivo. The kinetics and regulation of HTLV-1 proviral expression in vivo are poorly understood. By using hypoxia, small-molecule hypoxia mimics, and inhibitors of specific metabolic pathways, we show that physiologically relevant levels of hypoxia, as routinely encountered by circulating T cells in the lymphoid organs and bone marrow, significantly enhance HTLV-1 reactivation from latency. Furthermore, culturing naturally infected CD4+ T cells in glucose-free medium or chemical inhibition of glycolysis or the mitochondrial electron transport chain strongly suppresses HTLV-1 plus-strand transcription. We conclude that glucose metabolism and oxygen tension regulate HTLV-1 proviral latency and reactivation in vivo. The human leukemia virus HTLV-1 remains dormant most of the time in the infected person, but is intermittently reactivated by unknown mechanisms. Kulkarni et al. show that fluctuations in glucose metabolism and oxygen availability are two major factors that govern the reactivation of HTLV-1 from dormancy. |
543 | Introducing the H2020 AQUACROSS project: Knowledge, Assessment, and Management for AQUAtic Biodiversity and Ecosystem Services aCROSS EU policies | Aquatic ecosystems are rich in biodiversity and home to a diverse array of species and habitats. These ecosystems are vital to economic and social well-being, including through contributing to socio-economic security and human health, supplying clean water, preventing floods, producing food, and providing energy, among others. Around Europe, as in the rest of the world, many of these valuable ecosystems are currently at significant risk of being irreversibly damaged by human activities and by the numerous pressures these create, including pollution, contamination, invasive species, and overfishing, as well as climate change. Current and forecasted trends of biodiversity loss in aquatic ecosystems raise substantial concern not only on grounds of environmental impacts and loss of ecosystem processes and functions, but also in terms of their effects on human well-being through the provision of ecosystem services. Aquatic biodiversity is declining worldwide at an alarming pace, forcing scientists and policymakers to act together to identify effective policy solutions. Internationally, action has been promoted under the Convention on Biological Diversity and via a number of related protocols and conventions, such as the Bonn Convention on Migratory Species and the Bern Convention on the Conservation of European Wildlife and Natural Habitats. In parallel, the EU is taking action on multiple fronts to safeguard the status of aquatic ecosystems. These international goals and commitments are also reflected within the EU through a range of policies, regulations and directives; these include the Birds and Habitats Directives, the Water Framework Directive, the Marine Strategy Framework Directive, the Blueprint to Safeguard Europe's Water Resources and, more recently, the EU Biodiversity Strategy to 2020. To date, despite these many environmental initiatives, EU directives have been unable to halt and reverse the trend of declining biodiversity in aquatic ecosystems, and the EU Biodiversity Strategy is at risk of failing. In the EU, the lack of success is the result, among other things, of a static view towards EU policies, their fragmented design and implementation, and the divisions in governance between the public and private sectors. In practical terms, a better understanding of the state of aquatic ecosystems, the services they deliver, the pressures that impact them, and the causes of these pressures, including their thresholds and tipping points when impacted by changing drivers and pressures, is required, and the need for more holistic approaches to environmental management has been widely recognised. Two promising approaches to work towards meeting these challenges include Ecosystem-Based Management (EBM), which explicitly considers the full range of ecological and human interactions and processes necessary to sustain ecosystem composition, structure and function and integrates the connections between land, air, water and all living things, including human beings and their institutions, and the Ecosystem Services (ES) Approach, which enables integration of the many different types of benefits derived from biodiversity into the management of environmental resources for society and the economy. Both EBM and the incorporation of ecosystem services have been widely championed in academic research and through a variety of major EU research projects, and the language of EBM and of ecosystem services is included within
many of the EU environmental Directives, yet these more holistic, integrative approaches to management have proved difficult to put into practice. Recognizing the many parallel environmental management efforts at play stemming from diverse EU directives and regulations, the AQUACROSS research project aimed to develop mechanisms for harmonized implementation of environmental management directives and regulations and to expand the empirical as well as practical basis for application of the Ecosystem-Based Management concept for all aquatic ecosystems along the freshwater, coastal, and marine water continuum. At its core, the project aimed to be of direct policy relevance, in particular for supporting the timely achievement of the targets set out by the EU Biodiversity Strategy to 2020 and its strategic plan 2012–2020, by promotion of ES and EBM concepts in the statutory management processes set out under EU regulations. This paper presents the context, approaches and objectives of the AQUACROSS project, and describes its strongly integrative and transdisciplinary approach, highlighting the major project outputs and providing context for the individual project components which have contributed to this dedicated special issue. To this end, we reflect on the state of the art of ecosystem-based management and the ecosystem services approach prior to AQUACROSS's inception. We then outline AQUACROSS's key objectives and outputs before describing the AQUACROSS approach. Section 5 concludes by emphasising the project's promotion of EBM to generate tangible real-world examples of EBM application. AQUACROSS was designed to advance knowledge in three particular fields of research relating to both the social and ecological components of social-ecological systems: (i) the application of EBM for the management of aquatic ecosystems, including through the development of a holistic conceptual framework to integrate social and ecological components of research and to provide a loosely standardized protocol for conducting EBM; (ii) the understanding of the biodiversity–ecosystem services causality chain (e.g. Culhane et al.,
this issue), to understand the risks posed by human activities to ecosystem components and habitats and the services they provide, across different aquatic ecosystem types; and (iii) methods and mechanisms to develop and promote socially, politically and economically acceptable EBM solutions into local management. EBM can be defined as an integrated approach to management that considers the entire ecosystem, including humans. The goal is to maintain ecosystems in a healthy, clean, productive and resilient condition, so that they can provide humans with the services and benefits upon which we depend. Management decisions should not adversely affect ecosystem functions and productivity, so that the provisioning of aquatic ecosystem services can be sustained in the long term. EBM is also relevant to maintain and restore the connection between social and ecological systems. Indeed, EBM now encompasses a whole range of decision-making support tools, and has in that context permeated scientific and policy practice related to the management of aquatic ecosystems, and the language of EBM and of ES, for example, is present within many of the newer EU directives and regulations. A major challenge nevertheless remains in the establishment of an operational framework that links the assessment of biodiversity and ecological processes and their full consideration in public and private decision-making. EBM implementation remains limited in particular regarding i) the lack of explicit consideration of the ecosystem services concept, which would critically help link ecological assessments with the achievement of human well-being, thereby enhancing the relevance of achieving biodiversity targets for a range of public and private actors; ii) a primary focus on ecological dimensions, which may limit the acceptability of EBM relative to a more truly holistic consideration of social-ecological processes, which would also enhance our integrated understanding of relevant dynamics and feedbacks between society and environment; iii) the lack of attention to trade-offs, uncertainties, and thresholds inherent in the management of ecosystems; and iv) the lack of standardized methodologies and approaches. In this context, AQUACROSS developed an analytical framework to enable a common approach to Ecosystem-Based Management across ecosystems and management contexts. A better understanding of the links between biodiversity and ecosystem functions and services, and of how natural and anthropogenic drivers and pressures alter these relationships, is essential to inform decision-making to support the achievement of biodiversity targets. Knowledge regarding these linkages has progressed rapidly since the early 1990s. Substantial evidence indicates the positive influence of biodiversity on freshwater and marine ecosystem functions, the provision of ecosystem services and overall ecosystem resilience. The relationship between biodiversity and ecosystem functions has in particular been studied, with evidence that these relationships vary depending on the relative contribution of dominant and minor species, environmental context, and density dependence and species interactions. In parallel, significant efforts have been made on building modelling capacities, to test key causal links between biodiversity and ecosystem functions and to increase our ability to forecast future dynamics. However, the whole biodiversity causality chain remains poorly understood. Insufficient evidence exists to determine the modifying effects of environmental factors, such as nutrient concentration,
altered physical structures, or elevated CO2 on biodiversity and community dynamics and, subsequently, ecosystem properties. Most studies fail to find tangible links between the structure, diversity and dynamics of natural communities and their ability to deliver ecosystem services that directly affect human well-being. In addition, current modelling predictions remain very limited. For example, few studies have explicitly incorporated structuring abiotic and biotic features that are key to species co-existence and vital for the maintenance of species diversity. While more advanced dynamical modelling approaches have been developed, their complexity has led to limited practical application. Models usually only cover selected ecosystem functions and are rarely able to link them to targets of biodiversity conservation and to socio-economic variables. They have also largely neglected the coupling of social-ecological systems and often exhibit significant weaknesses regarding the complex and adaptive nature of these systems, such as assuming linear response kinetics and ignoring regime shifts, uncertainty, and the variability of human responses to policies, management decisions and environmental change. Under such conditions of uncertainty, risk-based approaches may provide a useful practical basis for incorporating what is known about system behavior into specific management strategies, and several AQUACROSS outputs based on these causality chains are described in detail in this special issue. EU policies on water, the marine environment, nature and biodiversity together form the backbone of environmental protection of Europe's aquatic ecosystems and their services. One of the biggest challenges for the implementation of the EU Biodiversity Strategy to 2020 is to exploit synergies and reduce conflicts between these policy fields, and to effectively address activities harmful to the protection and sustainable management of aquatic ecosystems. It is widely recognised that effective streamlining and coordination of EU environmental policy can not only be supported by developing innovative concepts and methods and tackling knowledge gaps, but also requires the involvement of society in policy design and research activities. Participation may have both an instrumental role and additionally a normative one. In view of building resilience, stakeholder engagement is also a key process that helps build the capacity of actors to mobilise knowledge and resources for action and promote social learning by changing actors' relationships, understanding, values and norms. While the benefits of stakeholder engagement are established in theory, limited attention is paid to the "policy demand" of such processes in practice. In many participatory research initiatives, stakeholders and decision-makers often play a purely advisory and observer role, with minor influence on the research carried out. As a result, and despite an increasing number of dissemination events targeting stakeholders and policy makers in past and on-going research activities, the impact of research results continues to remain limited, which reduces the scope for evidence-based policy-making and hinders the potential uptake of identified solutions. The overall aim of the AQUACROSS project has been to support the coordinated implementation of the EU 2020 Biodiversity Strategy and international biodiversity targets, and by doing so to ensure improved functioning of aquatic ecosystems as a whole. More specifically, AQUACROSS has had the following research goals:
(i) to explore, advance and support the implementation of the EBM concept across aquatic ecosystems in the EU and beyond for the purposes of enhancing human well-being; (ii) to specifically identify and test robust, cost-effective and innovative management and business models and tools for seizing the opportunities offered by aquatic ecosystem services that correspond to the objectives and challenges faced by stakeholders, businesses, and policy makers; and (iii) to mobilise policy makers, businesses, and societal actors at global, EU, Member State, and case-study levels in order to learn from real-world experiences aligned with EU policy implementation, and to co-build and test assessment frameworks, concepts, tools, management approaches, and business models, to ensure end-users' uptake of project results. AQUACROSS has focused its research activities on identifying synergies and overcoming barriers between policy objectives, concepts, knowledge, data streams, and management approaches for freshwater, coastal, and marine ecosystems. To do so, AQUACROSS applied end-user-driven processes and social innovation. The first two goals described above were supported by two specific sub-objectives: first, to provide an interdisciplinary assessment framework to support an EBM approach, built both on exploring the evidence of links between biodiversity and aquatic ecosystem functions and services, as well as between drivers and pressures, changes in the status of biodiversity and the delivery of biophysical flows of ecosystem services, and on linking this to the future impacts these will have in turn on human well-being; and second, to overcome knowledge gaps on evaluating the effects of biodiversity change on ecosystem services by providing assessments of how ecosystem functions cascade into service supply, delivery, and value, moving beyond ideal experimental conditions to realistic management scenarios in which services are actually delivered to society at large. The third goal was supported by the specific objective of developing a network of interdisciplinary, adaptive, and participatory EBM experiments that cover a gradient of landscapes and seascapes, as well as a diversity of socioeconomic contexts. Here, demand-driven approaches were central to ensuring that AQUACROSS research addressed issues that were important to stakeholders, taking their needs and knowledge into account, providing opportunities for co-learning, and feeding into public and private decision-making. In this section, we introduce how AQUACROSS has set out to achieve its objectives. Given the integrative and interdisciplinary nature of the project, guiding concepts and an overarching theoretical framework were essential to enable parallel collaborative work strands; these are summarised in Section 4.1. Application of the theoretical framework is outlined in Section 4.2. This application was built around four pillars: 1 – real-world testing, 2 – giving direction, 3 – improving scientific knowledge, and 4 – improving management. Given their central role as a practical testing ground for AQUACROSS concepts and as a source of insights and conclusions, Section 4.3 introduces each of the eight case studies in the AQUACROSS project. Integration as well as inter- and trans-disciplinary research were central to the project, and the application of these approaches to the challenges of EBM across aquatic ecosystems was its main innovation. AQUACROSS combined scientific analyses to develop an integrative understanding of drivers, pressures, state of ecosystems, ecosystem
services, and impacts on aquatic ecosystems based on an adaptation of the well-known DPSIR analytical framework.At the outset and throughout, the project incorporated stakeholder and end-user engagement into the assessment of causal links between ecosystems and the services they provide.This integration is illustrated by the way AQUACROSS addressed both the harmonisation and streamlining of environmental policies under the overall framework of the EU Biodiversity Strategy to 2020; in the coordination of policies in transitional and coastal waters, where different policy directives apply, and through the integration of relevant information for the assessment of aquatic ecosystems across the freshwater-saltwater continuum.By addressing and integrating across all aquatic ecosystems, the project mobilised biologists, ecologists, chemists, eco-toxicologists, hydrologists, oceanographers, environmental scientists, physicists, economists, IT-experts, and other social scientists in a truly transdisciplinary process.At its core, AQUACROSS developed and tested an Assessment Framework that aimed to enable the practical application of EBM in aquatic ecosystems through relevant indicators, data, models and guidance protocols.AQUACROSS recognised EBM as a way to address uncertainty and variability in dynamic ecosystems in an effort to embrace change, learn from experience and adapt policies throughout the management process.As EBM measures needs to be supported by an effective policy and governance framework that enables their adoption among a wide range of actors from public authorities to businesses, civil society organisations and citizens, this aspect also featured in the Assessment Framework.The AQUACROSS AF integrates ecological and socio-economic aspects in one analytical approach to EBM, building on well-established frameworks currently in application to assess biodiversity, ecosystem functions and services, for example, MAES, CICES, TEEB, MA and ARIES, as well as INSPIRE, SEIS and the GEOSS Data Sharing Principles.The AQUACROSS AF applied and extended the widely used DPSIR cycle addressing Drivers, Pressures, States, Ecosystem Goods and Services, Impacts and Responses for the assessment of aquatic ecosystems.The DPS-EGS-IR approach, which includes the causal relationships relevant to inform management decisions, allows for addressing multiple interactions between socio-economic and ecological systems in aquatic ecosystems.AQUACROSS emphasised the role of feedback loops, critical thresholds of ecosystems and coupled social-ecological systems that behave as complex adaptive systems as illustrated by the “Butterfly diagram” which is central to the AF.For this, the project enriched its analyses with the current debates and practical applications of Resilience Thinking.Resilience is defined as the ability to cope with alterations induced by the presence of multiple stressors or with unpredictable or non-directional environmental change.A system is resilient when it retains or returns to its essential features and functions after its elements, processes and structures are subjected to pressure.In AQUACROSS, resilience was not only considered on conceptual grounds but also from a practical perspective to facilitate the integration of knowledge on ecosystem functions and services with values, needs and preferences of stakeholders to develop sustainable solutions.Processes of knowledge production through participation aimed to support social learning and lead to management and governance approaches that 
were more capable of coping with uncertainty and are more suitable to enhance the resilience of social-ecological systems.Finally, AQUACROSS took the Meta-Ecosystem Approach to better understand feedbacks and impacts across multiple scales and the emergent properties that arise from spatial coupling of local ecosystems, such as global source–sink constraints, biodiversity–productivity patterns, stabilisation of ecosystem processes and indirect interactions at local or regional scales.The meta-ecosystem approach is a useful and powerful theoretical and conceptual tool i) to integrate the perspectives of community ecology, ii) to provide novel fundamental insights into the dynamics and functioning of ecosystems from local to global scales, and iii) to increase our ability to predict the consequences of drivers and pressures on biodiversity and the provision of ecosystem services to human societies.The meta-ecosystem approach recognises the distinctive spatial distribution of ecosystems, describing abiotic and biotic components based on interaction, connection or movement rates, e.g. of nutrients or long distance migratory organisms.This approach is widely seen as theoretical, and it has been rarely applied in practice to aquatic ecosystems.Being scale independent, this approach enables a focus on ecosystem diversity, which renders outputs more operational for EBM.The project built on existing knowledge to generate innovative responses to policy coordination challenges by developing integrative tools and concepts with relevant stakeholders.The AQUACROSS approach was built around four interconnected pillars of work, enabling an integrated work programme throughout the project.In addition, eight different case studies supported the development and testing of the AQUACROSS AF as well as the wider suite of innovative and applicable AQUACROSS management tools for aquatic ecosystems, which together served to best enhance, through conservation of biodiversity, the socio-ecological resilience of the ecosystem and its capacity to deliver services to society."AQUACROSS placed stakeholders and policy demands first to ensure research was framed in terms of real policy, stakeholder, and business needs, and to accelerate and broaden the uptake of projects' results.This required not only a sound understanding of prevailing policy, scientific and management paradigms, values and perceptions for each policy area, but also effective engagement mechanisms within the project.Pillar I involved the development of guidance on stakeholder engagement to the case studies, the creation of interactive platforms for discussion, advice and consultation on the main questions relevant for AQUACROSS research, and the communication and dissemination of AQUACROSS findings and outputs.To ensure relevance to policy and business, AQUACROSS used a science-policy-business interface focused at two levels: local, through the case studies; and generically, through a project guidance board, the Science-Policy-Business Think Tank.The SPBTT membership was a balanced mix of individuals with backgrounds in science, policy and business.This, combined with local stakeholder representation contributed to the identification of common research and policy challenges to elucidate policy and business solutions, and their extrapolation to wider areas/issues, along with the identification of their pre-conditions necessary for implementation.Pillar 2 was based around two research activities: Policy Orientation and the AQUACROSS AF.Policy Orientation 
investigated the demands that arise from "policy implementation in practice". It identified the main international, European and Member State-level policy drivers affecting biodiversity conservation targets at different scales of application through a top-down/bottom-up approach. Synergies, opportunities and barriers were identified between the specific operational features of existing environmental and related sectoral policies in Europe that are relevant for the protection of aquatic ecosystems. This analysis enabled a fuller understanding of the extent to which existing and planned EU policies may support or hinder the achievement of EU and international biodiversity targets. Finally, the analysis synthesised the insights gained from AQUACROSS to provide policy-relevant information guiding EBM implementation for the achievement of the EU biodiversity targets in aquatic ecosystems in all regions of Europe, and beyond. The Assessment Framework activity developed a common framework focused on concepts, tools and methods for the assessment of aquatic ecosystems and their application in the project case studies. Within the project, it built a joint understanding, facilitating the integration of social and natural scientific disciplines. The AQUACROSS AF followed the DPS-EGS-IR causal framework, and identified critical linkages between the different elements of the project: the analysis of drivers and pressures; the assessment of causalities between biodiversity and ecosystem functions and services; the impact of direct, indirect and emerging drivers on the status and trends of biodiversity, ecosystem functions and services; as well as facilitating the design and implementation of EBM approaches to enhance the status of aquatic ecosystems and achieve policy objectives. The AF integrates crosscutting issues such as resilience thinking, uncertainty, issues of varying spatial and temporal scales, and data and metrics for indicators. The AF highlights key areas or "nodes" where indicators are essential for capturing the state and dynamics of biodiversity and ecosystem services, as well as the adaptive capacity and resilience of aquatic ecosystems. Finally, the framework was further refined and updated based on feedback from its implementation in the case studies to develop an ecosystem-based management handbook to enable more widespread practical implementation of EBM. The AQUACROSS AF was used to assess drivers of change and pressures for different aquatic ecosystems and ecosystem components along a freshwater–marine continuum, including transitional waters, and addressing ecological and socio-economic factors in eight case studies. Pillar 3 consisted of four separate but interlinked research activities based on the DPS-EGS-IR approach of the AQUACROSS AF. The first research activity, on drivers of change and pressures on aquatic ecosystems, examined existing knowledge and global projections of direct, indirect and emerging drivers and resulting pressures on aquatic ecosystems to be faced at different spatial scales. It extended the AF through guidance on indicators and methods to assess drivers and pressures affecting aquatic ecosystems. Further, it tested the suitability of indicators and the applicability of methods in the case studies. Analyses of drivers and pressures, and their complex interactions, are based on a meta-analysis of the current state of knowledge on drivers and pressures, taking into consideration finalised and ongoing research projects. Additionally, it assessed the existing indicators addressing the driver-pressure relationship, including
different biodiversity indices. Drivers were considered both at global and local scales. Similarly, the temporal dimension was factored in. The second research activity, on causalities between biodiversity, ecosystem functions and services, increased knowledge of the relationships between biodiversity, ecosystem functions and ecosystem services across the three aquatic realms. Assessments of the causality links between biodiversity and ecosystem functions and services not only considered species richness but also the functional trait composition of biological assemblages, using multimetric biodiversity indices. This work built on previous literature, including outcomes of finalised and ongoing research projects. In addition, multivariate modelling approaches were used to consider the multidimensional nature of causality relationships. Generalised dissimilarity modelling and diversity-interactions models were used to derive biodiversity and ecosystem functions and services across large regions. Derived causality functions were integrated into the Artificial Intelligence for Ecosystem Services modelling platform, using spatially explicit mapping techniques, to increase the forecasting ability for ecosystem services. Additionally, AQUACROSS considered how biodiversity-related causal links are affected during disturbance and recovery. The third research activity, the development of an information platform, involved the construction of a software platform based on the Comprehensive Knowledge Archive Network architecture to make possible the cataloguing, interrogation, analysis, and visualisation of diverse datasets on aquatic ecosystems and biodiversity using a range of selection criteria. Data and information were acquired from within the project and from external sources. The platform was implemented as a network of interoperable databases including an ingestion module, in charge of data acquisition from external service providers, e.g.
WISE, BISE, OBIS, EMODnet, other EU initiatives in addition to GEOSS, COPERNICUS and other initiatives led by the European Space Agency, among others.The open-access information and dissemination platform integrates inputs from the three aquatic realms and contain modules for: overview of data and metadata; AQUACROSS indicators and tools; technical documentation and guidelines; geospatial exploration and visualisation of the collected data with various levels of access to the stored data; and a user management module to administer user accounts, data access and processing rights.Forecasting biodiversity and ecosystem service provision established novel predictive capacities for key indicators of aquatic biodiversity, ecosystem function, and service provision with greatest relevance to EU environmental policy.A key scientific challenge was to provide robust evidence for expected trends that considered effects of ecosystem resilience and connectivity, effect thresholds, climatic extremes, socio-economic trends and uncertainties.A special effort was also dedicated to optimisation modelling on the effects of the spatial arrangement of various ecosystem types.Depending on case studies, the work was based on semi-quantitative models, quantitative deterministic or statistical models, and qualitative social-ecological models co-developed with stakeholders.Social-ecological models in particular aimed to bridge the gap between ecological modelling and policy paradigms, values and perceptions of stakeholders.This supported a joint learning process and contributed to the science-policy interface of Pillar 1.This work supported more robust scenarios, more integrated management approaches and policies, and maximisation of the delivery of multiple ecosystem services.To close the DPS-EGS-IR cycle, Pillar 4 identified, developed and assessed impacts and responses for innovative management of aquatic ecosystems building on scientific evidence and a strong stakeholder involvement.Pillar 4 was strongly framed by Pillar 2 but drew on evidence built within Pillar 3.Pillar 4 involved the development of EBM management responses, and policy instruments, that can ensure the cost-effective provision of ecosystem services so as to contribute to the objectives of marine, freshwater and biodiversity policies.Particular attention was given to the link between well-being and human responses for the conservation of biodiversity and sustainable management of ecosystem services.The eight AQUACROSS case studies were of key importance to the AQUACROSS project forming a major source for information and data, co-created concepts and developed products, shared experiences with implementing policy and respective management responses, as well as providing critical feedback on project outputs, including the AQUACROSS AF.The large-scale observational case studies not only benefited from the collaborative science-policy-business activities, but they also provided different and complementary insights into the development of indicators, methods and tools to assess the links between aquatic biodiversity and ecosystem services.These case studies were specifically selected to 1) showcase specific elements of the objectives of the EU 2020 Biodiversity Strategy relevant for the management of aquatic ecosystems; 2) understand the most relevant challenges surrounding the protection of aquatic biodiversity; and 3) maximise the lessons learnt in order to up-scale results.The eight case studies include:Case Study 1: Development of the knowledge 
base for more informed decision-making and the implementation of ecosystem-based management aimed at achieving Biodiversity Strategy targets in the North Sea. The North Sea is one of the busiest seas, with many sectors laying a claim to a limited amount of space. The need for Integrated Ecosystem Assessments, Marine Spatial Planning and Ecosystem-based Management is therefore rapidly increasing, and an appropriate scientific knowledge base is becoming a key requirement for more informed decision-making. This case study focused on a focal point of North Sea policy: food security, clean energy and nature conservation. This involved the most important current activity, i.e. fisheries, and the main newly emerging activity, i.e. renewable energy, to showcase how EBM can contribute to the achievement of the societal goals centred around the conservation of seabed habitats. This case study started with an integrated risk-based assessment of all the human activities and their pressures in the study area in order to frame the focal point of this case study, i.e. the food-energy-conservation nexus, into the wider context required for integrated EBM. This risk-based assessment guided the further development of more detailed models, which were then applied to evaluate different management strategies, e.g. spatial closures or technical measures, based on trade-offs between e.g. policy objectives or the supply of specific ecosystem services in the study area. Case Study 2: Analysis of transboundary water ecosystems and green/blue infrastructures in the Intercontinental Biosphere Reserve of the Mediterranean Andalusia – Morocco. This case study uncovered best practice examples of nature-based solutions for aquatic ecosystems through the development of direct recommendations to increase the establishment of green and blue infrastructures in the management and planning of transboundary water ecosystems within natural protected areas. The study focused on the Intercontinental Biosphere Reserve of the Mediterranean: Andalusia – Morocco, which spans two continents, Europe and Africa. The one-million-hectare reserve passes through the Strait of Gibraltar and includes river basins, coastal and marine waters. The case study identified major drivers and pressures of the study site, which include water management and planning, transboundary fragmentation of water bodies, pollution, water uses, water prices, illegal extraction, and drought and water scarcity. A set of indicators was identified to assess the provision of ecosystem services across the reserve, which can be applied to the 20 diverse natural protected sites in both Andalusia and Morocco and cover the three water realms. Data on case study characterisation and water bodies, statistics, uses, prices, plans and strategies were collected and modelled to forecast the future provision of aquatic ecosystem services over time. Lastly, the case study further extends these models to examine green/blue infrastructures as nature-based management solutions in the Mediterranean context. Further detail is provided in Barbosa et al. (this issue). Case Study 3: Danube River Basin - harmonising inland, coastal and marine ecosystem management to achieve aquatic biodiversity targets. This case study identified the impacts of significant water management issues of the Danube River Basin on its aquatic biodiversity. These management issues included organic, hazardous substances and nutrient pollution, and hydromorphological alterations. Major drivers and pressures of the study area are
identified, including land use change, pollution, hydropower, navigation, eutrophication, and habitat loss and degradation. The focus at the river basin scale was to assess the effects of hydromorphological alterations, e.g. hydropower development in the network of tributaries, and the conservation and restoration potential of floodplains along the Danube River, by considering mechanisms for enhancing the integration between different policies and human activities. A set of indicator species, such as those based on outcomes of historical analyses within the FP7 project MARS, was identified, as well as floodplain characteristics, the status of protected areas and biodiversity indices at different scales. The study assembled data on biodiversity, historical records of indicator species occurrences, and specific data on floodplains, as well as analyses on ecosystem services. These data were used to forecast the future development of aquatic ecosystems under changed environmental conditions and management schemes. These forecasting models were extended to identify management options to address hydromorphological alterations as one of the significant water management issues, while taking into account better integration with other EU water policies. Case Study 4: Management and impact of Invasive Alien Species in Lough Erne in Ireland. This case study investigated the management protocols in place for invasive alien species (IAS) in a transboundary context and assessed where institutional arrangements could be improved or refined to better serve biodiversity conservation needs, and advance an ecosystem-based approach to management. The study focussed on IAS in Lough Erne; specifically, the aquatic weed Nuttall's Pondweed. As IAS are largely considered an environmental pressure, the study examined the drivers of this particular pressure within the study site and management options to alleviate the negative effects of IAS on recreational activities within the Lough. Additionally, the study examined the ecological impacts of these species, as well as the impact on habitats, other species and human activities. This stage provides information and data on the links/relationships between IAS and affected ecosystem services and/or biodiversity. This information on the impacts was combined with scientific data on species distribution, monitoring and historical establishment of species, as well as information from stakeholder engagement processes regarding the current management regimes dealing with the impacts of IAS. These data were then used to develop a Fuzzy Cognitive Map, a qualitative model of the effects of IAS, and to forecast potential future changes in the Lough based on the relationships between existing activities and ecosystem components. These forecast models help identify possible opportunities to incorporate EBM approaches within current or emerging plans to address IAS impacts on ecological, social and economic systems in the study area. Case Study 5: Improving integrated management of Natura 2000 sites in the Vouga River, from catchment to coast, Portugal. In the context of environmental and water-related policies and the Integrated Coastal Zone Management recommendation, this case study aims to contribute to the improvement of integrated management of aquatic Natura 2000 sites, from catchment to coast, involving the concept of a science-policy-stakeholder interface. Special attention is given to investigating causalities involving biodiversity, ecosystem functions and services in relation to spatial flows and how they affect
ecosystem resilience, using a meta-ecosystem approach. The study area includes a downstream section of the Vouga River; the Vouga River estuary, which is part of the Ria de Aveiro coastal lagoon; the lagoon's adjacent coastal area; and the freshwater wetland Pateira de Fermentelos, classified as a Ramsar site. It includes several habitats integrated in the Natura 2000 network, classified as a Special Protection Area and/or a Site of Community Importance, contributing significantly to the maintenance of biological diversity within this biogeographic region and to its provision of ecosystem services. Initially, the case study promotes actions for the engagement of stakeholders at different levels and reviews the current laws and policies governing the environmental management of the area. The study then identifies the main drivers and pressures in the considered area, which include agriculture, fishing, population growth, tourism and recreational activities, uncoordinated management, and associated economic drivers and pressures. A causalities analysis is conducted to explore and identify links to biodiversity, ecosystem functions and services in Natura 2000 aquatic habitats. All data collected on ecosystem service indicators are used in GIS-based models applied to environmental and socio-economic scenario analysis. Lastly, the case study develops innovative management instruments, including participatory initiatives, which set out conservation objectives for biodiversity and the preservation of ecosystem services, as well as restoration measures for Natura 2000 sites. Further detail is provided in Lillebø et al. (this issue). Case Study 6: Understanding eutrophication processes and restoring good water quality in Lake Ringsjön - Rönne å Catchment in Kattegat, Sweden. This case study aims to identify key structural elements and processes in the social and ecological subsystems, and their interactions, that determine the capacity of a social-ecological system in a catchment to adapt to change and transition to new management approaches. Specifically, the case study examines the process of eutrophication and the restoration of good water quality and their implications for the provision of ecosystem services along the Rönne å catchment and Lake Ringsjön. An initial assessment of drivers and pressures takes into account both ecological and social perspectives. Links between these drivers and pressures and changes in biodiversity and the provision of ecosystem functions and services are assessed using identified indicators, such as habitat characteristics attractive to tourists or supporting drinking water purification, fisheries, etc. Data are collected on the spatial distribution of habitats for water plants, fish and birds; historical land use; fishing pressure; climate change impacts; the eutrophication history of the catchment; relevant local and regional policies; socio-economic statistics; and spatial data on ecosystem services perception and use. This information is used in participatory socio-ecological models to specify and explore scenarios of catchment use and restoration, and to address conceptual questions of resilience. Lastly, the study explores possible future trajectories under different management settings through scenario projection and analysis, considering responses to climate change, WFD requirements, integrated catchment management and improved water quality. Case Study 7: Biodiversity Management for Rivers of the Swiss Plateau. This case study predicts the development of biodiversity of invertebrates and fish in the rivers of the Swiss Plateau
between the Jura and the Alp mountains as a function of climate change, land use and population growth scenarios and of suggested management strategies.Identification of main drivers and pressures, such as river canalisation, chemical pollution and modification of hydrologic regimes by hydropower plants, is conducted in conjunction with the identification of indicator species, such as invertebrates and fish.Information on cause-effect relationships is formalised in the structure and quantification of a probability network model.Additionally, site-specific information is used to specifically condition this model to the investigated river networks.Forecasting of aquatic biodiversity in the rivers is done by applying the conditioned probability network model.Lastly, management alternatives is evaluated for their effectiveness using a multi-criteria decision analysis approach.The study estimates the changes from different management alternatives and thus, jointly with the value function formulating the societal preferences, allowing us to valuate management alternatives.Case Study 8: Ecosystem-based solutions to solve sectoral conflicts on the path to sustainable development in the Azores.Case Study 8 considers the richly biodiverse Faial-Pico Channel, a 240 km2 Marine Protected Area in the Azores, an EU Outermost Region.Despite international, Azorean, and local protection for the area, biodiversity in the MPA continues to be lost.Commercial and recreational fishing as well as swiftly growing tourism place pressures on the Channel ecosystem."This in turn threatens the biodiversity and sustainability of the Channel on which these sectors rely, and lead to increasing conflict over the Channel's scarce resources.Given this context, the case study collaborates with local stakeholders and policy makers to identify cooperative ecosystem-based solutions to ensure long-term sustainability.To do so, we analyse current biodiversity-relevant EU and local policies to identify policy objectives and gaps.Stakeholder objectives are collected and analysed through interviews and stakeholder workshops.To understand relationships between the sociological and ecological aspects of the ecosystem, we characterise the Channel in terms of key drivers, pressures, ecosystem state, ecosystem functioning, and ecosystem service flows, using a qualitative linkage tool and available quantitative data.Scenario analysis with stakeholders draws on this policy, stakeholder, and sociological-ecological system characterisation to identify and evaluate ecosystem-based management measures that ensure a sustainable future for the Channel and its inhabitants.AQUACROSS emphasises the integration of existing ideas and approaches to provide innovative outcomes and products relevant for the sustainable management of aquatic ecosystems at different scales of application.At its core, AQUACROSS outcomes are aimed at problem solving and responding to pressing societal and economic needs.It applies a policy- and user- led research approach, where science is furthered through the co-creation of knowledge between practitioners and stakeholders.AQUACROSS brings together traditionally fragmented research traditions between biodiversity, freshwater, coastal, and marine components, and thereby contributes to integrating knowledge, concepts, information, methods, and tools across multiple research fields in an inter-disciplinary way.In particular, the consolidated outlook on EU policy for biodiversity and aquatic ecosystems will help build shared 
values, perceptions, and views.A coherent set of EBM assessment methods and models that cover the further developed DPS-EGS-IR cycle for freshwater, coastal and marine waters will be produced.More specifically on monitoring, combined indicators as called for in Resource Efficient Europe 2020 are advanced for freshwater, coastal, and marine waters.A direct support is provided to the achievement of biodiversity targets, and the implementation of river basin management in the ongoing second and third cycle of the WFD and for marine management in the second cycle of the MSFD.A structured information platform integrates generated knowledge and provides a consistent framework for collection of existing and improved data to ensure quality, comparability, and availability of water-related environmental information. | The AQUACROSS project was an unprecedented effort to unify policy concepts, knowledge, and management of freshwater, coastal, and marine ecosystems to support the cost-effective achievement of the targets set by the EU Biodiversity Strategy to 2020. AQUACROSS aimed to support EU efforts to enhance the resilience and stop the loss of biodiversity of aquatic ecosystems as well as to ensure the ongoing and future provision of aquatic ecosystem services. The project focused on advancing the knowledge base and application of Ecosystem-Based Management. Through elaboration of eight diverse case studies in freshwater and marine and estuarine aquatic ecosystem across Europe covering a range of environmental management problems including, eutrophication, sustainable fisheries as well as invasive alien species AQUACROSS demonstrated the application of a common framework to establish cost-effective measures and integrated Ecosystem-Based Management practices. AQUACROSS analysed the EU policy framework (i.e. goals, concepts, time frames) for aquatic ecosystems and built on knowledge stemming from different sources (i.e. WISE, BISE, Member State reporting within different policy processes, modelling) to develop innovative management tools, concepts, and business models (i.e. indicators, maps, ecosystem assessments, participatory approaches, mechanisms for promoting the delivery of ecosystem services) for aquatic ecosystems at various scales of space and time and relevant to different ecosystem types. |
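One of the qualitative modelling approaches mentioned in the case studies above, the Fuzzy Cognitive Map used in Case Study 4 to forecast how activities and ecosystem components interact, can be sketched in a few lines. The concepts, weights and starting activations below are purely hypothetical illustrations of the technique, not the case study's actual model or data.

```python
import numpy as np

# Minimal Fuzzy Cognitive Map sketch: concepts are nodes, signed weights encode
# the assumed influence of one concept on another, and the map is iterated with
# a squashing function until the activation pattern settles.  All concepts and
# weights are hypothetical.

concepts = ["boating activity", "invasive pondweed cover",
            "native macrophytes", "recreational value"]

# W[i, j] is the assumed influence of concept i on concept j.
W = np.array([
    [0.0,  0.4,  0.0, -0.2],   # boating spreads the weed and mildly reduces amenity
    [0.0,  0.0, -0.6, -0.5],   # the weed suppresses natives and recreation
    [0.0, -0.2,  0.0,  0.3],   # native plants resist the weed and support amenity
    [0.3,  0.0,  0.0,  0.0],   # higher amenity attracts more boating
])

def step(state: np.ndarray) -> np.ndarray:
    """One FCM update: s <- sigmoid(s + W^T s), keeping activations in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(state + W.T @ state)))

state = np.array([0.6, 0.5, 0.5, 0.5])   # hypothetical present-day activations
for _ in range(30):                       # iterate towards a steady pattern
    state = step(state)

for name, value in zip(concepts, state):
    print(f"{name:25s} {value:.2f}")
```

Scenario runs would simply clamp or perturb individual concept activations (for example, a management measure reducing boating) and compare the resulting steady patterns.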
544 | Nonlinear dynamic Interactions between flow-induced galloping and shell-like buckling | The simplest form of pure galloping is exhibited by a bluff body oscillating transversely in a steady wind. With a structural support providing both linear elastic stiffness and linear viscous damping, the theory for this phenomenon was developed by Novak for a series of rectangular cross-sections. Based on experimental fitting to the quasi-static aerodynamic forces, Novak's theory agreed well with his related experimental studies. An excellent modern account of this, and other work, is given in the book by Païdoussis et al. Note that galloping is essentially a one-mode phenomenon, distinct from flutter, which arises in systems with at least two active modes, and even more distinct from vortex resonance, which involves a strong interaction with the fluid. Note, though, that in nonlinear dynamics the bifurcations to both galloping and flutter are described as a Hopf bifurcation. The essence of Novak's galloping theory was to use the highly nonlinear aerodynamic force characteristics obtained by calibration experiments in which a steady wind-stream was directed, at a series of angles, towards the stationary rectangular body. The characteristic graph of lateral force versus angle of attack was then approximated by a seventh-order polynomial. Some of Novak's results are summarised in Fig. 1. Here the lateral force on the rectangular prism, in the direction of the lateral displacement, x, due to a wind of velocity, V, is ½ρaV²Cf, where ρ is the air density, a is the frontal area, and the angle of attack α is approximately x′/V. A prime denotes differentiation with respect to the time, t. The responses in the right-hand column show the amplitude of the steady-state oscillations. These periodic motions are stable when represented by a solid line, unstable when represented by a broken line. Hopf bifurcations on the trivial solution are denoted by H, and away from the trivial path stable and unstable oscillatory regimes meet at cyclic folds. Fast dynamic jumps are indicated by vertical arrows. For one of the cases, the wavy arrow denotes a slightly turbulent wind. The 2:1 rectangular cross-section exhibits a super-critical Hopf bifurcation at H, with a path of stable limit cycles for higher values of the wind speed. In another row, the square cross-section in a steady wind exhibits at H a super-critical Hopf bifurcation, and the subsequent limit cycles exhibit two cyclic folds and an associated hysteresis cycle. In a further row, a 2:1 rectangle in a steady wind exhibits a sub-critical Hopf bifurcation at H, from which a fast dynamic jump would carry the system to a large-amplitude stable limit cycle. The unstable path from H eventually stabilizes at a cyclic fold, giving an overall response akin to the response of many shell-buckling problems. In the bottom row, a 1:2 rectangle standing across-wind gives no bifurcation from the trivial solution, but large-amplitude stable and unstable cycles do exist, separated again by a fold. Some of the most familiar examples of galloping arise with engineering cables, but we should note that a cable of circular cross-section cannot gallop because the force is in the direction of the resultant wind velocity, and therefore opposes any cable motion. Some cables that can and do gallop are shown in Fig. 2.
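The quasi-static galloping model summarised above lends itself to a short numerical sketch: a transversely sprung, damped bluff body forced by ½ρaV²Cf(x′/V), with Cf taken as a Novak-style odd polynomial in the angle of attack. The structural parameters and polynomial coefficients below are illustrative placeholders rather than Novak's fitted values, so the sketch reproduces only the qualitative behaviour — decay below a critical wind speed and a self-sustained galloping limit cycle above it — not the specific response curves of Fig. 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Quasi-static galloping of a transversely sprung bluff body (sketch only).
# Cf is a Novak-style odd polynomial in alpha = x'/V; coefficients and
# structural parameters are illustrative, not Novak's fitted values.
a1, a3, a5, a7 = 2.7, -30.0, 100.0, -200.0   # placeholder polynomial coefficients

def Cf(alpha):
    return a1*alpha + a3*alpha**3 + a5*alpha**5 + a7*alpha**7

m, c, k = 1.0, 0.05, 1.0        # mass, viscous damping, elastic stiffness
rho, a = 1.2, 0.1               # air density and frontal area (illustrative)

def rhs(t, y, V):
    x, xdot = y
    force = 0.5 * rho * a * V**2 * Cf(xdot / V)   # lateral aerodynamic force
    return [xdot, (force - c*xdot - k*x) / m]

# Below the critical wind speed the structural damping wins and the motion
# decays; above it the negative aerodynamic damping drives the system onto a
# finite-amplitude galloping limit cycle.
for V in (0.2, 1.0):
    sol = solve_ivp(rhs, (0.0, 400.0), [0.0, 1e-3], args=(V,),
                    max_step=0.05, rtol=1e-8)
    amplitude = np.abs(sol.y[0][sol.t > 300.0]).max()
    print(f"V = {V:3.1f}: late-time displacement amplitude ~ {amplitude:.3f}")
```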
Galloping problems can also arise in complete structures, such as tower blocks, and here there can be interactions between the wind-induced vibrations and gravity-induced buckling. A classic case was the high-rise Hancock Tower in Boston, which had a lot of such problems in its early days. Window panes started falling out, and eventually all 10,344 had to be replaced. Occupants suffered from motion sickness, and tuned mass-dampers had to be fitted. There were still problems, however, when a gravitational instability increased the period of vibration from 12 to 16 s. The final cure was to add 1500 t of diagonal steel bracing, costing $5 million. The tower is still standing today, and still winning architectural prizes for its minimalism! It is the purpose of this paper to examine the interactions between galloping and buckling, remembering that simultaneous failure modes often represent a simplistic, though potentially dangerous, optimal design. We introduce an archetypal model which is non-conservative but autonomous, subjected to time-independent loading by a steadily flowing fluid. It is designed to exhibit sub-critical bifurcations in both galloping and buckling, both of which will trigger a dynamic jump to a remote stable attractor. When there is more than one candidate attractor, the one onto which the structure settles after the Hopf bifurcation can be indeterminate. This is due to the two-dimensional spiralling outset of the Hopf, which makes the outcome sensitive to infinitesimally small variations in starting conditions or parameters. This indeterminacy forms the focus of our investigation. We consider the archetypal model, shown in Fig. 3, that we use to study the nonlinear dynamic interactions between galloping and shell-like buckling. A rigid link is pivoted as shown, and held vertical by a long spring of stiffness k which is assumed to remain horizontal throughout and is attached to the mass-less rod at a distance L2 from the pivot. We introduce an imperfection into the model by supposing that this spring is initially too short by y0 to hold the unloaded rod exactly vertical. Loaded by the mass m of the grey prism, assumed concentrated at a point on the mass-less rod at a distance L1 from the pivot, this model will exhibit a sub-critical pitch-fork bifurcation. The only interaction with the wind is through the grey prism, which has a 2:1 section with the longer edges lying in the direction of the wind. As we have seen, such a prism was analysed by Novak, and shown to exhibit galloping at a sub-critical Hopf bifurcation. The rotational deflection of the rod is written as x, and a prime denotes differentiation with respect to the time, t. On the left we have the post-buckling behaviour for a perfect and an imperfect system, with the asymmetric potential well sketched for the latter. The corresponding two-thirds power-law cusp of imperfection sensitivity is shown on the right. The central insert shows the actual shape of the well for the present system at the parameter values of some of our later studies. Note that b=0 at the pitch-fork, and positive b measures how far we are away from buckling.
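The sub-critical pitch-fork and its two-thirds power-law imperfection sensitivity, referred to above, can be illustrated with the canonical unstable-symmetric normal form V(x) = ½bx² − ¼x⁴ + ex, used here as a stand-in for the model's actual potential. The sketch below lowers b for each imperfection e until the well merges with the nearer hill-top at a fold (V′ = V″ = 0) and checks numerically that the fold value of b scales as e^(2/3).

```python
import numpy as np

# Canonical unstable-symmetric (sub-critical pitch-fork) normal form, used as a
# stand-in for the model's potential: V(x) = 0.5*b*x**2 - 0.25*x**4 + e*x,
# where b > 0 measures the distance from buckling and e is the imperfection.
# Equilibria satisfy V'(x) = b*x - x**3 + e = 0; the well and the nearer
# hill-top merge at a fold, and the fold value of b should scale as e**(2/3).

def fold_b(e: float, b_hi: float = 2.0, tol: float = 1e-9) -> float:
    """Smallest b for which V'(x) = 0 still has three real roots (well + two hill-tops)."""
    def has_well(b: float) -> bool:
        roots = np.roots([-1.0, 0.0, b, e])          # -x^3 + b*x + e = 0
        return np.sum(np.abs(roots.imag) < 1e-7) == 3
    b_lo = 0.0                                       # below the fold the well has vanished
    while b_hi - b_lo > tol:                         # bisection on the fold condition
        b_mid = 0.5 * (b_lo + b_hi)
        if has_well(b_mid):
            b_hi = b_mid
        else:
            b_lo = b_mid
    return b_hi

imperfections = np.array([1e-4, 1e-3, 1e-2, 1e-1])
folds = np.array([fold_b(e) for e in imperfections])
slope = np.polyfit(np.log(imperfections), np.log(folds), 1)[0]
print("fold values of b:", np.round(folds, 5))
print("log-log slope (two-thirds law predicts 0.667):", round(slope, 3))
```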
1.This is more suitable for our theoretical work than the power series that Novak used to fit the experiments for x′/v positive, since reflecting this for negative x′/v gives rise to a singularity at the origin.Our form is shown in Fig. 5.Here we see the sub-critical Hopf bifurcation at H generating the trace of unstable cycles which become stable cycles at the cyclic fold, F.The graph shows the maximum and minimum values of x for the steady state galloping oscillations against the wind velocity, v.The figure is drawn for an imperfect system with e=−0.01, which explains the asymmetry about the v axis, and in particular why H does not lie precisely at x=0.Notice that the result vH=1.875 is independent of the imperfection.The localised curving of the path of stable cycles at C signifies its approach to the nearer of the two post-buckling equilibrium states.All the cycles of Fig. 5 are superimposed on the phase-space portrait of Fig. 5 where the sharp point of the outer orbit corresponds to the proximity of C. Finally Fig. 5 shows the variation of the periodic times of the cycles traced in Fig. 5.The period is tending to infinity as the final cycles approach the hill-top equilibrium, C.Looking finally at the complete model, with both wind and gravity loading, we show in Fig. 6 a sequence of phase portraits for fixed gravity loading and fixed imperfection.In Fig. 6 the topology of the portrait is not yet significantly affected by the wind.Portrait shows a homoclinic connection which together with a very localised fold creates the unstable cycle seen in portrait.Notice that disturbances of this cycle generate escape only to the left over the lower potential barrier.Between portraits and a heteroclinic connection alters the topology, so that in the escape is indeterminate, being either to the left or right.In, past the Hopf bifurcation, the central point is unstable and is likewise indeterminate, with disturbances generating escapes over either of the potential hill-tops.His complete unfolding of the singularity, in the space of the two control parameters is shown in Fig. 7.Notice that the only attractors are the trivial points for w<0.The significant event in this diagram is the saddle connection that occurs on line S, which separates two regions of parameter space, one containing an unstable limit cycle which is destroyed on crossing S.Guided by this 2D unfolding of the symmetric case, we now proceed to fully unfold the compound singularity exhibited by our model in the 3D parameter space of our stiffness parameter, b, our wind velocity, v, and our symmetry-breaking parameter, e.This compound bifurcation has been called the Takens-Bodganov Cusp .Note that these authors study the unfolding of the centre-saddle-centre case as opposed to our saddle-centre-saddle case illustrated in Fig. 7.The result is shown in Fig. 8, which gives two views of the same ellipsoid in parameter space.Here the ‘small’ radius parameter, R is taken nominally as 0.2, but for clarity the picture is not to scale.The resulting image does not change qualitatively for smaller or moderately greater R.The coloured arcs drawn on the ellipsoid show its intersection with the various bifurcation surfaces emerging from the origin, which are better understood in the unfolded ellipsoidal surface of Fig. 
9 when projected into the parameter plane.In this view, the pitch-fork bifurcation appears twice, at P, where the arc of static folds exhibits a very localised cusp.The Hopf bifurcation occurs on the vertical axis.There are now two types of saddle connection, a homoclinic and a heteroclinic.Crossing the heteroclinic arc takes us, for example, from portrait where the galloping system escapes only to the left to portrait where the outcome is indeterminate, depending sensitively on the starting condition near the node.Remember, here, that the asymmetry, e, varies as we move over the ellipsoidal surface; it is positive above the red symmetry line, lowering the left escape barrier, unlike as in Fig. 4 where e was negative.We explore this sensitivity more fully in the following section.Meanwhile, crossing the homoclinic arc transforms portrait into portrait.Notice that portrait is very close to the fold arc, crossing which gives a portrait such as with only one equilibrium fixed point.A feature not visible in Fig. 9 for small radius R is the fold of limit cycles.This fold of limit cycles must exist because the periodic orbit born in the homoclinic is asymptotically stable.The fold of limit cycles is more clearly visible in Figs. 5 and and 6.Correspondingly, there should be a phase portrait in Fig. 9 between portraits 9 and 9 with two coexisting limit cycles; this could not be conveniently shown in Fig. 9.This study is for negative e, making it ‘easier’ for the system to escape to positive large x, but the parameters are such, as in portrait of Fig. 9, that escape from the un-ramped Hopf bifurcation is indeterminate being possible in either direction.We notice first the considerable ‘tunnelling’ through the Hopf bifurcation which arises because the small disturbance from equilibrium takes time to grow under the light negative effective damping just after the steady-state Hopf velocity.This tunnelling increases as the runs are started earlier and earlier, because the longer time interval under positive damping ensures that x and x′ have decreased closer and closer towards the origin before v reaches vH.Next we observe that some runs escape over the lower hill-top equilibrium while others escape over the higher hill-top.The relative hill-height for this value of b is shown as an insert in Fig. 4.A further study of this indeterminacy is shown in Fig. 11.In Fig. 11 we display the outcomes, in terms of easy escape over the lower barrier or hard over the higher barrier resulting from different values of v for different ramping rates corresponding to the six integer values of log2.The fixed starting conditions in are x=xeq − 0.05, x′=0 where xeq is the equilibrium value of x.In Fig. 
11 we show, again in black or white, the outcomes in the space of for fixed values of v=vH/2 at γ=0.01.While our numerical simulations of parameter-ramping through a Hopf bifurcation are adequate for the case when the location of the equilibrium does not depend on the drifting parameter, we should note that the general case is more subtle.In particular, the value of the drifting parameter at which the trajectory starts to grow noticeably exponentially depends not only on the starting parameter in our case) but also on properties of the right-hand side of the equation.See for a mathematical treatment, and for some of the typical observations.We have proposed and studied an archetypal model to explore the nonlinear dynamic interactions between galloping at an incipient sub-critical Hopf bifurcation of a structure with shell-like buckling behaviour.Optimal designs often call for a simultaneity of failure modes, but nonlinear interactions can then be dangerous .The compound bifurcation corresponding to simultaneous galloping and buckling is the so-called Takens-Bodganov Cusp, and we have made a full unfolding of this codimension-3 bifurcation for the model to explore the adjacent phase-space topologies.The indeterminacy of the outcome, that we find for both quasi-static and ramped loadings, should certainly be noted by design engineers.It will be interesting to see if the various approaches of analysis and control of safe basins of attraction pioneered by Giuseppe Rega and Stefano Lenci can play a role in interactions of the present type. | For an elastic system that is non-conservative but autonomous, subjected for example to time-independent loading by a steadily flowing fluid (air or water), a dangerous bifurcation, such as a sub-critical bifurcation, or a cyclic fold, will trigger a dynamic jump to one or more remote stable attractors. When there is more than one candidate attractor, the one onto which the structure settles can then be indeterminate, being sensitive to infinitesimally small variations in starting conditions or parameters. In this paper we develop and study an archetypal model to explore the nonlinear dynamic interactions between galloping at an incipient sub-critical Hopf bifurcation of a structure with shell-like buckling behaviour that is gravity-loaded to approach a sub-critical pitch-fork bifurcation. For the fluid forces, we draw on the aerodynamic coefficients determined experimentally by Novak for the flow around a bluff body of rectangular cross-section. Meanwhile, for the structural component, we consider a variant of the propped-cantilever model that is widely used to illustrate the sub-critical pitch-fork: within this model a symmetry-breaking imperfection makes the behaviour generic. The compound bifurcation corresponding to simultaneous galloping and buckling is the so-called Takens-Bodganov Cusp. We make a full unfolding of this codimension-3 bifurcation for our archetypal model to explore the adjacent phase-space topologies and their indeterminacies. |
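A minimal numerical sketch related to the galloping entry above: the quasi-steady one-degree-of-freedom model m·x″ + c·x′ + k·x = ½ρaV²Cf(x′/V), with Cf approximated by an odd seventh-order polynomial in the spirit of Novak, can be integrated directly to reproduce the kind of amplitude-versus-wind-speed response curves described for Fig. 1. Every numerical value below (mass, damping, stiffness, air density, frontal area and the polynomial coefficients A1–A7) is a placeholder chosen only for a qualitative demonstration; none are Novak's fitted values for any particular rectangle, and this is not the authors' code.

```python
# Qualitative sketch only: transverse galloping of a sprung bluff body under a
# quasi-steady aerodynamic force with a 7th-order polynomial force coefficient.
# All numerical values are illustrative assumptions, not fitted data.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 1.0, 0.1, 1.0              # mass, structural damping, stiffness (assumed)
rho, a  = 1.2, 0.1                   # air density, frontal area (assumed)
A1, A3, A5, A7 = 2.7, 170.0, 6300.0, 60000.0   # placeholder polynomial coefficients

def Cf(alpha):
    """Lateral force coefficient versus angle of attack, alpha ~ x'/V (odd polynomial)."""
    return A1*alpha - A3*alpha**3 + A5*alpha**5 - A7*alpha**7

def rhs(t, y, V):
    x, xdot = y
    force = 0.5*rho*a*V**2 * Cf(xdot / V)
    return [xdot, (force - c*xdot - k*x) / m]

def steady_amplitude(V, x0=1e-3, t_end=1500.0):
    """Integrate from a small disturbance and report the late-time oscillation amplitude."""
    sol = solve_ivp(rhs, (0.0, t_end), [x0, 0.0], args=(V,), max_step=0.1)
    tail = sol.y[0][sol.t > 0.8*t_end]           # discard the transient
    return 0.5*(tail.max() - tail.min())

# For this sketch, linear theory places the Hopf bifurcation at V_H = 2c/(rho*a*A1).
V_H = 2*c/(rho*a*A1)
for V in np.linspace(0.5*V_H, 4.0*V_H, 8):
    print(f"V/V_H = {V/V_H:4.2f}   steady amplitude ~ {steady_amplitude(V):.4f}")
```

Sweeping V up and down from both small and large starting amplitudes in such a sketch would also expose the cyclic folds and hysteresis loops discussed for the square and 2:1 sections.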
545 | Modelling the prestress transfer in pre-tensioned concrete elements | The prestress force in pre-tensioned concrete elements is transferred from the steel to the concrete over a certain length, which is known as the transmission length.The formulae of the transmission length in the current design code are basically developed empirically for normal types of concrete.Because of the rapid innovation in construction industry and introducing of new types of concrete materials, experimental tests are needed to estimate the transmission length .Models account for the new type of concrete material, based on the mechanical properties of materials which can be measured using simple tests, will be desirable .The transmission length is influenced by many factors such as diameter of prestressing steel, initial or effective prestress, concrete strength, type of release, type of tendon, bond condition, concrete cover, surface condition and size of the section on the transmission length .To date, no full agreement exists on the factors used in formulae to predict the transmission length .The aim of this study is to develop a model that accounts for different concrete materials and reinforcing steel as a closed-form expression to predict the transmission length to be used in initial design stage where new concretes are used.Moreover, this study will develop a finite element model to understand the influence of different parameters on the prestress transfer.This paper is organised in nine sections.Sections 2 and 3 give a brief background about the previous modelling work and the transfer of prestress, respectively.In Section 4, an analytical expression to calculate the transmission length and the stress distribution is given.Axi-symmetric and 3D finite element models are developed in Sections 5 and 6.Section 7 presents a parametric study while Section 8 examines the assumptions used in the analytical model.Finally, a summary and conclusions are given in Section 9.Modelling the transfer of prestress force in pre-tensioned concrete elements has been described either by using empirical or numerical models models).These models allow for calculating the transmission length that is needed to transfer the prestress force in steel to concrete.Analytical modelling of prestress transfer was previously carried out by considering the prestressing steel as a solid cylinder and concrete as a hollow cylinder with inner radius equal to the prestressing steel radius and with an infinite outer radius .Although the model assumed an infinite radius for concrete and neglected the effect of longitudinal stress on concrete, a simple expression for prestress transfer and transmission length was given.The thick-wall cylinder model was also used to evaluate the effects of concrete cover on bond behaviour of prestressing strands in both high and normal concrete strengths considering the non-linear behaviour of concrete in tension .The same concept was used to estimate the transmission length in pre-tensioned concrete element .The model by Oh et al. 
considered non-linear anisotropic concrete behaviour after the occurrence of cracks and assumed no slip.The slip of prestressing steel was considered in evaluation of the transmission length in the study of Benítez and Gálvez .The literature shows that the use of the thick-wall cylinder concept in modelling of prestress transfer is simple and provides a more rational basis.On the other hand, the reviewed literature did not present a closed-form mathematical expression to estimate the transmission length and stress distribution along prestressing steel.The analytical modelling becomes very complicated when the material׳s non-linearity and the effect of different types of stresses in addition to the behaviour in 3D are taken into account.The FE method on the other hand is much more effective in handling the complexities associated with material non-linearity and the structural behaviour in 3D.The FE method has been used by other researchers to study the effects of the releasing techniques of prestressing steel on the stress field and cracks at the end zone .Kannel et al. used the ABAQUS 5.4 software to model pre-tensioned concrete girders using three dimensional continuum elements for concrete and truss elements for the strands.The transfer of the prestress force from steel to concrete was modelled by varying the strand diameter linearly from zero at the end of the girder to the nominal diameter at the end of the transmission length.Another method was also used by Kannel et al. in which the interactions between the steel strand and concrete were modelled by using connected rigid springs with plastic behaviour.The use of truss elements, which account only for axial forces, neglects the effect of radial deformation due to Poisson׳s effect.These techniques and the assumption of a linear material model for both steel and concrete do not accurately reflect the reality of the bond behaviour between prestressing steel and concrete in prestressed concrete elements.The ANSYS FE package was also used to model pre-tensioned concrete beams and railway sleepers .The concrete was modelled using 8-node solid elements and truss elements were used to model the prestressing steel.Concrete cracking and crushing were modelled using the concrete damage plasticity model proposed by Willam and Warnke .The prestress was simulated by assigning initial strain for the truss elements while the interaction between steel and concrete surface was assumed to be fully bonded.The assumption of full bond condition affects the estimation of transfer of prestress force to concrete because it eliminates the contribution of prestressing steel slippage.In the work of Ayoub and Filippou , the prestressing process was modelled across the different stages in the pre-tensioned concrete elements using FE models in which an empirical local bond relationship was used.The main shortcoming of this model and the models before is the neglecting of the concrete tension softening behaviour.Recently two FE approaches were developed to model the pre-tensioned concrete element, namely: the embedment approach and the extrusion approach .Both approaches implement friction-based contact surface algorithms to model the interface between steel and concrete.The approaches were modelled using the ABAQUS v.6.9 package.In the first approach, the strand was modelled as a truss element that was embedded in the concrete “embedment approach”.Although this technique is less complex and has less computational cost it assumes a perfect bond and thereby neglects 
the possibility of strand slip.In the extrusion approach, prestressing steel was modelled using 8-node solid elements.Bond behaviour between steel and concrete was modelled using surface to surface contact element.The ABAQUS concrete damage plasticity model, which accounts for concrete post-cracking behaviour, was used to model the concrete material.The prestressing stage was effectively modelled by applying initial strain then the releasing stage was simulated by applying a strain-compatibility condition.Arab et al. introduced a frictionless casting bed in their modelling to account for the effect of the self-weight in prestress transfer, although in practice, the prestressing beds always have friction.While the value of the coefficient of friction between steel strands and concrete in such models has been assumed to be 0.4 , Arab et al. used values of 0.7 and 1.4 based on AASHTO LRFD shear friction design recommendations.In this paper, realistic models that consider the effects of different parameters on the transmission length are proposed.These models include a linear analytical model, which has been verified by means of an axi-symmetric FE model; both analytical and axi-symmetric models approximate the concrete around the pre-tensioned steel to a hollow concrete cylinder with thickness equal to the smallest concrete cover.This is followed by a 3D FE model which considers non-linear material models as well as the rectangular beam shape.In pre-tensioned concrete elements, when the prestress is released after hardening of concrete, the prestress force transfers gradually into the concrete.The prestressing steel tries to shorten and expands due to Poisson׳s ratio effect.This results in varying diameter within the transmission length in the shape of a wedge.The increment in the bar diameter imposes a normal pressure acting on the surrounding concrete.The pressure starts from high value at the end and continues decreasing along the transmission zone and reaches plateau after that.This pressure induces a frictional resistance, which acts against the shortening of the pre-tensioning steel and holds the bar/strand in tension.The forming of wedging and frictional forces is known as the Hoyer effect .This phenomenon enhances bonding of the released prestressing steel with the surrounding concrete.As a result, the stress in steel increases gradually until it becomes constant beyond the transmission length.The distribution of compressive stress in concrete follows a similar pattern to the stress in steel, with an additional shear lag effect.The analytical model proposed here adopts the thick-wall cylinder theory and assumes elastic material behaviour for steel and concrete .The inner diameter of the concrete cylinder equals the diameter of the prestressing steel before releasing, and the outer diameter measures to the nearest concrete surface as shown in Fig. 
3.The prestressing steel is modelled as a solid cylinder with a radius that equals the nominal radius of the prestressing steel.The thick-wall cylinder theory assumes that the stress is highest around the prestressing steel while the far parts carry nearly zero stress.In other words, the stress contours take a circular shape around the bar, with nearly zero contours at the cylinder perimeter.This assumption will be examined in Section 8.The prestress transfer in pre-tensioned concrete was modelled by examining the equilibrium, the compatibility and the bond conditions at each section.The bond between steel and concrete is modelled using the Coulomb friction law.Successive solutions of these conditions along the length of the member give the distribution of prestress and the transmission length.The flowchart in Fig. 4 shows the solution procedure for the prestress transfer model.To find the distribution of stresses in a prestressing bar along the transmission length, consider a small element dx subjected to radial pressure p, axial stress fp and bond stress as shown in Fig. 5.Bond between steel and grout is attributed to three factors: cohesion between steel and grout; friction between steel and grout; and mechanical resistance.The cohesion always has an insignificant influence on the load-deformation response of the structure, because the cohesion fails after a very small relative slip .The mechanical resistance only contributes to bond when deformed steel bars are used, since a strand slipping through the grout follows the pre-shaped grooves without shearing off the concrete .Some researchers pointed out that the contribution of the torsional stresses and lack-of-fit, from the variation in the pitch of the outer strand wires, to the frictional stress is very low .An empirical law was proposed by considering the so-called pitch effect to simulate different bond situations, although the physical meaning of the pitch effect is not explained .The magnitude of the normal pressure p can be estimated by applying the compatibility condition at the interface between the bar and concrete.Eq. gives the distribution of the stress in the prestressing steel along the transmission zone.For the pre-tensioned concrete element with the properties shown in Table 1, the steel stress profile and the normal pressure are shown in Fig. 8.Fig. 8a shows the distribution of stress in the prestressing steel in the longitudinal direction.The general trend of the stress profile is exponential, as can easily be seen in Eq., starting from zero at the end and tending to an asymptote beyond the transmission zone.Fig. 8b illustrates the normal pressure at the interface between steel and concrete.The pressure starts from a high value at the end and decreases to nearly zero after the transmission length.This finding conforms to the concept of Hoyer's effect.
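A minimal numerical sketch of the kind of marching solution just described (radial compatibility between the bar and a hollow concrete cylinder giving the normal pressure, Coulomb friction giving the bond stress, and slice equilibrium growing the steel stress): the Lamé-type compatibility expression used below is a standard elastic assumption adopted for illustration, not the paper's own equation, and the geometry, moduli and initial prestress are placeholder values rather than the properties of Table 1; only the coefficient of friction of 0.4 and the Poisson ratios of 0.3 and 0.2 are taken from the text.

```python
# Illustrative sketch, not the authors' code: marching solution of prestress
# transfer assuming Lame thick-wall compatibility and Coulomb friction bond.
# Geometry, moduli and initial prestress are placeholder values.
import numpy as np

d      = 12.7e-3        # tendon diameter [m] (assumed)
cover  = 40e-3          # smallest concrete cover = wall thickness of the hollow cylinder [m] (assumed)
Es, Ec = 200e9, 30e9    # Young's moduli of steel and concrete [Pa] (assumed)
nu_s, nu_c = 0.3, 0.2   # Poisson's ratios (values quoted in the paper)
mu     = 0.4            # coefficient of friction (value used in the paper)
fpi    = 1200e6         # initial prestress [Pa] (assumed)

rs, R = d / 2.0, d / 2.0 + cover
lame  = (rs**2 + R**2) / (R**2 - rs**2) + nu_c   # hole-expansion compliance of the hollow cylinder
D     = (1.0 - nu_s) + (Es / Ec) * lame          # combined radial compliance (dimensionless)

def march(L=1.5, n=3000):
    """Successive solution of compatibility, bond and slice equilibrium along the member."""
    x  = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    fp = np.zeros(n)                             # steel stress, zero at the free end
    p  = np.zeros(n)                             # radial (Hoyer) pressure at the interface
    for i in range(1, n):
        p[i]  = nu_s * (fpi - fp[i - 1]) / D     # compatibility: bar expansion vs hole expansion
        tau   = mu * p[i]                        # Coulomb friction bond stress
        fp[i] = fp[i - 1] + 4.0 * tau * dx / d   # equilibrium of a slice of length dx
    return x, fp, p

x, fp, p = march()
print(f"end pressure ~ {p[1]/1e6:.1f} MPa, plateau steel stress ~ {fp[-1]/1e6:.0f} MPa")
```

Under these assumptions the computed fp(x) rises exponentially from zero to a plateau while p(x) decays from a high end value, which reproduces qualitatively the trends described for Fig. 8.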
A simple axi-symmetric FE model is developed using the ANSYS FE package .The model's aim is to verify the proposed analytical model.Both steel and concrete were modelled as thick-wall cylinders using 8-node elements with linear material properties, Fig. 9.Regarding the contact element, contact between two bodies occurs when the normal distance between two pairs of nodes on these bodies lies within a specified tolerance.This normal distance is known as penetration, and it is a direct function of the normal pressure p.When this distance is positive, separation occurs and the contact pressure is set to zero.The relation between the tangential stress and the normal pressure is based on the Coulomb friction law as shown in Eq.Slip, on the other hand, occurs when the tangential distance between the pairs exceeds the specified tolerance.In this study the interface between steel and concrete is modelled using surface-to-surface contact elements.Values between 0.3 and 0.7 of the coefficient of friction are reported in the literature .The coefficient of friction in this study was taken as 0.4 .The allowable slip was defined as the point at which the relative displacement exceeds 1.0% of the element length across the contact surface .Two solution algorithms are used to solve the contact between steel and concrete: the augmented Lagrange multiplier method for solving the normal contact behaviour, and the penalty method for solving the tangential contact behaviour.The prestressing process is modelled in three steps.In the first step, which simulates the prestressing stage, an initial stress equal to the initial prestress was applied to the prestressing steel before any solution step and before concrete casting.The concrete placement is modelled by the formation of contact between steel and concrete.In the third step, the prestressing steel is gradually released and a solution step takes place.A full Newton–Raphson solver with un-symmetric matrix storage was used in the static solution of the model steps.The slip of a tendon at a certain section can be defined as the relative displacement at the interface between the concrete and steel cylinders.Slip of prestressing steel is likely to occur within the transmission length after the failure of cohesion between steel and concrete .The slip is implicitly considered in the analytical model through the use of Coulomb's friction law, which is only true in the case of slippage.In order to scrutinise the impact of neglecting the compatibility in the longitudinal direction, the analytical model is assessed against the axi-symmetric FE model.As an example, consider a pre-tensioned concrete member with the parameters shown in Table 1.The mesh in this example is discretised into 200 divisions in the longitudinal direction, and 3 and 10 divisions in the radial direction in the steel and concrete, respectively.No notable improvement was observed in the results for smaller element sizes.Fig. 10 shows a comparison between the analytical and the axi-symmetric model.Good agreement between the model solutions is observed for the transmission zone.However, there is a difference between the two models beyond the transmission zone.This is because the analytical model uses the Coulomb friction law for the entire length of the member; however, this is only true within the transmission zone where slip/sliding of the prestressing steel is present, Fig. 11a and d.The presence of slip causes a drop in the bond stress, Fig. 11b, and in the normal pressure, Fig. 11c, therefore the effective stress in the axi-symmetric FE model beyond the sliding zone, Figs.
11d and 10, is less than that in the analytical model.This finding was also observed by Den Uijl , who found that the slip had insignificant influence on the transmission of the prestressing force after more than 95% of the prestress is achieved.The general trend of these curves is consistent with the findings of many other researchers and with previous experimental observations .Fig. 11b and c also shows how the normal pressure and bond stress are proportioned according to the Coulomb׳s friction law.The presence of sliding in the entire transmission zone, Fig. 11a and d, supports the use of the static coefficient of friction.The results of the axi-symmetric FE model in Fig. 12 show that the hoop stress in the first part of the transmission length reaches values exceeding the concrete tensile strength adjacent to the tendon.This indicates that cracks will be present in the radial direction in concrete; therefore the concrete is not expected to display true elastic behaviour after cracking.One disadvantage of the axi-symmetric modelling is that the number of radial cracks needs to be assumed in order to account for the non-linear behaviour of concrete .However, this phenomenon can be conveniently modelled using 3D FE analysis considering concrete non-linear behaviour.A non-linear 3D model of a single-pre-tensioned concrete beam was developed using the commercial finite element package ABAQUS Version 6.10 .The material models used, prestressing modelling strategy and implemented solution algorithms are discussed below.Pre-tensioning steel is always stressed to a value less than the yield stress and it is, therefore, bound to behave elastically when released.Accordingly, a linear elastic material model was used for the pre-stressing steel.The pre-tensioning stress was modelled as an initial stress in the steel element as has previously been described in Section 5.The release of prestressing steel exerts radial compressive stresses and circumferential tensile stresses onto the surrounding concrete.In most practical cases, the circumferential stresses exceed the concrete tensile strength which leads to the formation of radial cracks around the prestressing steel.Therefore, the post-cracking behaviour of concrete has to be considered in the concrete material model.After prestress release, the radial stresses are always less than the compressive strength; hence, it is acceptable to use a linear model to describe the compressive behaviour of concrete.The behaviour of concrete in tension was modelled as linear-elastic up to its tensile strength, which was taken as one-tenth of the concrete compressive strength.The post-cracking behaviour was modelled using the Hillerborg׳s fictitious cracking concept assuming linear tension softening as shown in Fig. 13.The concrete constitutive model used in the analysis presented in this section is the CDP model in ABAQUS.The model uses the yield function proposed by Lee and Fenves which is a modification of the plastic damage model of Lubliner et al. 
to consider different strength evolution under concrete tension and compression.The eccentricity e defines the rate at which the function approaches the asymptote.A value of 0.1 eccentricity controls that the material has almost the same dilation angle ψ-measured angle in the p–q plane at high confining pressure – over a wide range of confining pressure stress values.Lee and Fenves found that when concrete was subjected to uni-axial compressive and tensile failures in both monotonic and cyclic tests, a dilation angle of 31° produces results in good agreement with the experimental ones .In case of the biaxial loading tests, the angle of 31° produces a small difference in out-of-plane strain caused by larger dilatancy in comparison to the angle about 25° .In the models presented here, values of 0.1 and 30° were used for the eccentricity and dilation angle, respectively.The bond between steel and concrete was modelled using surface-to-surface contact element.The Coulomb friction law was used to define the frictional behaviour between steel and concrete with a coefficient of friction of 0.4 and zero cohesion.The augmented Lagrange multiplier solver, which updates the contact stiffness any iteration, was used in solving of the contact behaviour in the normal direction of the interface while the penalty method was used to solve the contact tangential behaviour.For solving this model under the static condition, a full Newton–Raphson solver with un-symmetric matrix storage was used.A small time step size, and therefore, a large number of increments were used to promote the convergence of the complex non-linear material behaviour and the contact problem׳s solution.The results obtained by using the concrete damage plasticity model showed mesh dependency since the model is dependent on the concrete crack bandwidth, the width affected by the crack .Therefore, the global response may not be identical when different element sizes are used normal to the crack direction .Moreover, increasing the number of elements has been observed to result in the formation of new cracks in the localised crack zone .The following strategies are recommended to reduce mesh dependency :Using of fracture energy or stress displacement methods to define concrete post-cracking rather than using the stress–strain relationship.This is because the latter introduces unreasonable mesh dependency into the results in the regions lacking reinforcement.Defining the characteristic crack length to be equal to the length of a line across an element in the first order element.Use of elements with aspect ratios close to one.In addition to the above, in this study, the effective stress was used to assess mesh sensitivity.This is because the transmission length is calculated using the 95% AMS method, which means it is not a direct result from the model.The sensitivity of the numerical solution to the mesh discretisation was investigated in order to identify a suitable mesh.The number of segments around the steel is defined as np and was varied from 8 to 64.For the beam in Table 1, it was found that the effective stress values start to form a plateau for np greater than 40, Fig. 
16.The estimated transmission length at np equals 40 is just 1.8% below the one for np equals 64.Therefore the use of np equals 40 gives an acceptable prediction of transmission length at reasonable computational cost.The sensitivity from element sizes of 25 mm, 20 mm, and 10 mm at the outer edges was also examined; no remarkable change on the model results was observed.Hence, to reduce the computational cost, the element size of 25 mm at the edges was used in this study.In this section the simulation of prestressing steps in pre-tensioned concrete elements is illustrated and the influence of concrete shrinkage on prestress transfer is shown, based on the 3D FE non-linear model.This is followed by the validations of the 3D FE model against previous experiments.The prestressing procedure includes three different steps.The first step is prestressing where steel is tensioned.In this step, the distribution of stresses along its length remains constant and equals the prestress magnitude.The second step is casting of concrete.After the setting of concrete, concrete shrinkage takes place causing a change to the stress profile in the pre-tensioned concrete member.In the third step the prestressing steel released.The release of the prestressing force creates radial stress and activates a frictional resistance component which enhances the transfer of prestress force into the concrete.The influence of concrete shrinkage before releasing the prestress force is highest at the end of the element and decreases towards the centre of the element until it stabilizes to a value less than the initial prestress after a certain distance.This behaviour enhances the transfer of prestress although it reduces the effective prestress.Therefore, neglecting the effect of concrete shrinkage does not reflect the true behaviour of pre-tensioned concrete in the analysis of the prestress transfer.Furthermore, the subtraction of shrinkage losses from the initial stress in the modelling of prestress transfer does not properly account for the variation of shrinkage strain profile along the transmission length.As shown in Fig. 17, the calculation of transmission length was found to be 55.1d compared to 41.3d when concrete shrinkage before release is considered.The release of prestressing steel usually causes radial cracks in concrete around the tendon .These bursting cracks occurs due to expansion of the tendon and the wedge action .In this section, the model is examined whether it is able to predict this phenomenon or not.Fig. 18 shows the visualisation of concrete cracks around the tendon and along the transmission length.The cracks are visualised here plotting the maximum principal plastic strain.It is assumed that the direction of the maximum principal plastic strain is parallel to the direction of the vector normal to the crack plane .The presence of concrete cracks agrees to observations in the previous experiments .To validate the proposed models, data were compared to previous experimental results .The comparisons exhibit a good agreement between the model and experimental measurements.As an example, Figs. 19 and 20 show comparisons of strain data normalised by the average maximum strain.In Fig. 20, the decrease of the slope of tendon stress can be related to the non-linear behaviour of concrete, which is not considered in the analytical simulation.It can be clearly seen that the variability in measurement of experimental data and the number of measuring points between the two figures.Mitchell et al. 
estimated the transmission length using the slope-intercept method.The method estimates the transmission length from the end of the element to the intersection of the line fitted within the curved part of the strain profile with the horizontal line fitted to the asymptote.This method is highly dependent on how these parts are determined and on the number of points in each part.For this reason only the results from the 3D FE model are compared.The same method was followed to estimate the corresponding transmission length of their specimens from the FE model.Validation of the 3D FE model is carried out against 22 tests.In their published work, the transmission length was measured at the ends of each beam, i.e. ends A and B, which gives 44 measurements of the transmission length.In the work of Oh et al. the transmission length was estimated as the distance at which the strain reaches 95% of the average maximum strain .The comparison between the FE predictions, the analytical model and the experimental estimations of the transmission length is shown in Fig. 22.Oh et al. give an estimate for each beam at both the releasing end and the far end.The comparisons against 12 specimens show a reasonable agreement.It was found that the average error of the 3D FE model predictions is less than 8% in 48 specimens.On the other hand, the analytical model overestimates the transmission length by 11%, based on ten specimens.However, the 3D FE model took about 3 h on one CPU, compared with about 5 ms for the analytical model.Note that, in both experimental sets, the average difference between the transmission lengths measured at the two ends of a beam is about 14%.Additionally, this scatter can be due to differences in the quality of the interface between steel and concrete, which is influenced by many parameters such as compaction, presence of air bubbles, and mortar sedimentation.Previous reports showed a 20% coefficient of variation between the upper and lower bounds of the transmission length in observations under similar circumstances .This means that the ratio between the characteristic upper and lower limits is 2.0, which indicates a high scatter in transmission length measurements in practice.The Poisson ratio for steel and concrete is taken as 0.3 and 0.2, respectively.The study focuses on the influences on the transmission length as well as on the effective prestress.The transmission length is normalised to the diameter of the pre-tensioning steel while the effective prestress is normalised to the initial prestress.The parametric study is performed on a high performance computing cluster using eight CPUs with 16 GB of RAM.This has reduced the computational time drastically and has allowed a number of parameters to be studied.The results of the model show that, in the case of constant concrete cover, the transmission length increases proportionally with an increase of diameter.The same conclusion was made by other researchers .This observation can also be demonstrated by Eq.On the other hand, the effective prestress slightly decreases with increase of the tendon diameter, within a range of less than 6% of the initial prestress.Fig.
24 shows the influence of concrete compressive strength at release on prestress transfer in pre-tensioned concrete.It is found that the increase of concrete compressive strength reduces the transmission length and increases the effective prestress.This is because of increasing concrete tensile strength and Young׳s modulus as a result of increasing the compressive strength.The increase of tensile strength reduces the cracked concrete zone around prestressing steel and thereby enhances the transfer of the prestress force.A similar observation was also reported by Mitchell et al. who investigated experimentally the influence of high concrete strength on the transmission length.The increase of compressive strength from 25 MPa to 50 MPa results only in about 3% increase in the effective prestress.The radial expansion of prestressing steel due to Poisson׳s effect after the release of prestressing force is in direct proportion with the magnitude of the initial prestress.The increasing of the initial prestress force causes an increase in the magnitude of circumferential stresses around the prestressing steel and hence the concrete cracking.As a result, the length that is required to transfer the prestress force becomes longer as is demonstrated in Fig. 25.This observation is supported by Eq.It can also be seen that in all beams the effective prestress force develop to the same percentage.Fig. 26 shows a possible trend of influence of concrete cover on the transmission length.The final point was excluded from the curve fit as it appears to be an outlier.A Similar trend was observed in previous experiments conducted by Oh and Kim .It is found that the transmission length decreases with the increase of the concrete cover.This is caused by the increase of nonlinearity in concrete and decrease of frictional resistance at the interface in the case of small covers .Only less than 2% change was observed in the effective prestress in this case.The trend also shows that the reduction in the transmission length reached a plateau beyond a concrete cover of 5.5d in this simulation.However, Den Uijl has reported no further reduction to be expected beyond 3d to 4d .Russell and Burns reported that large elements with multiple strands have a shorter transmission length.However, for a pre-tensioned beam with a single strand, the results show that the size of the section does not affect either the transmission length or the effective prestress significantly.This contradiction can be attributed to the fact that their experimental observations were based on a comparison between various beams with different number of strands, and different concrete properties and shapes.Also, this can be related to the presence of the transverse reinforcement which enhances the concrete confinement and bond behaviour.Ageing of the concrete causes an increase in the concrete compressive strength, tensile strength, Young׳s modulus and shrinkage strain, which results in enhanced bond between steel and concrete.In this investigation, it was found that later release of external prestress force results in shorter transmission length and up to 3% higher effective prestress, Fig. 
28.No considerable change in either transmission length or effective prestress was observed after 21 days.It is well known that the increase of steel roughness due to the presence of rust on the steel enhances the bond between steel and concrete .To investigate the influence of surface roughness the coefficient of friction in the model is varied between 0.3 and 0.7 .Fig. 29 shows that pre-tensioned concrete elements with a rough surface tendon have a smaller transmission length in comparison to those with a smooth surface.This finding was also reported in Ref. .In the figure, it also can be observed that the tendons in all cases develop to the same stress level, which means the surface roughness has no influence on the effective prestress.In case half of the length of a member is less than the transfer length, it is found that the stress distribution follows the full transmission curve up to the stress corresponding to the half-length, Fig. 30.This finding is very helpful when experiments are designed, especially if the aim of the test is studying the bond behaviour in the end zone rather than focusing on measuring of the transmission length.Note that although there is no slip at the member׳s half-length, the change in steel strain is not equal to the concrete strain.The developed 3D FE model was also used to examine the thick-wall cylinder assumption for prestressed concrete.A contour plot of von Mises stresses shows the concentration of stress around the prestressing steel.The von Mises contours take a circular shape and they diminish at a distance approximately equal to the concrete cover, Fig. 31.This observation means the assumption that the prestressing steel is a solid cylinder surrounded by a hollow thick-wall cylinder is acceptable.More investigations are carried out on beams with different cross section and constant concrete cover of 30 mm as shown in Fig. 32.The beams with a width of 100 mm and depth vary from 125 mm up to 250 mm.Fig. 
32 shows similar stress contours for all beams.This result is also supporting the findings in Section 7.5, which shows that the prestress transfer is not dependant on the size of the pre-tensioned concrete unit in the case of equal covers.Also the von Mises contours are plotted for the 200×200 mm2 beams with different concrete cover shown in Section 7.4.In this case, also circular contours are observed around the tendon with different intensity.The beams with smaller concrete cover show higher level of stress compare to those with the larger cover.A closed-form expression of the transmission length is presented in this paper based on a linear analytical model and the thick-wall cylinder theory.The paper also presents a 3D non-linear FE model considering the post-cracking behaviour of concrete in addition to different parameters such as concrete cover, initial prestress, concrete strength, concrete shrinkage and member cross section.It is found that the 3D non-linear finite element model is more accurate than the analytical model although the analytical model is more computationally efficient.The developed 3D FE model was then used to examine the assumptions of the thick-wall cylinder model to simulate the prestress transfer in pre-tensioned concrete elements which was found to be reliable.This model is also used to investigate the influence of prestressing steel diameter, concrete cover, concrete strength, initial prestress, section size, member length, time of prestress releasing, and surface condition of the tendon on the transfer of prestress force from steel to concrete in pre-tensioned concrete elements.The following conclusions can be drawn:A linear closed-form expression has been proposed to predict the transmission length and the stress profile along the transmission zone.The proposed expression can be used in the initial design stage where new concrete materials are used and there is an absence of code-design formulae.The 3D non-linear FE model can be used as a tool to understand the phenomenon of prestress transfer considering different aspects.In the modelling of prestress transfer, it is not appropriate to account for the concrete shrinkage by subtracting the shrinkage losses from the initial prestress.Concrete shrinkage before release imposed non-uniform increase and decrease of stresses along the member.The use of the thick-wall cylinder theory was found to be reliable in modelling the prestress transfer.The presented parametric study, based on the 3D FE model, provides useful information about the influence of steel diameter, concrete cover, concrete strength, initial prestress, section size, member length, time of prestress releasing, and surface condition of the tendon on the transfer of prestress force from steel to concrete in pre-tensioned concrete elements.The size of the element was found to have no significant effect on the prestress transfer,In general, the paper has drawn advanced numerical simulations to improve the understanding of the prestressed transfer in pre-tensioned concrete.It has also proposed an analytical formula of the transmission length as well the stress distribution over the transmission zone. | Three models were developed to simulate the transfer of prestress force from steel to concrete in pre-tensioned concrete elements. The first is an analytical model based on the thick-walled cylinder theory and considers linear material properties for both steel and concrete. 
The second is an axi-symmetric finite element (FE) model with linear material properties; it is used to verify the analytical model. The third model is a three dimensional nonlinear FE model. This model considers the post-cracking behaviour of concrete as well as concrete shrinkage and the time of prestress releasing. A new expression from the analytical model is developed to estimate the transmission length as well as the stress distribution along the tendon. The paper also presents a parametric study to illustrate the impact of diameter of prestressing steel, concrete cover, concrete strength, initial prestress, section size, surface roughness of prestressing steel, time of prestress release, and the member length on the transfer of stress in pre-tensioned concrete elements. |
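A small companion illustration to the entry above: the 95% AMS (average maximum strain) rule used in the validation against Oh et al. reduces to a one-line computation once a longitudinal strain profile is available. The Python sketch below applies it to a synthetic exponential profile standing in for measured or FE-computed data; the profile shape, its parameters and the fraction of the member treated as the plateau are assumptions for illustration, not the paper's results.

```python
# Minimal sketch of the 95% AMS (average maximum strain) estimate of the
# transmission length from a longitudinal strain profile.  The profile below is
# synthetic (exponential rise to a plateau); measured data would be used instead.
import numpy as np

x = np.linspace(0.0, 1.2, 241)                  # position along the member [m]
strain = 600e-6 * (1.0 - np.exp(-x / 0.18))     # synthetic strain profile (assumed shape)

def transmission_length_95ams(x, strain, plateau_fraction=0.5):
    """Average the plateau region, then find where the profile first reaches 95% of it."""
    plateau = strain[x > plateau_fraction * x.max()]   # points assumed to lie on the plateau
    ams = plateau.mean()                               # average maximum strain
    idx = np.argmax(strain >= 0.95 * ams)              # first index reaching 95% AMS
    return x[idx]

lt = transmission_length_95ams(x, strain)
print(f"95% AMS transmission length ~ {lt*1000:.0f} mm")
```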
546 | Master athletes have higher miR-7, SIRT3 and SOD2 expression in skeletal muscle than age-matched sedentary controls | Skeletal muscle is the most abundant tissue in human body, accounting for about 60% of the total protein content and 40% of body mass.It is not only important for locomotion and maintenance of body posture, but also has an important metabolic function such as storage of carbohydrates in the form of glycogen.Aging results in significant loss in muscle mass and function, which directly effects well-being and mortality .The loss of muscle mass is attributable to a gradual decline in the number of muscle fibers that begins around the age of 50, where by the age of 80 approximately 50% of the fibers are lost .Physical exercise might be the only natural tool to attenuate sarcopenia.Indeed, regular exercise has been shown to be associated with larger muscle cross sectional area , fiber number , strength , endurance capacity , mitochondrial function , insulin sensitivity , among others.Aging is associated with alterations in the miRNA profile in skeletal muscle and deterioration of mitochondrial function dynamics .Physical activity induces a wide range of functional and biochemical changes in skeletal muscle including epigenetic changes, such as alterations in the miRNA profile .For instance, both short- and long-term endurance exercise induced changes in the levels of a number of miRNAs that are involved in the regulation of skeletal muscle regeneration, gene expression and mitochondrial biogenesis.These effects of endurance training are not limited to healthy people, but also in patients with polymyositis or dermatomyositis endurance exercise induced an increase in muscle miRNA levels that target transcripts involved in inflammation, metabolism and muscle atrophy .Hypertrophic stimuli can also induce changes in miRNA levels as illustrated by our observation that functional mechanical overloading by synergist muscle ablation induced alterations in miRNA levels that control atrophy and hypertrophy .It is thus possible that regular exercise can reverse some of the detrimental ageing-related changes in the miRNA profile and mitochondrial function.Since ageing is associated with reduced levels of physical activity and disuse does cause muscle wasting and reductions in oxidative capacity master athletes may provide an excellent model to study the effects of ageing per se on muscle, not confounded by disuse .Therefore, the purpose of this investigation was to study the effects differences in the miRNA and mitochondrial profile between master athletes aged over 65 years old and age-matched controls.We recruited 26 Master Athletes at the European Veterans Athletics Championships in 2010.Control participants were also recruited.The participants provided written informed consent before inclusion.For this study we have selected 10 master athletes 65 ± 5 years and 13 sedentary subjects 64.67 ± 2.08 years old.The master athletes reported that they all had been training for more than sixty years, while control subjects were sedentary.The investigation was approved by the local ethics committee and performed in compliance with the Declaration of Helsinki.Muscle biopsies were obtained from the vastus lateralis using a conchotome or needle biopsy technique as described earlier .Samples were frozen in liquid nitrogen and stored at − 80 °C until biochemical analysis.Total RNA, including miRNA, was isolated from muscle biopsy samples by miRNeasy Mini Kit according to the instructions of the 
manufacturer.miRNA expression analysis was performed on 4 skeletal muscle biopsy samples from master athletes and 4 from sedentary subjects with the Agilent Human miRNA Microarray Release 14.0 8 × 15K resolution array, which distinguishes 887 human miRNAs.The microarray was performed according to the manufacturer's instructions.One hundred ng of total RNA were dephosphorylated and labelled with Cyanine-3-pCp dye using the miRNA Complete Labeling and Hyb Kit.Purification of the labelled RNA was performed with a Micro Bio-Spin P-6 column, and the RNA was then hybridized onto the Human miRNA Microarray Release 14.0 microarray slides.After hybridization, slides were washed at room temperature and scanned using an Agilent DNA microarray scanner.Raw data were extracted with the Agilent Feature Extraction Software 11.0.The TaqMan miRNA reverse transcriptase kit and TaqMan miRNA assays were used to quantify mature miRNA expression levels.Each target miRNA was quantified according to the manufacturer's protocol with minor modifications.Briefly, reverse transcriptase reactions were performed with miRNA-specific reverse transcriptase primers and 5 ng of purified total RNA for 30 min at 16 °C, 30 min at 42 °C, and finally 5 min at 85 °C to heat-inactivate the reverse transcriptase.All volumes suggested in the manufacturer's protocol were halved, as previously reported .Real-time PCRs for each miRNA were performed in triplicate, and each 10-μl reaction mixture included 2.4 μl of 10×-diluted reverse transcriptase product.Reactions were run on a PRISM 7900HT Fast Real-Time PCR System at 95 °C for 10 min, followed by 40 cycles at 95 °C for 15 s and 60 °C for 1 min.Twofold dilution series were performed for all target miRNAs to verify the linearity of the assay.To account for possible differences in the amount of starting RNA, all samples were normalized to miR-423.All reactions were run singleplex and quantified using the cycle threshold method .cDNA was synthesized using a Tetro cDNA Synthesis kit in accordance with the manufacturer's instructions.Briefly, the reaction conditions were as follows: 1 μg of RNA, 1 μl of random primers, 1 μl of 10 mM dNTP, 1 μl of RNase inhibitor, and 0.25 μl of 200 U/μl reverse transcriptase in a final volume of 20 μl.The solution was incubated for 10 min at 25 °C for primer annealing, followed by 42 °C for 60 min for primer elongation, and finally 80 °C for 5 min for termination.cDNA samples were stored at − 20 °C.Based on the principle of the SybrGreen detection method, EvaGreen® dye was used to detect PCR products.The PCR was performed using primer pairs specific for the mRNAs of vascular endothelial growth factor, silent mating type information regulation 2 homolog 1, forkhead box protein O1, mitochondrial calcium uniporter, peroxisome proliferator-activated receptor gamma coactivator 1-alpha and the mechano growth factor isoforms.PCR amplifications consisted of equal amounts of template DNA, 10 μl of ImmoMix™ complete ready-to-use heat-activated 2× reaction mix, 1 μl of 20× EvaGreen, 2.5 μl of 10 nmol/L forward and reverse primers and water to a final volume of 20 μl.Amplifications were performed in a Rotor-Gene 6000 thermal cycler at 95 °C for 10 min, followed by 40 cycles of 95 °C for 10 s, 60 °C for 20 s and 72 °C for 30 s, in triplicate.The validity of the signal was evaluated by melting analysis and agarose gel electrophoresis.The human 28 S rRNA gene served as the endogenous control gene.
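Relative expression from singleplex qPCR runs of this kind is typically reported with the comparative-Ct (2^-ΔΔCt) arithmetic, normalising each target to its reference (miR-423 for the miRNAs, 28S rRNA for the mRNAs) and then to the control group. The short Python sketch below illustrates that calculation only; the Ct values are invented, and the paper's exact normalisation workflow is not reproduced.

```python
# Minimal sketch (not the authors' analysis script) of comparative-Ct quantification.
# Ct values below are made up purely to illustrate the arithmetic.
import numpy as np

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression of the target versus the control group, 2^-(ddCt)."""
    d_ct      = np.asarray(ct_target) - np.asarray(ct_ref)             # dCt per sample
    d_ct_ctrl = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl))
    return 2.0 ** -(d_ct - d_ct_ctrl)                                  # 2^-ddCt

# hypothetical triplicate Ct values: miR-7 normalised to miR-423
mir7_athlete, mir423_athlete = [28.4, 28.6, 28.5], [24.1, 24.0, 24.2]
mir7_control, mir423_control = [26.9, 27.1, 27.0], [24.0, 24.1, 24.0]

fc = fold_change(mir7_athlete, mir423_athlete, mir7_control, mir423_control)
print("miR-7 fold change, athletes vs controls:", np.round(fc, 2), "mean:", round(fc.mean(), 2))
```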
Tissue homogenates of the muscle biopsy samples were generated with an Ultra Turrax® homogenizer using 10 vol of lysis buffer.Five to ten micrograms of protein were electrophoresed on 10–12% v/v polyacrylamide SDS-PAGE gels.Proteins were electrotransferred onto polyvinylidene difluoride membranes.The membranes were subsequently blocked in 0.5% BSA and, after blocking, incubated with primary antibodies (1:2500, Santa Cruz #sc-69359; GAPDH, 1:50000, Sigma-Aldrich #G8795) overnight at 4 °C.After incubation with primary antibodies, membranes were washed in tris-buffered saline-Tween-20 and incubated with HRP-conjugated secondary antibodies.After incubation with secondary antibodies, membranes were repeatedly washed.Membranes were incubated with chemiluminescent substrate and protein bands were visualized on X-ray films.The bands were quantified with ImageJ software and normalized to GAPDH, which served as an internal control.Data gathered from the miRNA array validation and gene expression experiments were analyzed with an unpaired Mann-Whitney U-test, while an unpaired, two-tailed Student's t-test or χ2 test was used for the qPCR and Western blot variables, as appropriate.Data are presented as mean ± standard deviation.The significance level was set at p < 0.05.First, we performed a miRNA array on the muscle biopsy samples of master athletes and control subjects.The microarray analysis revealed that 21 of the 887 miRNA sequences were lower in the master athlete than in the control muscles.Four miRNAs were selected, based on the greatest differences in the miRNA array, for further q-PCR analysis.This revealed that only miR-7 was expressed more in the muscles from controls than in those from master athletes.Then, from the remaining muscle samples, key mitochondrial mRNA and protein contents were measured.SIRT1 and FOXO1 mRNA levels were higher in master athletes than in the control group, while the SIRT3 and SOD2 protein levels in the muscle samples of master athletes were higher than those in the control subjects.Aging is associated with an increased level of miR-7, which has been shown to play a crucial role in the ageing-associated loss of transforming growth factor-beta 1-dependent fibroblast to myofibroblast differentiation, and hence poorer wound healing .It was proposed that miR-7 up-regulation in aged cells reduced the expression of epidermal growth factor receptor protein via degradation of its mRNA, but it may also interact with the mRNAs of downstream targets of the EGFR-dependent signaling pathway such as MAPK/ERK, CaMKII, Rho-GTPase, PI3K, Akt and mTOR .This signaling pathway is important for wound healing in skeletal muscle .The significance of miR-7 for fibroblast function is also illustrated by the diminished miR-7 level after estradiol treatment, which resulted in increased EGFR mRNA expression and restored functionality of aging fibroblasts .The cause of the senescent state of the fibroblasts and of the elevated miR-7 levels in old age has been suggested to involve chronic inflammation and specifically the interferon-linked pathway .In line with the role of systemic inflammation in inducing miR-7 is the observation that miR-7 expression is also elevated in the airways of patients suffering from allergic rhinitis and chronic obstructive pulmonary disease , and in peripheral blood mononuclear cells of HIV patients , conditions associated with local or systemic inflammation.Moreover, it is known that facioscapulohumeral muscular dystrophy is also associated with both inflammation and elevated miR-7 expression in muscle .It is thus possible that sarcopenia-associated inflammation may at least partly contribute to the
expression of miR-7 in old muscle.Exercise may reduce systemic inflammation and expression of inflammatory markers in muscle , the anti-inflammatory effect of life-long exercise may thus explain the lower miR-7 levels, we observed in the muscles of our master athletes than in those of non-athletes.Besides the role of miR-7 in inflammation, it seems that it is also important to lipid metabolism.It has been shown that miR-7 mediates cross-talk between peroxisome proliferator-activated receptor, sterol regulatory element-binding proteins, and liver X receptors signaling pathways .PPAR-α signaling regulates miR-7, which activates SREBP.Down-regulation of miR-7 is associated with sebaceous lipogenesis .It is known that high level endurance performance needs a high level of energy supply.Indeed, in mouse model, which was developed to study extreme endurance, mice have significantly elevated levels of PPAR-α and lipogenesis , suggesting, the regular exercise mediated metabolic challenge could involve miR-7-mediated up-regulation of fat metabolism.Another main finding of this study was, that in muscles from people who performed life-long exercise SIRT3 protein levels were higher than in the skeletal muscle of sedentary people.SIRT3 has a powerful regulatory role in lipid metabolism.SIRT3 ablation exhibits hallmarks of fatty-acid oxidation disorders during fasting, including reduced ATP levels .Indeed, it has been shown that SIRT3 controls fatty acid metabolism by deacetylation of medium-chain acyl-CoA dehydrogenase and acyl-CoA dehydrogenase , therefore ablation of SIRT3 would highly impact lipid metabolism.Moreover, it has been shown that SIRT3 can interact and deacetylate ATP-synthase F-complex , hence SIRT3 directly controls ATP production.This explains why SIRT3 knock-out mice have decreased production of ATP.In addition, it is also known that the aging is associated with decline in SIRT3 levels , which can explain the age related decline in ATP production.Here we show that life-long physical exercise significantly increases SIRT3 levels and this is a powerful beneficial effect of exercise against age-associated functional deterioration of mitochondria.SIRT3 deacetylates two critical lysine residues on SOD2 and promotes its antioxidant activity, and decreases the level of ROS in the mitochondria .Therefore, physical exercise through increase in SIRT3 level and activation of SOD2 can attenuate the age-associated decline in mitochondrial function and suppress oxidative stress .Due to the limited amount of samples, we could select just some important proteins in the skeletal muscle to investigate the effects of life-long exercise training.According to our results the mRNA levels of SIRT1 and FOXO1 were elevated in the muscle of master athletes compared to control subjects.We and others have shown that exercise training can prevent the age-related decrease in the level and activity of SIRT1 and the associated functional alterations .FOXO1 is involved in glycolytic and lipolytic flux, and mitochondrial metabolism, thus it is important part of adaptive response to cope with energy challenge during exercise .Moreover, it was suggested that the deacetylation of FOXO1 by SIRT3 elevates the expression of the FOXO1 target genes, like SOD2 while decreasing senescence phenotypes .In conclusion, our data suggest that life-long exercise program results in down-regulation of miR-7 in skeletal muscle of master athletes, which can lead to suppression of sarcopenia related inflammation, better fat metabolism.The 
increased level of SIRT3 supports more efficient fat metabolism, ATP production and antioxidant capacity through SOD2 in the skeletal muscle of master athletes compared to control subjects. Life-long exercise attenuates the age-associated decline in energy metabolism and antioxidant systems in skeletal muscle. | Regular physical exercise has health benefits and can prevent some of the ageing-associated muscle deteriorations. However, the biochemical mechanisms underlying this exercise benefit, especially in human tissues, are not well known. To investigate this, we assessed miRNA profiles and the mRNA and protein levels of antioxidant and metabolic proteins in the vastus lateralis muscle of master athletes aged over 65 years and of age-matched controls. Master athletes had lower levels of miR-7, while the mRNA or protein levels of SIRT3, SIRT1, SOD2 and FOXO1 were significantly higher in the vastus lateralis muscle of master athletes than in the muscles of age-matched controls. These results suggest that regular exercise leads to better cellular metabolism and antioxidant capacity by maintaining the physiological state of mitochondria and efficient ATP production, and by decreasing ageing-related inflammation, as indicated by the lower level of miR-7 in master athletes. |
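The relative quantification workflow described for the qPCR validation above (normalization to miR-423, triplicate reactions, cycle-threshold-based quantification, unpaired Mann-Whitney U-test) can be illustrated with a short sketch. The snippet below is not the authors' analysis code; it is a minimal example of the standard 2^-ΔΔCt calculation under stated assumptions, and all subject labels and Ct values in it are hypothetical.

```python
# Minimal sketch of 2^-ddCt relative quantification with normalization to miR-423.
# All subject labels and Ct values are hypothetical; triplicates are assumed to be
# already averaged to one mean Ct per sample.
import numpy as np
from scipy.stats import mannwhitneyu

# Mean Ct of the target miRNA (e.g. miR-7) and of the reference miR-423, per subject.
ct_target = {"athletes": np.array([27.1, 26.8, 27.4, 26.9]),
             "controls": np.array([25.6, 25.9, 25.2, 25.8])}
ct_ref    = {"athletes": np.array([22.0, 22.3, 22.1, 21.9]),
             "controls": np.array([22.2, 22.0, 22.4, 22.1])}

# dCt: target normalized to the endogenous control within each sample.
dct = {group: ct_target[group] - ct_ref[group] for group in ct_target}

# ddCt relative to the mean of the control group; fold change = 2^-ddCt.
ddct_athletes = dct["athletes"] - dct["controls"].mean()
fold_change = 2.0 ** (-ddct_athletes)   # values < 1 indicate lower expression in athletes

# Unpaired Mann-Whitney U-test on the dCt values, as used for the array validation.
u_stat, p_value = mannwhitneyu(dct["athletes"], dct["controls"], alternative="two-sided")

print(f"mean fold change (athletes vs controls): {fold_change.mean():.2f}")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")
```

With real Ct data, a mean fold change below 1 for miR-7 in the athlete group would correspond to the lower expression reported in the results section.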
547 | Measuring method for partial discharges in a high voltage cable system subjected to impulse and superimposed voltage under laboratory conditions | Partial discharge measurements provide a useful tool to obtain information about discharging defects in high-voltage equipment.In power cables, PD occurs at insulation defects in particular in cable joints and terminations, especially at interfaces .Therefore, PD measurement on cable systems can be considered a useful tool to diagnose insulation condition for both laboratory application and on-site application .PD in power cables is normally measured under AC voltage by using the conventional technique defined by IEC 60270 .In practice, power cables are not only subjected to AC operating voltage, but also to transient voltages such as lightning and switching impulses, which occasionally will be superimposed on the normal AC voltage.Those transient voltages will have an additional stress on the cable insulation.In that regard it is important to investigate PD under impulse and superimposed voltages.One of the challenges in measuring PD under impulse and superimposed voltages concerns the suppression of the disturbances caused by the transient voltages.In laboratory tests, the applied impulse voltage causes currents in the cable under test that disturb the PD measurement.So the PD measurement system needs to have a strong suppression of the disturbance.In such a case, the conventional PD technique is not suitable anymore.The unconventional method based on the measurements of electrical signals in MHz range is of more interest as a better alternative for these conditions .Three circuits for PD detection under impulse are provided in with a measurement frequency from hundreds of MHz to GHz, namely: the high frequency current transformer with multipole filter, the coupling capacitor with multipole filter, and the electromagnetic couplers.HFCTs or other sensors are commonly used with wide/ultra-wide bandwidth together with filters and a digital oscilloscope to detect PD in insulation specimens or models under impulses .A coupling capacitor was used to measure PD in material samples in cases where only impulse and square wave voltage were applied .Those PD measuring systems were able to detect PD during the impulse even during its front time.For superimposed impulses, PD was detected in laminated paper using a current transformer and a high-pass filter by Hayakawa et al. .Nikjoo et al. 
used a wideband detection system consisting of a coupling capacitor, a detection impedance and a low-pass filter to measure PD in oil-impregnated paper.However, in both works, PD was measured during AC cycles before and after impulses instead of during the impulses.Moreover, PD measurements in the above-mentioned works were performed on material specimens.Due to the small scale of the samples and the relatively low voltage level, less disturbance is produced in the circuitry.Regarding PD measurement on power cables, for off-line tests capacitors and HFCTs are normally used, while on-line tests almost always use HFCTs , especially for testing cable accessories .However in related literature, partial discharges in power cables are usually measured after the impulse has been applied, while partial discharges during the moment of impulse have been less reported.During impulse voltage conditions the PD measuring system should fulfil two requirements.Firstly, the safety of both human and equipment need to be ensured when using the measuring system.Secondly, the measuring system should be able to detect PD from the cable joint before, during and after the impulse transient application upon the AC voltage.This work presents a PD measuring system for laboratory use which is able to measure PD during impulses in a HV cable system under impulse and superimposed voltages.A 150 kV cross-linked polyethylene cable system with an artificial defect in the cable joint was tested under lab conditions.A PD measuring system consisting of two HFCTs, band-pass filters, transient voltage suppressors and a digital oscilloscope was used.In particular, the two HFCTs were installed at both ends of the cable joint, which helped to identify the PD from the cable joint and separate PD from disturbance signals using the polarity of the pulses.TVSs were added after the filters for protection purpose.This measuring system is able to identify and measure PD in AC and during the superimposed transients.Since the impulses applied to the cable system were in the range of hundreds of kilovolt, very large disturbances were induced during the impulse application, due to which PD cannot be detected during the impulse front time without additional filtering.To decrease the latter disturbance of the PD signals, additional band-pass filters were added.The possibility of the PD measuring system to measure PD during impulses is of potential use for studying the effects of transients on HV cable and accessories.The following chapters describe in detail the test setup and the characteristics and particularities of the proposed measuring system for PD cable measurements under transients.The circuit consists of the HV cable system under test, the testing voltage supplies and the PD measuring system.Fig. 1 shows the schematic diagram of the test circuit.Values of all the elements are given except for the resistors in the impulse generator, which are adjusted according to the required waveforms of impulse voltages.For testing under impulse voltages, part of the circuit denoted by the grey area in Fig. 1 was connected.For testing under superimposed voltages, the entire circuit was connected.Fig. 2 shows the physical set-up as built in the HV lab based on Fig. 
1.In this work, a 150 kV XLPE extruded cable system was used as test object.The HV cable system was tested under 50 Hz AC voltage, impulse voltage and superimposed voltage.To identify and measure PD in the HV cable system, an unconventional PD measuring system was installed at the cable joint.A conventional PD measuring system according to standard IEC 60270 was also used.Detailed explanations on the setup are given in the following chapters.The test object is a 16-m long 150 kV cable terminated with two outdoor-type terminations, termination 1 and 2, and a pre-moulded joint in between, as shown in Fig. 2.The joint is located five meters from the termination 2.The cable is grounded at both cable terminations.The capacitance of the cable system is 3.75 nF.In order to investigate the functionality of the PD measuring system in the laboratory condition, a PD source is needed to produce PD in the cable joint.In this work, an artificial defect was created by manipulating the joint.The connector in the joint was prepared in such a way that the cable can be pulled out of the joint.In practice, this will not happen in a properly mounted cable joint.Whereas for laboratory testing, this defect can produce stable partial discharges.In this work, the cable was pulled 7 mm out of the joint at the side near to termination 2.For research purposes, this set up can generate under AC voltage detectable PD activities with recognizable and stable phase-resolved PD patterns.On average, the partial discharge inception voltage of the partial discharge was 104 kVrms.Fig. 3 shows the way in which the artificial defect is created by pulling out of the HV cable joint.The test circuit is able to provide 50 Hz AC voltage, impulse voltage and superimposed voltage.To supply AC voltage, a 380 V/150 kV AC transformer was connected to the HV cable.A LC low-pass filter was added at the low-voltage side of the transformer to filter out the line noise.Five stages of a Marx generator were used to provide impulse voltages.The total discharge capacitance C of the five stages is 100 nF.Different impulse waveforms, i.e. 
different front time Tf and time to half value Th, can be generated by adjusting the front resistor Rf and the tail resistors Rh_LI and Rh_SI.At the same time, Tf and Th are also related to the total load capacitance Cload.For testing under superimposed voltages, the total load capacitance Cload is the combination of the HV cable, the voltage divider VD2, the blocking capacitor Cb, the coupling capacitor Ck and the filtering capacitor Cd.In order to reach a longer front time without using a too large Rf, an additional 1 nF capacitance Cl was connected .The settings of the impulse generator for generating different impulses in this work are given in Table 4 in the Appendix.For generating superimposed voltages, the AC transformer and the impulse generator were both connected to the cable.In order not to stress the impulse generator with the AC voltage, a 1.6 nF blocking capacitor Cb together with a 2 kΩ resistor Rb were installed between the AC supply and the impulse supply.This attenuates the AC voltage at the impulse generator and allows the impulse voltage to be superimposed on AC voltage at the cable.The AC transformer was protected against the impulse voltages by a RC low-pass filter.One voltage divider VD1 was used to measure the generated impulse voltages at the impulse generator, and another VD2 served to measure the composite voltages at the cable termination 1.In this work, the HV cable system was tested under AC voltage, impulse voltage and superimposed voltage.Fig. 4a shows the waveform of the impulse voltage having a peak value Vpeak, front time Tf and time to half value Th.The waveform of the superimposed voltage is shown in Fig. 4b.An impulse voltage with front time Tf and time to half value Th is riding on the AC wave crest.The total peak value Vpeak of the testing voltage is the combined value of the AC peak value VACpeak and the superimposed impulse voltage.Two identical HFCTs were used to detect PD from the cable joint.The two HFCTs have a gain of 3 mV/mA and a bandwidth of 100 kHz–40 MHz .The PD signals captured by the two HFCTs were transmitted through two 20-m identical coaxial cables to a digital oscilloscope Tektronix MSO58, which was used to acquire the signals with a sampling frequency of 1.25 GS/s and a bandwidth of 250 MHz.During the application of impulse voltages, transient currents in the cable induce a high voltage signal in the HFCTs.Fig. 5 presents the signal measured by the HFCT during the application of the impulse.The signal was measured with a HV probe.The impulse has a waveform as shown in Fig. 4a with Vpeak of 274 kV and Tf/Th = 3/2000 μs, which was one of the test voltages applied on the cable in the PD measurement.As shown in Fig. 5, the amplitude of the measured signal is in the range of kilovolt, which far exceeds the maximum input voltage of the oscilloscope.Such large signal will cause a damage to the oscilloscope.Therefore, in order to protect the oscilloscope, a filter/suppressor protection unit was applied before the oscilloscope.A transient voltage suppressor together with a spark gap were used to clip the voltage to 12 V.A band-pass filter with bandwidth of 114 kHz–48 MHz was added before the TVS to reinforce the power attenuation outside the sensor’s bandwidth.The TVS, the spark gap and the band-pass filter are integrated in one box, named filter A. Fig. 
6a shows the configuration of the measuring system combined with the HFCT, the coaxial cable, and the integrated unit filter A.The transfer functions of the HFCT as well as the measuring system are characterized by using the method in and given in Fig. 6b.The two HFCTs were mounted at both ends of the joint with the same polarity, as shown in Fig. 7.The one near to termination 1 is named as HFCT 1, and the other one near to termination 2 is named as HFCT 2.When the PD occurs externally to the cable joint, i.e. from the cable section near termination 1 or termination 2, the PD signals measured by HFCT 1 and HFCT 2 from PD event have the same polarities and similar magnitudes.If the PD occurs in the cable joint, the PD is generated between the two HFCTs and splits propagating in both directions.In this case the PD pulses measured by HFCT 1 and HFCT 2 have opposite polarities and similar magnitudes.By using this polarity recognition, it is possible to discriminate between discharges produced in the joint and outside the joint.PDflex , software developed by the High Voltage Laboratory of Delft University of Technology, was used for analyzing and presenting the PD measurement results with phase-resolved PD patterns, time-resolved PD pulses and typical PD parameters .A clustering technique applied in PDflex helped to separate PD from noise.To verify the functionality of the PD measuring system, three types of pulses were injected in the cable system from different locations.Table 1 lists the verifying pulses and the testing voltages under which they were tested.The following chapters describe the results for each case.For verification, pulses of 1 nC were injected into the measuring system from different locations both internally and externally to the cable joint.Fig. 8 illustrates how to simulate pulses occurring in the cable joint with the calibrator.The results recorded by HFCT 1 and HFCT 2 are shown in Fig. 9.When the calibration pulses were from termination 1, the measured signals always have positive polarities and similar amplitudes, as shown in Fig. 9a.When the calibration pulses were from termination 2, the measured signals show negative polarities, as shown in Fig. 9b.In Fig. 9c when the calibration pulses were from the cable joint, the pulse captured by HFCT 1 shows a negative polarity while the pulse captured by HFCT 2 shows a positive polarity.It can be seen from the above results that, the applied PD measuring system is able to indicate whether the pulses are internal or external to the cable joint by polarity recognition.If there are PD occurring in the joint while disturbances are produced outside the joint, such polarity recognition can also help to separate PD from disturbances.To test real PD external to the cable joint, corona discharge was generated by a metal needle installed at termination 1 under an AC voltage of 16 kVrms.Fig. 10 shows the PRPD patterns of the corona measured by HFCT 1 and HFCT 2.Both patterns indicate that the positive corona discharges occurred at the peak of negative AC cycle.Fig. 11 shows the TRPD pulses of one corona discharge measured by the two HFCTs.Both PD pulses have positive polarities, which is in accordance with the case of Fig. 
9a, where the pulse was injected from termination 1.So based on the polarities of the corona pulses, it can be confirmed that the corona source is external to the cable joint and from the cable section near to termination 1.The partial discharges generated by the artificial defect in the cable joint were measured at an AC voltage of 108 kVrms.Fig. 12 shows the PRPD patterns of the partial discharges measured by HFCT 1 and HFCT 2.With HFCT 1, PDs measured under positive half cycle possess positive polarities and negative polarities under negative half cycle.With HFCT 2 the PD polarities reverse.So the pulses measured by the two HFCTs from every discharge event always have opposite polarities, which confirms that the PDs originate from the cable joint internally.Fig. 13 shows the TRPD pulses of measured partial discharges.One partial discharge event occurred during the negative half cycle is shown in Fig. 13a.The first peaks of the two pulses have opposite polarities: the pulse measured by HFCT 1 has negative polarity while the pulse measured by HFCT 2 has positive polarity.These two pulses all reach the peak values at the same time.After the first peak, both pulses start to attenuate quickly with oscillation due to the circuit configuration and cable reflections.Based on the pulse characteristics, only the first peak of each pulse was used to analyse the PD information.Due to the polarity, these two pulses indicate that the partial discharge source is located in the cable joint.Since the duration of the first peak is in the range of 40–50 ns, two PDs within a time interval longer than 40 ns should be detectable.Fig. 13b shows a case in which two PD events occurred in series with a time interval of 40 ns.It is worth mentioning that since the magnitude of the partial discharges is in the order of millivolt, the partial discharges would not be clipped by the TVS.Fig. 13a also gives an estimation of the charge magnitude of the PD pulse measured by HFCT 1.The apparent charge Q, with estimated value of 136 pC, is calculated as the integral of the first peak over time by applying the method in .Such estimation is valid when the PD pulse is not critically affected.However, since the cable length under test is quite short, the impact on PD pulse shape increases a lot due to pulse propagation and reflections.This situation will be shown in later sections.In such case, the estimation of apparent charge is not accurate any more.Consequently, the calibration of PD value based on the measured PD pulse becomes difficult.Thus, we directly use the voltage amplitude of the first PD pulse to describe the PD level instead of the charge magnitude.As stated in , depending on the test object and the PD measuring system there is no obvious correlation between the apparent charge level measured with the conventional and the unconventional methods.Moreover, in short cables the PD measurements are affected by the multiple PD reflections which makes the calibration process difficult.However, to provide some reference information, a conventional PD measuring system was also applied in the test circuit to measure PD under the same AC condition.A 400 pF coupling capacitance Ck was connected to the cable termination 1.A Haefely DDX9101 PD detector complying with IEC 60270 was used to measure PD through a PD coupling impedance PDZ.The PD measurement result acquired by the conventional method is given in Fig. 14.For the same defect and the same AC condition of 108 kVrms as in Section 3.3, Fig. 
14 shows a comparable PRPD pattern to Fig. 12.The average discharge magnitude of 514 pC, which was measured with a filter bandwidth of 50–400 kHz, is in the same order of magnitude as the estimated charge value as shown in Fig. 13a.In addition, the conventional PD calibrator also helped to check the sensitivity of the PD measurement.The sensitivity around 10 pC was reached by diminishing calibration pulses injected into the cable system until it cannot be observed.To evaluate the intended capability of the PD measuring system, the cable system was then tested under impulse voltages and superimposed voltages.Table 2 lists the tests and the testing voltages with their parameters, as defined in Fig. 4.The following chapters describe the results.The partial discharges from the cable joint were tested under impulse voltages as shown in Fig. 4a.A short impulse voltage with Tf = 3 µs and Th = 56 µs was firstly applied on the cable in test 1.Fig. 15 shows the observed PDs with their polarities and amplitudes under this impulse.The PDs shown in Fig. 15 were measured by HFCT 1, with which the polarity of measured PD is the same as the polarity of applied voltage.All PDs were detected on the wave tail with negative polarities, which are referred to as reverse discharges according to Densley .The pulse shapes of the reverse discharges RD7 and RD8 measured by the two HFCTs are given in Fig. 16a.The pulse measured by HFCT 1for RD7 and RD8 are both negative.The opposite polarities for each PD event as observed by HFCT 2 shows that the discharges originate from the cable joint internally.The impulse application generated a lot of disturbance which was also captured by the HFCTs.Fig. 16b shows typical disturbances.The two signals from the two HFCTs are always in phase, which indicates that the disturbance is external to the cable joint.Such polarity recognition contributes to separate PD from disturbance in the analysis stage.A longer impulse voltage with Tf = 3 µs and Th = 2000 µs was next applied to the cable in test 2.The observed PDs are shown in Fig. 17.Similar to test 1, PDs were only detected on the wave tail with negative polarities.Fig. 18 shows the pulse shapes of the three reverse discharges RD5, RD6 and RD7, which occurred in series.Test 1 and test 2 show that, the PD measuring system is able to measure signals, including PD and disturbance, under impulse voltages.Moreover, using the pulse shape and pulse polarity, it is possible to identify PD from the cable joint and separate PD from disturbance.The HV cable system was then subjected to the superimposed voltages.In test 3, the superimposed voltage waveform, as shown in Fig. 4b, was applied to the cable system.The AC voltage was set at 124 kVACpk, which is the nominal operating voltage of the cable system.Since this AC voltage is below the PDIV of 147 kVACpk, no PD would occur.The applied impulse voltage with Tf = 3 µs and Th = 91 µs made the superimposed voltage reach to a peak value of 196 kVpk, which is well above the PDIV.During the test, the AC voltage was continuously applied before the impulse, under which no PD occurred.Then the impulse was superimposed on the AC voltage.After the impulse, the AC voltage was continuously applied until no more PDs were observed.The measurement results are shown in Figs. 19 and 20.Fig. 
19a shows the observed PD activity over time under the superimposed voltage.Before the impulse, no PD occurred under the AC voltage as expected.When the impulse was applied on the cable, PD initiated, and then reoccurred for around 360 s under AC voltage.Fig. 19b shows the PD occurrence during the first eight cycles after the impulse.During the impulse moment, no PD could be observed.From the first negative cycle after the impulse, PD started to occur.With time, the number of PD decreased.The pulse shape of one PD from the positive cycle is given as PD9 in Fig. 20.Test 3 shows that, the PD measuring system is able to measure signals under superimposed voltages.However, so far no PD could be detected during the impulse moment.The previous tests have proven that, the deployed PD measuring system is capable to measure PD from the cable joint under impulse and superimposed voltages.According to Densley , PD initiates at the impulse both during the front time and the tail time.PD occurring during the front time near the peak of the impulse is referred to as main discharge with positive polarity.PD occurring during the tail time is referred as reverse discharge with negative polarity.However, in the previous tests under impulse voltages, only reverse discharges were detected during the tail time.No main discharges have been observed during the front time.For superimposed voltages, PDs were observed when the impulse was finished but not during the impulse moment.The reason is, the disturbance generated by the impulse application obstructed the observation of PDs during the front time of impulses.As shown in Fig. 5, besides PD signals, a large signal was also induced in the HFCT during the impulse application, which was regarded as disturbance during the PD measurement.As a result, the signal captured by the HFCT was a superposition of the induced disturbance and the PD signals.For safety purpose, the captured signal firstly went through filter A and is then clipped by the TVS.For measurement purpose, the vertical scale on the oscilloscope was set to 20–30 mV/division and the signal was then clipped as well by the vertical observation window.In the end, the signal on the oscilloscope displayed a large disturbance being clipped lasting for a certain period.After that period, the disturbance was gone and the PD signals could be observed clearly.However, if PDs occurred during this disturbance period, they might be undetectable.In case PDs occurred at the moment where the disturbance was larger than the 12 V threshold of the TVS, the PD signals would be clipped.If PDs occurred when the disturbance was smaller than 12 V but larger than the vertical observation window, they would still be clipped.It is possible to increase the vertical observation window.But in this case, the signal to noise level is too small so that it is impossible to decouple the PD signals from the disturbance signals.Only if PDs occurred when the disturbance was within the observation window, there was a chance to observe them.There are several options to cope with this issue.The signals can be measured with a higher threshold of TVS, and a larger vertical scale of the oscilloscope.However, in this way, the signal to noise issue still exists.Another option is to use a coaxial attenuator to attenuate the captured signals.However, both the PD signals and the disturbance signals will be attenuated.Thus, for measuring PD, using an attenuator is considered not suitable.In this work, to solve the problem, another filter/suppressor unit 
was used, which consists of a band-pass filter with a bandwidth of 1.38–90.2 MHz, a TVS and a spark gap.Same as filter A, all the elements are integrated in a box, named filter B.During measurement, filter B was added before filter A. Fig. 21 gives the characteristics of the new measuring configuration combined with the HFCT, the connecting coaxial cable, the filter A and the filter B. To evaluate the effectiveness of adding filter B, the signal was measured again under the impulse voltage as in Secttion 2.3 and test 2.Fig. 22 shows the measured signal in time and frequency domains.From the point of view of observing PD on the oscilloscope in time domain, it can be seen from Fig. 22a that, the large disturbance lasts for 100–150 µs without filter B.During this period, it is difficult to observe or decouple the PD signals from the disturbance.This period is named as dead zone.After adding filter B, the disturbance has been suppressed and the dead zone has been reduced to around 40 µs.In this case, any PD occurring after 40 µs is supposed to be detectable.The disturbance was also measured under impulse voltages in test 1 and 3 with different voltage values.With higher voltage and longer time of the impulse, the disturbance tended to have larger amplitude and longer dead zone.In all cases, filter B helped to suppress the disturbance and to decrease the dead zone.The performance of the PD measuring system has been improved by adding filter B. However, the resulting dead zone is still longer than the impulse front time of 3 µs, as shown in Fig. 23.In order to detect the main discharges during the front time, impulses with longer front time of 300 µs were applied to the cable system.In this case, main discharges were expected to be detectable during the front time.The tests performed with filter A + B are listed in Table 3 The results are explained in the following chapters.The HV cable system was tested again under an AC voltage of 108 kVrms.PRPD patterns and TRPD pulses of partial discharges from the cable joint were measured with the new PD measuring configuration.Fig. 24 shows the PRPD patterns.Fig. 25a shows the pulse shapes of one PD event.The opposite polarity appears at the first peak and the reversed at the second peak.Afterwards the two pulses oscillate in phase.Based on this feature, given the case as shown in Fig. 25b, PD 2 was recognized as another PD event right after PD 1 instead of the residual oscillation of PD 1.The shape distortion produced by the new filter B doesn’t jeopardize the pulse polarity recognition.Moreover, adding filter B also leads to a decrease in the measured PD amplitude.Fig. 26 shows the pulses of one PD event simultaneously measured with and without filter B under 108 kVrms AC.By using filter B, the amplitude of measured PD signal has been decreased around 50%.In case the decreased signal is close to the trigger level, it is very likely that this PD signal will not trigger the acquisition.As a result, using filter B may influence the detection of small PDs.The cable system was next subjected to a switching-like impulse with Tf = 300 µs and Th = 2650 µs in test 4.The observed PDs are shown in Fig. 27.In this test, main discharges with positive polarities were detected during the front time near the impulse peak at 237.4 µs and 239.7 µs, indicated as MD1 and MD2.During the impulse tail time, more reverse discharges occurred.Fig. 28a shows the pulse shapes of the two main discharges MD1 and MD2.Fig. 
28b shows one reverse discharge RD6.In test 3, PD was measured with filter A under a superimposed voltage with Tf = 3 µs.In test 5 and 6, the same voltage values as in test 3 but with longer impulse front time of Tf = 93 µs were applied to the cable system.Figs. 29 and 30 show the measurement results with only filter A in test 5.Similar to test 3, PDs were initiated by the impulse starting from the first negative cycle, and lasted for around 22 s under AC voltage.No PD were detected during the impulse moment.In test 6, the same superimposed voltage was applied and PD were measured adding filter B.The measurement results are shown in Figs. 31 and 32.With filter B, main discharges were detected during the front time near the impulse peak at 91.6 µs, shown as MD1 and MD2 in Fig. 31.The pulses shapes of MD1, MD2 near the impulse peak and PD1 under AC are given in Fig. 32.By adding the filter B, the PD measuring system is able to measure both main discharges during the front time as well as the reverse discharges during the tail time, as long as the impulse voltage has a front time longer than the dead zone of the PD measuring system.On the other hand, small PD signals might be missing during the acquisition, due to attenuation produced by filter B and the trigger level.As a conclusion, whether to use filter B or not depends on the purpose of the test.If it is aimed to detect PDs during the entire impulse or superimposed transient moment, using filter B will help to decrease the dead zone.If it is more important to observe all the PD activities, removing filter B will increase the chance of the detection of small PD events.In this work, an unconventional PD measuring system was investigated to find a way to identify and measure PD in a HV cable system under laboratory conditions during impulse and superimposed AC voltage conditions.Two HFCTs were installed at the two ends of the cable joint with the same polarity.The signals captured by the HFCTs went through band-pass filters after which both were acquired by a digital oscilloscope.The PD data were then analyzed by the software PDflex and presented in PRPD pattern and TRPD pulses.The measurements under impulse and superimposed voltages show that, the deployed PD measuring system is able to identify and measure PD in the joint during the impulse conditions without and with AC superposition.Under these conditions the safety of equipment and human is ensured.The performance is achieved by using filters and transient voltage suppressors, and by post processing data techniques in PDflex.The installed HFCTs measure the signals internally to the cable joint with opposite polarities while externally to the joint with equal polarities.Such polarity recognition allows to identify PD from the cable joint, and discern PD from disturbances.The disturbance separation obtained by the polarity recognition and filters A and B is considered useful especially during the impulse test, since many disturbances enter the measuring system during the impulse application.The applied band-pass filters, spark gaps and transient voltage suppressors contribute to disturbance suppression and safety, which is a challenge in PD measurements under impulse.Filter A, equipped with a TVS and a spark gap, helps to protect the oscilloscope.By adding filter B, the extra band-pass filter helps to further suppress the disturbance and reduce the detection dead zone without detriment to the polarity recognition and having a good balance between pulse shape distortion and pulse 
attenuation. As a result, PD can be detected during the impulse front time. As an outcome, PD occurrences are presented with their pulse shapes and amplitudes during impulse and superimposed voltages, as well as under AC voltage before and after the impulses. The presented PD measuring system is instrumental for investigating the effect of transients on HV cable systems in laboratory conditions. The effect of transients on HV cable systems, and the usefulness of this knowledge for on-site testing, are to be investigated in future work. | A partial discharge (PD) measuring system has been deployed in order to identify and measure PD in a high voltage (HV) cable joint under impulse and superimposed voltages under laboratory conditions. The challenge is to enable the detection of PD during the impulse conditions. The method of measurement has been investigated by introducing an artificial defect in the cable joint in a controlled way to create conditions for partial discharges to occur. Next, the HV cable system is subjected to AC, impulse and superimposed voltages. Two high-frequency current transformers (HFCTs) installed at both ends of the cable joint were used to identify PD from the cable joint and to separate PD from disturbance. Transient voltage suppressors and spark gaps are applied to protect the measuring equipment. Band-pass filters with selected characteristics are applied to suppress transient disturbances and increase the chance of detecting PD during the impulse. PD signals are separated from transient disturbances during data post-processing and by means of pulse polarity analysis. The developed system enables the detection of the so-called main and reverse discharges, occurring respectively during the rise and tail time of the superimposed impulse. The measurement results obtained show the effectiveness of the presented PD measuring system for investigating the effects of voltage transients on an HV cable system in laboratory conditions. |
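The two key signal-processing steps of this measuring method, discriminating joint-internal from external pulses by comparing the first-peak polarities at HFCT 1 and HFCT 2, and estimating the apparent charge by integrating the first peak of the HFCT signal, can be sketched as follows. This is only an illustrative sketch, not the PDflex implementation: the detection threshold, the synthetic waveform and the record length are assumptions, while the 3 mV/mA HFCT gain and the 1.25 GS/s sampling rate are the values quoted in the text. As noted above, the charge estimate is only indicative on a short cable, where reflections distort the pulse shape.

```python
# Hedged sketch: polarity-based PD localization and first-peak apparent-charge estimate.
import numpy as np

HFCT_GAIN = 3e-3 / 1e-3   # 3 mV/mA -> 3 V/A transimpedance (value given in the text)

def first_peak_index(signal, threshold):
    """Index of the first sample whose magnitude exceeds the threshold.
    Assumes the record actually contains a pulse above the threshold."""
    return int(np.argmax(np.abs(signal) > threshold))

def classify_pd(sig_hfct1, sig_hfct2, threshold=5e-3):
    """Opposite first-peak polarities -> PD internal to the joint; equal -> external."""
    p1 = np.sign(sig_hfct1[first_peak_index(sig_hfct1, threshold)])
    p2 = np.sign(sig_hfct2[first_peak_index(sig_hfct2, threshold)])
    return "internal (joint)" if p1 != p2 else "external"

def apparent_charge(signal, fs, threshold=5e-3):
    """Integrate the first peak between its bounding zero crossings and divide by the
    HFCT transimpedance. Rough estimate only: on a short cable, propagation and
    reflections distort the pulse, so the result is indicative."""
    k = first_peak_index(signal, threshold)
    pol = np.sign(signal[k])
    a = k
    while a > 0 and np.sign(signal[a - 1]) == pol:
        a -= 1
    b = k
    while b < len(signal) - 1 and np.sign(signal[b + 1]) == pol:
        b += 1
    return float(np.sum(signal[a:b + 1])) / fs / HFCT_GAIN   # (V*s) / (V/A) = coulombs

# Example with a synthetic damped-oscillation pulse sampled at 1.25 GS/s; volts.
fs = 1.25e9
t = np.arange(0.0, 200e-9, 1.0 / fs)
pulse = 0.05 * np.exp(-t / 30e-9) * np.sin(2 * np.pi * 25e6 * t)

print(classify_pd(-pulse, pulse))                    # opposite polarities -> internal
print(f"first-peak charge ~ {apparent_charge(pulse, fs) * 1e12:.0f} pC")
```

Applied to real records from the two sensors, the same comparison reproduces the behaviour described for the calibration pulses: equal first-peak polarities for injections at the terminations and opposite polarities for pulses originating in the joint.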
548 | Airborne Wind Energy Systems: A review of the technologies | Advancement of societies, and in particular in their ability to sustain larger populations, are closely related to changes in the amount and type of energy available to satisfy human needs for nourishment and to perform work .Low access to energy is an aspect of poverty.Energy, and in particular electricity, is indeed crucial to provide adequate services such as water, food, healthcare, education, employment and communication.To date, the majority of energy consumed by our societies has come from fossil and nuclear fuels, which are now facing severe issues such as security of supply, economic affordability, environmental sustainability and disaster risks.To address these problems, major countries are enacting energy policies focused on the increase in the deployment of renewable energy technologies.In particular:Since 1992, to prevent the most severe impacts of climate change, the United Nations member states are committed to a drastic reduction in greenhouse gas emissions below the 1990 levels.In September 2009, both European Union and G8 leaders agreed that carbon dioxide emissions should be cut by 80% before 2050 .In the European Union, compulsory implementation of such a commitment is occurring via the Kyoto Protocol, which bounded 15 EU members to reduce their collective emissions by 8% in the 2008–2012 period, and the ‘Climate Energy Package’, which obliges EU to cut its own emissions by at least 20% by 2020.In this context, in the last decades there has been a fast growth and spread of renewable energy plants.Among them, wind generators are the most widespread type of intermittent renewable energy harvesters with their 369 GW of cumulative installed power at the end of 2014 .Wind capacity, i.e. 
total installed power, is keeping a positive trend with an increment of 51.4 GW in 2014.In the future, such a growth could decrease due to saturation of in-land windy areas that are suitable for installations.For this reason, current research programs are oriented to the improvement of power capacity per unit of land area.This translates to the global industrial trend of developing single wind turbines with increased nominal power that feature high-length blades and high-height turbine axis .In parallel, since the beginning of 2000s, industrial research is investing on offshore installations.In locations that are far enough from the coast, wind resources are generally greater than those on land, with the winds being stronger and more regular, allowing a more constant usage rate and accurate production planning, and providing more power available for conversions.The foreseen growth rate of offshore installations is extremely promising; according to current forecasts, the worldwide installed power is envisaged in the order of 80 GW within 2020 .In this framework, a completely new renewable energy sector, Airborne Wind Energy, emerged in the scientific community.AWE aims at capturing wind energy at significantly increased altitudes.Machines that harvest this kind of energy can be referred to as Airborne Wind Energy Systems.The high level and the persistence of the energy carried by high-altitude winds, that blow in the range of 200 m – 10 km from the ground surface, has attracted the attention of several research communities since the beginning of the eighties.The basic principle was introduced by the seminal work of Loyd in which he analyzed the maximum energy that can be theoretically extracted with AWESs based on tethered wings.During the nineties, the research on AWESs was practically abandoned; but in the last decade, the sector has experienced an extremely rapid acceleration.Several companies have entered the business of high-altitude wind energy, registering hundreds of patents and developing a number of prototypes and demonstrators.Several research teams all over the world are currently working on different aspects of the technology including control, electronics and mechanical design.This paper provides an overview of the different AWES concepts focusing on devices that have been practically demonstrated with prototypes.The paper is structured as follows.Section 2 provides a brief description of the energy resource of high altitude winds.Section 3 provides a unified and comprehensive classification of different AWES concepts, which tries to merge previously proposed taxonomies.In Sections 4 and 5, an up to date overview of different devices and concepts is provided.Section 6 explains why AWE is so attractive thanks to some simple and well-known models.Finally, Section 7 presents some key techno-economic issues basing on the state of the art and trends of academic and private research.Differently from other previously published reviews, this paper deals with aspects that concern architectural choices and mechanical design of AWESs.We made our best in collecting comprehensive information from the literature, patents and also by direct contacts with some of the major industrial and academic actors.In the literature, the acronym AWE is usually employed to designate the high-altitude wind energy resource as well as the technological sector.High-altitude winds have been studied since decades by meteorologists, climatologists and by researchers in the field of environmental science even 
though many questions are still unsolved .The first work aimed at evaluating the potential of AWE as a renewable energy resource has been presented by Archer and Caldeira .Their paper introduces a study that assesses a huge worldwide availability of kinetic energy of wind at altitudes between 0.5 km and 12 km above the ground, providing clear geographical distribution and persistency maps of wind power density at different ranges of altitude.This preliminary analysis does not take into account the consequences on wind and climate of a possible extraction of kinetic energy from winds.However, the conclusions of these investigations already raised the attention of many researchers and engineers suggesting great promises for technologies able to harvest energy from high altitude winds.More in depth studies have been conducted employing complex climate models, which predict consequences associated with the introduction of wind energy harvesters, that exerts distributed drag forces against wind flows.Marvel et al. estimate a maximum of 400 TW and 1800 TW of kinetic power that could be extracted from winds that blow, respectively, near-surface and through the whole atmospheric layer.Even if severe/undesirable changes could affect the global climate in the case of such a massive extraction, the authors show that the extraction of ‘only’ 18 TW does not produce significant effects at global scale.This means that, from the geophysical point of view, very large quantity of power can be extracted from wind at different altitudes.A more skeptical view on high altitude winds is provided in Miller et al. who evaluated in 7.5 TW the maximum sustainable global power extraction.But their analysis is solely focused on jet stream winds.Despite the large variability and the level of uncertainty of these results and forecasts, it is possible to conclude that an important share of the worldwide primary energy could be potentially extracted from high altitude winds.This makes it possible to envisage great business and research opportunities for the next years in the field of Airborne Wind Energy.In this paper, the term AWESs is used to identify the whole electro-mechanical machines that transform the kinetic energy of wind into electrical energy.AWESs are generally made of two main components, a ground system and at least one aircraft that are mechanically connected by ropes.Among the different AWES concepts, we can distinguish Ground-Gen systems in which the conversion of mechanical energy into electrical energy takes place on the ground and Fly-Gen systems in which such conversion is done on the aircraft .In a Ground-Gen AWES, electrical energy is produced on the ground by mechanical work done by traction force, transmitted from the aircraft to the ground system through one or more ropes, which produce the motion of an electrical generator.Among GG-AWESs we can distinguish between fixed-ground-station devices, where the ground station is fixed to the ground and moving-ground-station systems, where the ground station is a moving vehicle.In a Fly-Gen AWES, electrical energy is produced on the aircraft and it is transmitted to the ground via a special rope which carries electrical cables.In this case, electrical energy conversion is generally achieved using wind turbines.FG-AWESs produce electric power continuously while in operation except during take-off and landing maneuvers in which energy is consumed.Among FG-AWESs it is possible to find crosswind systems and non-crosswind systems depending on how they 
generate energy.In Ground-Generator Airborne Wind Energy Systems electrical energy is produced exploiting aerodynamic forces that are transmitted from the aircraft to the ground through ropes.As previously anticipated, GG-AWESs can be distinguished in devices with fixed or moving-ground-station.Fixed-ground-station GG-AWES are among the most exhaustively studied by private companies and academic research laboratories.Energy conversion is achieved with a two-phase cycle composed by a generation phase, in which electrical energy is produced, and a recovery phase, in which a smaller amount of energy is consumed.In these systems, the ropes, which are subjected to traction forces, are wound on winches that, in turn, are connected to motor-generators axes.During the generation phase, the aircraft is driven in a way to produce a lift force and consequently a traction force on the ropes that induce the rotation of the electrical generators.For the generation phase, the most used mode of flight is the crosswind flight with circular or the so-called eight-shaped paths.As compared to a non-crosswind flight, this mode induces a stronger apparent wind on the aircraft that increases the pulling force acting on the rope.In the recovery phase motors rewind the ropes bringing the aircraft back to its original position from the ground.In order to have a positive balance, the net energy produced in the generation phase has to be larger than the energy spent in the recovery phase.This is guaranteed by a control system that adjusts the aerodynamic characteristics of the aircraft and/or controls its flight path in a way to maximize the energy produced in the generation phase and to minimize the energy consumed in the recovery phase.Pumping kite generators present a highly discontinuous power output, with long alternating time-periods of energy generation and consumption.Such an unattractive feature makes it necessary to resort to electrical rectification means like batteries or large capacitors.The deployment of multiple AWES in large high-altitude wind energy farms could significantly reduce the size of electrical storage needed.Moving-ground-station GG-AWES are generally more complex systems that aim at providing an always positive power flow which makes it possible to simplify their connection to the grid.There are different concepts of moving-ground-station GG-AWESs but no working prototype has been developed up to date and only one prototype is currently under development.Differently from the pumping generator, for moving-ground-station systems, the rope winding and unwinding is not producing/consuming significant power but is eventually used only to control the aircraft trajectory.The generation takes place thanks to the traction force of ropes that induces the rotation of a generator that exploits the ground station movement rather than the rope winding mechanism.Basically, there are two kinds of moving-ground-station GG-AWES:‘Vertical axis generator’ where ground stations are fixed on the periphery of the rotor of a large electric generator with vertical axis.In this case, the aircraft forces make the ground stations rotate together with the rotor, which in turn transmits torque to the generator.‘Rail generators’ or open loop rail) where ground stations are integrated on rail vehicles and electric energy is generated from vehicle motion.In these systems, energy generation looks like a reverse operation of an electric train.The following subsections provide an overview of the most relevant prototypes of 
GG AWESs under development in the industry and the academy.In GG systems the aircraft transmits mechanical power to the ground by converting wind aerodynamic forces into rope tensile forces.The different concepts that were prototyped are listed in Fig. 4; examples of aircraft of GG systems that are currently under development are presented in Fig. 5.They exploit aerodynamic lift forces generated by the wind on their surfaces/wings.The aircraft is connected to the ground by at least one power-rope that is responsible for transmitting the lift force to the ground station.The flight trajectory can be controlled by means of on-board actuators, or with a control pod, or by regulating the tension of the same power-ropes, or with thinner control-ropes.There are also two GG concepts that are worth mentioning: one uses parachutes which exploit aerodynamics drag forces , the other uses rotating aerostats which exploit the Magnus effect .The most important aircraft used for GG systems are here listed:Leading Edge Inflatable kites are single layer kites whose flexural stiffness is enhanced by inflatable structures on the leading edge.Mainly two kinds of LEI kites are used in AWESs:Supported Leading Edge kites are LEI kites with at least one bridle which supports the leading edge close to its central part.In comparison with C-kites, the traction force of the central bridles makes the wing flat in its central region and this is claimed to increase the wing aerodynamic efficiency.C-kites, which are generally controlled by four main bridles directly attached to extreme lateral points of the kite edges.In pumping generators, the C-kite is held with either one, two or three ropes.In generators with one rope, the rope is connected to both the leading edge bridles, while trailing edge bridles are controlled by a ‘control pod’ attached to the rope a few meters below the kite.The micro-winches inside the control pod are used to steer the kite and control the angle of attack.In case of two ropes, left bridles converge in one rope and right bridles converge in the second rope.The angle of incidence is fixed and the kite steers due to the difference in the ropes tension.In case of three ropes, there is one rope for each trailing edge bridle and one rope connected to the leading edge bridles.In this case, kite steering and angle of attack can be controlled from the ground.The stiffened tube-like structure of LEI kites is especially useful for take-off and landing maneuvers when the wing is not yet supported by wind pressure.The ease of handling is very appreciated also during small-scale prototyping and subsystem testing.However LEI kites have severe scalability issues as the tube diameter needs to be oversized in case of large wings.Foil kites are derived from parafoils .These double-layer kites are made of canopy cells which run from the leading edge to the trailing edge.Cells are open on the leading edge in a way that the air inflates all cells during the flight and gives the kite the necessary stiffness.Bridles are grouped in different lines, frequently three: one central and two laterals.With respect to LEI kites, foil wings have a better aerodynamic efficiency despite the higher number of bridles and can be one order of magnitude larger in size.Delta kites are similar to hang glider wings.They are made by a single layer of fabric material reinforced by a rigid frame.Compared with LEI or foil kites, this kind of aircraft has a better aerodynamic efficiency which in turn results in a higher efficiency of wind 
power extraction.On the other hand, their rigid frame has to resist to mechanical bending stresses which, in case of high aerodynamic forces, make it necessary to use thick and strong spars which increase the aircraft weight, cost and minimum take-off wind speed.Durability for fabric wings such as LEI, foil and delta kites, is an issue.Performance is compromised soon and lifetime is usually around several hundred hours .Gliders can also be used as GG aircraft.Like delta kites, their wings are subject to bending moment during the tethered flight.Gliders, and more generally rigid wings, have excellent aerodynamic performance, although they are heavier and more expensive.Lifetime with regular maintenance is several decades.Swept rigid wings are gliders without fuselage and tail control surfaces.Flight stability is most likely achieved thanks to the bridle system and the sweep angle.Semi-rigid wings are also under investigation by the Italian company Kitegen Research.They are composed of multiple short rigid modules that are hinged to each other.The resulting structure is lighter than straight rigid wings and more aerodynamically efficient and durable than fabric kites.Special design kites: Kiteplanes and Tensairity Kites are projects developed by TUDelft and EMPA, that aim at increasing the aerodynamic efficiency of arch kites without using rigid spars.This subsection provides a list of fixed-ground-station GG AWES which are summarized in Figs. 7 and 10.The Italian KiteGen Research was one of the first companies to test a prototype of Ground-Gen AWES .KGR technology is based on a C-Kite integrating on board electronics with sensor and is controlled by two power-ropes from a control station on the ground .The first prototype, named KSU1 , was successfully demonstrated in 2006.After a few years of tests, the company focused on the development of a new generator, named ‘KiteGen Stem’, with a nominal power of 3 MW .In this system, the ropes are wound on special winches and are driven by a pulley system through a 20 m flexible rod, called ‘stem’, to an arch-kite or a semi-rigid wing.The stem is linked to the top of the control station through a pivot joint with horizontal axis.The most important functions of the stem are: supporting and holding the kite and damping peak forces in the rope that arise during wind-gusts.The entire control station can make azimuthal rotations so the stem has two degrees of freedom relative to the ground.The ‘Stem’ concept was first patented in 2008 and is now used by more and more companies and universities.At the beginning of the take-off maneuvers, the kite is hanged upside down at the end of the stem.Once the kite has taken off, the production phase starts: the automatic control drives the kite acting on the two ropes, the kite makes a crosswind flight with ‘eight shape’ paths; at the same time ropes are unwound causing the winches to rotate; the motor-generators transform mechanical power into electric power.The company aims at retracting the cables with minimum energy consumption thanks to a special maneuver called ‘side-slip’ or ‘flagging’ .Side-slip is a different flight mode where the kite aerodynamic lift force is cleared by rewinding at first one rope before the other, which makes the kite lose lift and ‘stall’ and then, once fully stalled, both ropes are rewound at the same speed and the kite precipitates flying sideways.This maneuver can be done with flexible foil kites or semi-rigid wings.In this phase, the power absorbed by motor-generators is given by 
KGR has patented and is developing special aerodynamic ropes in order to increase their endurance and to improve system performance. KGR also plans to use the KiteGen Stem technology to produce an offshore AWES, since offshore AWESs are very promising. Another Italian company, Kitenergy, was founded by a former KiteGen partner and is developing a similar concept by controlling a foil kite with two ropes. The company's prototype features 60 kW of rated power. Kitenergy also filed a different GG-AWES patent that consists of a system based on a single motor-generator, which controls the winding and unwinding of two cables, and another actuator that introduces a differential control action on the same cables. Another prototype, developed by its co-founder Lorenzo Fagiano, achieved 4 h of consecutive autonomous flight with no power production at the University of California at Santa Barbara in 2012. The German company SkySails GmbH is developing a wind propulsion system for cargo vessels based on kites. A few years ago a new division of the company, 'SkySails Power', was created to develop Ground-Gen AWESs based on the technology used in the SkySails vessel propulsion system. Two products are under development: a mobile AWES with a capacity between 250 kW and 1 MW, and an offshore AWES with a capacity from 1 to 3.5 MW. SkySails' AWES is based on a foil kite controlled with one rope and a control pod which adjusts the lengths of the kite bridles to steer the kite and change its angle of attack. Control pod power and communication with the ground station are provided via electric cables embedded in the rope. SkySails also has a patented launch and recovery system designed for packing the kite into a storage compartment. It consists of a telescopic mast with a special device on its top that is able to grab, hold and release the central point of the kite leading edge. When the system is off, the mast is compacted in the storage compartment with the kite deflated. At the beginning of the launching operation, the mast extends vertically, bringing the deflated kite some meters above the ground. The kite is then inflated to obtain the appropriate shape and stiffness for the production phase. Kite take-off exploits only the natural wind lift force on the kite: the system at the top of the mast releases the kite leading edge, the pod starts to control the flight, and the winch releases the rope, letting the kite reach the operating altitude. While the energy production phase is similar to that of the KGR generator, SkySails has a different recovery phase. Specifically, SkySails uses high-speed winching during reel-in while the kite is kept at the edge of the wind window. The kite is then winched directly against the wind without changing the kite's angle of attack. Though it might seem counter-intuitive at first, this kind of recovery phase has proven to be competitive. The Swiss company TwingTec is developing a 100 kW GG-AWES. After having tried several concepts, including soft wings and rigid wings, the team is now tackling the problem of automating take-off and landing with an innovative concept: a glider with
embedded rotors whose rotational axes are perpendicular to the wing plane. The rotors are used during take-off and landing. The company plans to house the generator and power conversion hardware inside a standard 20-foot shipping container in order to easily target off-grid and remote markets. The AWES will supply continuous and reliable electrical power thanks to its integration with conventional diesel generators. At Delft University of Technology, the first research in Airborne Wind Energy was started by the former astronaut, Professor Ockels, in 1996. A dedicated research group was initiated by Ockels in 2004 with the aim of advancing the technology to the prototype stage. Recently, Delft University of Technology and Karlsruhe University of Applied Sciences have initiated a joint project to continue the development and testing of a mobile 20 kW experimental pumping kite generator. A main objective of this project is to improve the reliability and robustness of the technology and to demonstrate, in the coming months, continuous operation for 24 h. At present, they use the third version of a special-design LEI kite, co-developed with Genetrix/Martial Camblong, with a wing surface area of 25 m2. Together with an automatic launch setup, the wing demonstrated fully automatic operation of their 20 kW system in 2012. Like SkySails' system, this prototype is based on a single tether and an airborne control pod, but the angle of attack is also controlled for powering and depowering the wing during the production and recovery phases, respectively. An automatic launch and retrieval system for 100 m2 LEI kites is under development. In the past, the research group tested several kinds of wings, such as foil kites and kiteplanes. TU Delft also tested an alternative device for controlling the kite: a cart-and-rail system attached to the tips of a ram-air wing and used to shift the attachment point of the two bridle lines. With that system, the wing could be steered and depowered with a minimal investment of energy. Ultimately, the concept was too complex and too sensitive to deviations from nominal operation. The first company to develop a pumping glider generator is the Dutch Ampyx Power. After several prototypes, they are currently developing and testing two 5.5 m 'PowerPlanes', the AP-2A1 and the AP-2A2. These are two officially registered aircraft that are automatically controlled with state-of-the-art avionics. They are constructed with a carbon fiber body and a carbon backbone truss which houses the onboard electronics with sensors and actuators. Onboard actuators drive a rudder, an elevator and four flaperons. One rope connects the glider to a single winch in the ground station. Ampyx Power is currently one of the few companies that has already developed an AWES able to automatically perform the full sequence of glider take-off, pumping cycles and landing. During the take-off maneuver the glider lies on the ground, facing the ground station at a distance of some meters. As the winch starts exerting traction force on the rope, the glider accelerates along the ground and, as soon as the lift forces exceed the weight forces, the glider takes off. They have also installed a catapult for take-off, and the glider has a propulsion system to climb to operating altitude. The flight is fully autonomous during normal operations even though, for safety reasons, it can occasionally be controlled wirelessly from the ground thanks to a backup autopilot. The pumping cycles are similar to those of a kite. Glider landing is similar to that of an airplane, and the system is being equipped with an arresting line so as to stop the glider in the right position for a new take-off.
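The winch-launch condition mentioned above, namely that lift must exceed weight during the ground roll, can be sketched with a simple lift balance; the mass, wing area and lift coefficient below are assumptions for illustration, not Ampyx figures.

import math

# Minimal sketch (assumed values): ground speed at which aerodynamic lift equals
# weight for a small tethered glider, i.e. 0.5 * rho * V^2 * S * CL = m * g.
rho = 1.225   # air density [kg/m^3]
S = 3.0       # wing area [m^2] (hypothetical)
CL = 1.0      # lift coefficient during the ground roll (hypothetical)
m = 35.0      # aircraft mass [kg] (hypothetical)
g = 9.81      # gravitational acceleration [m/s^2]

v_liftoff = math.sqrt(2 * m * g / (rho * S * CL))
print(f"lift-off speed ~ {v_liftoff:.1f} m/s")  # ~13.7 m/s with these numbers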
During a test campaign in November 2012, the system demonstrated an average power production of 6 kW with peaks of over 15 kW. Ampyx has started the design of its first commercial product: a 35 m wingspan AP-4 PowerPlane with a 'wind turbine equivalent' power of 2 MW. The German company EnerKite has developed a portable pumping kite generator with a rated continuous power of 30 kW. The ground station is installed on a truck through a pivotal joint which allows azimuthal rotations. The EnerKite demonstrator mainly uses a foil kite, but a delta kite and a swept rigid wing are also under investigation and testing. The aircraft does not have on-board sensors and is controlled from the ground with three ropes according to the scheme of Fig. 4d. EnerKite is now developing an autonomous launch and landing system for semi-rigid wings. The company plans to produce a 100 kW and a 500 kW system. The US company Windlift has a concept similar to that of EnerKite. Their 12 kW prototype uses SLE kites. They aim to sell their product to the military and to off-grid locations. The AWE community is constantly growing, and for every company that goes out of business there are a few that are born. Some startup companies are worth mentioning. e-Kite was founded in 2013 in the Netherlands and developed a 50 kW GG-AWES based on a direct-drive generator. The company is now building a two-rope rigid wing that will fly at low altitude. Enevate is a Dutch four-person startup that is mainly focused on bringing the TU Delft GG-AWES to the next step towards a commercial product. Kitemill, in Norway, started the development of a GG-AWES. The company switched early on to a one-cable rigid wing system with on-board actuators after having faced controllability and durability issues with soft materials. eWind Solutions is a US company that is developing an unconventional, low-altitude, rigid wing GG-AWES. KU Leuven has been actively doing research on AWESs since 2006. After significant theoretical contributions, the team developed a test bench to launch a tethered glider with a novel procedure. Before take-off, the glider is held at the end of a rotating arm. When the arm starts rotating, the glider is brought to flying speed and the tether is released, allowing the glider to gain altitude. They are currently developing a larger experimental test set-up, 2 m long with a 10 kW winch. SwissKitePower was a collaborative research and development project started in Switzerland in 2009. It involved four laboratories of different Swiss universities: FHNW, EMPA, ETH and EPFL. The first prototypes, tested between 2009 and 2011, were based on a C-kite controlled by one rope and a control pod. The initial system worked according to the scheme of Fig. 4b, similarly to the KitePower and SkySails prototypes.
In 2012, SwissKitePower developed a new ground station with three winches that can be used to test kites with one, two or three lines. They also tested SLE kites and Tensairity kites. The project ended in 2013 and since then FHNW has been working in collaboration with the company TwingTec. At Langley Research Center, the US space agency NASA conducted a study on wind energy harvesting from airborne platforms, after which they developed an AWES demonstrator based on a kite controlled by two ropes, with a vision-based system and sensors located on the ground. In addition to the main prototypes listed above, several other systems have been built: wind tunnel tests of small-scale non-crosswind generation and outdoor crosswind generation tests with an SLE kite at GIPSA-lab/CNRS, University of Grenoble; the kite control project of CCNR at Sussex University, UK; the EHAWK project of the Department of Mechanical Engineering of Rowan University; and the kite-powered water pump of Worcester Polytechnic Institute. In addition to pumping systems, a number of AWES concepts with a moving ground station have been proposed. Their main advantage is the ability to produce energy continuously or nearly continuously. However, only a few companies are working on AWESs with moving ground stations, and there are more patents and studies than prototypes under development. This subsection provides a list of moving-ground-station GG-AWESs, which are summarized in Figs. 7 and 10. The first moving-ground-station architecture, based on a vertical-axis generator, was proposed back in 2004 by Sequoia Automation and acquired by KGR. This AWES concept is based on the architecture described in Fig. 3a. During operation, lift forces are transmitted to a rotating frame, inducing a torque around the main vertical axis. Torque and rotation are converted into electricity by the electric generator. This system can be seen as a vertical-axis wind turbine driven by forces which come from tethered aircraft. There is no prototype under development, but the concept has been studied in a simulation showing that 100 kites with 500 m2 area each could generate 1000 MW of average power in a wind of 12 m/s. The considered generator would have a 1500 m radius, occupying a territory about 50 times smaller and costing about 30 times less than a farm of wind turbines with the same nominal power. An alternative system, based on ground stations that move on closed track circuits, is proposed by KGR and by the German company NTS Energie und Transportsysteme. Starting from September 2011, NTS tested a prototype where 4-rope kites are controlled by a vehicle which moves on a 400 m flat-bed straight railway track. They are able to produce up to 1 kW per m2 of wing area and have tested kites of up to 40 m2. The final product should have a closed-loop railway on which several vehicles run independently. Another rail concept is proposed by Kitenergy, based on ideas published in 2004 in the Drachen Foundation journal. The concept is based on a straight linear rail fixed to the ground with a pivotal joint. The rail direction is then adjusted perpendicular to the main direction of the wind. The ground station of the system is mounted on a wheeled vehicle which moves along the straight rail, under the kite traction forces, back and forth from one side to the other. The power is extracted by electromagnetic rotational generators on the wheels of the vehicle or by linear electromagnetic generators on the rail. The power production is not fully continuous because, during the inversion of the vehicle direction, the power production not only decreases to zero but could also be slightly negative. Nevertheless, the kite inversion maneuver could theoretically be performed without the need for power consumption.
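As a rough plausibility check of the carousel figures quoted above, the classic crosswind power limit introduced in Section 6 can be evaluated per kite; the lift coefficient and equivalent lift-to-drag ratio below are assumed illustrative values, not parameters of the cited simulation.

# Order-of-magnitude check (assumed coefficients) of the carousel claim above,
# using the crosswind power limit P = (2/27) * rho * A * CL * (CL/CD)^2 * v^3.
rho = 1.225        # air density [kg/m^3]
A = 500.0          # wing area per kite [m^2]
CL = 1.0           # lift coefficient (assumed)
E = 10.0           # equivalent lift-to-drag ratio CL/CD (assumed)
v = 12.0           # wind speed [m/s]
n_kites = 100

p_kite = (2.0 / 27.0) * rho * A * CL * E**2 * v**3  # ideal power per kite [W]
print(f"per kite ~ {p_kite / 1e6:.1f} MW, farm ~ {n_kites * p_kite / 1e6:.0f} MW")
# ~7.8 MW per kite and ~780 MW in total: the same order of magnitude as the
# 1000 MW of average power quoted for the simulated 100-kite carousel.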
Although it cannot be considered a moving-ground-station device, it is important to mention that the first concept for a continuous-energy-production AWES was the Laddermill, envisaged by the former astronaut Professor Ockels in 1996. In Fly-Gen AWESs, electric energy is produced onboard the aircraft during its flight and is transmitted to the ground through a special rope which integrates electric cables. Electrical energy conversion in FG-AWESs is achieved using one or more specially designed wind turbines. A general classification of these systems is provided in this section. Besides the general classification between crosswind and non-crosswind mode proposed in Fig. 10, FG-AWESs can also be distinguished based on their flying principles, which are: wing lift, achieved with the tethered flight of special gliders or frames with multiple wings; buoyancy and static lift, achieved with aerodynamically shaped aerostats filled with lighter-than-air gas; and rotor thrust, achieved with the same turbines used for electrical power generation. The aircraft in Fig. 8a and 8b fly crosswind and harvest the relative wind, while those in Fig. 8c and 8d fly non-crosswind and harvest the absolute wind. There is also one FG concept that aims at exploiting high-altitude wind energy without using aerodynamic lift: it uses instead a rotating aerostat which exploits the Magnus effect. This subsection provides a list of FG AWESs, which are summarized in Figs. 9 and 10. One of the oldest and best-known ideas for exploiting wind energy using turbines on a kite belongs to Loyd, who calculated that wind turbines installed on a crosswind-flying kite could generate up to 5 times the power produced by equivalent turbines installed on the ground. He also patented his idea in 1978. Loyd's concept foresees a reciprocating wind-driven apparatus, similar to a multi-propeller plane, with a plurality of ropes linking the aircraft to a ground station. About twenty-five years after Loyd's work, Makani Power Inc.
has started the development of its Airborne Wind Turbine (AWT) prototypes. In nine years, Makani tested several AWES concepts, including Ground-Gen, single rope, multiple ropes, a movable ground station on rails, soft wings and rigid wings. During these years, the company filed several patents in which an electric and modern version of Loyd's idea has been enriched with a tether tension sensor, an aerodynamic cable, and a new idea of bimodal flight invented to solve take-off and landing issues. In bimodal flight the AWT takes off with the wing plane in a vertical position, driven by propeller thrust. This flight mode is similar to quadcopter flight, and the rotors on the AWT are used as motors. Once the full rope length has been unwound, the AWT changes flight mode, becoming a tethered airplane. In this second flight mode a circular flight path is powered by the wind itself and the rotors on the AWT are used as generators to convert power from the wind. During this phase the cable length is fixed. In order to land, a further change of flight mode is performed, and the AWT lands as a quadcopter. Makani developed and tested its 8 m, 20 kW demonstrator, called 'Wing 7', which showed the capability of fully automatic operation and power production. After these results, in early 2013 Makani was acquired by Google. Makani is currently developing a 600 kW prototype, the 'M600'. The M600 AWT has eight turbines, each with five propeller blades, and has a wingspan of 28 m. The prototype is now undergoing testing. After the M600, Makani plans to produce an offshore commercial version of the AWT with a nominal power of 5 MW, featuring 6 turbines and a wingspan of 65 m. Founded in 2008, Joby Energy Inc. is another US company developing a FG-AWES. The main difference between Joby and Makani is that the tethered airborne vehicle is a multi-frame structure with embedded airfoils. Turbines are installed at the joints of the frame. In Joby's concept, the system could be adapted to be assembled from modular components, constructed from multiple similar frames with turbines. The power generation method and the take-off and landing maneuvers are similar to those of the Makani concept. Joby also patented an aerodynamic rope for its system. In 2009 and 2010, Joby tested different small-scale prototypes. Another project based on flying wind turbines in a stationary position has been developed by Altaeros Energies, a Massachusetts-based business led by MIT and Harvard alumni. In this case, instead of using wing lift to fly, they use a ring-shaped aerostat with a wind turbine installed in its interior. The whole generator is lighter than air, so the take-off and landing maneuvers are simplified, and the only remaining issue is the stabilization of the generator in the right position relative to the wind. The aerostat is aerodynamically shaped so that the absolute wind generates lift that, together with the buoyancy force, helps to keep a high angle of altitude. After their energy production tests in 2012, Altaeros is additionally working on multiple-rotor generators with different lighter-than-air craft configurations. Sky Windpower Inc.
proposed a different kind of tethered craft called the 'Flying Electric Generator' (FEG), which is similar to a large quadrotor with at least three identical rotors mounted on an airframe that is linked to a ground station by a rope with inner electrical cables. Their concept was the first AWES to be tested, in 1986 at the University of Sydney. Take-off and landing maneuvers are similar to those of Makani's and Joby's generators, but FEG operation as a generator is different. Once it reaches the operational altitude, the frame is inclined at an adjustable, controllable angle relative to the wind and the rotors switch their functioning mode from motor to generator. In this inclined position, the rotors receive from their lower side a projection of the natural wind parallel to their axes. This projection of the wind allows autorotation, thus generating both electricity and thrust. Electricity flows to and from the FEG through the cable. Sky Windpower tested two FEG prototypes. They claimed that a typical minimum wind speed for autorotation and energy generation is around 10 m/s at an operational altitude of 4600 m. Unfortunately, the company recently went out of business. One of the most important reasons why AWESs are so attractive is their theoretical capability of achieving the megawatt scale with a single plant. For example, a plant of 34 MW has been envisaged using a tethered Airbus A380, and many other publications present theoretical analyses of MW-scale AWESs. This scalability is rare among renewable energies and is key to successful commercial development. With reference to the extraction principles explained in Sections 4 and 5, this section gives an introduction to the modelling of crosswind flight, the most used flight mode in AWE. Modelling the principle of crosswind flight is the first necessary step towards understanding AWESs and their potential. A well-known basic model is explained for the cases of Ground-Gen and Fly-Gen crosswind AWESs. Only crosswind generation is analyzed because it has been demonstrated that it can provide a power one or two orders of magnitude higher than non-crosswind generation. AWES concepts that exploit crosswind power therefore have a strong competitive advantage over non-crosswind concepts in terms of available power and, therefore, in the economics of the whole system. This section explains how to compute the power output of a fixed-ground-station crosswind GG-AWES during the reel-out phase. As already introduced in Section 4, in GG-AWESs the recovery phase is an important factor in the computation of the average power output but, for simplicity, it is not considered in the following model. The expression of the maximum power, P, for a crosswind Ground-Gen AWES can be derived by analytically optimizing the reel-out speed, following the derivation and integrations in the cited literature. The hypotheses are: high equivalent aerodynamic efficiency, steady-state crosswind flight at zero azimuth angle from the wind direction, and inertia and gravity loads negligible with respect to the aerodynamic forces. This section explains how to compute the power output of a crosswind FG-AWES during the generation phase. As already introduced in Section 5, unlike crosswind GG-AWESs, crosswind FG-AWESs have the advantage of being able to produce power without the need for a duty cycle with a recovery phase. Similarly to Section 6.1, the expression of the available crosswind power, P, for a Fly-Gen AWES can be derived by analytically optimizing the drag of the flying generators, following the corresponding derivation and integrations in the cited literature.
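The equations referred to above are not reproduced in this text. As a sketch under the listed hypotheses, the widely used crosswind power limits that go back to Loyd's analysis can be written as follows, where rho is the air density, A the wing area, C_L the lift coefficient, C_{D,eq} the equivalent drag coefficient, v_w the wind speed and beta the elevation angle of the tether; the notation is generic rather than the paper's own, and the tether-drag term is a commonly used approximation stated here as an assumption.

\begin{align}
  P_{\mathrm{GG}} &\simeq \frac{2}{27}\,\rho\,A\,C_L\left(\frac{C_L}{C_{D,\mathrm{eq}}}\right)^{2} v_w^{3}\cos^{3}\beta, \\
  P_{\mathrm{FG}} &\simeq \frac{2}{27}\,\rho\,A\,C_L\left(\frac{C_L}{C_{D,\mathrm{eq}}}\right)^{2} v_w^{3}\cos^{3}\beta, \\
  C_{D,\mathrm{eq}} &\approx C_{D,\mathrm{wing}} + \frac{C_{\perp}\,d_t\,L_t}{4\,A},
\end{align}

where d_t and L_t are the tether diameter and length and C_perp its cross-flow drag coefficient. In the Ground-Gen case the optimum corresponds to a reel-out speed of one third of the wind speed component along the tether, while in the Fly-Gen case it corresponds to an on-board harvesting drag equal to half of the intrinsic drag; both optima lead to the same theoretical limit, which is one reason why the two generation principles are usually regarded as equivalent in terms of available power.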
In this section, we provide a brief discussion of some techno-economic issues and topics that are considered relevant to the current development, trends, and future roadmap of AWESs. In all AWESs, increasing the flying mass decreases the tension of the cables. Since Ground-Gen systems rely on cable tension to generate electricity, a higher mass of the aircraft and/or cables decreases the energy production and should not be neglected when modelling. On the contrary, increasing the flying mass in Fly-Gen systems does not affect the energy production, even though it still reduces the tension of the cable. Indeed, as a first approximation, the basic equations of Fly-Gen power production do not change if the aircraft/cable mass is included, and this is also supported by experimental data. A question faced by many companies and research groups is whether rigid wings are better or worse than soft wings. On the plus side for soft wings are crash-free tests and lower weight, because of the inherent tensile structure. Conversely, rigid wings have better aerodynamic efficiency and do not share the durability issues of soft wings mentioned in Section 4. It is unclear whether one of the two solutions will prove to be better than the other, but a trend is clearly visible in the AWE community: even though a lot of academic research is being carried out on soft wings, more and more companies are switching from soft to rigid wings. Starting and stopping energy production require special take-off and landing maneuvers, as explained in Sections 4 and 5. These are the most difficult operations to automate and require a lot of research in private companies and academic laboratories. Another interesting question is what the optimal flight altitude is, i.e. the optimal cable length and elevation angle that maximize the power output. Increasing the altitude makes it possible to reach more powerful winds but, at the same time, increasing the cable length or the elevation angle reduces the power output according to the crosswind power expressions above.
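The trade-off just posed can be sketched numerically as follows before turning to the published results; the wind-shear exponent, reference wind speed and tether length are assumed values chosen only for illustration.

import math

# Minimal sketch (assumed parameters): stronger wind aloft (power-law shear)
# versus the cos^3(beta) penalty of a larger elevation angle at fixed tether length.
v_ref, z_ref, shear_exp = 8.0, 10.0, 0.15  # reference wind [m/s], height [m], exponent
L = 600.0                                   # tether length [m] (assumed)

def relative_power(z):
    beta = math.asin(min(z / L, 1.0))       # elevation angle needed to reach altitude z
    v = v_ref * (z / z_ref) ** shear_exp    # power-law wind profile
    return v**3 * math.cos(beta)**3         # proportional to the harvested power

for z in (100, 200, 300, 400, 500):
    print(z, round(relative_power(z) / relative_power(100), 2))
# With these assumptions the gain from stronger wind is offset by the cos^3 term
# above a few hundred metres, consistent with the discussion that follows.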
Considering a standard wind shear profile, the optimal flight altitude is found to be the minimum that is practically achievable. However, the results change greatly depending on the hypotheses and, for example, a reduction in cable drag might lead to optimal flight altitudes around 1000 meters. More detailed and location-specific analyses could therefore be useful to define an optimal flight altitude. Nowadays, many AWE companies are aiming at exploiting low-altitude winds, with the minimum flight altitude set by safety concerns. Only a few companies and academic institutions are still trying to reach high altitudes. A variation of the angle of attack can be induced by a change in the tether sag or in the velocity triangle. As for the tether sag, it is possible to compute the variation of the nominal angle of attack thanks to the model provided in the literature. Depending on the values of the design parameters, the model would give a numerical value between 7 deg and 11 deg for a large-scale AWES. As regards the velocity triangle, assuming a controlled constant tether force and computing the effect of a different absolute wind speed on the angle of attack with a simple velocity triangle, a variation of about 3.5 deg or 4 deg can reasonably be obtained. Such variations of the angle of attack can substantially decrease the power output or even make flight impossible. For example, using the values of the aerodynamic coefficients for an airfoil specifically optimized for AWESs, a steady-state variation in the angle of attack of just ±2 deg can lead to a decrease in power output of between 5% and 42% with respect to the optimal angle of attack. Real-time control of the angle of attack in current AWES prototypes from Ampyx and Makani limits the angle-of-attack variation to ±2 deg in a real flight time history.
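Because the crosswind power limit scales with C_L^3/C_D^2, a sensitivity of the kind quoted above can be evaluated directly from tabulated airfoil polars; the polar points in the following sketch are invented for illustration and do not reproduce the optimized AWES airfoil cited in the text.

# Generic sketch: relative power versus angle of attack from a (hypothetical) polar.
polar = {            # angle of attack [deg] -> (CL, CD); invented illustrative values
    4: (0.90, 0.035),
    6: (1.05, 0.040),
    8: (1.15, 0.048),
}

def power_index(cl, cd):
    return cl**3 / cd**2   # crosswind power is proportional to CL^3 / CD^2

p_opt = power_index(*polar[6])              # 6 deg is the optimum of this polar
for a, (cl, cd) in sorted(polar.items()):
    print(a, "deg:", round(100 * power_index(cl, cd) / p_opt, 1), "% of optimum")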
Cables for AWESs are usually made of Ultra-High-Molecular-Weight Polyethylene (UHMWPE), a relatively low-cost material with excellent mechanical properties, even though many different materials are being used and studied. The cables can number one, two or three, and in some concepts they carry electricity for power generation or just for on-board actuation. Each of these choices has advantages and disadvantages and, at present, any prediction about the best tethering system would be highly speculative. The tethers also represent a known issue in the AWE community because of wear, maintenance and aerodynamic drag. Very long cables should also be studied without the steady-state hypothesis, in order to account for the fact that, in unsteady flight, the lower part of the cables could reasonably move less than in steady-state flight, thus dissipating less energy. Some concerns have been raised regarding the behavior of tethers in the atmospheric environment. An analysis performed on dry and wet polyethylene ropes without inner conductors shows that non-conductive tethers will not trigger a flashover in the typical static electric fields of thunderclouds; however, non-conductive tethers are very likely to trigger a flashover when subjected to the impulsive electric fields produced by lightning. It is reasonable to say that AWESs will not work during thunderstorms and that lightning should not then be an issue. However, an analysis regarding the electrical atmospheric behavior of tethers with inner conductors could be useful to understand the worst atmospheric conditions to which conductive tethers might be exposed. A reduction in the cable drag coefficient would likely lead to an increase in power output by two or three times, thanks to better aerodynamic efficiency, increased flight speed and higher operational altitude. Several patents have been filed to address this issue, even though the reduction of cable drag by means of, for example, fairings or streamlined cross-sections has not yet been experimentally demonstrated. Because of the potential advantages, funding opportunities might be available for concepts in aerodynamic drag reduction. As regards the aerodynamic cable drag, two patented concepts might provide an important improvement in the long term by reducing to zero the aerodynamic drag over the majority of the cable length. They are worth mentioning even though no prototype yet exists for either of them. The first appeared in the very first patent on high-altitude wind energy in 1976 and described the so-called 'dancing planes' concept. Two Fly-Gen turbines are held by a single cable, the top part of which splits into two. Each turbine is tethered to one of these ends and follows a circular trajectory, so that only the two top parts of the cable fly crosswind through the air and the main single cable stands still under balanced tensions. Several studies already exist for this promising concept. The second describes a 'multi-tether' AWES where three cables are deployed from three different ground stations and are connected to each other at their top ends. A single tether connects the top end of the three cables to one kite, which is then geometrically free to fly crosswind within a certain solid angle without moving the three lower cables, only changing their inner tension. The further apart the ground stations are spaced, the larger the allowable solid angle. For both of these concepts the result is the same: high-altitude crosswind flight is achieved with the longest part of the cables completely fixed in space, thus allowing winds at very high altitude to be reached with a relatively short dissipative cable length. To date, several tens of millions of dollars have been spent on the development of AWESs, which is a relatively low amount of money, especially considering the scale of the potential market and the physical fundamentals of AWE technology. The major financial contributions have come, so far, from big companies usually involved in the energy market. The community is growing both in terms of patents and in terms of scientific research. But still, there is no product to sell, and the majority of the companies that are trying to find a market fit are now focusing on off-grid markets and remote locations, where satisfying a market need can be easier at first. High-altitude wind energy is currently a very promising resource for the sustainable production of electrical energy. The amount of power and the wide availability of winds that blow between 300 and 10000 meters above the ground suggest that Airborne Wind Energy Systems represent an important emerging renewable energy technology. In the last decade, several companies have entered the business of AWESs, patenting diverse principles and technical solutions for their implementation. In this extremely varied scenario, this paper attempts to give a picture of the current status of the developed technologies in terms of different concepts, systems and trends. In particular, all existing AWESs have been briefly presented and
classified. The basic generation principles have been explained, together with very basic theoretical estimates of power production that could give the reader a sense of which parameters influence the performance of an AWES and how strongly they do so. In the coming years, a rapid acceleration of research and development is expected in the airborne wind energy sector. Several prototypes that are currently under investigation will be completed and tested. | Abstract Among novel technologies for producing electricity from renewable resources, a new class of wind energy converters has been conceived under the name of Airborne Wind Energy Systems (AWESs). This new generation of systems employs flying tethered wings or aircraft in order to reach winds blowing in atmospheric layers that are inaccessible to traditional wind turbines. Research on AWESs started in the mid seventies, with a rapid acceleration in the last decade. A number of systems based on radically different concepts have been analyzed and tested. Several prototypes have been developed all over the world and the results from early experiments are becoming available. This paper provides a review of the different technologies that have been conceived to harvest the energy of high-altitude winds, specifically including prototypes developed by universities and companies. A classification of such systems is proposed on the basis of their general layout and architecture. The focus is set on the hardware architecture of systems that have been demonstrated and tested in real scenarios. Promising solutions that are likely to be implemented in the near future are also considered.
549 | Abacavir, zidovudine, or stavudine as paediatric tablets for African HIV-infected children (CHAPAS-3): An open-label, parallel-group, randomised controlled trial | In 2014, 91% of 3·2 million HIV-infected children lived in sub-Saharan Africa, but less than 25% of those needing antiretroviral therapy were receiving it.1 Low-cost, scored, dispersible fixed-dose combination paediatric tablets of stavudine plus lamivudine plus nevirapine in child-appropriate drug ratios2 drove initial ART roll-out to African children, replacing separate syrups, which are costly for programmes and difficult for carers to transport and administer.3 However, stavudine was discouraged in the 2010 and 2013 WHO guidelines4,5 because of high lipodystrophy rates in adults and adolescents. In children, stavudine-associated toxicity has mainly been noted at higher doses than those recommended by WHO and in older children.6–8 Alternative nucleoside reverse-transcriptase inhibitors (NRTIs) for children younger than 12 years are abacavir or zidovudine. Tenofovir is not licensed for those younger than 2 years and is not recommended by WHO5 in those younger than 10 years, primarily because of concerns regarding long-term effects on bone metabolism and renal function in growing children,9 although more data are needed. Zidovudine is associated with anaemia, which is of particular concern in malnourished children in malaria-endemic areas where underlying anaemia is prevalent. Abacavir is associated with hypersensitivity reactions, although these are rare in Africa10 because of a lower risk-allele prevalence.11 However, two South African cohorts recently reported lower virological suppression with abacavir than with stavudine,12,13 and abacavir is also the most costly NRTI.14 Therefore, whether stavudine, given at the WHO recommended doses, should remain an option for young children was unclear. Evidence before this study: We searched PubMed up to April 27, 2015, using the keywords "HIV" and "child*", not "prevent*", dated after Jan 1, 1996. The most relevant nucleoside reverse-transcriptase inhibitors for treating HIV-infected children when the study started were abacavir, zidovudine, and stavudine; didanosine and tenofovir were not used because of toxicity. The WHO conducts systematic reviews as part of guideline development. No existing systematic reviews of randomised controlled trials comparing these NRTIs head-to-head in HIV-infected children were identified in 2010 or 2013, with only one randomised trial directly comparing abacavir and zidovudine in 128 European children, which identified that abacavir was virologically superior to zidovudine over 5 years of follow-up. Recommendations for the preferential ordering of zidovudine, abacavir, then stavudine in 2010, and abacavir, zidovudine, then stavudine in 2013, were therefore based primarily on expert opinion balancing toxicity, cost, and practicality; and, in 2013, also on evidence on the accumulation of different resistance mutations with sequential use. Added value of the study: This is the first randomised controlled trial in African children conducting a head-to-head comparison of the three most relevant NRTIs for paediatric treatment, coformulated in NNRTI/NRTI generic fixed-dose-combination paediatric tablets and dosed with WHO drug ratios and weight bands. We identified no major differences between the NRTIs in adverse events, toxicity, or clinical, immunological, or viral load endpoints, but did find higher drug susceptibility to relevant second-line NRTIs if abacavir was used first-line, thus
providing evidence to support the WHO 2013 recommendation for its use as the preferred first-line NRTI for children.Use of abacavir also enables a once-daily ART regimen to be constructed for children, in line with adults.Implications of the available evidence,Excellent outcomes were obtained on all regimens, showing the importance of widening treatment access for HIV-infected children worldwide.Efforts need to be made to provide abacavir-based combinations where this is possible; but there is no need to move children who are stable on zidovudine-based regimens to abacavir.Further research should investigate the potential for once-daily triple abacavir-based fixed-dose combinations with efavirenz or dolutegravir to further simplify and improve durability of first-line ART for children who will need treatment for much longer than adults.Since 2003, changes in NRTIs recommended by WHO for children, followed by changes in national guidelines and clinical practice, have occurred with little evidence and no new randomised trials.Therefore, in 2010, when most African children were receiving stavudine-based ART, we aimed to compare stavudine, zidovudine, or abacavir fixed-dose combinations for first-line ART.In the first African paediatric trial comparing three NRTIs coformulated in NNRTI/NRTI generic fixed-dose-combination paediatric tablets, dosed using WHO drug ratios and weight bands,2,5 we identified no major differences in any adverse event or toxicity endpoint during nearly 2·5 years follow-up in ART-naive and ART-experienced children.First-line drug substitutions occurred in only 6% of children, with nearly one-third due to starting anti-tuberculosis treatment.ART-naive children had good clinical, immunological, and virological responses, regardless of backbone NRTI; CD4 cell count and virological responses were maintained among almost all ART-experienced children.As expected, most deaths occurred early in children starting ART and only 1% switched to second-line therapy.Paediatricians have long debated the relative advantages and disadvantages of different so-called backbone NRTIs combined with lamivudine, particularly because harmonising with adult tenofovir-based once-daily ART is not possible because of concerns about bone toxicity in growing children and absence of paediatric fixed-dose combinations or doses in those younger than 2 years.In the past decade, WHO guidelines have promoted paediatric fixed-dose combinations, first used in the CHAPAS-1 trial18 and licensed in 2007.However, preferred NRTI recommendations have changed from stavudine to zidovudine to abacavir, based on minimal paediatric data and no randomised trials.91% of children needing ART live in Africa, where genetic and environmental factors determine the relative effect of different ART toxicity profiles.We found no major differences across randomised NRTIs in grade 2–4 clinical or grade 3/4 laboratory adverse events, in either ART-naive or ART-experienced children.The only grade 3/4 event with marginally increased frequency was neutropenia in children allocated zidovudine; its significance is uncertain because African children have low neutrophil counts,19 and it rarely led to zidovudine substitution.As previously described,20 haemoglobin increased regardless of backbone NRTI, and severe anaemia occurred no more frequently in children who received zidovudine versus those who received stavudine or abacavir, suggesting HIV-related rather than drug-related cause.However, although infrequent, drug substitution was more 
common in the zidovudine group than both other groups, as was also reported in the ARROW trial,20 mainly for anaemia.These combined trial results reassure clinicians that zidovudine substitution is rarely needed for anaemia among children on ART.However, an important caveat is that severe anaemia and neutropenia were an exclusion criteria in both trials; if anaemia is HIV related, initiating zidovudine might also lead to good haemoglobin responses in anaemic children, as observed here, but we did not assess this.Clinical lipodystrophy was not recorded up to 3 years follow-up of children aged younger than 5 years at ART initiation.Absence of blinding cannot rule out ascertainment bias, but lack of significant differences in body circumferences or skinfold thicknesses between NRTIs supports anecdotally reported rarity of lipodystrophy among young children, and suggests that longer-term consequences of stavudine exposure in young children are likely to be limited.We also found no evidence of a difference between NRTIs in changes in lipids on ART.Nevertheless, lipodystrophy undoubtedly occurs in older children and adolescents; the only lipodystrophy noted during the trial was facial in two older ART-experienced children already taking stavudine for more than 2·5 years.For this reason, and despite little evidence of harm in young children, the WHO 2013 recommendation that stavudine should be used only where other drugs are unavailable seems reasonable because it harmonises with adult and adolescent recommendations where evidence is strong.However, our results suggest that stavudine could be safely used for at least 2 years in young children, if alternatives are not available, supporting WHO5 and the European Medicines Agency who recommended that stavudine for children should not be discontinued completely.Despite no HLA-B5701 testing, no hypersensitivity reactions to abacavir were observed, in agreement with previous data reporting its rarity in African adults21 and children.10,The only three hypersensitivity reactions leading to a change in ART were substitutions from nevirapine to lopinavir plus ritonavir, albeit at a lower rate than in adults,22 consistent with previous paediatric reports.18,Reassuringly, and providing the first randomised data in children, a CHAPAS-3 substudy showed no difference in cardiovascular measurements or biomarkers between randomised NRTI groups.23,24,One limitation is that our trial recruited more ART-naive and fewer ART-experienced children than was planned, reducing the power to detect differences between these subgroups, although no major interactions were identified.When this trial was designed, the major questions related to toxicity profiles of the three NRTIs, with concerns over the potency of abacavir12,13 only arising later.However, 478 children still provided good power to detect 10–15% differences in viral load suppression.CD4 recovery and retrospectively assayed viral load suppression to less than 100 copies per mL, less than 400 copies per mL, or less than 1000 copies per mL did not differ by randomised NRTI.Overall suppression was better in ART-experienced than in ART-naive children, as expected, because ART-experienced children were suppressed at enrolment.Similarly to ARROW, there were no interactions suggesting differences in viral load suppression by NRTIs by age,20 and, also in agreement with other reports, there was no evidence that viral load suppression depended on intrauterine nevirapine exposure beyond infancy25 or NNRTI in children older 
than 3 years.26,Although many seminal trials have been done in HIV-infected children by the IMPAACT/PACTG group, their randomised comparisons of combination therapy have focused on the third drug, older drugs, or on receiving an additional NRTI.No IMPAACT/PACTG trial has directly compared abacavir, zidovudine, or stavudine head-to-head within combination therapy.Our results differ from the only previous randomised, smaller trial of zidovudine versus abacavir, which showed virological superiority of abacavir versus zidovudine over 5 years in children in well-resourced settings.27,28,However, children received two NRTIs alone or with nelfinavir; with a potent third drug, as in CHAPAS-3, any superiority of abacavir over zidovudine could well be masked.Our results provide reassurance following recent observational analyses reporting poorer virological responses to abacavir versus stavudine in South African children.12,13,Possible explanations for the difference include unmeasured confounding or drug–drug interactions between abacavir and lopinavir plus ritonavir.29,Of interest, we did not find that abacavir did worse in children with higher viral loads in CHAPAS-3, but only 24 ART-naive children were younger than 1 year, by contrast with the South African studies where many were younger than 1 year with high viral loads.The contribution of the fixed-dose combination rather than separate pills to virological success is difficult to estimate, but cannot affect our within-trial comparisons as all were using fixed-dose combinations.Finally, these first randomised resistance data in African children on different NRTI plus NNRTI first-line ART reassuringly show that most children remained susceptible to second-line NRTIs over the medium term, regardless of initial NRTI.In particular, while those taking first-line zidovudine had significantly reduced susceptibility to abacavir second-line, those taking first-line abacavir retained high susceptibility to zidovudine; both retained high susceptibility to tenofovir, increasingly used in children older than 10 years who weigh more than 35 kg.At trial closure, all carers and children were offered continuing follow-up in the research trial centres or moving to an ART programme site closer to where they lived.Children moving to ART programme sites were moved onto the ART regimen provided by the site to ensure that the ART programme site could continue to provide uninterrupted ART, in terms of drug provision and forecasting.Children staying at the research sites could continue their randomised regimen, because there was no reason to change drugs in children doing well and stable on a WHO recommended regimen, and being carefully followed for toxicity.However, although stavudine remains an option for children not able to take other NRTIs in 20104 and 20135 WHO guidelines, at trial closure Uganda national guidelines no longer recommended stavudine for children.In Zambia, guidelines were based on duration on stavudine, with age being also used more recently.The recommended substitutions were therefore on a case-by-case basis.As a result of all these factors, as well as reduced demand for stavudine-based products by programmes, almost all children moved off stavudine at trial closure.In conclusion, CHAPAS-3 shows primarily that children respond well to all NRTI/NNRTI recommended fixed-dose combinations in 2013 WHO guidelines with minimal drug toxicity.Most primary endpoints were morbid events, showing the very small contribution of antiretroviral toxicity to 
managing the HIV-infected child. The population was generally young, with early disease, and hence highly generalisable to the increasing numbers entering ART programmes under universal treatment for those younger than 5 years. The fixed-dose combinations have different advantages and disadvantages in terms of number and frequency of tablets, cost, and availability as dual or triple drug fixed-dose combinations. Abacavir has very low toxicity in African children, a superior resistance profile for second-line NRTI sequencing, and is the only once-daily licensed NRTI fixed-dose combination for children, supporting its preferred use in first-line ART.5 Its only disadvantage is its higher cost than zidovudine and stavudine.14 A WHO survey in 2014 showed that paediatric use of abacavir was increasing, whereas stavudine use was decreasing; zidovudine accounted for 51% and was also decreasing; these data strongly argue for further abacavir price reductions. Potential future triple abacavir-based combinations with efavirenz or dolutegravir could further simplify and improve the durability of once-daily first-line ART for children, who will need ART for much longer than adults. In this open-label, parallel-group, randomised controlled trial, we enrolled confirmed HIV-infected children from Zambia and Uganda (centres were: from Zambia, the University Teaching Hospital, Lusaka; and from Uganda, the Baylor-Uganda Centre of Excellence, Kampala, and the Joint Clinical Research Centre, Kampala and Gulu) aged 1 month to 13 years if they were either previously untreated and met WHO 2010 criteria4 for ART, or on stavudine-containing first-line ART for 2 years or more with screening viral load less than 50 copies per mL and stable CD4 and/or CD4 cell %. All children were already on or initiated co-trimoxazole prophylaxis at enrolment. Caregivers gave written consent; older children aware of their HIV status also gave assent or consent following national guidelines. The trial was approved by Research Ethics Committees in Zambia, Uganda, and the UK. Children were randomly assigned to receive open-label stavudine, zidovudine, or abacavir, together with lamivudine and either nevirapine or efavirenz. Randomisation was stratified by age, previous ART, NNRTI, and clinical centre. A computer-generated sequential randomisation list, using the urn probability method,15 was prepared by the trial statistician and incorporated securely into the trial database at each centre. The list was concealed until allocation, which occurred after eligibility was confirmed by local centre staff, who then did the randomisation. Scored dispersible fixed-dose combinations of abacavir plus lamivudine, zidovudine plus lamivudine, zidovudine plus lamivudine plus nevirapine, stavudine plus lamivudine, and stavudine plus lamivudine plus nevirapine as so-called baby and junior tablets were prescribed following WHO weight bands.5 Efavirenz and nevirapine were also supplied for children taking dual NRTI fixed-dose combinations. Children exited the trial from Oct 30, 2013, to Jan 23, 2014, after a minimum of 96 weeks of follow-up. At nurse and doctor visits, children were examined, medical history was recorded, adherence was assessed, and ART was dispensed. At weeks 6, 12, and 24, and then 24-weekly, five skinfold thicknesses and five body circumferences were measured to assess lipodystrophy; haematology, biochemistry, and CD4 tests were done; and plasma was stored for retrospective viral load and resistance testing.
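The 'urn probability method' used for the randomisation list mentioned above can be illustrated with a generic urn scheme of the Wei type; the parameters, the seed and the simple three-arm set-up below are assumptions for illustration and do not reproduce the trial statistician's actual implementation or the stratification.

import random

def urn_randomisation(n_children, arms=("stavudine", "zidovudine", "abacavir"),
                      alpha=1, beta=1, seed=2010):
    """Illustrative Wei-type urn design UD(alpha, beta): start with alpha balls per
    arm; after each draw, return the ball and add beta balls for every other arm,
    which pulls the allocation towards balance while keeping it unpredictable."""
    rng = random.Random(seed)
    urn = {arm: alpha for arm in arms}
    allocations = []
    for _ in range(n_children):
        pick = rng.choices(list(urn.keys()), weights=list(urn.values()))[0]
        allocations.append(pick)
        for arm in urn:
            if arm != pick:
                urn[arm] += beta
    return allocations

alloc = urn_randomisation(480)
print({arm: alloc.count(arm) for arm in set(alloc)})  # group sizes stay close to 480/3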
Substitutions for toxicity and switches to second-line treatment for failure were at the treating physician's discretion, following WHO guidelines.5 The primary outcome was grade 2 or greater clinical adverse events, confirmed grade 3 laboratory adverse events, or any grade 4 laboratory adverse events.16 Clinical primary endpoints were adjudicated against protocol-defined criteria by an endpoint review committee (ERC), masked to allocation, and were also adjudicated for their relation to antiretroviral drugs, without knowing the specific ART received. Secondary toxicity outcomes were specific subsets of the primary endpoints, serious adverse events, ART-modifying toxicity, grade 3/4 adverse events possibly, probably, or definitely related to zidovudine, abacavir, or stavudine, and changes in skinfold-thickness-for-age and body-circumference-for-age. Secondary efficacy outcomes were viral load suppression, clinical disease progression, change in weight-for-age, height-for-age, and CD4, and ART adherence. Laboratory measures, including viral load, were assayed blind to randomisation. HIV-1 viral load was assayed with the Roche COBAS Ampliprep/Taqman version 2.0 in both Uganda and Zambia. Because of small stored sample volumes, most samples were run at a 1/5 dilution with Basematrix 53, giving a lower limit of detection of 100 copies per mL. Drug resistance genotyping was done with either in-house primers or primers from Inqaba Biotec, with both laboratories using an automated ABI 3730xl sequencer. Recruiting 470 children gave 85% power to detect a reduction from 20% to 10% in the cumulative incidence of the primary endpoint across the three randomised groups. Interim data were reviewed by an independent data monitoring committee using the Haybittle-Peto criterion. Randomised groups were compared in intention-to-treat analyses with log-rank tests for time-to-event outcomes, exact tests for binary outcomes, and generalised estimating equations with an independent working correlation for global tests of repeated measures. Analyses were stratified by age group, ART naive or experienced, and NNRTI, but not by clinical centre because this was not expected to affect outcome. Data were analysed with Stata version 13.1. This trial is registered with the ISRCTN Registry, number 69078957. The funder of the study had no role in the study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication. Between Nov 8, 2010, and Dec 28, 2011, 480 children were randomly assigned: 156 to stavudine, 159 to zidovudine, and 165 to abacavir. After two were excluded due to randomisation error, 156 children were analysed in the stavudine group, 158 in the zidovudine group, and 164 in the abacavir group. More children were ART naive than ART experienced; more were younger than 5 years; and consequently more received nevirapine than efavirenz. Baseline characteristics were well balanced between randomised groups. ART-naive children were substantially younger than ART-experienced children. Median retrospectively assayed viral load was 270 670 copies per mL in ART-naive children, with three confirmed at less than 100 copies per mL at both screening and enrolment. ART-experienced children had taken stavudine-based ART for a median of 3·5 years. The mother or child had received nevirapine or NRTIs for prevention of mother-to-child transmission in 56 ART-naive and nine ART-experienced children. Median follow-up was 2·3 years among children completing the study. 25 children
were lost, including eight who withdrew consent.8967 of 9143 scheduled nurse visits were completed.Initial ART followed randomisation for 473 children.445 remained on their initial treatment throughout follow-up.33 first-line ART changes occurred among 30 children: ten allocated stavudine, 16 allocated zidovudine, and four allocated abacavir.Nine changes were nevirapine substitutions for rifampicin-based tuberculosis co-treatment, 14 were nevirapine or NRTI toxicity substitutions, and ten were mostly dispensing errors.Five children switched to second-line ART.There was no evidence that self-reported adherence across visits through 96 weeks differed between randomised groups.917 grade 2–4 clinical or grade 3/4 laboratory adverse events occurred in 312 children.Events were more common in younger ART-naive children than in ART-experienced children, but there was no evidence of heterogeneity in differences between randomised groups.634 clinical events were grade 2; excluding grade 2 events gave similar results.199 serious adverse events occurred in 132 children, with no difference between randomised groups.Six children allocated stavudine, 12 allocated zidovudine, and five allocated abacavir had grade 3/4 adverse events judged by the ERC to have at least a possible relation to one of the randomised NRTIs.No grade 3/4 adverse events or serious adverse events were judged definitely or probably related to stavudine, zidovudine, or abacavir.14 children modified ART for toxicity; with significantly more in the zidovudine group where eight children substituted zidovudine with stavudine or abacavir for anaemia, neutropenia, or leucopenia.However, there was no evidence of differences between groups in grade 3/4 anaemia, although more grade 3/4 neutropenia occurred in the zidovudine group.Three children substituted ART for hypersensitivity reactions.Masked to NRTI received, the ERC adjudicated five stavudine, one zidovudine, and two abacavir primary endpoints as grade 2–4 hypersensitivity reactions; however, neither child on abacavir stopped the drug with no adverse consequences.One additional grade 1 hypersensitivity reaction was reported in the abacavir group; this child also continued abacavir without adverse effects.Two ART-experienced children substituted stavudine with abacavir after developing facial lipoatrophy.Body circumference increased with time at all measured sites, as expected, while the five skinfold thicknesses decreased similarly in ART-naive and ART-experienced children, with few differences between randomised groups.There was no evidence that randomised groups differed in body circumference or skinfold thickness ratios or the sum of the four skinfolds, or in changes in total cholesterol, LDL, HDL, or triglycerides.Disease progression was rare and similar across randomised groups.All 19 deaths, and 12 of the 14 WHO stage 3 or 4 events, occurred in ART-naive children.Nine of 19 deaths and five of 14 WHO 3/4 events occurred less than 12 weeks after ART initiation, related to pre-enrolment disease severity.There was very little evidence of drug-related mortality.Change in weight-for-age, height-for-age, or body-mass index-for-age to 96 weeks did not differ significantly between groups.Most ART-naive children achieved viral load less than 400 copies per mL by 48 weeks, with no differences between randomised groups.Viral load less than 400 copies per mL was maintained at 48 weeks by more than 96% ART-experienced children.Results were similar between groups at 96 weeks in ART-naive and 
ART-experienced children, as was viral load suppression less than 100 copies per mL at 48 weeks and 96 weeks.Among ART-naive children, 48-week suppression was better in those with viral load less than 100 000 copies per mL at enrolment, consistently across randomised groups with no evidence that any NRTI had superior performance in these strata.48-week suppression was similar in ART-naive children aged 3 years or older at enrolment receiving nevirapine and efavirenz, also with no evidence of variation across randomised groups.There was also no evidence that 48-week viral load suppression in ART-naive children older than 1 year varied by previous prevention of mother-to-child transmission exposure to nevirapine without NRTI cover versus those who had not.There was no evidence of differential CD4% recovery across randomised groups.Resistance mutations were assayed in 58 of 69 children with viral load greater than 500 copies per mL at 96 weeks.Seven children had no NNRTI or NRTI mutations.As expected, M184V and NNRTI mutations were common in all groups, thymidine-analogue mutations were common in stavudine and zidovudine groups, and 74V/115F mutations were common in the abacavir group.However, only one K65R mutation was identified in the abacavir group.In the abacavir group, sensitivity to second-line NRTI options was 100% for zidovudine and 94% for tenofovir.In the zidovudine and stavudine groups, sensitivity to tenofovir remained high, but, as expected, was lower for their alternative second-line NRTI abacavir. | Background: WHO 2013 guidelines recommend universal treatment for HIV-infected children younger than 5 years. No paediatric trials have compared nucleoside reverse-transcriptase inhibitors (NRTIs) in first-line antiretroviral therapy (ART) in Africa, where most HIV-infected children live. We aimed to compare stavudine, zidovudine, or abacavir as dual or triple fixed-dose-combination paediatric tablets with lamivudine and nevirapine or efavirenz. Methods: In this open-label, parallel-group, randomised trial (CHAPAS-3), we enrolled children from one centre in Zambia and three in Uganda who were previously untreated (ART naive) or on stavudine for more than 2 years with viral load less than 50 copies per mL (ART experienced). Computer-generated randomisation tables were incorporated securely within the database. The primary endpoint was grade 2-4 clinical or grade 3/4 laboratory adverse events. Analysis was intention to treat. This trial is registered with the ISRCTN Registry number, 69078957. Findings: Between Nov 8, 2010, and Dec 28, 2011, 480 children were randomised: 156 to stavudine, 159 to zidovudine, and 165 to abacavir. After two were excluded due to randomisation error, 156 children were analysed in the stavudine group, 158 in the zidovudine group, and 164 in the abacavir group, and followed for median 2.3 years (5% lost to follow-up). 365 (76%) were ART naive (median age 2.6 years vs 6.2 years in ART experienced). 917 grade 2-4 clinical or grade 3/4 laboratory adverse events (835 clinical [634 grade 2]; 40 laboratory) occurred in 104 (67%) children on stavudine, 103 (65%) on zidovudine, and 105 (64%), on abacavir (p=0.63; zidovudine vs stavudine: hazard ratio [HR] 0.99 [95% CI 0.75-1.29]; abacavir vs stavudine: HR 0.88 [0.67-1.15]). At 48 weeks, 98 (85%), 81 (80%) and 95 (81%) ART-naive children in the stavudine, zidovudine, and abacavir groups, respectively, had viral load less than 400 copies per mL (p=0.58); most ART-experienced children maintained suppression (p=1.00). 
Interpretation: All NRTIs had low toxicity and good clinical, immunological, and virological responses. Clinical and subclinical lipodystrophy were not noted in those younger than 5 years, and anaemia was no more frequent with zidovudine than with the other drugs. The absence of hypersensitivity reactions, a superior resistance profile, and once-daily dosing favour abacavir for African children, supporting WHO 2013 guidelines. Funding: European Developing Countries Clinical Trials Partnership. |
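To make the survival-analysis machinery described in the methods above more concrete (an intention-to-treat comparison of time to first primary endpoint across three randomised groups using a log-rank test), the following is a minimal sketch in Python using the third-party lifelines library. The data frame, column names, and arm labels are invented for illustration only, and the sketch omits the stratification by age group, ART status, and NNRTI used in the actual analysis.

```python
# Toy illustration of a three-arm log-rank comparison of time to first event.
# Data, column names, and arm labels are invented; this is not trial data, and
# the stratified analysis used in the trial is not reproduced here.
import pandas as pd
from lifelines.statistics import multivariate_logrank_test

toy = pd.DataFrame({
    "weeks_to_event": [12, 48, 96, 30, 96, 60, 96, 24, 80, 96, 40, 96],
    "event_observed": [1,  1,  0,  1,  0,  1,  0,  1,  1,  0,  1,  0],
    "arm": ["stavudine", "stavudine", "stavudine", "stavudine",
            "zidovudine", "zidovudine", "zidovudine", "zidovudine",
            "abacavir", "abacavir", "abacavir", "abacavir"],
})

# Global (unstratified) log-rank test across the three randomised groups.
result = multivariate_logrank_test(
    toy["weeks_to_event"], toy["arm"], toy["event_observed"]
)
print(result.test_statistic, result.p_value)
```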
550 | Review of the cultivation program within the National Alliance for Advanced Biofuels and Bioproducts | Humans have marveled at the complexity and multiplicity of aquatic microorganism since drops of pond water were first examined under a microscope.Photosynthetic microorganisms were necessarily classified based on morphological differences in the descriptive era of biological science.Over time, chemical and biochemical characteristics began to take on more taxonomic significance as expounded by R.Y. Stanier and more recently by others .Even the notion of “species” as applied to microalgae has been reexamined recently with specific reference to the diatoms.The result was a more holistic approach to taxonomy, integrating biochemical, molecular and ecological characteristics that suggests the species concept has been applied far too broadly within the microalgae .In sum, these studies provide a critically important backdrop to the current era.We know that primary production in aquatic systems is highly competitive and unstable in a biological sense .Constant changes in nutrient levels, light intensity and temperature will trigger winners and losers in the competition for available light, inorganic carbon, nitrogen and phosphorus.Morphologically similar species with very different metabolisms will wax and wane with seasons and on more rapid time scales dictated with rainfall, wind, dust and diurnal temperature ranges.Large-scale microalgal cultivation systems will need to be designed to mitigate all of these ecological eventualities.Crop protection strategies against grazers , pathogens and competitors will be equally important.Conceptually, the National Alliance for Advanced Biofuels and Bioproducts team was strongly influenced by the opportunities made available by the radical successes of reductionist biology: complete genome sequences, multiple “omic” analytical tools and detailed structure/function studies that provide both methods and metabolic insights needed for genetic manipulation of model microalgae.These tools provide tantalizing approaches to rapidly domesticate and improve wild microalgal species for renewable fuel production centered on neutral lipid synthesis and secretion of pure hydrocarbons .Well-funded start-up enterprises both within and outside the NAABB consortium set out to harvest this bounty of opportunity.Tightly coupled with this approach is the assumption that monocultures of elite algal strains can be effectively maintained in appropriately engineered cultivation systems.Modern agriculture provides a compelling model based on monocultures afforded by powerful methods of crop protection.It is important to note that this approach has been questioned by two key studies: the final report of the DOE Aquatic Species Program and more recently in the report on Algal Biofuels from the National Research Council .The ASP report cited repeated difficulties with maintaining desired strains in open raceway cultivation systems and went so far as to suggest better results might be obtained by cultivation of highly competitive local species with desired phenotypes over the use of elite strains of microalgae isolated elsewhere.The NRC study cites broader concerns regarding unsustainable requirements for energy, water and nutrients for elite genetically modified algae at scales required for production of 5% of the nations liquid fuel requirements.The NRC report provides important high-level guidance for future studies by identifying the significant barriers to sustainable 
algal biofuel production.The NAABB cultivation studies outlined here reflect key design criteria with respect to large-scale cultivation.It was important to identify and select geographic locations that have high annual solar insolation and climatic conditions that can maintain pond water temperatures at elevated levels for most of the year.The NAABB cultivation teams attempted to identify those microalgae strains that exhibited high growth rates within the annual temperature range and water chemistries from the production sites.Also, there was a need to design relatively simple, cost-effective, and energy-efficient large-scale culture systems that could support high productivity, culture stability and help maintain elevated water temperatures during the cold season to sustain reasonable biomass productivities year round.Specific goals included the following:identify robust production strains that will perform reliably in specific geographic locations and seasons;,develop methods and best practices for preventing large-scale culture crashes due to predators and competitors;,develop methods for cultivation in low-cost media using agricultural grade nutrients, wastewater sources, and media recycling; and,develop and demonstrate enhanced designs and operational methods that improve productivity of large-scale cultivation systems.In response, the NAABB cultivation team executed the projects reviewed here.Highlights include publications that: i) demonstrated an effective raceway design for temperature management in modified raceway systems ; ii) studied the energy efficiency of different cultivation systems ; iii) developed sensitive methods for detection of both closely and distantly related algal competitor strains and used these to monitor long-term cultivation ; iv) created detailed algal growth models for a Scenedesmus strain as well as the most productive and stable organisms used by the NAABB consortium ; v) evaluated polyculture approaches to increasing pond productivity, stability and resilience ; and vi) identified a new approach to cultivation using extreme conditions of low pH and high temperature appropriate for evaporation control in photobioreactors .The NAABB Consortium developed an R&D framework to begin addressing some of these major challenges and needs.The NAABB Cultivation team employed a variety of capabilities to execute the R&D framework.These include research tools, small-pond/raceway testbeds and two large-pond testbeds for large-scale cultivation experiments and the production of algal biomass to support downstream processing R&D across NAABB .As shown in Fig. 
1, NAABB cultivation research was conducted across four major thrust areas:Cultivation Tools and Methods: Focused on developing new strain screening tools, systems, and models; sensors for cultivation; molecular diagnostics tools; and methods to control predators with environmental controls.Nutrient/Water Recycle/Wastewater Cultivation: Focused on nutrient studies, use of wastewater sources, and media recycle.Cultivation System Innovations: Focused on new raceway design to extend operation in cold season climates, airlift mixing systems, and computational fluid dynamic models to improve raceway performance.Large-pond Cultivation/Biomass Production: Focused on scale-up of new strains, development of low-cost media, and production of algal biomass.Measured the productivity of different strains at testbeds using open raceway and photobioreactor systems,NAABB focused several efforts on developing new tools and methods for optimizing algal cultivation.These include new models and processes to select strains and cultivation conditions for maximum productivity, methods to control predators using environmental factors and polycultures, and the development of a new molecular monitoring system and sensors.Light and temperature are the main abiotic determinants of biomass productivity of microalgae in photobioreactors and ponds operated under well-mixed and nutrient-replete conditions.For a pond at a given geographic location, the daily and seasonal fluctuations of sunlight intensity and water temperatures are determined by the prevailing climatic conditions at that site.The biomass productivity of a specific strain grown in this pond culture is then to a large degree determined by how the maximum specific growth rate is affected by sunlight intensity and water temperature.Thus, to optimize biomass productivities in outdoor ponds, it is necessary to identify not only geographic locations that have high annual solar insolation and climatic conditions that maintain pond water temperatures at elevated levels for most of the year, but also microalgae strains that exhibit high growth rates within the annual temperature range of the pond culture.In order to accelerate the transition of promising microalgae from the laboratory into outdoor ponds, NAABB developed and tested a biomass growth model for determining strains with high biomass productivity potential .For a given strain, the following four biological species-specific input parameters are measured and used as inputs into the biomass growth model:Maximum specific growth rate as a function of temperature;,Maximum specific growth rate as a function of light;,Rate of biomass loss in the dark as a function of temperature during the dark period and average light intensity during the preceding light period; and,Scatter-corrected biomass light absorption coefficient.An example of the data for a maximum specific growth rate matrix for Chlorella sorokiniana DOE1412 is shown in Fig. 2.Using Nannochloropsis salina and Chlorella sorokiniana DOE1412 as the model strains, it was found that this Chlorella strain exhibits much higher maximum specific growth rates at the optimal temperature and greater thermal tolerance than N. salina.Measurement of the maximum specific growth rates as a function of light intensity at different temperatures revealed that Chlorella sorokiniana DOE1412 is strongly photo-inhibited at lower temperatures and only slightly at higher temperatures.No photo-inhibition was found for N. 
Salina.Both strains lost significant amounts of biomass during the 10 h-long dark incubations, up to ca. 16% for Chlorella sorokiniana DOE1412 and 20% for N. salina.Biomass loss rates in the dark increased with temperature and were positively correlated with the average light intensity the cells received during the preceding growth period.The scatter-corrected biomass light absorption coefficient was determined for both strains from light attenuation profiles measured at different biomass concentrations in carboy cultures.Using these species-specific laboratory measurements together with sunlight intensity and pond water temperature data measured during an outdoor study in Arizona, the model-predicted and measured biomass concentrations compared reasonably well during the exponential and mid-linear batch growth phase, i.e., for the first 20 days following inoculation.The sawtooth pattern of the model-predicted concentration curve reflects the periodic increase of biomass during the day, followed by biomass loss at night due to dark respiration.Although the biomass growth model is useful for screening strains for the best candidates, it is important to validate the performance of the top strains in pilot-scale ponds simulating the light and water temperature conditions observed in outdoor pond cultures in selected geographic locations and seasons ."An indoor LED-lighted and temperature-controlled raceway pond that can be used to measure a strain's seasonal and annual biomass productivity under climate-simulated conditions was designed, built, and tested.The Chlorella outdoor pond culture experiment that was conducted in Arizona was repeated using the indoor LED-lighted and temperature-controlled raceways under climate-simulated conditions using scripts of the sunlight intensity and water temperature fluctuations that were previously recorded during the outdoor study.Preliminary validation results indicate that the indoor LED-lighted and temperature controlled raceway is able to simulate the outdoor cultures."Testing in indoor ponds under climate-simulated conditions is a low-risk way to confirm a strain's superior performance before transitioning to cultivation in outdoor ponds.The integrated strategy, consisting of strain characterization, modeling of growth characteristics, climate-simulated testing, and outdoor pond testing and validation of the strain characteristics provided an efficient and cost-effective screening strains for their potential to exhibit high biomass productivities in outdoor ponds.Several limitations impede algal biofuel from attaining cost-effective commercial viability.These include the need for optimized production systems and stable, resilient algae cultures that are resistant to invading organisms.Within these systems, major contaminants have included unwanted bacteria, fungi, algae, and grazers .NAABB examined environmental parameters that promote growth and lipid accumulation of N. salina while keeping invading organisms at a minimum.This included testing productivity and stability of algae polycultures compared to monocultures.In a series of experiments conducted in open aquaria in a greenhouse, we determined optimum salinity , pH, temperature and nitrogen source , to maximize N. salina production and minimize other algae competitors and predators.Table 1 shows the effects of salinity on invaders that appeared in N. salina cultures.We found that N. 
salina grows fastest at salt concentrations of 22–34 parts per thousand, pH 8–9 and a temperature around 20 °C.Highest biovolume was achieved using nitrate or a mixture of nitrogen sources .Lipid accumulation was enhanced when increasing salinity from 22 to 34 ppt upon reaching stationary phase .Invaders were reduced at a salinity of 22 ppt, pH above 8, and temperatures above 32 °C and using ammonium chloride as a nitrogen source.While N. salina still showed optimum growth at 22 ppt salinity and pH above 8, the higher temperatures and ammonium chloride had negative impacts on growth.In a laboratory experiment, differences in algae monocultures versus polycultures were studied .Polycultures consisted of 2, 4, and 6 species, where each of the 6 species was grown individually in the monocultures.Polycultures were assembled from equal numbers of big algae species and 3 small, fast-growing species.We demonstrated that growing several algae species together in polycultures may lead to a doubling of productivity and make algae cultures more resistant and resilient to disturbances by predators.Commercial-scale algae production for biofuels is comparable to crop production in traditional agriculture.Proper decisions concerning media addition, nutrient stressing, harvest, and invasion control should be made in a timely manner, in particular in open-pond conditions.Moreover, unlike traditional agriculture, where such decisions are typically made on a time scale of days, algae are very susceptible to environment change; therefore, management decisions should be made on a time scale of hours.Automated sensing and control systems can perform real-time monitoring of actionable data for production-scale ponds.NAABB developed and evaluated four prototype sensing systems: an algae optical density sensor for biomass concentration measurement ; Nile Red-staining-and-fluorescence-based algal neutral-lipid quantification ; near-infrared and mid-infrared analysis to characterize algal biomass composition ; and algal thin-film infrared-attenuated total reflectance lipid sensor .The OD sensor constantly pumped pond samples through the sensing chamber for wavelength-specific energy-transmission measurements .The field test results of the OD sensor at the Pecos test facility demonstrated that the sensor could accurately measure the OD of the culture within the pond, trace algae growth, and pinpoint cultivation events such as media addition and culture transfer.LEDs at two central wavelengths were selected as the light sources for OD measurement.The first is in the red region, with a central wavelength of 670 nm, a spectral bandwidth of 25 nm, and a radiant flux of 2.5 mW.The second LED is in the NIR region, having a central wavelength of 890 nm, a spectral bandwidth of 80 nm, and a radiant flux of 4.5 mW.It should be noted that, for algae, a widely accepted wavelength for spectrometer-based OD measurement is at 750 nm.In this study, we calibrated the sensor output to OD750 to assess the sensor performance.The inclusion of two wavebands added robustness to the sensor, and may allow other algae parameters, such as chlorophyll content, to be estimated.A laboratory protocol was developed toward potentially automating NRf-based measurements with a high degree of sensitivity to neutral-lipid content .A spectrofluorometer was used to identify NRf emission maxima and investigate the temperature effect; and a single-band fluorometer was used to investigate the effect of staining time and species.For all algae types, the NRf 
emission maximum was at 590 nm.Temperature had a large impact, with NRf intensity increasing almost proportionally with temperature.NRf signal increased from minute 0 to 4 after staining.Finally, NRf intensity was linearly correlated with neutral lipid content in algae culture.The particular strain of algae significantly affected NRf signal intensity, but within each strain NRf signal was highly correlated with lipid content.NIR combined with MIR spectroscopy of dried algae samples was used in an effort to quantify several biochemical components in dry algae biomass .Crude protein and heating value were estimated satisfactorily, followed by ash content, and neutral lipids.The absorption band at around 2920 cm− 1 was strongly related to total neutral lipid content.Finally, a simple, fast, low-cost, IR-ATR sensor was developed .The sensor included an integrated convective heater to dry algae-mixture droplets onto an internal-reflection element.The main components of the sensor are an infrared light-emitting diode that emits energy at 3.4 μm, a right-angle zinc-selenide IRE, and a photodiode to measure energy reflected through the IRE.A generalized model relating sensor measurements of two species to lipid content had a moderate root mean square error of 59 mg/g, but it showed a promising linear trend between sensor measurements and lipid content across multiple species.The time required for the sensor to take a measurement of one microalgae sample was 120 min, far less than that required with conventional laboratory methods.Given the depth of efforts related to strain discovery and improvement a key issue in cultivation was to determine the extent to which monocultures of elite strains could be maintained.Sensitive methods were developed for enumeration of elite algal varieties relative to “weedy” invader strains that are ubiquitous in the environment, and for cultivation management.The ideal monitoring strategy should be inexpensive and identify weedy algae long before they become prominent in cultures of elite varieties.NAABB developed and evaluated polymerase-chain-reaction–based tools for monitoring contaminants .In this work, primers were designed to amplify an approximately 1500-nucleotide region of the 18S rRNA gene from three major classes of algae: Bacillariophyceae, Eustigmatophyceae, and Chlorophyceae.These amplicons can be sequenced for definitive identification of strains, or they can be digested with a restriction enzyme to generate allele-specific fragmentation patterns for rapid, inexpensive characterization of strains and cultures.This work provides molecular tools to detect and monitor algal population dynamics and clarifies the utility, strength, and limitations of these assays.These include tools to identify unknown strains, to routinely monitor dominant constituents in cultures, and to detect contaminant organisms constituting as little as 0.000001% of cells in a culture.One of the technologies examined was shown to be 10,000 × more sensitive for detecting contaminants than flow cytometry .Another NAABB effort developed molecular monitoring tools using 16S ribosomal RNA gene and polymerase chain reaction amplification for identifying and tracking bacterial communities associated with the different cultivated microalgal species.These assays monitored by using the 18S rRNA gene as a marker by PCR amplification to assess the health of cultivated algal species, and anticipate, detect and mitigate pond crashes.The percent algal-associated bacterial community composition based 
on the 16S rDNA marker over an 8-week growth cycle of Chlorella sorokiniana DOE1412 in the Aquaculture Raceway Integrated Design outdoor cultivation pond in Tucson, AZ are shown in Fig. 8.The percentages were obtained by cloning each 16S rDNA PCR product, followed by DNA sequencing of thirty colonies per amplicon.Because algae contain chloroplasts with 16S rRNA genes, using the 16S rDNA as a marker allows for identification of algal species, and bacterial communities with a single molecular marker assay.Thus, in cultures for which Chlorella spp. is the predominant community member, experience has shown this to be a positive indicator of health.In contrast, when invaders predominate the percentage of algal clones among PCR amplicons is fewer, and bacterial clones increase in number; thus, serving as an indicator of stress.In the growth cycle shown, the predominant bacterium is detectable as an observed shift in bacterial community members predicted the eventual crash of the pond caused by the predator Vampirobrio chlorellavorus Gromov and Mamkayeva 1980, a bacterium known to infect certain species in the genus, Chlorella .Among the other bacterial residents identified using the 16S rRNA as a marker, none but V. chlorellavorus has been reported to be an algal pathogen.It should be noted that some of the bacterial community members cannot be identified to genus and/or to species using the bacterial sequence references available in the GenBank database or other available databases that specialize in bacterial isolates, in part because taxonomic activities have not kept abreast of molecular marker approaches.Environmental factors conducive to algal growth contribute importantly to phycosphere health, since extreme conditions and nutrient shortages can cause ‘abiotic’ stress and at times make the algal culture more susceptible to predators, viral and other pathogens, and scavengers.Although not well defined, some bacteria associated with algae have been shown to positively or negatively influence growth, survival, and stability, greatly affecting feedstock yields .Molecular monitoring tools have been shown to be effective in gauging the health of the phycosphere and show great promise for monitoring algal composition and algal-associated bacterial communities to serve as a forecasting system of environmental fluxes that may be detrimental to algal growth.In addition, molecular diagnostics have facilitated the early detection of at least one bacterial pathogen, and in advance of pond crashes.Management strategies are being tested to abate Chlorella spp. attack by this predator.A variety of bench scale and 1000 L scale cultivation studies were performed to determine strain-specific growth parameters.The scope included investigations of poly versus mono cultures, water recycle strategies and use of impaired waters, including produced water and wastewater.The majority of the work discussed in this section was done with the NAABB production strain Nannochloropsis salina.The suggested medium for cultivation of N. salina is denoted f/2, a well-defined growth medium for marine microalgae; hence the control case for all data presented is growth on f/2 medium.Initial experiments were done in 12 outdoor 3 m2 raceways located at one of the small test-bed locations in Corpus Christi, Texas, to investigate the effects of nitrogen source on productivity for N. 
salina in batch and semicontinuous cultures.Batch culture treatments continued to produce more biomass over the course of the study than the continuous cultures, with peak growth and time between harvests remaining consistent.Productivity averaged 12.8 g/m2/d for batch cultures, compared to 10.9 g/m2/d for the continuous cultures.In the nutrient regime study, nitrogen in the f/2 medium was provided as either ammonium or nitrate on an equimolar basis to determine the optimal growth response with respect to nitrogen source.No significant differences in production were found between treatments.The ability to get the same production from the modified mix with ammonia and nitrate compared with the more expensive f/2 media greatly enhances the ability to produce biomass at reduced costs with this strain.Additional experiments were performed to compare the productivity of a monoculture of N. salina to mixed cultures of Phaeodactylum tricornutum and N. salina.Results suggest that the mixed culture grew better than or the same as the N. salina monoculture.In colder temperatures, the mixed culture did better suggesting that crop rotation strategies require more investigation.Another important part of cultivating and characterizing algae is to recycle water.After algal cultures reach stationary phase, the water or spent media is separated from the algae and ideally recycled to the raceway.Recycling of spent media should occur until algal growth is inhibited severely.Fig. 10 shows growth of N. salina as a function of generation.A generation is defined as a batch culture started with fresh inoculum but the water is recycled.Hence for this set of experiments 90% of the water was reused 8 times.Each time, sufficient nitrogen and phosphorous were added to the recycled water so that the initial concentration was always the same.When fresh inoculum and additional media are added, the algae grow well with very little change in biomass productivity as a function of generation.As shown in Fig. 
11, the lipid percentage varied from generation to generation; however, overall, these results demonstrate that water can be recycled multiple times, with some losses in overall lipid productivity to be expected. In the field, some systems operate in fed-batch or semicontinuous mode as opposed to batch mode. In this configuration, a percentage of the culture is removed and dewatered, the water is recycled to the reactor, and additional nutrients are added but not additional inoculum. In these cases, the lipid percentage varied more than when fresh inoculum was used, but water could again be recycled repeatedly with only modest losses in overall lipid productivity. Use of municipal wastewater for algal cultivation is an area of great interest as it provides an inexpensive source of nutrients. As part of NAABB, four studies were done. The first study specifically targeted primary wastewater treatment in arid climates, where unsustainable evaporative water losses preclude the use of open ponds for algae-based methods. The second study investigated whether or not the metals found in wastewater would be toxic to mesophilic algae. The two other studies investigated different types of wastewater obtained from Southwestern wastewater treatment plants. A complete treatment system operating on primary municipal wastewater was initiated with NAABB funding and evaluated at New Mexico State University. It is specifically designed for use only in hot arid climates. The algae-based treatment occurs in inexpensive horizontal photobioreactors designed to minimize evaporative water losses by minimizing gas flow rates. This system retains metabolically generated O2 and CO2 to maximize mixotrophic metabolism and boost biomass productivity. Measured O2 in the headspace at mid-day in summer did not exceed 30%, while the enhanced CO2 levels in the headspace dramatically reduced the probability of photorespiration, as in C4 plants. The entire system operates somewhat like green seed metabolism, whereby CO2 released by pyruvate dehydrogenase for fatty acid synthesis is recaptured by photosynthetic cells. In this case, the oxidation of reduced carbon via respiration by algae or heterotrophic microbes in the PBR can be recaptured by algae in the light zone. The PBR system experiences passive solar heat gain in hot arid environments, such that moderately thermophilic algae are required. Galdieria sulphuraria was chosen as the ideal algal component. In addition to thermotolerance to 56 °C, it possesses the most versatile heterotrophic capabilities known among the phototrophs. Galdieria grows at low pH, which rapidly destabilizes mesophilic, neutral-pH organisms in primary wastewater. The outdoor PBR system was shown to support the growth of Galdieria at productivities from 2 to 16 g/m2/day and to remove N and P to discharge limits in batch mode within 4 days. Acidophile-based wastewater treatment systems must also overcome challenges associated with the cost of acidification and potential effects of acid anions on downstream processes. G. sulphuraria naturally acidifies its growth medium, likely because of proton pumping after uptake of NH4+ and assimilation of NH3. Data from Oesterhelt et al.
demonstrate that adjusting the wastewater to pH 6.0 might be sufficient.Additional experiments will be required to directly test this hypothesis.To assess the potential impact of sulfate ions remaining from acidification on downstream hydrothermal liquefaction processing, Galdieria sulphuraria biomass was grown in outdoor PBRs like that shown in Fig. 12.H2SO4 was used for acidification and the resulting biomass concentrated to a 10% solid feed and converted to biocrude oil.The lipid content of the biomass was ~ 5% as total fatty acid methyl esters and the HTL biocrude yield was 19% by weight .Two other studies investigated different types of wastewater obtained from Southwestern wastewater treatment plants .In these, the metals investigated as potential toxicants included those present at the highest concentrations in regional municipal wastewaters.Compounds and their respective half maximal effective concentration values were as illustrated.Initial experiments involved N. salina in batch cultures that were simultaneously exposed to various multiples of the EC50 concentrations.Subsequent work in this area was designed to determine which metal species were the predominant source of observed toxicity.Fifty percent inhibition of the N. salina growth rate was observed in the culture amended with the Table 2 metals at 11 × their respective EC50 values.That is, zinc and copper were present at concentrations near mg/L levels—exceptionally high relative to their typical concentrations in regional municipal wastewater effluent.After determining that Nannochloropsis can grow in water that contains > 10 × the amount of heavy metals typically found in wastewater effluent, centrate was investigated.The basic experimental strategy was to substitute either wastewater effluent or a nutrient-rich sidestream developed during the dewatering of biosolids for the source of macronutrients in the f/2 medium.Effluent or the sidestream flow comprised fractions of the total liquid volume ranging from 5 to 100%.Salts were added to maintain a near uniform ionic strength.Relative growth rates and lower than normal terminal optical densities were taken as indications of inhibition.Results indicate that the addition of centrate derived from biosolids dewatering increased both the rate and extent of growth of N. salina at ratios ranging from 5 to 25% v/v.Higher fractional additions inhibited growth.Minor changes were apparent in the lipid compositions of the cells grown in centrate.It is apparent that those cells produced a larger percentage of fatty acids that were not recognizable based on the authentic standards utilized here.Furthermore, centrate addition virtually eliminated production of fatty acid C18:1n9 at every level of centrate addition.Additional tests were carried out using Nannochloropsis salina growing on secondary-treatment wastewater collected at the outflow before discharge or just before the chlorination stage from the Jacob A. 
Hands municipal wastewater treatment plant in Las Cruces, New Mexico.This plant uses an activated sludge protocol combined with a biological filter pretreatment and a final chlorination and SO2-dechlorination.This process produces an advanced secondary treated wastewater effluent.On bench-top shakers, without CO2 addition, algal growth was significantly slower in the wastewater effluent as compared to the standard f/2 substrate.The effect of wastewater bacteria on algal productivity and contaminant risk parameters was evaluated by comparing sterile wastewater to either raw treated wastewater or nutrient-amended treated wastewater.As expected, the algae grew to higher cell densities in the sterilized wastewater.For this water source, adding nutrients or fertilizer did not significantly increase the final cell density.We conclude that wastewaters, even partially treated to remove nutrients, are a viable source of nutrients allowing productivity levels similar to the ones obtained on standard growth media.The research program described has shown that the economic and environmental sustainability of a meaningful algal biofuels industry requires use of CO2 and fertilizer nutrients that are not derived from fossil fuels and that do not reduce the availability of fertilizer for agriculture.Recycling water or using otherwise impaired water can further increase the sustainability of biodiesel production from algae .One kilogram biodiesel requires approximately 3726 kg water, 0.33 kg nitrogen, and 0.71 kg phosphate if freshwater is utilized .Therefore, the use of wastewater as the source of water and nutrients is requisite to the development of algal biofuel technology in Arizona and other parts of the semiarid Southwest.Straightforward calculations indicate that without nutrient recovery and reuse, the supply of municipal wastewater cannot satisfy large scale biofuel nutrient requirements.In their recent report titled Sustainable Development of Algal Biofuels in the United States, the National Research Council of the National Academies concluded: “…with current technologies, scaling up production of algal biofuels to meet even 5% of U.S. transportation fuel needs could create unsustainable demands for energy, water, and nutrient resources…”.Identification of alternative water and nutrient sources is necessary to make algal biofuels a sustainable energy resource.Municipal wastewater is among the most promising sources of water and nutrients for algal growth.However, annual production of 39 billion L of algal biofuel, which is equivalent to 5% of annual U.S. demand for transportation fuels, requires at least 123 billion L of water, 6 million MT of nitrogen and 1 million MT of phosphorus.Without recycling, it would take over 1 ×, 4 ×, and 5 × the entire U.S. population, respectively, to generate sufficient wastewater to provide that much water, N, and P. 
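As a rough illustration of how the per-kilogram requirements quoted above scale with production, the short Python helper below multiplies the stated coefficients (3726 kg water, 0.33 kg nitrogen, and 0.71 kg phosphate per kg of biodiesel, assuming freshwater cultivation with no recycling) by an arbitrary biodiesel output. The function and variable names are ours, and the example quantity is hypothetical.

```python
# Minimal sketch: scale the per-kilogram resource coefficients quoted above
# (freshwater cultivation, no water or nutrient recycling credit) to an
# arbitrary biodiesel output. Names and the example quantity are illustrative.
PER_KG_BIODIESEL = {
    "water_kg": 3726.0,
    "nitrogen_kg": 0.33,
    "phosphate_kg": 0.71,
}

def resource_demand(biodiesel_kg: float) -> dict:
    """Return total water and nutrient demand (kg) for a given biodiesel mass (kg)."""
    return {name: coeff * biodiesel_kg for name, coeff in PER_KG_BIODIESEL.items()}

if __name__ == "__main__":
    # Example: demand for one tonne (1000 kg) of biodiesel without any recycling.
    for name, amount in resource_demand(1_000).items():
        print(f"{name}: {amount:,.0f}")
```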
Therefore, nutrient and water recycling/reuse are fundamentally critical for microalgae to be a sustainable energy source.NAABB had several efforts focused on developing new innovative approaches for the design and operation of the algal cultivation systems to improve algal biomass productivity and the associated capital and operating costs.These include new pond designs, mixing systems, CFD models for improved raceway design and the use of photobioreactor systems to produce high-yield inocula for large scale ponds.One of the causes of decreased algae production in open ponds is diurnal and seasonal temperature variation.The ARID system maintains temperature in the optimal range by changing the water surface area between day and night by draining the culture to a sump .A finite-difference temperature model of the ARID raceway was developed in Visual Basic for Applications .The model accurately simulated the temperature changes in the ARID raceway during winter cultivation experiments where the algal growth rate of N. salina in ARID and conventional raceways was compared.The ARID raceway remained 7–10 °C warmer than conventional raceways throughout the experiments.NAABB efforts continued to make design improvements and energy evaluations of the ARID system .Although the original ARID system was an effective method to maintain temperature in the optimal growing range, the pumping-energy input was excessive and the flow mixing was poor.Thus, an improved high-velocity raceway design was developed to reduce energy-input requirements.This was accomplished by improving pumping efficiency, optimizing the operational hydraulic parameters, and using a serpentine flow pattern in which the water flows through channels instead of over barriers.A second prototype ARID system was installed in Tucson, Arizona, and the constructability, reliability of components, drainage of channels, and flow and energy requirements were evaluated.Each of the energy inputs to the raceway were quantified, some by direct measurement and others by simulation.An algae growth model was used to determine the optimal flow depths as a function of time of year.Then the energy requirement of the most effective flow depth was calculated.The Biomass Growth Model was added to the ARID raceway model.The accurate estimates of light transmission and temperature enable an accurate prediction of algae growth for various raceway configurations, depths, and operational schemes.The model was used to compare ARID raceway algae growth with conventional raceway algae growth at different flow depths and then to simulate daily and monthly production values for different scenarios.The model was run for Tucson, Arizona, and showed that the ARID raceway had much higher production than a conventional raceway in winter, significantly higher production in spring, the same production in the last month before the monsoon, and similar production during the monsoon months.A new airlift-driven raceway reactor configuration was developed for energy-efficient algal cultivation and high CO2 utilization efficiency.Advantages of this configuration were demonstrated in a 23 L version of the configuration under artificial lighting and laboratory conditions and in an 800 L version under natural light and outdoor conditions .Results from side-by-side growth studies conducted with N. salina in the two raceways, one with the modified air-lift system and the other an identical raceway with a standard paddlewheel, are summarized in Fig. 
22.The CO2 input into paddlewheel and airlift systems was identical on a culture volume basis.The higher biomass and lipid production due to the efficient CO2 supply resulted in higher net energy gain in the airlift-driven raceway than in the paddlewheel-driven raceway.The net energy output of the paddlewheel-driven raceway is estimated as 0.03 W/L whereas that in the airlift raceway is 0.15 W/L.Based on the laboratory tests and the field tests conducted in this research, the proposed airlift-driven raceway can be seen to be more energy-efficient than the traditional paddlewheel-driven raceway.To quantify energy efficiency, biomass productivity per unit energy input is used rather than the traditional measure of volumetric biomass productivity.Based on this measure, performance of the airlift reactor configuration is shown to be comparable to or better than those reported in the literature for different PBR designs .In light of the improved energy-efficiency and the higher CO2 utilization efficiency demonstrated in this study under laboratory and outdoor conditions, the proposed airlift-driven raceway design holds promise for cost-effective algal cultivation.The mathematical model of the airlift-driven raceway developed as part of this study was validated using growth data on two different algal species under indoor and outdoor conditions.The predictive ability of this model was shown to be high.Discussions of microalgal cultivation systems have typically focused on either open raceway systems or closed PBRs but this is likely to be a false dichotomy as there are applications best achieved using one or the other or both options in concert .For example, the use of highly efficient PBR systems for quality-controlled inoculum production at maximum rates for open raceway ponds could likely improve yields and cultivation system stability and reliability.PBRs integrated with large-scale open raceway production systems also represent a risk-mitigation system that can quickly repopulate a large-scale open raceway pond facility after a culture crash.PBR volumetric productivities can be 10-fold higher than open raceway ponds due to shorter light-paths and better control of culture parameters such as light, temperature, CO2, and mixing.Biomass and lipid productivity data for Nannochloropsis salina were collected from operations of the Solix Algredients Inc.PBR system over several years conducted in a serial batch mode with a portion of each harvest used to start the next batch .Harvest densities ranged between 2 g/L and 3 g/L.Some batches were harvested at lower densities due to low growth rates in low light periods during winter cultivation.A select number of batches were inoculated as low as 0.25 g/L and harvested at 6 g/L.Several sensitive tools were used in follow up studies to demonstrate that N. salina cells dominated these cultures in terms of cell numbers .Together, these studies document multi-year, stable cultures of this strain.Nevertheless, even this sophisticated photobioreactor system did not support pure monocultures.NAABB tested a newer version of the Solix Algredients Inc.PBR with the same media and culture, N. 
salina CCMP1776 from the previously mentioned Solix studies.The lower range of required inoculation densities and upper range of harvest densities was investigated with this system.Harvest densities as high as 6 g/L were observed in the same period.To achieve this growth range the system was fed a second batch of nutrients four days after inoculation.The ability of the system to operate at high linear growth rates over these density ranges supports the use of the system as an industrial scale cultivation technology for both stand-alone production and as an inocula source for large-scale integrated PBR/open-raceway pond systems.Fig. 23 shows a data plot for a number of production runs in the Solix PBR over the past several years indicating the relationship between final culture density and lipid content as a percentage of dry weight.This data plot shows that yields exceeding 5 g/L with 50% lipid have been achieved using the Solix PBR system.Moreover, these results have been confirmed in both small- and large-scale PBR systems with efficient use of nutrients and CO2.Actual operation of these PBR systems to produce inocula for open ponds would most likely focus on rapid biomass productivity under nutrient-sufficient conditions versus lipid accumulation under nutrient limitation, since this would significantly increase biomass productivity for providing seed for large-scale pond systems.An important aspect of the NAABB program was algal cultivation to provide biomass for downstream processing and analysis.Two sites were utilized: the Texas Agrilife facility at Pecos, Texas, and the Cellana facility in Kona, Hawaii.At Pecos, five algae strains, starting with N. salina as the baseline strain and four other selected strains, were cultivated.For each alga strain, two media were compared and productivities were determined, and batches were grown in 23,000 L open ponds with paddlewheels."At Kona, Cellana's ALDUO™ large-scale cultivation “hybrid” system of PBRs and open ponds was utilized.Each production system consists of six 25,000 L PBRs and three 450 m2 production ponds.All fluid transfers—including inoculations, nutrient additions, and harvest volumes—were operated and monitored by a remote process-control system.The first step performed prior to large-scale cultivation was to optimize the media to reduce the costs associated with growing algae at the 23,000 L scale.This was accomplished by replacing the nitrate with urea, the potassium phosphate with a mixture of monoammonium phosphate and potash, and the iron citrate with iron chloride.Each component was evaluated separately and the lowest quantity of replacement chemical that did not result in a decreased growth rate was used.The cost and quantity information for a common freshwater Chlorella sp. cultivation medium, BG-11 was compared to a much less expensive media developed for use in the field.The new media recipe is 90% lower in cost than the standard BG-11 media.Once the species had completed the media optimization testing at bench scale, intermediate scale tests were conducted in two medium raceways located in a greenhouse.Nine species were tested using this process at the Pecos site.Cultures of Chlorella sorokiniana DOE1412 were scaled-up from the bench to 800 L raceways on the BG-11 versus optimized media.The biomass productivity, lipid productivity, and FAME profile were monitored for both media formulations.Fig. 
25 shows the cultivation data as a function of time.Essentially the media was added slowly, starting with 5 L of culture in 20 L of media, then 30 additional liters were added, followed by 30 L more.Subsequently, the cultures were transferred outdoors and the volume was set to 250 L and media added up to 800 L.The algae grew as well on the optimized, less expensive media as on the BG-11 media; however, the optimized media is 10 times less expensive.The lipid content and lipid profile are shown in Fig. 26.This strain provides lipids with many unsaturated bonds and primarily consist of C18 compounds that are readily converted to fuels.The lipid profiles are similar for the two media regardless of reactor type.Additionally, Cellana conducted strain screening and optimization experiments using its midscale cultivation system, which is a stand-alone system of 24 PBRs and pond simulators, each of 200 L capacity.Cellana focused on N. oceanica, strain KA19, and optimized pH, salinity, total nitrogen, and cultivation time.At the large scale, five consortium strains were grown in 23,000 L open pond raceways in Pecos.Additionally, Cellana cultivated three species in their production facility: N. oceanica KA19, Pavlova pinguis C870 and Tetraselmis sp.Table 4 provides the amount of biomass provided to the consortium for downstream processing studies.On average, a productivity of 10 g/m2/day was obtained at both sites.More detail on the long-term and seasonal productivities from the Pecos testbed and other algal cultivation facilities is provided in the Sustainability review section.Full 100% media recycle showed no adverse effects on media composition, biomass, or lipid yield.However, changes in the concentration of divalent and transition metals over time in the cultivation system and algae remain unaccounted for.These changes appear to be nominally the inverse of total salinity variations.Over 15 cycles of growth using recycled media were accomplished without large increases in salinity.The use of the recycled media reduces the use of new well water and retains salts that would otherwise be purchased for addition, thereby drastically reducing the quantity of water withdrawn from the local shallow aquifer and potentially reducing costs.There were several lessons learned at the large-scale related to scalability, media recycle, pond depth, ash content, contamination, and process integration.Overall, data collected at mid-scale matched up very well with that at large-scale in terms of the biomass productivity, pond cycle, and the biochemical composition of the biomass.This confirms that mid-scale production systems are useful research tools that simulate microalgae performance in large-scale ponds in a cost-effective manner.An important aspect of cultivation is the use of media recycling.It is extremely important to reuse as much water for cultivation as possible to reduce input costs.Studies were performed in the lab using ion chromatography and analysis performed to determine nutrient uptake of each individual alga species as well as to determine the chemical balance of the media after the algae had been removed from suspension.During 2011 and 2012, media was recycled, showing no significant drop in productivity or lipid accumulation.It should also be noted that over time microelements present within the media increased in concentration within the recycled media the more it was reused for cultivation, suggesting a reuse limit.Also, recycled media had to be treated using specific amounts of bleach to 
remove any potential contaminants that were present over time.Alternative methods for sterilization of the recycled media are ongoing and included UV treatment similar to what is seen in the wastewater and aquaculture industries.Pond depth, depending on the time of year, has an effect on the overall performance of the algae cultures.During the summer months, increasing the speed of the paddlewheel and operating at a depth of 4–7 in.can reduce the overall temperature of the algae culture, keeping it more protected from overheating and thus maintaining high growth rates during the more extreme months of the year.However, this strategy will not work in arid regions with high evaporation rates and limited water resources.Slowing the paddlewheel down in the winter to help reduce evaporation and increasing the pond depth to provide more thermal protection helps prevent the culture from getting too cold, allowing the growth rates to stay competitive during the winter months.A reduction in growth rate was observed in the winter months due to temperature fluctuations and lower light levels, but through proper management and culture care, the Pecos facility has been able to stay operational year round.One other lesson learned at the large scale is related to ash content management.In large open areas, especially in the Southwest, dust frequently blows into the ponds.The dust increases the ash content of the culture and is undesirable in the downstream processes.A strategy was developed utilizing partial harvests to minimize the amount of dirt in the cultures.However, efficient harvesting methods that minimize dust require further investigation.Culture contamination is the most prevalent cultivation issue that was observed over the course of the NAABB cultivation projects.By using a batch cultivation system, contamination issues could be mostly contained, but from time to time either due to older cultures, rain events, or large dust storms, cultures would become contaminated during the production process.Microscope checks were performed on the batches at regular intervals to determine the rate in which each batch was becoming contaminated.An arbitrary threshold of 20% contamination was established to provide a decision point when cultures were deemed unusable.Contamination decreased when the ALDUO™ hybrid system, a combination of PBRs and open ponds, was used.Species-specific methods were also developed, such as the addition of salt to freshwater cultivation systems when the algae had some salt tolerance, pH shifts, and nutrient starvation.The final aspect of large-scale cultivation is gaining an understanding of how changes in cultivation methodologies affect downstream processing.Ash content was one of the most significant issues for processing through harvesting and extraction equipment.Obviously, less is better; however, strategies to mitigate large quantities of ash are still required.Also, the addition of metals and high salt concentrations greatly affect the feed value and may require further cleaning of the bio-oil prior to conversion since these compounds affect catalyst life; hence, process integration is extremely important and crucial as the industry moves forward.Significant progress was made in all four major thrust areas shown in the Cultivation task framework, thereby advancing toward the goal of cost-effective achievement of high annual biomass productivities in robust outdoor pond and hybrid systems in an environmentally sustainable manner.The key advance in our optimization and 
modeling was the development of a microalgae biomass growth model.This model utilizes experimentally determined species-specific parameters and was validated using outdoor pond cultivation data.The biomass growth model, in conjunction with the biomass assessment tool, enables the prediction of monthly and annual biomass productivities of a given strain in hypothetical outdoor pond cultures located across the United States.Furthermore, an indoor raceway pond with temperature control and LED lighting to simulate sunlight spectrum and intensity was designed and successfully operated under climate-simulated conditions.This system allows one to simulate the climate conditions of any geographical location and determine how algae will grow in a location of interest.This innovative modeling capability combined with the LED system can be used as a low-risk and cost-effective way of screening strains for their potential of exhibiting high biomass productivities in outdoor ponds, for finding the best match between a given strain and climate, and for identifying the optimum pond operating conditions, thereby accelerating the large-scale cultivation of promising high-productivity strains while quickly eliminating suboptimal candidates.It was demonstrated that microalgae can be successfully cultivated on municipal wastewater and produced water resulting from oil and gas exploration.Recycling water and media or the nutrients in waste biomass can further reduce the costs of inputs.A water management strategy that includes the use of low-cost impaired waters and recycle strategies for cultivation will be necessary for the anticipated large-scale production of microalgae biofuels.With respect to the task of developing and operating innovative cultivation systems, the key advance was the modeling, testing, and design improvements of the ARID pond culturing system.This system provides improved temperature management, i.e., maintaining water temperatures within the optimum range for a given microalgae strain throughout the year.Modeling results and measurements demonstrated that water temperatures during the winter remained 7–10 °C warmer than in conventional raceways.As a result of better temperature management, the ARID system was shown to have significantly higher annual biomass productivities compared to conventional raceways.In conjunction with engineered reductions in the energy use for pumping and mixing, cultivation in the ARID system was also shown to have significantly higher energy productivity than conventional raceways.By extending the growing season and modulating temperatures, the impact of the ARID system could be profound by significantly increasing annual biomass productivities for any microalgae strain of choice.Collectively these improvements result in approximately an 18% reduction in cost of production of algal biomass in comparison to traditional open-pond systems with paddlewheels.With respect to the task of large-pond cultivation and biomass production, a media cost reduction of 90% in chemicals was demonstrated over the use of typical laboratory media formulations. 
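To illustrate the general structure of the species-specific biomass growth model summarized above (a depth-averaged, light- and temperature-dependent specific growth rate during the day, and respiratory biomass loss at night), a simplified Python sketch follows. The functional forms and all parameter values are hypothetical placeholders chosen only to reproduce the qualitative sawtooth day/night pattern; they are not the calibrated NAABB model inputs.

```python
# Illustrative sketch only: a simplified depth-averaged, light- and temperature-
# dependent growth model with night-time respiration loss, in the spirit of the
# species-specific biomass growth model described above. Functional forms and
# every parameter value below are hypothetical placeholders.
import math

MU_MAX = 0.10                # maximal specific growth rate at optimal T, saturating light (1/h)
T_OPT, T_WIDTH = 30.0, 8.0   # optimal temperature (deg C) and tolerance width
I_K = 150.0                  # light half-saturation constant (umol photons/m2/s)
K_ABS = 0.15                 # biomass light absorption coefficient (m2/g)
R_DARK = 0.006               # dark respiration loss rate (1/h)
DEPTH = 0.20                 # pond depth (m)

def mu(light_surface, temp, biomass):
    """Depth-averaged specific growth rate (1/h) for a well-mixed pond."""
    if light_surface <= 0:
        return -R_DARK                                       # night: biomass loss only
    f_temp = math.exp(-((temp - T_OPT) / T_WIDTH) ** 2)      # Gaussian temperature response
    # Average a Monod-type light response over depth; light attenuated by Beer-Lambert.
    n, total = 20, 0.0
    for i in range(n):
        z = (i + 0.5) * DEPTH / n
        i_z = light_surface * math.exp(-K_ABS * biomass * 1000 * z)  # biomass g/L -> g/m3
        total += i_z / (i_z + I_K)
    return MU_MAX * f_temp * total / n

def simulate(days=20, x0=0.05):
    """Hourly Euler integration of dX/dt = mu*X; returns biomass (g/L) at the end of each day."""
    x, daily = x0, []
    for day in range(days):
        for hour in range(24):
            light = 2000 * math.sin(math.pi * (hour - 6) / 12) if 6 <= hour <= 18 else 0.0
            temp = 22 + 8 * math.sin(math.pi * (hour - 8) / 14) if 8 <= hour <= 22 else 22.0
            x += mu(light, temp, x) * x
        daily.append(round(x, 3))
    return daily

if __name__ == "__main__":
    print(simulate())
```

In the published approach, the inputs to such a model are measured experimentally for each strain (maximum specific growth rate as a function of temperature and of light, dark biomass loss rates, and the scatter-corrected biomass light absorption coefficient), and the model is driven by recorded pond light and water temperature data.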
More than 1500 kg of biomass from eight different algal species was generated in the large-scale facilities at Pecos, Texas, and Kona, Hawaii for downstream processing and testing. By successfully demonstrating large-scale biomass production, significant progress was made toward the goal of commercial microalgae biofuels generation. A key development was the ability to move strains isolated from the prospecting effort from the laboratory to full production in outdoor pond systems, and subsequent downstream-processing of the strains to fuels and coproducts. Future work should continue with the characterization of new strains using the LED climate-simulation system to optimize conditions for outdoor cultivation and to develop a crop-rotation strategy, along with conducting long-term cultivation trials in established testbeds at different locations using NAABB strains and various pond designs. The data from these trials should be used to inform DOE harmonized models. Scale-up efforts should include cultivation in impaired waters with recycling, evaluating the water chemistry, along with quality, and impact on sustained productivity; and cultivation of GMO strains first in the LED climate-simulation system prior to outdoor trials. These efforts should include mitigation strategies to minimize ash content and undesirable metals in algal biomass produced at large scale, and continue to demonstrate crop management strategies at scale. Finally, engineering optimization of pond design should continue in order to bring down capital and operating costs of large-scale cultivation systems. | The cultivation efforts within the National Alliance for Advanced Biofuels and Bioproducts (NAABB) were developed to provide four major goals for the consortium, which included biomass production for downstream experimentation, development of new assessment tools for cultivation, development of new cultivation reactor technologies, and development of methods for robust cultivation. The NAABB consortium testbeds produced over 1500 kg of biomass for downstream processing. The biomass production included a number of model production strains, but also took into production some of the more promising strains found through the prospecting efforts of the consortium. Cultivation efforts at large scale are intensive and costly, therefore the consortium developed tools and models to assess the productivity of strains under various environmental conditions, at lab scale, and validated these against scaled outdoor production systems. Two new pond-based bioreactor designs were tested for their ability to minimize energy consumption while maintaining, and even exceeding, the productivity of algae cultivation compared to traditional systems. Also, molecular markers were developed for quality control and to facilitate detection of bacterial communities associated with cultivated algal species, including the Chlorella spp. pathogen, Vampirovibrio chlorellavorus, which was identified in at least two test site locations in Arizona and New Mexico. Finally, the consortium worked on understanding methods to utilize compromised municipal wastewater streams for cultivation. This review provides an overview of the cultivation methods and tools developed by the NAABB consortium to produce algae biomass, in robust low energy systems, for biofuel production. |
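As a small illustration of the batch-health bookkeeping described earlier in this entry (microscope checks at regular intervals, with an arbitrary 20% contamination threshold as the decision point), the sketch below flags a batch as unusable once the observed contaminant fraction crosses that threshold. The data structure, counts and function names are hypothetical, not part of the reported workflow.

```python
# Hypothetical sketch of the 20% contamination decision rule described above:
# each microscope check records target-cell and contaminant counts, and a batch
# is flagged unusable once the contaminant fraction exceeds the threshold.
CONTAMINATION_THRESHOLD = 0.20  # decision point cited in the text

def contaminant_fraction(target_cells, contaminant_cells):
    total = target_cells + contaminant_cells
    return contaminant_cells / total if total else 0.0

def batch_usable(checks, threshold=CONTAMINATION_THRESHOLD):
    """checks: list of (target_count, contaminant_count) from microscope checks."""
    for target, contaminant in checks:
        if contaminant_fraction(target, contaminant) > threshold:
            return False  # batch deemed unusable at this check
    return True

# Example with hypothetical counts from three successive checks.
print(batch_usable([(980, 20), (900, 100), (700, 300)]))  # -> False
```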
551 | The evolution of green jobs in Scotland: A hybrid approach | Since the Kyoto agreement was signed there has been a significant global debate around reducing carbon emissions, and many regions and nations have adopted a target to reduce national greenhouse gas emissions. In Scotland the target is to reduce GHG emissions by 42%, relative to 1990 levels, by 2020. Given that the energy sector is a major source of emissions, the Scottish and UK governments have introduced policies to develop renewable energy or low carbon technologies to help meet these emissions targets. A prime example of this is the Scottish Government's target to generate the equivalent of 100% of gross electricity consumption from renewable technologies by 2020. This target builds upon Scotland's existing high level of renewable generation capacity, and natural advantage in renewable resources, principally wind, wave and tidal. If this 100% target is to be met it is expected that the size of the Scottish Low Carbon Economy will increase significantly with an associated increase in employment or so-called "green" jobs. The Scottish Government have made clear that their renewable electricity target is also required to assist in the "re-industrialisation" of Scotland, and have estimated that this sector could create an additional 60,000 jobs by 2020. Given these targets, it is important for policy makers to have robust measures of the employment in the LCE. However, estimates of the number of such jobs vary greatly depending on the source. Principally, this is because estimates use different definitions of the LCE, producing a variety of estimates of the scale of employment. While classifying jobs in operating renewable electricity devices in Scotland as "green jobs" would likely be uncontroversial, the inclusion of other activities may be more controversial and may be omitted in some measures of "green jobs". A widely used definition – indeed one used by the Scottish Government – captures activities in "Low Carbon Environmental Goods and Services". This covers a range of renewable, low carbon and environmental activities. The Scottish Government methodology produces an aggregate figure for employment in the LCEGS, however it covers only a specific period, usually a year, is costly to produce and is not typically produced on a regular basis. In this paper we propose a methodology which can produce a time series of employment in LCEGS. Our method combines the detail from "bottom-up" surveys with "top-down" time series data from official surveys. We use industrial data on Scottish employment by sector alongside information from a regional UK survey of employment in LCEGS to track the evolution of LCEGS employment annually between 2004 and 2012 – a time of significant development of low carbon and renewable energy technologies in Scotland. The approach which we use was first proposed by Bishop and Brand, who examined LCEGS employment in Plymouth, UK, focusing on a single year. We extend the approach firstly to the national level and secondly to show the evolution of the total number of jobs in LCEGS activities between 2004 and 2012. In doing so, we demonstrate how "bottom-up" and "top-down" data can be combined to produce a measure which can be updated frequently, can be used to measure progress towards targets for jobs in LCEGS and can be used to evaluate the employment "success" of energy policy. The paper proceeds as follows. The next section discusses different definitions of 'green jobs' and the ways in which they are
measured.Section 3 gives details on the methodology used in this paper.Section 4 provides our results and discussion, and the final section provides our conclusions and policy implications.Although measures to increase employment in “green” activities are a policy area for many countries and regions across the world, there are a wide range of definitions used to measure progress towards these goals.This occurs for a variety of reasons, which might be classified as either conceptual or empirical, and which are summarised in Sections 2.1 and 2.2, respectively, below.2,In Section 2.3 we review previous estimates of LCEGS employment in Scotland.There are two principle conceptual challenges.First, there is little agreement on which activities might be considered as “green”.Furchtgott-Roth, for instance, writes that “no one knows what green jobs are”.Noting the US Bureau of Labour Services definition as “jobs in business that produce goods or provide services that benefit the environment or conserve natural resources” leads to the apparent contradiction that, for example, in the case of two farmers producing the same crop, one would be classed as having a green job if that crop was used in biofuels, while the other would not be counted if her output was used in food production.As the worker may not necessarily know where her output will be used it makes it difficult to simply ask workers if they have what might be considered a “green job”.A second conceptual issue is with employment in the “supply chain”.Workers employed in the operation of renewable energy facilities would, without controversy, be included in a measure of green jobs.However, this employment may require inputs from other sectors, e.g. installers of offshore wind turbines will require vessels, which will in turn require the production of metals, engines, and fuel and so on.It would not be natural to consider employment in these kinds of intermediate sectors as “green” jobs, but nevertheless they are part of the supply chain for these green activities.Aside from these conceptual issues, and the empirical considerations which are the subject of the next section, there is another important issue to consider which is the language and implied definitions of “green jobs”.For instance, some authors refer to the “low carbon economy” while others prefer the “low carbon environmental goods and services sector” nomenclature.The LCEGS measure has become widely used in recent years in the UK.This measure provides a “bottom-up” definition of employment across a range of activities and services, including through the supply chain, while also providing comparable estimates for other countries around the world.Perhaps part of the rationale for the LCEGS measure is to understand more about the parts of the economy which are undertaking work in the low carbon area, without placing restrictions on the precise industrial activities that are included.In other words, the use of the LCEGS definition perhaps represents a move away from a focus on decarbonising the domestic economy to maximising the economic benefit from publicly supported investment in the low carbon economy.Given the adoption of this broader LCEGS definition by the Scottish Government, as we shall see in Section 2.3, it is the measure which we use here.There are two broad approaches which have been used in the literature to date to measure the number of “green jobs” in an economy.We can classify these as those based on Standard Industrial Classifications and those based on surveys.We 
refer to these in the rest of the paper as “top-down” and “bottom-up” approaches respectively.This classification between top-down and bottom-up is merely used to illustrate the different ways in which estimates of “green” employment have been produced.3,First, the “top-down” measures use the classification of employment to industries which is compiled from official statistics covering the whole economy.By identifying specific industrial activities as “green” and tracking employment in these categories, such measures provide a regularly updated metric of employment.Pew Charitable Trust, for instance, take this kind of approach to count the number of green businesses in the U.S., summing firms across 74 categories.These estimates of the number of green businesses have subsequently been used by Yi to understand the drivers of green business growth across US states, while Yi and Liu use the same SIC approach to measure green employment in China.The “top-down” approach has the advantage of being based on regularly updated and robust statistical measures of economic activity.A significant drawback however is that all activities within each SIC is considered as “green”.The Scottish Government classification of “Energy” for instance, counts employment in the SIC code – “Engineering related scientific and technical consultancy services” – while only a portion of activities in this sector will be for “green” activities.This “all-or-nothing” approach is therefore problematic in practice, and is one of the advantages of the hybrid approach that we explain in Section 3.Second, there are “bottom-up” surveys of employment in specific green or renewable industries; these have been widely used and cited.These surveys require a careful consideration of the boundaries of the survey.A critical distinction lies between the count of direct jobs, indirect jobs and induced jobs supported by the sector of interest.Examples of this kind of “bottom-up” study include Llera et al. and Blanco and Rodrigues.Llera et al. estimated the number of direct jobs in renewable energy on a regional economy, and show the importance of having detailed survey data.Blanco and Rodrigues meanwhile surveyed firms in the wind industry in the EU and established that there are 50,000 direct jobs in the wind energy sector in the European Union.In order to get a measure of the employment indirectly supported through the supply chain, Blanco and Rodrigues use input–output employment multipliers to estimate that in total around 100,000 jobs were directly and indirectly attributable to the wind energy sector in Europe.An alternative to using IO methods to quantify the jobs indirectly supported by the sector, would be to survey the supply chain directly.This is the approach that Scottish Renewables took.Through surveying firms across the renewable energy sector in Scotland, they discovered that there were 11,136 such jobs.Definitional boundaries are critical to survey based approaches.Some measures of “direct” jobs appear to include employment that should more appropriately be considered as employment in the supply chain, e.g. 
construction firms involved in production of the raw material for a turbine may be counted as "direct" jobs, rather than as activity supported indirectly through the activities of renewable energy. This issue was clearly present in the Scottish Renewables study where, of the total number of jobs in renewable energy in Scotland, some 30% were in the area of grid extension and upgrade work. These jobs were, in essence, construction jobs rather than "green" jobs. Alternative "bottom-up" measures of employment such as the employment in Low Carbon Environmental Goods and Services have the scope to capture total employment across identified green activities without being constrained to using top-down SIC categorisations. Additionally, by identifying activity across a wide range of areas connected to the "green" economy, the LCEGS measure itself covers total employment and so does not require the use of IO approaches, which are not always available for many regions or nations. Although this definition has been criticised by some for a lack of transparency, reproducibility and coverage of new firms in the "green" economy, it provides a widely used measure of employment in the green economy. Innovas provides a "bottom-up" estimate of the size of the LCEGS sector in the UK. This gathered primary data from over 720 sources and covered all the sectors which contribute to a low carbon economy, including research/development and the supply chain. Only companies where at least 20% of their outputs contributed to the LCEGS were included in the report. Their report identified three main sectors of the LCEGS: Environmental, Renewable Energy and Emerging Low Carbon Technologies. These sectors were further split into 23 sub-sectors and 2,490 individual activities. The final report estimated the overall size of the UK LCEGS sector in the 23 identified sub-sectors, as well as a regional breakdown. This was a resource-intensive study, as bottom-up studies are, and produced a large amount of data. Replicating this study to produce up to date estimates, even on an annual basis, would be a similarly time-intensive activity. According to the Scottish Government, low carbon employment in Scotland at that time was 70,000, and could increase by "at least 60,000 by 2020". It was further estimated that by 2015 the LCE of Scotland would comprise 10% of the total economy, and be worth around £12 billion in 2015–16. The estimated increase of 60,000 jobs by 2020 was anticipated to comprise 26,000 jobs in renewable energy, 26,000 in low carbon technologies and 8,000 in the environmental sector. Our proposed approach is to take the benefits of top-down data and combine these with bottom-up data to produce a regularly updated series of the number of jobs in the LCEGS sector in Scotland. Specifically, we wish to take the features of top-down data – particularly its coverage of employment in all sectors of the whole economy and that such statistics are regularly updated – and of the detail of bottom-up data to construct what we term a "hybrid" approach. The advantage of this method over the bottom-up approach is that it is less resource intensive than an annual survey, while it is also possible to produce updated estimates of the LCEGS. In our hybrid approach, SIC code data are used, in conjunction with other data sources, to determine the share of activity in each sector related to the LCEGS, in order to calculate the overall size of LCEGS jobs. The primary source of input data for our method was four-digit SIC codes. SIC coding in this format has 515 separate
activities, many of which will be not relevant to the LCEGS sector.Thus the first task was to filter the SIC codes to identify those codes which contributed to the LCEGS.These were identified in the Red Group report, which can be used in our measure for Scotland.From carrying out this filter, we identified that 141 SIC activities can be identified as being part of the 23 sub-sectors in the LCEGS definition.Once filtered, a mapping was carried out to identify the percentage of employment in each SIC code which constituted LCEGS employment to the 23 LCEGS sub-sectors.The first part of this task was to determine exactly which of the green SICs contributed to each of the sub-sectors.For each of the sub-sectors between 5 and 36 SIC activities were involved.One example of this is in the air pollution LCEGS sub-sector where there are 8 specific SIC activities, ranging from manufacture of non-domestic cooling and ventilation equipment, to foreign affairs.Initially it was assumed that the mapping for Scotland would be the same as that for the South West of England.In practice for most of the sub-sectors that this was a reasonable assumption.For instance, if Red Group identified that 2% of activity/employment in an SIC in the South West was part of an LCEGS category, then our first approach assumes that the same share of employment in that SIC in Scotland could be considered as part of employment in that LCEGS category.The South West of England has a similar population to that of Scotland and they have several industries in common, with renewables and the low carbon economy playing a major role in both.The calculation was then repeated using SIC employment figures for each year between 2004 and 2012 to produce the “unscaled” estimates of the evolution of employment in LCEGS in Scotland over this period.However, using our “unscaled” mapping, our estimates of LCEGS employment in Scotland in 2007, 2008 and 2009, was overestimated compared to the count of LCEGS in Scotland produced by Innovas for these years.The largest discrepancy was in two LCEGS categories: “vehicle fuels” and ”other fuels” which were nearly twice as large.Some of the SIC codes within these categories include oil- and chemical-related activities, which are a significantly greater in absolute terms in Scotland than the South West of England.We would expect therefore that it is likely that a smaller percentage of activity under these SICs would be appropriate to be classified as LCEGS for Scotland.This produces two series: a “scaled” and “unscaled” series for LCEGS employment in Scotland between 2004 and 2012.We explore trends in this series in Section 4.We encountered further issues with the SIC-based employment series for Scotland.From 2008 the SIC series was on a different industrial basis than prior to this point.The 4-digit SIC2003 format has 515 separate activities whereas the newer 4 digit 2007 format has 616, the increase in SIC activities being attributed to the economy changing overtime and more industries being created as technology advances.We use a conversion matrix to construct a consistent time series covering the period prior to 2007, including weighting SIC codes between the two basis and calibrating our results to available figures for common years.Additionally, the choice of time period is chosen as eight years, which gives sufficient space to assess the trend in LCEGS jobs.Also, the Innovas report provides a robustness check from the middle of this period.It is likely that the further the distance from the survey 
date, the less reliable the estimates of LCEGS employment are likely to be.This suggests the useful complementarity between updates from the hybrid method, and regularly revised LECGS surveys.The objective of the study was to determine the number of jobs in LCEGS in Scotland and how this number had evolved between 2004 and 2012.In the previous section two methods were described and both can be used to provide estimates for the number of LCEGS jobs.The unscaled method gives an estimate of 92,653 jobs in 2012, whereas the scaled method gives an estimate of 75,561 in the same year.As discussed in detail earlier, the difference between the two approaches is principally due to differences in the number of jobs estimated in the alternative fuels and other fuels LCEGS sub-sectors.Fig. 1 shows the level of LCEGS jobs estimated from the “scaled” and “unscaled” estimates for 2004 to 2012 and the figures from the Innovas report for Scotland which provides job numbers for 2007–2009.Fig. 2 shows aggregate employment in LCEGS in Scotland indexed from its 2004 value.This shows that, on both series, there is an increase in employment in LCEGS over the period as a whole.The “scaled” estimate of employment increases by 1.7% over this period, while the “unscaled” estimate increases by 5.54%.While this increase would be expected, due to the policy emphasis given by the UK and Scottish Government to developments in this area, it is interesting to note that employment in LCEGS sectors is not immune from the general economic climate; for instance, between 2008 and 2010 employment in LCEGS activities declined.Indeed from Fig. 2 we can see that the “scaled” estimate suggests that in 2010 employment in LCEGS was actually lower than in 2004.Fig. 3 shows the annual change for the aggregate“scaled” and “unscaled” estimates of employment alongside the annual change in employment for the Scottish economy as a whole.We see that employment in LCEGS activities was more volatile than overall Scottish employment.In only two years was the employment change in Scotland as a whole smaller – in percentage terms – than LCEGS employment.It is possible that this observed pattern is due to a “portfolio” effect operating on total Scottish employment, compared with the smaller number of sectors which are included within the LCEGS definition.This paper sought to provide empirical evidence for Scotland on the size of employment in low carbon activities, and create a trend series over a period of significant change to the Scottish energy sector.To do this, we extend the hybrid approach of Bishop and Brand, combining the quality of bottom-up surveys with the timeliness and whole-economy coverage of official statistics, classified by industrial sector.This has produced a timely approach to track developments in employment in these activities.Our results show that between 2004 and 2012 employment in LCEGS categories in Scotland grew, and that this was more volatile than aggregate employment in Scotland.Our estimated trend series, however, reveals how the “Great Recession” beginning in 2008 hampered the growth of employment in LCEGS.While it is not possible to determine what level employment in the low carbon economy might have reached in the absence of the Great Recession, the methodology employed here does allow us to measure the impact that it had on jobs in the LCEGS sector in Scotland.Our approach also enables us to track developments in Scottish LCEGS activity more generally, and in a timely manner.In a never ending quest to 
demonstrate the importance of government action in supporting or creating or rescuing jobs, the debate about the employment impacts of the renewable energy sector is starting to resemble a old fashioned English auction with constantly rising “bids” for the number of jobs being supported.This is silly.One would expect that as the renewable energy sector continues to develop and reach technological maturity, the balance of employment in this sector will move from building renewable energy devices to maintaining and servicing them; as a result the number of people involved in such activities will decline.To see this issue more clearly, consider what we know about the growth of renewable energy generation activities.These activities comprise one part of the broader LCEGS, and we can see that the growth rate of LCEGS jobs appears to be much lower than the growth rate of the installed capacity of renewable generation.In fact between 2007 and 2012 the number of LCEGS jobs declined whereas the installed capacity of renewable generation in Scotland more than doubled.This may well be symptomatic of a broader trend in LCEGS activities, as these activities reach technological maturity.As a result, rather than focussing on the aggregate number of jobs, policymakers could better focus their attention on the types of jobs being created and supported, and the wider spillover effects in the economy.What can the growth of the LCEGS sector do to increase human and physical capital in the country?,How can our developments and expertise in this sector be best exported to other countries?,This wider debate needs to be had.However for as long as we have “green job” targets we will need a means to measure progress towards these.What we have demonstrated in this paper is a pragmatic, transparent and robust methodology for the production of timely estimates of employment in the LCEGS, which we believe is an improvement on what is currently available in this debate. | In support of its ambitious target to reduce CO2 emissions the Scottish Government is aiming to have the equivalent of 100% of Scottish electricity consumption generated from renewable sources by 2020. This is, at least in part, motivated by an expectation of subsequent employment growth in low carbon and renewable energy technologies; however there is no official data source to track employment in these areas. This has led to a variety of definitions, methodologies and alternative estimates being produced. Building on a recent study (Bishop and Brand, 2013) we develop a "hybrid" approach which combines the detail of "bottom-up" surveys with "top-down" trend data to produce estimates on employment in Low Carbon Environmental Goods and Services (LCEGS). We demonstrate this methodology to produce estimates for such employment in Scotland between 2004 and 2012. Our approach shows how survey and official sources can combine to produce a more timely measure of employment in LCEGS activities, assisting policymakers in tracking, consistently, developments. Applying our approach, we find that over this period employment in LCEGS in Scotland grew, but that this was more volatile than aggregate employment, and in particular that employment in this sector was particularly badly hit during the great recession. |
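To make the hybrid calculation described in this entry concrete, the sketch below estimates LCEGS employment for one year by applying survey-derived sub-sector shares to top-down SIC employment counts and summing. The SIC codes, shares and employment figures shown are invented placeholders, not the actual Red Group mapping or Scottish employment data.

```python
# Hypothetical sketch of the hybrid ("top-down" x "bottom-up") LCEGS estimate:
# official SIC employment counts are multiplied by survey-derived shares of
# each SIC code's activity that falls within an LCEGS sub-sector, then summed.
# All codes, shares and counts below are invented placeholders.

# Survey-derived mapping: SIC code -> {LCEGS sub-sector: share of employment}
lcegs_shares = {
    "7112": {"Wind": 0.04, "Air pollution": 0.02},
    "2811": {"Wind": 0.10},
    "3821": {"Recovery and recycling": 0.35},
}

# Top-down employment by SIC code for one year (from official statistics)
employment_2012 = {"7112": 21000, "2811": 3500, "3821": 4200}

def lcegs_employment(shares, employment):
    """Return estimated LCEGS jobs per sub-sector and in total."""
    by_subsector = {}
    for sic, subsectors in shares.items():
        jobs = employment.get(sic, 0)
        for subsector, share in subsectors.items():
            by_subsector[subsector] = by_subsector.get(subsector, 0.0) + share * jobs
    return by_subsector, sum(by_subsector.values())

by_subsector, total = lcegs_employment(lcegs_shares, employment_2012)
print(by_subsector, round(total))
```

Repeating this calculation with each year's SIC employment figures yields an "unscaled" time series; rescaling selected shares so that the 2007–2009 totals match a bottom-up benchmark such as the Innovas figures gives a "scaled" series of the kind discussed above.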
552 | Scaling method of CFD-DEM simulations for gas-solid flows in risers | Gas-solid fluidization in riser reactors has received substantial interest as it is widely encountered in numerous industrial processes such as base chemical production, biomass gasification and catalytic cracking. The performance of risers has been extensively investigated over the past decades to acquire an in-depth insight into their hydrodynamics. Risers are usually operated at high superficial velocities under fast fluidization conditions. These systems exhibit a so-called "core-annulus" flow structure, which is characterized by a dilute solids up-flow in the core of the riser, and a dense down-flow close to the walls. The dense regions typically contain particle clusters, where the gas permeability is reduced, which negatively impacts the performance of risers as a chemical reactor. CFD-DEM simulations have the major advantage of having very good predictive capabilities. However, in most circulating fluidized beds the number of particles is on the order of 10^9–10^12, which is the biggest bottleneck of these simulations. Such particle numbers would quickly lead to prohibitively long simulation times. Considering the computational cost, it is desirable to develop an appropriate scaling method to substantially reduce the number of particles by increasing the particle size whilst maintaining the capability to capture main flow features. For gas-solid flows, Andrews and O'Rourke proposed the so-called "Multi-Phase Particle-In-Cell" approach, which is a parcel approach and has been widely applied to granular flows. Instead of tracking collisions between particles directly, the MP-PIC employs a simple "particle pressure" model to prevent particles from becoming closely-packed. Patankar and Joseph explored an Euler-Lagrangian numerical simulation scheme for particulate flow, and showed that the parcel approach is able to capture the basic flow features. However, this scheme is only strictly applicable when collisions do not play a dominant role in the flow behavior, meaning dilute flow. Sakai and Koshizuka developed a coarse-grained model, which considers the drag force and the contact force. Their model could simulate the 3D plug flow in a horizontal pipeline system accurately but is limited to systems with a small number of calculated particles. The similar particle assembly model proposed for large-scale discrete element method simulation was validated in the work of Mokhtar et al. In their model, particles with similar physical or chemical properties are represented by a single particle. The influence of coarse graining was studied in the literature and revealed that bulk volume fractions are independent of grain size provided the underlying forces scale quadratically with the grain diameter. Note that the latter study was restricted to dilute granular systems where only binary collisions occur. Whether the scaling is still valid in the intermediate to dense regimes is not clear. Alternatively, Liu et al. have modeled CFBs by modelling the gas-rich lean phase and separately modelling the solid-rich cluster phase using an equation for the cluster motion based on Newton's law. Additional models for the collision, coalescence and breakup of clusters were derived. Filtered two-fluid models have been constructed for coarse-grid simulation. Gao et al.
conducted a comparative evaluation of several model settings to assess the effect of mesoscale solid stress in a coarse-grid TFM simulation of gas-solid fluidized beds of Geldart A particles over a broad range of fluidization regimes.Various researchers have explored proper scaling rules for DEM simulations.To keep the contact forces constant, Radl et al. derived a scaling law for a linear-spring dashpot interaction model that enables tracking of clouds of particles through DEM-based simulation of scaled pseudo-particles.However this work did not consider the gas phase.In relatively coarse simulations the consideration of the gas-particle interactions were handled with the Energy Minimization and Multiscale Analysis.EMMS allows to model a coarse grained gas-particle flow, accounting for sub grid structures or clusters.This is needed when the assumption of a uniform porosity inside the grids is no longer valid.A sophisticated filtered drag model was established in literature for use in coarse-grid Euler-Lagrange simulations, highlighting the significant effect of particle clustering on the average slip velocity between particles and fluid and indicated how this clustering can be accounted for in unresolved EL-based simulations.The unclosed terms in the filtered model could also be constructed through a filtering operation of fine-grid resolved CFD-DEM simulations.Ozel et al. performed Euler-Lagrange simulations of gas-solid flows in periodic domains to study the effective drag force model to be used in coarse-grained EL and filtered EE models.A dynamic scale-similarity approach was used to model the drift velocity but the predictability of that model is not entirely satisfactory.One scaling law that could apply for the flow regime was constructed by Glicksman et al. by keeping the key dimensionless parameters constant, including the ratio of the inertial and viscous forces.This scaling law could be extended to model bed-to-wall heat transfer.A somewhat similar approach for the gas-particle interaction was proposed by Liu et al., where both the Reynolds and Archimedes numbers are scaled.Additional scaling parameters however are still required, to account for proper scaling of the particle contact forces A combination of hydrodynamic scaling as with Liu and the particle contact forces as in the work of Radl et al. 
is expected to provide better results.The clustering of particles in circulating fluidized beds continues to be a fundamental issue in gas-particle hydrodynamics and has been found to be significant in risers.A lot of effort has been dedicated to the characterization and quantification of clusters.However, discussions about how to scale cluster phenomena is still lacking.The objective of this paper is to obtain an appropriate scaling method to scale down the vast number of solid particles in large systems.We employ the CFD-DEM code to obtain computational data of a pseudo-2D riser reactor, and perform a full comparison with experimental data to study the hydrodynamics and associated cluster characteristics.The axial and horizontal profiles of time-averaged solids volume fraction and solids mass flux are analyzed to study the hydrodynamic behavior.The solids holdup and spatial velocity distribution of clusters are discussed to study the cluster phenomena.The present paper is organized as follows: in Section 2 we present the mathematical model, collision model and scaling method.The fluid and particle parameters and mapping conditions are described in Section 3.In Section 4, the scaled results are presented.Finally, the main conclusions are summarized in Section 5.where Rep is the particle Reynolds number.where τp represents the torque and Ip the moment of inertia.where kn is the normal spring stiffness, ηn the normal damping coefficient, δn the overlap of the two particles involved in the collision and meff the reduced mass of normal linear spring-dashpot system.Where en and et are the normal and tangential coefficient of restitution respectively, which are empirically determined.kn is chosen such that the maximum overlap between two colliding particles in the simulation is smaller than 1% of the particle radius.where the terms on the right hand side respectively represent forces due to gravity, far field pressure, drag and normal and tangential inter-particle contact.For simplicity we will subsequently drop the sub-script z in the subsequent equations.In order to obtain the same hydrodynamic behavior each of the four dimensionless groups on the right hand side should be kept constant.This means that if we change the particle diameter, other parameters should be changed in such a manner that the respective dimensionless groups remain constant.We will now consider each of the four terms indicated in Eq.when Re, ε and NAr are kept constant, the proper scaling would be obtained.The fourth group implied that the ratio between the normal and tangential contact forces remains constant.This depends on the applied contact model, but is usually guaranteed, as both forces depend on the particle diameter and mass in the same way.Which is fully consistent with the scaling parameters as defined by Feng and Owen.Note that in practice there will be a upper limit to the value of K that can still produce a faithful representation of the flow phenomena.It is expected that this limit is related to the length scales at the meso scale that can still be resolved, i.e. 
clusters and/or bubbles.As long as the scaled particles are sufficiently smaller than these meso-scales, it is expected that the scaling method will hold.To test the derived scaling rules, a pseudo-2D riser reactor with dimensions 1570 × 70 × 6 mm was simulated in our study.This riser was defined to serve as the base case for the series of CFD-DEM simulations.In the riser reactor, solid particles and gas flow co-currently upwards to the top of the riser, where the gas and particles are separated: gas will leave the system whilst particles are fed back into the riser from the bottom inlet region.Note that during insertion particle overlap was not accepted.No-slip boundary conditions were applied at all vertical walls.The particles are initially placed in a random position in the bottom section of the riser up to a height of 0.25 m.In the base case, there are 50.000 particles positioned in this domain, where the gas superficial velocity was set at 5.55 m/s for the base case.This is high enough to ensure proper particle circulation in the CFB system, yet it is low enough to produce clusters.The gas-solid interactions were represented by the Beetstra drag force correlation, whereas the collision parameters corresponds to properties of glass beads previously reported by Hoomans et al.Further details of simulation settings of the base case are specified in Table 1.To test the scaling, variations were made with respect to the base case, using three different scaling factors, i.e. K = 0.8, 1.25 and 2.The scaled parameters were scaled according to Eq. and are listed in Table 1 along with the other key parameters.A very important aspect to realize when scaling the particle size, is that the ratio between the particle size and other important spatial scales changes.In particular this involves the ratio of volumes of the computational grid cell and the particle) and the ratio of the shallow depth of the riser and the particle diameter.To check the importance of these two parameters, four scenarios of the scaled simulations were considered:Condition a, base case, i.e. 
no scaling of grid size or riser depth,Condition b, scaling of grid size and no scaling of riser depth,Condition c, no scaling of grid size, but scaling of riser depth,Condition d, scaling both grid size and riser depth.In Figure 1b and 1c the top views of the base case and cases of scaled riser depth are shown respectively.The scaled diameter is D1 = K D0, where D0 is the experimental riser depth.The scaled mapping parameters are specified in Table 2.Considering the fact that the grid number must be integer, in conditions b and d the volume ratio ΔV/Vp is kept almost constant.For all simulations, the last 10 s of the total 20 s were post-processed to obtain the time-averaged data.The computational cost was found to scale roughly linearly with the number of particles and computational cells.In this section the simulation results are presented, where first the overall flow patterns will be discussed.Subsequently, the solids volume fraction and mass flux distributions will be analyzed respectively, by making a comparison between simulation results and experimental data from literature.Finally, cluster characteristics will be discussed.In this sub-section a group of full-field configurations will be shown to study the gas-particle flow behavior.Figure 2 shows snapshots of gas void fraction while applying four scaling factors to the base case.It is noted that in the vertical direction there is similar tendency of gas volume fraction revealing a dense region close to the riser inlet and more dilute middle and top regions.This distribution can be explained by the nature of particle motion in circulating fluidized beds; particles are fed to the system at the bottom of the riser, resulting in more collisions between particles and consequently more particles residing in this region.In the horizontal direction, a so called “core-annulus” flow pattern prevails, which has as dilute core and dense annular regions.In the dense region particle clusters are observed with complex mutual interactions.Figure 3 shows a zoomed in section of the riser, which shows the cluster behavior in more detail.Clusters are formed in all four scaling scenarios, and appear to exhibit the same motion, moving downwards along the wall and upwards in the core region.When increasing the scaling factor K, it is noted that there are fewer particles with a larger diameter in the riser, which matches the definition of the scaling method.Clusters could be observed for all scaling scenarios but there appear to be less and smaller clusters with higher values of the scaling factors.The axial profiles of time-averaged solids volume fraction are shown in Figure 4.Every figure contains experimental data and four sets of simulation results representing different scaling approaches.It is noted that both experimental and simulation results show the same tendency in the sense that dense regions exist in the bottom of the riser while the system becomes more dilute with increasing axial direction.This matches the observations from Figure 2.In Figure 4, each sub-figure represents a mapping condition.Even though they all maintain the same range and tendency, sub-figures and show a better agreement with experiment than and.The grid cells cannot be too large because else the representation of the flow field is not captured and secondly the assumption of a uniform porosity does not hold.The grid cells cannot be too small either because the drag force correlations assume a porosity field and not discrete values of the two phases.Both case a, as well 
as case c show very similar behavior and difference when varying the scaling factor K.This suggest that the poor comparison of both cases with experiments is mostly related to their changes is grid size, and secondly, that the differences in case c cannot be attributed to the rise depth.K=0.8 seems to only show satisfactory results under condition b.In both condition a and condition c this is related to the fact that the grid mapping is not consistent and the grid volume to particle volume ratio is different amongst the different scaling factors.In condition d this could be related to the particle wall conditions, which are altered by the riser depth.Which underlying mechanism is acting there is object of further study.In Figures 5 and 6, the cross-sectional profiles of solids volume fraction are displayed.The axial positions are 0.3 m and 1.2 m respectively in each figure, and each figure contains four sub-figures, which represents four different mapping conditions.In each sub-figure four simulation results using different scaling factors are compared with the experimental data.It can be seen that all simulation results show the same tendency as the experimental data.In the horizontal direction, the typical U-shaped profile is obtained.In the axial direction, the solids holdup decreases with increasing heights, which matches what we observed in Figure 4.The U-shaped profiles possess a flatter solids holdup profiles at higher positions in the riser.The profiles of solids holdup in the dense region are shown in Figure 5.All the mapping conditions are acceptable except condition c which is scaling riser depth while keeping the grid size constant.While the volume ratio of grid size and particle is different, the ratio of riser depth and particle diameter is constant.As the fact that the riser is a pseudo-2D bed with no-slip boundary condition for the front/back walls, when scaling the riser depth, the bridge formation of gas-solid hydrodynamics would be changed.Furthermore conditions b and d are suitable for all cases which applied the closured scaling up of the particle diameters.In Figure 6 the solids holdup profiles in the dilute region are displayed.It can be seen that at axial position H = 1.2 m the U-shaped profiles possess an asymmetric distribution for conditions b and d, which are cases having constant ratio of computational cell and particle volume.This is due to the fact that the curved one-sided lateral solids outlet has an apparent effect on the solids volume fraction in these two conditions.where the product of the local solids volume fraction and solids velocity is time-averaged.Figures 7 and 8 show the simulated and experimental profiles of the solids mass flux at two axial positions 0.3 m and 1.2 m.The organization of these figures is the same as in section 4.2.The experimental data and simulation results exhibit the same tendency of solids up-flow in the core region of the riser and a relatively high down-flow close to the riser walls.At higher axial coordinates, the solids mass flux distribution becomes relatively flat.By comparing the numerical results for different scaling factors, it is observed that scaling conditions b and d produce a better performance than the other two approaches.In Figure 8, the profiles of solids mass flux at height = 1.2 m are shown.The distribution of solid mass flux for K = 2 becomes more uniform than the other cases.As previously mentioned, constant solids holdup thresholds are employed along the entire region of the pseudo-2D riser in order 
to detect and classify clusters.In this way, it is ensured that a uniform definition of clusters is used and that the quantified cluster-related properties are not influenced by a changing definition along the axial and cross-sectional directions of the riser.Clusters are defined as connected regions with local solids volume fractions larger than 0.2 everywhere that have a minimum area of 60 mm2 and a dense core with at least one grid with φs > 0.4.In this work the detection of clusters was performed by a Matlab® script.Figure 9 displays the snapshots of particle velocity obtained from four scaling factor scenarios using mapping condition b. Clusters can be observed, with dense cores formed close to the wall and big dilute strands of particles that tend to move upwards.Comparing with Figure 3, especially in snapshot of K = 2 the cluster phenomena becomes more apparent in Figure 9.The represented clusters could be better captured when applying the scaling approach condition b than condition a.It is noted that for all the cases using different scaling factors and mapping conditions, the solids holdups of clusters are in a similar range with more clusters close to the walls whilst the clusters are sparsely distributed in the core region.This reveals that our scaling method captures the cluster phenomena quite well.It can also be seen that a higher scaling factor leads to less clusters in the whole system.Whilst the cluster numbers do not show an obvious difference among the applied scaling factor approaches in conditions b and d and the core-annulus distribution is also well captured.This indicates that the volume ratio of grid and particle has a significant influence on cluster solids holdup profiles.In Figure 11, profiles of cluster velocity are plotted for different scaling factors and mapping conditions.The typical symmetric core-annulus distribution is seen here, with mostly downwards flowing clusters near the wall whilst clusters in the core region are mostly moving upwards.This also matches with the observations from Figure 9.Using different scaling factors, the distribution of cluster velocity is quite similar and in the annulus region there are more clusters moving downwards except for the cases K = 2.In the set of K = 2 cases, the velocity distribution of clusters becomes more symmetric near the walls.In conditions b and d, the cluster number is more uniform in comparison with other conditions, so is the cluster velocity.Hence for further studies on cluster analyses, keeping the volume ratio constant would be advisable.In this work, a scaling method is developed and validated in detail by performing extensive simulations of a pseudo-2D circulating fluidized bed riser.To maintain the same hydrodynamic behavior seven gas and particle properties were scaled, such as the particle diameter, the gas viscosity, gravity, normal spring stiffness etc.Besides scaling the gas and particle properties, the grid size and riser depth are also scaled by considering four scaled mapping conditions.The influence of the scaling method, the scaled grid size and riser depth on the fluidized riser hydrodynamics has been quantified.The experimental and simulated solids volume fraction and mass flux profiles provide quantitative information about the performance of the scaling method on gas-particle flow behavior.Firstly, it is noted that the scaling method could well capture the typical U-shaped solids volume fraction profiles, and the particle up-flow in the core region and a relatively high down-flow 
close to the riser walls.Secondly, considering the different mapping conditions, while applying different scaling factors the solids volume fraction distributions match quite well for all mapping conditions except when the grid size is not scaled.In the dilute region, the asymmetric geometry of the riser outlet has an apparent effect on the cross-sectional profiles of solids volume fraction when the grid size is scaled.The solids mass flux profiles obtained for different scaling factors match much better in conditions b and d than in the other two conditions.In other words, when applying the scaling rules derived in this work, it is essential to also scale the value of ΔV/Vp.Furthermore, cluster characteristics were analyzed.By applying the scaling method, the simulation results exhibit similar trends in solids holdup and cluster velocity.When scaling is applied, there are more downwards flowing clusters close to the walls whilst the upwards clusters are sparsely distributed in the core region.It is noted that clusters could be better captured when applying the scaling method under conditions b or d than condition c. By analyzing the cluster holdups and velocity for different scaling factors, the spatial distribution of the clusters is more uniform in conditions b and d, and so is the cluster velocity.Finally, the core-annulus distribution is also well captured.This confirms that scaling of ΔV/Vp is essential for faithful prediction of cluster characteristics.L. Mu: Investigation, Software, Writing - original draft.K.A. Buist: Supervision, Writing - review & editing.J.A.M. Kuipers: Conceptualization, Supervision, Writing - review & editing.N.G. Deen: Conceptualization, Methodology, Supervision, Writing - review & editing. | In this paper a scaling method is proposed for scaling down the prohibitively large number of particles in CFD-DEM simulations for modeling large systems such as circulating fluidized beds. Both the gas and the particle properties are scaled in this method, and a detailed comparison among alternative mapping strategies is performed by scaling both the computational grid size and the riser depth. A series of CFD-DEM simulations has been performed for a pseudo-2D CFB riser to enable a detailed comparison with experimental data. By applying the scaling method, the hydrodynamic flow behavior could be well predicted and cluster characteristics, such as cluster velocity and cluster holdups agreed well with the experimental data. For a full validation of the scaling method, four mapping conditions with different ratios of the grid size and particle volume and of modified ratio of riser depth to particle size were analyzed. The results show that in addition to hydrodynamic scaling of the particle and fluid properties, scaling of the dimensions for the interphase mapping is also necessary. |
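As an illustration of the cluster definition used in this entry (connected regions with local solids volume fraction above 0.2 everywhere, a minimum area of 60 mm², and a dense core with at least one cell above 0.4), the sketch below applies that rule to a 2D solids-fraction field. The original detection was performed with a Matlab script, so this Python version is only an assumed equivalent, and the cell area used to convert cell counts to mm² is a placeholder.

```python
# Hypothetical re-implementation of the cluster definition described above:
# connected regions with solids volume fraction > 0.2 everywhere, a minimum
# area of 60 mm^2, and at least one cell with solids fraction > 0.4.
# The original work used a Matlab script; the cell area here is a placeholder.
import numpy as np
from scipy import ndimage

def detect_clusters(phi_s, cell_area_mm2=1.0,
                    phi_lo=0.2, phi_core=0.4, min_area_mm2=60.0):
    """Return a list of boolean masks, one per detected cluster."""
    labels, n = ndimage.label(phi_s > phi_lo)   # connected regions above the dilute threshold
    clusters = []
    for i in range(1, n + 1):
        region = labels == i
        area = region.sum() * cell_area_mm2
        if area >= min_area_mm2 and np.any(phi_s[region] > phi_core):
            clusters.append(region)
    return clusters

# Example on a synthetic solids-fraction field containing one dense blob.
field = np.zeros((100, 50))
field[40:52, 10:22] = 0.3
field[45:48, 14:17] = 0.45
print(len(detect_clusters(field)))  # -> 1
```

Cluster-level quantities such as solids holdup and cluster velocity could then be obtained by averaging the field or particle data inside each returned mask, which is the kind of post-processing reported in this entry.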
553 | Systematic Evaluation of Pleiotropy Identifies 6 Further Loci Associated With Coronary Artery Disease | The study consisted of discovery and replication phases and has been described in more detail elsewhere.Briefly, the discovery cohort included 42,335 cases and 78,240 control subjects from 20 individual studies; the replication cohort, which was separately assembled and ascertained to have no sample overlap with the discovery cohorts, included 30,533 cases and 42,530 control subjects from 8 studies.With the exception of participants from 2 studies in the replication cohort who were of South Asian ancestry, all participants were of European ancestry.Samples were genotyped on the Illumina HumanExome BeadChip versions 1.0 or 1.1, or the Illumina OmniExome arrays followed by quality control procedures as previously described.In discovery samples that passed quality control procedures, we performed individual tests for association of the selected variants with CAD in each study separately, using logistic regression analysis with principal components of ancestry as covariates.We combined evidence across individual studies using an inverse-variance weighted fixed-effects meta-analysis.Heterogeneity was assessed by Cochran’s Q statistic.In the discovery phase, we defined suggestive novel association as a meta-analysis p value ≤1 × 10−6.For variants with suggestive association, we performed association analysis in the replication studies.We defined significant novel associations as those nominally significant in the replication study and with an overall p value <5 × 10−8.To identify any association between the novel loci and gene expression traits, we performed a systematic search of cis-expression quantitative trait loci.To identify candidate causal SNPs at the new loci, we annotated each of the lead variants as well as SNPs in high linkage disequilibrium on the basis of position, overlap with regulatory elements, and in silico SNP prioritization tools.For both the novel loci and all previously reported CAD loci, we tested the association of the lead CAD-associated variant with traditional cardiovascular risk factors using publicly available GWAS meta-analyses datasets for systolic, diastolic, and pulse pressures; low-density lipoprotein cholesterol level; high-density lipoprotein cholesterol level; triglycerides level; type 2 diabetes mellitus; body mass index; and smoking quantity.The maximum size of these datasets ranged from 41,150 to 339,224 individuals.For variants available on the exome array with a known genome-wide association with a risk factor, we also compared the magnitude of the reported association with the risk factor to the observed association with CAD in our analysis.To identify any associations with other diseases or traits, we searched version 2 of the GRASP database and the National Human Genome Research Institute-European Bioinformatics Institute GWAS catalog, plus we collected all associations below 1 × 10−4.For all associations, we identified the lead variant for that trait or disease and calculated pairwise LD with the lead CAD-associated variant using the SNAP web server.In the discovery cohort, 28 variants not located in a known CAD locus showed association with CAD at a p value <1 × 10−6.No marked heterogeneity was observed, justifying the use of a fixed-effects model.We then tested these 28 variants for replication, and 6 variants showed both a nominally significant association in the replication cohort and a combined discovery and replication meta-analyses p 
value exceeding the threshold for genome-wide significance.As typical for GWAS findings, the risk alleles were common, and the risk increase per allele was modest.Forest and regional association plots for the 6 novel loci are shown in Online Figures 1 and 2, respectively.Interrogation of the 1000 Genomes Project phase 1 EUR data using Haploreg showed that the number of SNPs in high LD with the lead variant varied between 1 and 111.Apart from the lead variant at the KCNJ13-GIGYF2 locus, which is a nonsynonymous SNP, none of the other loci had a variant affecting protein sequence in high LD with the lead variant.Notable cis-eQTL findings for the new loci are shown in Online Table 5 and functional annotation of the lead variant and variants in high LD appear in Online Figure 3.The main findings from these analyses are discussed here locus by locus.The lead variant, rs1800775, also known as −629C>A, is in the promoter of the cholesteryl ester transfer protein gene, which mediates the transfer of cholesteryl esters from HDL cholesterol to other lipoproteins and was placed on the array because of its association with plasma HDL cholesterol level.The risk allele is associated with lower HDL cholesterol and modest increases in plasma LDL cholesterol and triglycerides levels.Previous studies have shown that rs1800775 is itself functional in that the C allele disrupts binding of the Sp1 transcription factor resulting in increased promoter activity.This is in agreement with our annotation, which predicts this to be more likely to be a functional SNP than the only other SNP in high LD, rs3816117.Consistent with this, we also found associations between rs1800775 and CETP expression with the best eSNP in monocytes and liver, and previous studies have shown that the variant is also associated with plasma CETP level.The lead variant, rs11057830, and all 8 variants in high LD are located in a region of approximately 10 kb in intron 1 of SCARB1, which encodes SR-B1, a receptor for HDL cholesterol.Other variants at this locus have been associated with HDL cholesterol level.However, these HDL cholesterol variants are not in high LD with the CAD-associated variants identified here, which only have a modest association with plasma HDL cholesterol level, but a stronger association with plasma LDL cholesterol and triglycerides levels.rs11957830 was included on the array because of an association of the A allele with higher levels of vitamin E.Variants in high LD with the CAD risk allele at rs11057830 have also been associated with increased lipoprotein-associated phospholipase A2 activity.Analysis of eQTL identified an association between rs11057841, and expression of SCARB1 in the intestine.Functional annotation of the locus did not identify a strong candidate causal SNP, but rs10846744 overlaps a deoxyribonuclease I hypersensitivity peak in a region bound by several transcription factors.The lead variant, rs11172113, is in intron 1 of LRP1 and only has 1 other adjacent SNP in high LD.The risk allele of the lead variant has previously been associated with reduced risk of migraine, and there is an association of the alternate allele with reduced lung function.There are also associations at this locus for abdominal aortic aneurysm and triglyceride levels; however, these variants are in modest or low LD to the CAD-associated SNP.The lead variant overlaps a region containing peaks in deoxyribonuclease I hypersensitivity in several cells and tissues, including aortic smooth muscle cells, within a predicted enhancer 
element. We found associations between the CAD risk allele at rs11172113 and reduced expression of LRP1 in atherosclerotic and nonatherosclerotic arterial wall, as well as eQTLs in omental and subcutaneous adipose tissue. The lead variant, rs11042937, at this locus lies in an intergenic region between MRVI1, encoding inositol-trisphosphate receptor-associated cyclic guanosine monophosphate kinase substrate, a mediator of smooth muscle tone, and CTR9, which encodes a component of the PAF1 complex, with some SNPs in high LD located within intron 1 of MRVI1. The lead variant was included on the array because of a suggestive association with bipolar disorder and schizophrenia. There was no association of the locus with any cardiovascular risk factors, and we did not identify any eQTLs. Evidence for a regulatory function for either the lead variant or any of the SNPs in high LD was also weak. The lead variant, rs3130683, lies in the HLA complex in intron 1 of C2, which encodes the complement C2 protein. There are just 14 SNPs in high LD with the lead variant, but the CAD signal spans a region of approximately 300 kb including more than 20 genes. Apart from a single synonymous variant in HSPA1A, the other high LD variants are noncoding, with several of the variants showing evidence for regulatory functionality. Although there is a large number of eQTLs in the HLA region, most of these are variants with modest LD with the CAD-associated variants, and the only eQTL of note was with CYP21A2 expression in whole blood. rs3869109, another variant at the HLA locus approximately 700 kb away from the new lead variant, has been reported to be associated with CAD. In our discovery cohort, rs3869109 has a p value of association with CAD of 0.23. The lead variant, rs1801251, was included on the array for identity by descent testing; rs1801251 causes a threonine to isoleucine amino acid change at position 95 in KCNJ13, an inwardly rectifying potassium channel protein. However, this is not predicted to be functionally important. There is extended linkage at this locus, with more than 100 SNPs in high LD and the lead variant in a region of ∼170 kb also spanning GIGYF2. KCNJ13 is located entirely within GIGYF2 and transcribed in the opposite direction. A number of the associated variants are in annotated regulatory regions, with the top scoring candidate by in silico prediction, rs11555646, lying in the 5′-UTR of GIGYF2 close to the initiating methionine. There was no association of the locus with any of the cardiovascular risk factors, but we found eQTLs for the lead variant or a variant in high LD for both GIGYF2 and KCNJ13. We undertook an updated analysis of the association of all 62 CAD loci with traditional cardiovascular risk factors. The full results are shown in Online Table 6, and the significant associations are summarized in Table 2. Of the 62 CAD loci, 24 showed a statistical association at a Bonferroni corrected p value <8.32 × 10−5 with a traditional cardiovascular risk factor, with some loci showing multiple associations. The largest number of associations were with lipid traits, followed by blood pressure traits, BMI, and type 2 diabetes. Most associations were in the direction consistent with the epidemiological association of these risk factors with CAD, although a few displayed effects in the opposite direction. To inform the interpretation of these data, we conducted a complementary analysis for variants available on the array with a known genome-wide association with a risk factor; also, we compared the magnitude of the
reported association with the risk factor to the observed association with CAD in our data. Except for LDL cholesterol and BMI, the correlations between the 2 effects were either weak or insignificant. In a separate analysis conducted in the 150,000 participants in UK Biobank with currently released genotype data, we confirmed that none of the CAD-associated variants showed a sex difference in allele frequency. We next analyzed the association of the 62 CAD loci with other diseases and traits. When restricted to variants in high LD with the lead CAD variant, 29 of 62 loci showed an association with another disease/trait at a p value <1 × 10−4. Several loci showed multiple associations. Although in most cases the CAD-associated risk allele was also associated with an increased risk of the other disease or trait, this was not always the case. Furthermore, in some loci with multiple associations, the direction of association varied between diseases. This large-scale meta-analysis of common variants, including many with prior evidence for association with another complex trait, resulted in the identification of 6 new CAD loci at genome-wide significance. We also showed that almost one-half of the CAD loci that have been identified to date demonstrate pleiotropy, an association with another disease or trait. The findings added to our understanding of the genetic basis of CAD and might provide clues to the mechanisms by which such loci affect CAD risk. Our findings of a genome-wide association with CAD of a functional variant in the promoter of the CETP gene that is also associated with its expression and plasma activity have added to previous evidence linking genetically determined increased activity of this gene with higher risk of CAD. There has been a longstanding interest in CETP inhibition as a therapeutic target, primarily because of the effect on plasma HDL cholesterol level. However, several CETP inhibitors have recently failed to improve cardiovascular outcomes in large randomized clinical trials and, in 1 case, caused harm, despite markedly increasing plasma HDL cholesterol. Furthermore, Mendelian randomization studies have questioned the causal role of lower plasma HDL cholesterol in increasing CAD risk. Although previous studies have shown that the CETP genetic variant we report here affects CETP activity, the precise mechanism by which this variant modifies CAD risk remains uncertain. A notable finding was the association with CAD of common variants located in the SCARB1 gene. Association of variants at the SCARB1 locus with CAD was also reported by the CARDIoGRAMplusC4D consortium, but this did not reach genome-wide significance. The gene encodes the canonical receptor, SR-BI, responsible for HDL cholesteryl ester uptake in hepatocytes and steroidogenic cells. Genetic modulation of SR-BI levels in mice is associated with marked changes in plasma HDL cholesterol. Consistent with this, a rare loss of function variant in which leucine replaces proline 376 in SCARB1 was recently identified through sequencing of individuals with high plasma HDL cholesterol. Interestingly, despite having higher plasma HDL, 376L carriers had an increased risk of CAD, suggesting that the effect of variation at this locus on CAD is not driven primarily through plasma HDL. Indeed, there is only a nominal association of the lead CAD variant at this locus with plasma HDL cholesterol. The variant is also modestly associated with plasma LDL cholesterol and serum triglycerides. All 3 of these lipid associations are
directionally consistent with epidemiological evidence linking them to CAD risk and could, in combination, explain the association of the locus with CAD.However, the lead variant is more strongly associated with Lp-PLA2 activity and mass, which could provide an alternative explanation for its association with CAD.Irrespective of the mechanism, our findings, when combined with those of Zanoni et al., suggest that modulating SR-B1 may be therapeutically beneficial.After adjusting for multiple testing, we found that slightly more than one-third of the CAD loci showed an association with traditional cardiovascular risk factors.Although the vast majority of associations were in the direction consistent with the epidemiological association of these risk factors with CAD, as noted in the previous text with respect to loci affecting the HDL cholesterol level, this should not be interpreted as implying that these loci affect CAD risk through an effect on the specific risk factor.Indeed, for variants available on the array with a known genome-wide association with these risk factors, we found a poor correlation between the magnitudes of their effect of the risk factor and their association with CAD in our dataset except for LDL cholesterol.Nonetheless, formal causal inference analyses, using Mendelian randomization, have implicated LDL cholesterol, triglyceride-rich lipoproteins, blood pressure, type 2 diabetes, and BMI as causally involved in CAD.Almost one-half of the CAD loci showed a strong or suggestive association with other diseases or traits with, in many cases, the identical variant being the lead variant reported for the association with these other conditions.Some of the associations with other traits—for example, coronary calcification or carotid intima-media thickness—are not surprising, as these traits are known to be correlated with CAD.Others, such as risk of stroke, might reflect a shared etiology.However, the mechanism behind most of the observed pleiotropy is not clear, although the findings could provide clues as to how the locus may affect CAD risk.As an example, 5 loci show strong associations with plasma activity and/or mass of Lp-PLA2.Lp-PLA2 is expressed in atherosclerotic plaques where studies have suggested a role in the production of proinflammatory and pro-apoptotic mediators, primarily through interaction with oxidized LDL.A meta-analysis of prospective studies showed an independent and continuous relationship of plasma Lp-PLA2 with CAD risk.However, it should be noted that Mendelian randomization analyses have not supported a causal role of secreted Lp-PLA2 in coronary heart disease, and phase III trials of darapladib, an Lp-PLA2 inhibitor, have shown no benefit in patients with stable coronary heart disease or acute coronary syndromes when added to conventional treatments including statins.Chronic inflammation plays a key role in both the pathogenesis of CAD and of inflammatory bowel disease.It is therefore interesting to note the association of the same locus at 15q22 with CAD as well as Crohn’s disease and ulcerative colitis.Association of this locus with CAD at genome-wide significance was recently reported by the CARDIoGRAMplusC4D consortium with the lead SNP showing strong linkage disequilibrium with the lead SNP associated with inflammatory bowel disease.Both rs56062135 and rs17293632 lie in a region of ∼30 kb within the initial introns of the SMAD family member 3 gene, a signal transducer in the transforming growth factor–beta pathway.Indeed, rs17293632 was 
included on the exome array because of its known association with Crohn’s disease and showed a significant association with CAD in our combined dataset.Farh et al. interrogated ChIP-seq data from ENCODE and found allele-specific binding of the AP-1 transcription factor to the major allele in heterozygous cell lines and suggested that the T allele of rs17293632 increases risk of Crohn’s disease by disrupting AP-1 regulation of SMAD3 expression.Interestingly, the direction of effect on CAD risk observed for this variant was in the opposite direction to that for inflammatory disorders, with the C allele being the risk allele.Recent analysis of this variant in arterial smooth muscle cells confirmed that the CAD risk allele preserves AP-1 transcription factor binding and increases expression of SMAD3.Further investigation of the discordant effects of SMAD3 may shed light on the mechanisms of both diseases.First, in our discovery study, we were only able to interrogate common variants associated with other diseases and traits that were known at the time of the creation of the exome array in late 2011 and, thus, included on the array.Conversely, our interrogation for pleiotropic associations of the new and known CAD has used the latest data available in the GWAS catalogs and other sources.Second, the common variants tested in our study conferred statistically robust yet quantitatively modest effects on both CAD and potentially related traits.Thus, we may have missed associations with other traits.However, if such traits were considered as intermediary steps in the etiology of CAD, exploration of our large GWAS sample sets and respective GWAS catalogs should have detected relevant associations.Third, our discovery analysis is largely on the basis of subjects with Western-European ancestry, and any association with CAD of the new loci in other populations needs further evaluation.Finally, although we used relatively stringent criteria, the limited content of the exome array and the information available in the GWAS catalogs meant that we could not examine the extent of overlap in the loci in detail.Through an analysis of selected variants associated with other disease traits, we reported the discovery of 6 further loci associated with CAD.Furthermore, in the most comprehensive analysis to date, we showed that several of the new and previously established loci demonstrated substantial pleiotropy, which may help our understanding of the mechanisms by which these loci affect CAD risk.COMPETENCY IN MEDICAL KNOWLEDGE: Novel genetic loci influence risk of coronary artery disease, but only one-third are associated with conventional cardiovascular risk factors, whereas at least one-half of the loci are associated with other diseases or traits.TRANSLATIONAL OUTLOOK: Future studies should investigate the mechanisms that relate the observed pleiotropy to the pathogenesis of atherosclerosis and ischemic events. | Background Genome-wide association studies have so far identified 56 loci associated with risk of coronary artery disease (CAD). Many CAD loci show pleiotropy; that is, they are also associated with other diseases or traits. Objectives This study sought to systematically test if genetic variants identified for non-CAD diseases/traits also associate with CAD and to undertake a comprehensive analysis of the extent of pleiotropy of all CAD loci. 
Methods In discovery analyses involving 42,335 CAD cases and 78,240 control subjects we tested the association of 29,383 common (minor allele frequency >5%) single nucleotide polymorphisms available on the exome array, which included a substantial proportion of known or suspected single nucleotide polymorphisms associated with common diseases or traits as of 2011. Suggestive association signals were replicated in an additional 30,533 cases and 42,530 control subjects. To evaluate pleiotropy, we tested CAD loci for association with cardiovascular risk factors (lipid traits, blood pressure phenotypes, body mass index, diabetes, and smoking behavior), as well as with other diseases/traits through interrogation of currently available genome-wide association study catalogs. Results We identified 6 new loci associated with CAD at genome-wide significance: on 2q37 (KCNJ13-GIGYF2), 6p21 (C2), 11p15 (MRVI1-CTR9), 12q13 (LRP1), 12q24 (SCARB1), and 16q13 (CETP). Risk allele frequencies ranged from 0.15 to 0.86, and odds ratio per copy of the risk allele ranged from 1.04 to 1.09. Of 62 new and known CAD loci, 24 (38.7%) showed statistical association with a traditional cardiovascular risk factor, with some showing multiple associations, and 29 (47%) showed associations at p < 1 × 10−4 with a range of other diseases/traits. Conclusions We identified 6 loci associated with CAD at genome-wide significance. Several CAD loci show substantial pleiotropy, which may help us understand the mechanisms by which these loci affect CAD risk. |
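A minimal sketch of the statistical core of the discovery analysis in entry 553 above: the inverse-variance weighted fixed-effects meta-analysis and Cochran's Q heterogeneity statistic used to combine per-study logistic-regression estimates. The function name, the illustrative beta/standard-error values and the use of Python with NumPy/SciPy are assumptions for illustration only, not the original study code.

```python
import numpy as np
from scipy import stats

def ivw_fixed_effects(betas, ses):
    """Combine per-study log odds ratios by inverse-variance weighting.

    Returns the pooled estimate, its standard error, and Cochran's Q
    (heterogeneity statistic with k - 1 degrees of freedom).
    """
    betas = np.asarray(betas, dtype=float)
    ses = np.asarray(ses, dtype=float)
    w = 1.0 / ses**2                           # inverse-variance weights
    beta_pooled = np.sum(w * betas) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (betas - beta_pooled)**2)   # Cochran's Q
    return beta_pooled, se_pooled, q

# Illustrative per-study estimates for one variant (not values from the study)
beta, se, q = ivw_fixed_effects([0.08, 0.05, 0.07], [0.020, 0.030, 0.025])
z = beta / se
p = 2 * stats.norm.sf(abs(z))   # two-sided p value
```

In the study itself, a variant was taken forward when the discovery meta-analysis p value was ≤1 × 10−6 and declared a novel association when it was nominally significant in replication and the combined p value fell below 5 × 10−8.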
554 | Konzo prevention in six villages in the DRC and the dependence of konzo prevalence on cyanide intake and malnutrition | Konzo is an upper motor neuron disease that causes irreversible paralysis of the legs particularly in children and young women and is very likely due to high cyanogen intake amongst people with malnutrition, who consume a monotonous diet of bitter cassava .Konzo occurs in the Democratic Republic of Congo, Mozambique, Tanzania, Cameroon, Central African Republic, Angola and possibly Congo .It has been found that the % mean monthly incidence of konzo across the year is significantly related to the monthly % of children with high urinary thiocyanate levels, which is a measure of cyanogen intake .For example, there is a high incidence of konzo cases during the dry season when cassava is harvested and urinary thiocyanate levels are high and conversely low konzo incidence in the wet season when consumption of cassava and urinary thiocyanate levels is low.That adequate nutrition may prevent konzo was shown in three unrelated konzo outbreaks in Mozambique, DRC and Tanzania, in which people of the same ethnic group living only 5 km from those with high konzo prevalence had a konzo prevalence close to zero.Those with near zero konzo prevalence in Mozambique lived near the sea , in DRC they lived in the forest and urban centres and in Tanzania they lived close to Lake Victoria .These people had a much better protein intake due to availability of fish from the sea and from Lake Victoria and animals from the forest.The animal protein that protected them from contracting konzo had an adequate supply of the sulphur amino acids methionine and cystine/cysteine that are required to detoxify cyanide to thiocyanate in the body.In a recent study Okitundu et al. showed that chronic cyanide intoxication, malnutrition, poverty and superstitious beliefs all favoured the persistence of konzo in Kahemba Health Zone, Bandundu Province, DRC.Konzo has previously been prevented in three studies in seven villages in Popokabaka and Boko Health Zones in Bandundu Province, DRC, by education of the village women to use the wetting method every day to remove cyanogens from cassava flour .In this paper we have data on the konzo prevalence, malnutrition and high cyanide intake in six villages and have developed a simple mathematical relationship between them.The wetting method has been used to prevent konzo in these villages.After initial discussions with the Kwango District medical officer and the coordinator of nutrition in Kenge City, it was decided to work in the Boko Health Zone where there were villages badly affected by konzo, and also that all personnel in Boko Health Zone should receive training in use of the wetting method.Six villages were chosen from three health areas based on the prevalence of konzo, the presence of capable village leaders and accessibility by car.The location of the six villages is shown in Fig. 
1, where they are labelled Boko 3 villages.Makiku, Kitati and Kipesi villages are in the forest and Mangungu, Mutombo and Kinkamba villages are in the savannah.Each village has a primary school, there is a health centre in Mutombo and a health post in Makiku.Clean drinking water was installed 4 years ago by UNICEF in Mutombo, but the small rivers in the area do not provide clean drinking water for the other villages.Bitter cassava is grown almost exclusively because sweet cassava is often stolen.New, less bitter cassava varieties were introduced by FAO into Mutombo, Mangungu and Kinkamba.The fallow period has been reduced to 3 year from 7 to 10 years, which was normal some years ago.Cassava, maize, cowpea, taro, squash, plantain, okra, pineapple, tomato and yam are grown and there are goats, pigs and chickens.Cassava is processed by soaking in water generally for 2 days and dried for 1–4 days depending on the weather.There is trade in cassava during the dry season and corn during corn harvest.The second visit in September 2013 involved the full team of workers and a population census was carried out.Suspected konzo cases were examined by two medical doctors following a standardised WHO protocol for konzo as follows: a spastic visible walk or run, a history of sudden onset within a week of a formerly healthy person, exaggerated bilateral patellar or achillian reflexes and non-progressive evolution of the disease .The month and year of onset of konzo was recorded for each konzo case.Konzo sufferers were given multivitamins and anti-inflammatory drugs.A socio-economic and food consumption survey was conducted among 138 households with at least one konzo case.It was ascertained how many meals were eaten the day before the survey and the number of days of different foods eaten during the week preceding the survey.The food consumption score was calculated and interpreted using the methodology of the World Food Program .From each village, 30 cassava flour samples were obtained and 50 urine samples from school children with verbal agreement of their parents.The samples were analysed on site with kits supplied from Australia, using methods described below.The wetting method for removing cyanogens from cassava flour was taught first to 10–12 of the leading women of the village.Each trained woman then trained 10–15 others in the village.In each village a committee was formed to ensure follow up of the training.After the training each woman was given a plastic basin, a knife and a mat.Using the wetting method cassava flour was placed in a plastic basin and a mark made on the inside of the bowl with a knife.Water was added with mixing and the volume decreased and then increased up to the mark.The wet flour was spread out in a layer not more than 1 cm thick and stood for 2 h in the sun or 5 h in the shade, to allow hydrogen cyanide gas to escape to the air .The damp flour was then cooked in boiling water in the traditional way to make a thick porridge called fufu, which was eaten with pounded, boiled cassava leaves or another food to give it flavour.Each month after the second visit, the Caritas Mandombi team visited the villages to check on the use of the wetting method until March 2014 when there was a third visit of the full team.Using non-structured interviews with village leaders and women, a check was made for any new konzo cases and on the continued use of the wetting method.The daily use of the wetting method by each family was recorded, from which the percentage of families using the wetting 
method was calculated.Also 30 cassava flour samples, which had already been treated using the wetting method, were collected from each village and 50 urine samples obtained from school children.Monthly visits of the Caritas Mandombi team continued until July 2014, when the fourth and final visit by the full team was made.Using focus groups, checks were made in each village for any new cases of konzo and on the number of women using the wetting method.As before, about 30 flour samples treated by the wetting method were collected from each village and 50 urine samples from school children for analysis on site.Fifty urine samples were collected randomly from school age children in each village with oral consent of their parents and a record made of their age and sex.These samples were analysed on site using the simple picrate thiocyanate kit D1.A colour chart with 10 shades of colour from yellow to brown was used, which corresponded to 0–1720 μmol thiocyanate/L.Thirty samples of cassava flour about to be used to prepare fufu were collected randomly from households in each village before teaching the wetting method in September 2013 and at subsequent visits in March and July 2014.Analyses for total cyanide were made using a simple picrate kit B2.A colour chart with 10 shades of colour from yellow to brown was used, which corresponded to 0–800 mg HCN equivalents/kg cassava flour = ppm.The best values for x, y and z in Eqs. and were obtained by iteration using the data for six villages of %K, %T and %M.There was a total of 144 konzo cases in the six villages, with a mean konzo prevalence of 3.1%.Konzo onset usually occurred in the morning.There were 5% severely disabled who could not walk, 27% moderately disabled who needed one or two sticks and 68% mildly disabled who did not need a stick.Impaired vision was found in 40% and speech disorders in 13% of cases.Nearly half of the patients had a family member with konzo.All konzo households lived in poverty in straw houses which leaked water in the rainy season and 31% said they were living on donations from others.Only 28% had a radio and 7% a bicycle.Figs. 
2 and 3 showed the month and the year of occurrence of the konzo cases respectively.The food survey showed that the day before the survey, there were 1% of konzo households who had not eaten anything, 50% had one meal, 19% had two meals and 30% had three or more meals.The mean number of days that food was consumed per week by konzo families was cassava flour 7.0, vegetables which included cassava leaves 6.0, collection products 5.0, cooking oil 2.0, fruit 1.6, meat and fish 1.6, sugar 1.3, cereals 1.2, legumes 0.8 and milk 0.8.The food consumption score for the households with konzo were calculated and the averaged values for each village are shown in Table 2 along with the calculated value of the % malnutrition .Using the values of %T and %M from Tables 3 and 2 respectively, the following values were found for the calculated %K in the six villages, with the actual value shown in brackets: Kinkamba 4.4, Kipesi 4.5, Mangungu 3.7, Makiku 4.0, Kikati 2.4 and Mutombo 5.3.Table 4 shows the mean urinary thiocyanate content of school children over the intervention and also the % of families who regularly used the wetting method in July 2014.Table 5 gives the % of school children with high urinary thiocyanate content over the time of the intervention.In the six villages the mean total cyanide content of cassava flour measured just before it was used to make fufu reduced from the value of 19 to 41 ppm before the wetting method was taught, to the following in July 2014: Makiku 9 ppm, Kitati 9 ppm, Kipesi 8 ppm, Mangungu 12 ppm, Mutombo 11 ppm and Kinkamba 8 ppm.Eq. was developed by iteration from the data of six villages to relate the % konzo prevalence with the % children with high urinary thiocyanate content and % malnutrition.There is a reasonable fit of the data for five villages, but Mutombo has a much lower actual konzo prevalence than that expected from the equation.This is probably because Mutombo is the only village that has a health centre and a secure water supply.Both of these factors would improve the health of the Mutombo people and could have decreased their konzo prevalence.Since konzo occurs only when high cyanide intake and malnutrition occur together, such as occurs in remote villages particularly during drought or war , Eq. is inapplicable if either %T or %M is zero.This relationship represents a first attempt to relate mathematically konzo prevalence with high cyanide intake and malnutrition and as shown by the result for Mutombo there are other health factors that bear on % konzo prevalence.However, Eq. does greatly strengthen the long held hypothesis that konzo is associated with high cyanide intake and malnutrition from consumption of a monotonous diet of bitter cassava and malnutrition .There were 144 konzo cases in the six Boko 3 villages.Combining the data of those severely, moderately and mildly disabled with those from seven other studies in DRC and Tanzania, it is found that the mean percentage of konzo cases severely disabled are 7, moderately disabled 26 and mildly disabled 67 .As shown in Fig. 2, konzo incidence peaks in July, the peak cassava season, with smaller peaks in February and September.Fig. 
3 is particularly interesting for two reasons the long span of years going back to 1954, before independence from Belgium, and the sporadic incidence of konzo cases in earlier years compared with incidence every year over the last 14 years, rising to a very concerning maximum of 29 cases in 2013.We have noted with concern this large increase in konzo incidence in recent years in Popokabaka and Boko Health Zones of Kwango District and in adjacent Kwilu District .We believe that this is due to a decrease in production of cassava due to plant diseases and people being involved in other work.There is a socio-economic decline among the people, with many konzo families being supported by others.The teaching of the wetting method to the women and its subsequent daily use by them prevented the occurrence of any new cases of konzo and reduced the cyanide content of cassava flour to about 10 ppm, the maximum acceptable WHO level .It also reduced the mean thiocyanate content of urine of school children in the villages and the % of children who had high urinary thiocyanate content, see Table 5, and were in danger of developing konzo.The continuous reduction in urinary thiocyanate levels and in % of children in danger of contracting konzo was only achieved by good social mobilisation by the women, who formed a committee in each village and checked on the regular daily use of the wetting method by each household.This allowed collection of data on the percentage of families in each village who used the wetting method in July 2014.As shown in Table 4, there were 94% of families in Mangungu who used the wetting method which gave a mean urinary thiocyanate content of 160 μmol/L, whereas in Mutombo only 68% of families used the wetting method and the mean urinary thiocyanate content was much higher at 280 μmol/L.There is an inverse relation between the percentage of families who use the wetting method and the mean urinary thiocyanate content.The same effect is also reflected in Table 5, where there are 8% of school children with high urinary thiocyanate levels in Mutombo in July 2014 compared with 0% in Mangungu.For the wetting method to be effective in preventing konzo in a village, it is clear that at least 60–70% of women should be using the wetting method on a regular daily basis.We have now prevented konzo amongst nearly 10,000 people in 13 villages in Kwango District, DRC, by training the women to use the wetting method on cassava flour, as an additional processing method used before its consumption as fufu.The wetting method is popular with rural women and once established they continue to use it, and its use has spread by word of mouth to other nearby villages .In the current work, the total cost of the intervention in six villages with 4588 people was US$ 75,000, which equals $16 per person.Another method of preventing konzo is to reduce malnutrition and a cross-sectoral approach has been used with some success by the NGO Action Against Hunger in Kwango District .Reducing the cyanogen intake using the wetting method appears to be a much more direct, effective and less expensive method of preventing konzo than by attempting to remove malnutrition, but a broader approach could see cyanide intake and malnutrition reduced together .This equation fits the data fairly well, except for the village of Mutombo, which has a secure water supply and a health centre and hence its konzo prevalence is much lower than that calculated by the equation.The equation is a first attempt to relate mathematically konzo 
prevalence with high cyanide intake and malnutrition.It greatly strengthens the long held association between konzo incidence and high cyanide intake from a monotonous diet of bitter cassava that causes malnutrition .The wetting method has been recognised by the World Bank, WHO and FAO as a sensitive intervention to remove cyanogens from cassava flour.It should be promoted as an additional processing method to reduce the cyanide intake of the people in all tropical African countries, where cassava is being introduced into new areas in which there is no knowledge of the processing methods needed to remove cyanogens and cassava production is increasing to feed growing populations .The wetting method has now been used successfully to prevent konzo in the DRC in 13 villages with nearly 10,000 people.The methodology used to prevent konzo is now well established and interventions require about 9 months and cost about $16 per person, which could be reduced further by scaling up the operation.Konzo is spreading geographically in tropical Africa and is intensifying in Bandundu Province.We appeal urgently to funding agencies worldwide for additional funding to tackle specifically the scourge of konzo and more broadly other cassava cyanide diseases in tropical Africa.The authors declare that there is no conflict of interest.The Transparency document associated with this article can be found in the online version. | Six villages in Boko Health Zone, Bandundu Province, DRC, were studied with 4588 people, 144 konzo cases and konzo prevalences of 2.0-5.2%. Konzo incidence is increasing rapidly in this area. Food consumption scores were obtained from the households with konzo and the mean % malnutrition calculated for each village. Urine samples were obtained from 50 school children from each village and % high urinary thiocyanate content (>350. μmol/L) determined. The experimental data relating % konzo prevalence (%K) to % children with high urinary thiocyanate content (%T) and % malnutrition (%M) for the six villages were fitted to an equation %K = 0.06%T + 0.035%M. This confirms that konzo is due to a combination of high cyanide intake and malnutrition. The village women used the wetting method to remove cyanogens from cassava flour. During the 9-month intervention there were no new cases of konzo; cyanide in flour had reduced to WHO safe levels and mean urinary thiocyanate levels were greatly reduced. To prevent konzo at least 60-70% of women should use the wetting method regularly. The wetting method is now accepted by the World Bank, FAO and WHO as a sensitive intervention. Four successful konzo interventions have involved nearly 10,000 people in 13 villages, the cost is now $16 per person and the methodology is well established. |
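As a companion to entry 554 above, the sketch below shows one way to fit the reported relationship %K = 0.06%T + 0.035%M between konzo prevalence, high urinary thiocyanate and malnutrition across the six villages. The village data here are placeholders (the real %T and %M values come from the paper's Tables 2 and 3), and an ordinary least-squares fit without intercept is used in place of the authors' iterative procedure.

```python
import numpy as np

# Placeholder data for the six villages -- not the published values.
pct_T = np.array([40.0, 45.0, 35.0, 38.0, 25.0, 55.0])  # % children with high urinary thiocyanate
pct_M = np.array([60.0, 55.0, 50.0, 52.0, 30.0, 60.0])  # % malnutrition
pct_K = np.array([4.4, 4.5, 3.7, 4.0, 2.4, 5.2])         # % konzo prevalence

# Fit %K = x * %T + y * %M with no intercept; the paper reports x = 0.06, y = 0.035.
X = np.column_stack([pct_T, pct_M])
(x, y), *_ = np.linalg.lstsq(X, pct_K, rcond=None)
residuals = pct_K - X @ np.array([x, y])   # a large residual flags outliers such as Mutombo
```

Because konzo requires both exposures together, such a relation is only meaningful when both %T and %M are non-zero, as the entry notes.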
555 | The day of the week effect in the cryptocurrency market | There exists a vast literature analyzing calendar anomalies, and whether or not these can be seen as evidence against the Efficient Market Hypothesis.However, with one exception to date no study has analysed such issues in the context of the cryptocurrency market – this being a newly developed market, it might still be relatively inefficient and it might offer more opportunities for making abnormal profits by adopting trading strategies exploiting calendar anomalies."We focus in particular on the day of the week effect, and for robustness purposes apply a variety of statistical methods as well as a trading robot approach that replicates the actions of traders to examine whether or not such an anomaly gives rise to exploitable profit opportunities.The paper is structured as follows: Section 2 briefly reviews the literature on the day of the week effect; Section 3 outlines the empirical methodology; Section 4 presents the empirical results; Section 5 offers some concluding remarks.The day of the week effect was one of the first calendar anomalies to be examined.Fields showed that the best trading day of the week is Saturday.Cross provided evidence of statistical differences in Friday–Monday data in the US stock market.French reported negative returns on Mondays.Further studies found evidence of a positive Friday/negative Monday pattern.Other studies on the stock market include Sias and Starks, Hsaio and Solt, and Caporale et al., whilst commodity markets were analysed by Singal and Tayal, and the FOREX by Caporale et al.Ariel, Fortune and Schwert all reported evidence against the Monday effect in developed markets, but this anomaly still appears to exist in many emerging markets.The cryptocurrency market is rather young but sufficient data are now available to examine its properties.Dwyer, Cheung et al. 
and Carrick show that it is much more volatile than other markets.Brown provides evidence of short-term price predictability of the BitCoin.The inefficiency of the BitCoinmarket is also documented by Urquhart, whilst Bartos reports that this market immediately reacts to the arrival of new information and can therefore be characterised as efficient.Halaburda and Gandal analyse correlations in daily closing prices.However, so far the only study examining anomalies in this market is due to Kurihara and Fukushima, who focus exclusively on the BitCoin, which is not necessarily representative of the cryptocurrency market as a whole.The present paper aims to fill this gap in the literature by providing much more extensive evidence on the day of the week effect in this market.We examine daily data for 4 cryptocurrencies, choosing those with the highest market capitalization and the longest data span, namely BitCoin, LiteCoin, Ripple and Dash.The data source is CoinMarketCap.More information on the cryptocurrency market is provided in Table 1 below.Average analysis provides preliminary evidence on whether there are differences between returns for the different days of the week.Both parametric and non-parametric tests are carried out given the evidence of fat tails and kurtosis in returns.The Null Hypothesis in each case is that the data belong to the same population, a rejection of the null suggesting the presence of an anomaly."We carry out Student's t-test, ANOVA, Kruskal–Wallis and Mann–Whitney tests for the whole sample, and also for sub-samples in order to make comparisons between periods that might be characterised by an anomaly and the others. "Student's t-tests are carried out for the null hypothesis that returns on all days of the week belong to the same population; a rejection of the null implies a statistical anomaly in the price behaviour on a specific day of the week.Given the size of our dataset, it is legitimate to argue that normality holds, and therefore these are valid statistical tests.To provide additional evidence one more method is used, namely ANOVA analysis.The main advantages of these methods are their simple interpretation, robustness and overall ease of use.Their main disadvantage is that they do not consider the possibility of non-normal distributions of the data.To take this into account, a number of additional non-parametric tests can be used, such as the Kruskal–Wallis and Mann–Whitney tests.Their key advantage is that they do not require any assumptions about the distribution of the population.The main limitation of the Mann–Whitney test is that it can only be used for 2 groups.Therefore we also carry out the Kruskal–Wallis tests that also allow testing for 3 groups or more.The reason for carrying out both non-parametric and parametric tests is to check for robustness.The size, sign and statistical significance of the dummy coefficients provide information about possible anomalies.If an anomaly is detected we then apply a trading robot approach that simulates the actions of a trader according to an algorithm with the aim of establishing whether or not that anomaly gives rise to exploitable profit opportunities, which could be seen as evidence against market efficiency.This is a programme in the MetaTrader terminal that has been developed in MetaQuotes Language 4 and used for the automation of analytical and trading processes.Trading robots allow to analyse price data and manage trading activities on the basis of the signals received.If a strategy results in the 
number of profitable trades > 50% and/or total profits from trading are > 0, then we conclude that there is a market anomaly. The results are presented in the "Report" in Appendix A. The most important indicators given in the "Report" are: Total net profit — the financial result of all trades, i.e. the difference between "Gross profit" and "Gross loss"; Expected payoff — the mathematical expectation of a win, i.e. the average profit/loss per trade, which also indicates the expected profitability/unprofitability of the next trade; Total trades — the total number of trade positions; Bars in test — the number of observations used for testing. The findings are summarised in the "Graph" section of the "Report": this represents the account balance and general account status considering open positions. The "Report" also provides full information about all the simulated transactions and their financial results. To make sure that the results we obtain are statistically different from the random trading ones we carry out t-tests. We chose this approach instead of carrying out z-tests because the sample size is less than 100. A t-test compares the means from two samples to see whether they come from the same population. In our case the first is the average profit/loss per trade from applying the trading strategy, and the second is equal to zero because random trading should generate zero profit. The null hypothesis is that the mean is the same in both samples, and the alternative is that it is not. The computed values of the t-test are compared with the critical one at the 5% significance level. Failure to reject H0 implies that there are no advantages from exploiting the trading strategy being considered, whilst a rejection suggests that the adopted strategy can generate abnormal profits. An example of the t-test is presented in Table 2. As can be seen, there is no evidence of a statistically significant difference in terms of total net profits relative to the random trading case, and therefore no market inefficiency is detected. The complete set of results can be found in Appendix B. The average analysis provides preliminary evidence in favour of a day of the week anomaly in the dynamics of BitCoin and LiteCoin, whilst in the cases of Ripple and Dash it is unclear whether or not this is present. The results of the parametric and non-parametric tests are reported in Appendices C–F and summarized in Table 3. There is clear evidence of an anomaly only in the case of BitCoin. The next step is to apply a trading simulation approach. First we design appropriate trading rules for the days when long or short positions, respectively, should be opened. As can be seen, most of the tests above provide evidence in favour of the presence of an anomaly in Monday returns for BitCoin. To make sure that this is not the result of the base effect we extend the analysis in order to compare returns on Mondays with average returns on all other days of the week except Mondays and Sundays. The results are presented in Appendix G. Most of them confirm the existence of statistically significant differences between the two sets of returns; more specifically, they indicate the presence of abnormally high returns on Mondays, i.e.
of a day of the week effect in the case of Bitcoin.Since the anomaly occurs on Mondays the trading strategy will be the following: open long positions on Monday and close them at the end of this day.The trading simulation results are reported in Table 5.In general this strategy is profitable, both for the full sample and for individual years, but in most cases the results are not statistically different from the random trading case, and therefore they do not represent evidence of market inefficiency.This paper examines the day of the week effect in the cryptocurrency market focusing on BitCoin, LiteCoin, Ripple and Dash.Applying both parametric and non-parametric methods we find evidence of an anomaly only in the case of BitCoin.Further, using a trading simulation approach we show that a trading strategy based on this anomaly is profitable for the whole sample: it generates net profit with probability 60% and these results significantly differ from the random ones.However, in the case of individual years the opposite conclusions are reached.On the whole, there is no conclusive evidence that the cryptocurrency market is inefficient. | This paper examines the day of the week effect in the cryptocurrency market using a variety of statistical techniques (average analysis, Student's t-test, ANOVA, the Kruskal–Wallis test, and regression analysis with dummy variables) as well as a trading simulation approach. Most crypto currencies (LiteCoin, Ripple, Dash) are found not to exhibit this anomaly. The only exception is BitCoin, for which returns on Mondays are significantly higher than those on the other days of the week. In this case the trading simulation analysis shows that there exist exploitable profit opportunities; however, most of these results are not significantly different from the random ones and therefore cannot be seen as conclusive evidence against market efficiency. |
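The day-of-the-week checks described in entry 555 above can be reproduced with standard statistical routines. The sketch below uses synthetic returns (the study uses daily CoinMarketCap closes) and shows the parametric and non-parametric comparisons of Monday returns against the other days, plus the one-sample t-test of per-trade profit against the zero mean expected under random trading; all names and values are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.1, 2.0, size=700)   # placeholder daily % returns
weekdays = np.arange(700) % 7                    # 0 = Monday, ..., 6 = Sunday

monday = daily_returns[weekdays == 0]
others = daily_returns[(weekdays != 0) & (weekdays != 6)]   # exclude Sundays, as in the entry

t_stat, t_p = stats.ttest_ind(monday, others, equal_var=False)   # Student-type t-test
u_stat, u_p = stats.mannwhitneyu(monday, others)                 # Mann-Whitney (2 groups)
h_stat, h_p = stats.kruskal(*[daily_returns[weekdays == d] for d in range(7)])  # all days

# Trading-simulation check: open a long position on Monday, close at the end of
# the day, then test whether the mean per-trade profit differs from zero.
trade_profits = monday
t1, p1 = stats.ttest_1samp(trade_profits, popmean=0.0)
```

A regression of returns on day-of-week dummy variables, as mentioned in the entry, conveys the same information through the size, sign and statistical significance of the dummy coefficients.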
556 | Molecular dynamics simulations of copper binding to amyloid-β Glu22 mutants | Alzheimer's disease is characterised by the deposition of abnormal structures in the brain, particularly plaques – consisting of the Amyloid-β peptide – and neurofibrillary tangles.Aβ has two common isoforms, 40 and 42 residues in length, and is generated by sequential cleavage of the amyloid precursor protein by β- and γ-secretases.There are around fifteen known mutations of Aβ that may affect its structure and properties, and hence neurobiology.Formation of fibrils, probed by ThT fluorescence assays, was thought to be the key event in AD , but more recent evidence suggests that small soluble Aβ oligomers are the key toxic species in the disease .Interestingly, more clinically severe mutations are associated with less ThT-responsive features .In addition, Aβ variations at positions Ala21-Asp23 produce less ThT response over time than wild-type Aβ, despite forming aggregates .Indeed, these mutants possess high aggregation rates , in agreement with the idea that non-ThT-responsive structures are involved in the AD process, while those that provide a ThT response are not necessarily pathogenic .This is supported by data from a series of Glu22 mutants, which display accelerated formation of Aβ intermediates, increased neurotoxicity, but reduced fibril formation .The role of metal ions in AD is increasingly recognised, as disease progression correlates with the breakdown in homeostasis of copper, iron and zinc in the brain .These ions play a key role in both the formation of aggregates and their neurotoxicity; concentrations of Cu and Zn are elevated in plaques of AD brains , while plaques without these metals have been found to be non-toxic .Furthermore, the redox activity of Cu in particular provides a mechanism for damage to brain tissue via generation of reactive oxygen species .The exact role and nature of these metal ions in AD is a subject of growing research interest, and has been extensively reviewed elsewhere .Metal ion coordination has important effects on the structure and properties of Aβ, including aggregation propensity, though the recorded effects are diverse .In general, metal ions induce Aβ aggregation though the type and toxicity of aggregate formed varies .Cu possesses high affinity towards Aβ and dominates its coordination chemistry.A range of experimental and simulation studies have established details of Cu coordination: the N-terminal region of the peptide contains the metal binding sites, though the exact nature of the coordinating residues depends on pH .Typically, Cu binds through three N-donors and one O-donor, via Asp1/Ala2, His6 and His13/14, at physiological pH. 
Cu may inhibit fibril formation, instead forming non-fibrillar aggregates and converting β-strand peptide structure into helices .The aetiology of disease onset is complex and not fully understood, but relative concentrations of metal and peptide can induce changes in the size and shape of aggregates formed .To date there have been very few studies of the effect of metal coordination on the structure, interactions or chemistry of Aβ mutants.In this work, molecular dynamics simulations were carried out on Cu complexes with three E22 mutants, namely E22G, E22Q, and E22K, and compared to previous studies of the wild-type .All are known mutants with established effects on aggregation and neurotoxicity.Moreover, they span a range of physico-chemical properties, from the anionic side chain in WT, through a polar but uncharged residue and small, uncharged amino acids, to a positively charged residue.Wildtype Aβ1-42 was constructed within MOE and Cu was coordinated in the binding mode."Mutations were made using MOE's inbuilt sequence editor to generate the three E22 mutants.Residue protonation states were assigned to those appropriate for physiological pH values.Low mode molecular dynamics simulations were carried out in the DommiMOE extension to MOE, utilising previously reported Cu ligand field molecular mechanics parameters and AMBER PARM94 parameters for all other atoms, to generate a diverse library of starting structures for further simulations.In particular, a combination of LFMM parameters from Type I copper protein with Cu–N bonding terms optimised for model Cu/imidazole/formamide complexes successfully reproduces DFT structures."Partial charge assignment was carried out using MOE's dictionary lookup feature and then copper and coordination sphere charges modified as reported previously .We note that other binding modes are known, but our goal here is to compare mutants with a common coordination to copper, not to explore all available binding sites.The functional form of the LFMM implementation of AMBER, in which M—L bonds are described with a Morse potential, means that metal-ligand dissociation is effectively impossible, at least at the temperatures and over the timescales used here.Ligand field molecular dynamics simulations were carried out using the DL_POLY_LF code , which incorporates LFMM within the DL_POLY_2.0 package .All simulations were carried out using an NVT ensemble, with a Nose-Hoover thermostat with relaxation constant of 0.5 ps, at a temperature of 310 K. 
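The LFMM force field used for these simulations describes the metal–ligand bonds with a Morse potential, which, as the entry notes, is why Cu–ligand dissociation is effectively not observed at the temperature and timescales sampled. The snippet below is only a sketch of that functional form; the well depth, width and equilibrium distance are placeholders, not the published Cu–N parameters.

```python
import numpy as np

def morse(r, d_e, a, r0):
    """Morse bond potential V(r) = D_e * (1 - exp(-a * (r - r0)))**2."""
    return d_e * (1.0 - np.exp(-a * (r - r0)))**2

r = np.linspace(1.6, 3.5, 200)           # Cu-N distances in Angstrom (illustrative range)
v = morse(r, d_e=50.0, a=1.8, r0=2.0)     # placeholder parameters on a kcal/mol scale
# Unlike a harmonic term, V(r) plateaus at D_e for large r; with a well depth
# much larger than kT at 310 K, dissociation is effectively never sampled on
# the microsecond trajectories described here.
```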
Implicit solvation was modelled through the reaction field model with dielectric suitable for bulk water with cutoffs of 10 and 21 Å, for van der Waals interactions and long range electrostatics, respectively.Use of implicit solvent has been shown to enhance conformational sampling of flexible systems .All bonds to hydrogen were constrained using the SHAKE algorithm , with 10−8 Å tolerance.All simulations were run for 1 μs, with a 1 fs integration timestep used throughout.Atomic positions were recorded every 10 ps for trajectory analysis.All analysis of LFMD trajectories was carried out using VMD 1.9.2 .Root mean square deviation and radius of gyration were used as indicators of equilibration.The VMD timeline extension was used for secondary structure, root mean square fluctuations, salt bridge, and hydrogen bond analysis.Tertiary structure Cα contact maps were created using the ITrajComp plugin .Hydrogen bond presence was determined by a distance of less than 3 Å and angle of less than 20° between donor and acceptor.Salt bridge presence was determined by less than 3.2 Å between O and N atoms on charged residues: this definition means that it is possible for a residue to form multiple simultaneous salt bridges, so the total percentages for any given residue may exceed 100%.Three low energy structures generated by low mode molecular dynamics, with mutual RMSD greater than 1.5 Å, were chosen as separate starting points for LFMD simulations, to allow for more effective sampling of conformational phase space.Microsecond LFMD simulations were carried out for each of the three starting points, for each mutant, and associated RMSD plots are reported in Figure 1.The intrinsically disordered Aβ peptide offers complications when equilibrating MD simulations.As such, full equilibration would only occur on timescales beyond current computational capabilities.Therefore we have utilised the description of quasi-equilibration, as reported by Huy et al. 
, where RMSD fluctuation around a stable point is sufficient to consider a simulation equilibrated.For our systems, timescales in the order of hundreds of nanoseconds are required: quasi-equilibration for E22G required 200, 200, and 500 ns of simulation; E22Q required 250, 250, and 300 ns; E22K required 300, 500, and 500 ns; while WT required 200, 300, and 100 ns for runs 1–3 respectively.Table 1 reports statistics drawn from RMSD values for the quasi-equilibrated portion of each trajectory.All three simulations for all three mutants result in low standard deviation values, showing that beyond the equilibration point the trajectories are generally stable.Rg values for individual trajectories show stable values past the equilibration times noted above: Table 2 reports values averaged over all post-equilibration trajectories.High standard deviation values are a result of the combination of multiple trajectories: variation is much smaller within trajectories.WT has the lowest average Rg, indicating the most compact structure: the mean value compares well with literature .Ref 54 reports Rg of Aβ1-42 in the range 9–13 Å, with a mean of 1.14 nm, while values of 10–15 Å are quoted in ref .E22Q is only slightly larger on average, with increased Rg of around 2 Å, respectively.However, mutation to the small, achiral glycine or the positively charged lysine result in the most obvious differences in compactness of structure, with average values increased by almost 8 and over 5 Å, respectively.Data relating to the hydrogen bonding within mutants is reported in Table 3.As with our previous study on WT, hydrogen bonds in all mutants considered are highly transient.High standard deviation values relative to the average number of H-bonds, along with minimum numbers as low as zero and maximum numbers as high as 25, are indicative of transience.Common H-bonds, reported as donor-acceptor, include Asn27 backbone-Asp23 backbone and Gln15 sidechain-Glu11 sidechain for WT; His14 backbone-Asp7 sidechain and Ser26 sidechain-Asp23 sidechain for E22G; Ser26 sidechain-Asp23 sidechain and His14 sidechain-Glu11 sidechain for E22Q; and Ser8 sidechain-Asp7 sidechain and Asn27 backbone-Ala42 backbone for E22K.Several H-bonds fitting the expected i+4 → i pattern for α-helices are observed, including N27-D23 in WT, consistent with secondary structure patterns discussed below.Figure 2 shows the RMSF of the mutants by residue, compared to the wildtype.WT exhibits the lowest RMSF for all residues compared to the mutated peptides.Interestingly, the mutated residues are not necessarily those with largest RMSF values; this is somewhat surprising due to the different chemical nature of the residues involved.Figure 2 indicating that the effect of mutation on peptide flexibility is highly non-local.In general, the C-terminus exhibits larger RMSF values than the N-terminus, as expected due to the anchoring effect of coordination of Cu to three N-terminal residues.However, E22K displays a different pattern: the N-terminus has larger RMSF values than the C-terminus, with the coordinating residues having relatively low values but many of the others in the metal binding region exhibiting high mobility, notably Asp1-Phe4, Asp7-Ser8 and Val12.Contact maps are a useful measure of the average shapes of dynamical systems and have been utilised to compare the different mutants here.Figure 3 reports contact maps between the α-carbons of each residue for the mutants and the wildtype.WT has a relatively compact structure, with longest Cα-Cα 
distances of ca. 30 Å between Ser8-Phe20 and Lys28-Val40.E22Q shows a more extended structure: distances of ca. 40 Å for N-terminal residues with C-terminal residues Ile32-Gly38.Mutation to the oppositely charged lysine results in a strikingly different contact map, with much greater separation between the termini, corresponding to an extended structure.This is observed to an even greater extent in the E22G mutation, wherein Cα-Cα contacts between the termini exceed 50 Å for residues up to Gln15 with Il32-Val40.Structures of final the final frames of MD trajectories are also reported in Figure 4, which are in agreement with the findings for the contact maps.Secondary structure analysis was carried out using the STRIDE algorithm: percentage secondary structure against residue number is reported in Figure 5.A breakdown of the overall contributions of secondary structure elements for each mutation is also reported in Table 4.As expected for intrinsically disordered peptides, the major constituents of the secondary structure profile are turn and coil.These structural elements correspond to a lack of order and comprise over 70% of the total peptide structure for all systems.Interestingly, there is considerable variation in helix and sheet content between the mutants."All systems have very little β-sheet character: the largest being 2.3% for E22Q, found in the in the metal binding region as well as the peptide's central hydrophobic core.In contrast, the WT peptide adopts sheet-like conformations exclusively at the C-terminus.There are also differences in helical content across the mutants, 25% in E22K compared to 19% in WT.These consist of a mix of π, 310 and, α-helices: the latter two making up the majority.These are primarily seen in the regions of Tyr10-Gln15 and Gly25-Val40 residues.WT and E22G, but not E22Q nor E22K, exhibit π helices in the metal binding region site near His13 and His14.Some π-helical character is also observed toward the C terminus in some mutants but at much lower occurrences.A distinction can also be made between turn and coil structures: the mutants that result in the most extended structures, E22G and E22K, have the greatest concentration of coil structure.This character is centred on the central hydrophobic region and toward the C terminus.The coil character therefore indicates this presence, whilst E22Q and WT indicates a greater propensity to remain globular.Ramachandran maps shed further light on secondary structure: for WT Aβ, most conformations adopt right-handed helical-like conformations, centred around.Interestingly, there are many further conformations located around, close to the helical region of the plot.In addition, there are notable contributions from left-handed helical structures at and β-sheet type structures at.E22K and E22Q mutants exhibit similar Ramachandran maps, dominated by right-handed helical-like conformations, indicating that these mutations have relatively little effect on the total backbone conformations sampled.This reflects their similar secondary structure profiles.E22Q reports the highest incidence of β-sheet structure, but has relatively few conformations in this region of the plot, indicating that while mutants adopt sheet-like conformations, they lack the requisite hydrogen bonds to be classified as β-sheets.E22G is also dominated by helical-type conformations, but also contains more β-sheet structures.This is in agreement with other data illustrated here; this mutation exhibits the second-highest degree of β-sheet structure, as 
well as the most extended conformation. Salt bridge interactions strongly influence peptide structure and stability. The natural peptide has nine charged residues at physiological pH, three positive and six negative, resulting in eighteen possible salt bridges: E22Q and E22G have fifteen possible bridges, and E22K twenty. Salt bridge contact maps for each structure are reported in Figure 7. All systems show similarities in the metal binding region, which may be expected due to their identical copper binding modes. The Asp1-Arg5 salt bridge is present close to 100% of the time for all mutants, but just 63% for WT. WT contains an Asp1-Lys28 salt bridge, which is not present in the mutants, reflecting the more compact structure of the WT compared to the mutants. Other differences in this region include the presence of Glu3-Arg5 interactions in E22G, which are not observed for the other systems. Lys22 in E22K forms new salt bridge interactions, particularly with Glu11 and Asp23: these new interactions are formed at the expense of those with Lys28, observed in other mutants. Reduction of the stabilising interactions of Lys28 in favour of the closer Lys22 therefore seems to be the likely origin of the extended conformation observed from the contact maps above. The Asp23-Lys28 bridge plays an important role in the aggregation behaviour of Aβ: mutation of a directly adjacent residue seems likely to have an impact. To examine this influence, the Asp23-Lys28 distance has been plotted for the mutants and WT in Figure 8. WT exhibits a sharp peak at 3.5 Å, and a much shallower, broad peak above 20 Å, illustrating the presence of two conformations. A similar profile is observed for E22Q, with the same sharp peak at 3.5 Å and smaller, broader peaks at longer distances. The two mutations that have no Asp23-Lys28 salt bridge interactions have no peaks below 5 Å. E22G, the most extended system, has a peak at ca.
25 Å, as well as two at 8 and 13 Å, while E22K lacks the peak at very long distance but exhibits peaks around 8, 15, and 17 Å. We report ligand field molecular dynamics simulations of the Cu complexes formed by three different Glu22 mutants of the amyloid-β 1–42 peptide: namely E22G, E22Q and E22K. All are known to increase the rate of peptide aggregation and the likelihood of developing the symptoms of Alzheimer's disease. Three independent simulations of one microsecond for each system were performed, each reaching pseudo-equilibration after several hundred nanoseconds. Analysis of frames collected after equilibration indicates major differences between mutants and the wild-type peptide. E22Q is the most similar to the native peptide, but even here subtle differences are evident. E22G and especially E22K are markedly different in size, shape and stability, both adopting much more extended structures that are much more flexible. Somewhat surprisingly, changes induced by mutations are apparent across the entire peptide: root mean square fluctuation in particular shows that E22K induces major changes in the N-terminal sequence, up to 20 residues away from the site of mutation, while E22G causes the C-terminus to become much more flexible. In common with a previous MD study of mutated Aβ, turn and coil structures dominate all structures studied but subtle differences in helical and β-sheet distribution are noted, especially in the C-terminal region. All mutants, as well as WT, sample a wide set of structural ensembles: this structural diversity and conformational flexibility may facilitate the interconversions between various secondary and tertiary structures that accompany aggregation of Aβ. The origin of these differences is apparently disruption to the salt-bridge network: E22Q has a strongly populated Arg5-Asp7 interaction that is absent in WT, while the Glu11-Lys16 bridge that is frequently populated in WT is much reduced. E22K leads to a quite different pattern of salt bridges, with the mutated residue itself forming interactions with Glu11 and especially Asp23. E22G leads to complete loss of the Asp1-Lys28 interaction and diminution of Glu11-Lys16. Both mutations therefore lead to a substantial reduction in the interactions that keep the wild-type peptide in a relatively compact conformation, and hence to the extended conformations noted above. While we cannot draw direct conclusions on the effect of mutation on aggregation from these simulations of monomers, it is intriguing that E22G and E22K are known to give rise to “small protofibrils and oligomers” and to “less fibrillar” aggregates, respectively. We speculate that the loss of salt bridges within the monomer and the resulting extended structure give rise to different aggregation behaviour, and that the characteristic fold of Aβ seen in mature fibrils may be less favourable in the absence of key salt bridges such as Asp23-Lys28. It is appropriate at this stage to discuss limitations of this work. Firstly, we have only studied 1:1 Cu:peptide complexes, and then only in one of several possible coordination modes. This may not be representative of the more complex in vivo situation, but serves as a basis for comparison of mutants without further complication of changing stoichiometry or coordination. Secondly, we have also only simulated monomeric Aβ whereas oligomers are thought to be the key species in disease onset: we hope to report simulations of larger systems in future publications, but at present we can only infer potential interactions from the
properties of the monomer.Thirdly, use of an implicit solvent model prevents the simulations from accounting for any explicit role of water molecules in metal coordination.Despite these limitations, we have identified important differences in the structure and dynamics of the mutated peptides and their interaction with Cu that give some insight into how they behave in practice.Frames taken from all trajectories have been deposited in PDB format, available from https://doi.org/10.5281/zenodo.2537978.Jamie Platts: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.Shaun Mutter: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper.Matthew Turner: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data.Rob Deeth: Contributed reagents, materials, analysis tools or data.This work was supported by EPSRC under grant reference EP/N016858/1."The authors are grateful to Cardiff University's ARCCA for computing resources.The authors declare no conflict of interest.No additional information is available for this paper. | We report microsecond timescale ligand field molecular dynamics simulations of the copper complexes of three known mutants of the amyloid-β peptide, E22G, E22Q and E22K, alongside the naturally occurring sequence. We find that all three mutants lead to formation of less compact structures than the wild-type: E22Q is the most similar to the native peptide, while E22G and especially E22K are markedly different in size, shape and stability. Turn and coil structures dominate all structures studied but subtle differences in helical and β-sheet distribution are noted, especially in the C-terminal region. The origin of these changes is traced to disruption of key salt bridges: in particular, the Asp23-Lys28 bridge that is prevalent in the wild-type is absent in E22G and E22K, while Lys22 in the latter mutant forms a strong association with Asp23. We surmise that the drastically different pattern of salt bridges in the mutants lead to adoption of a different structural ensemble of the peptide backbone, and speculate that this might affect the ability of the mutant peptides to aggregate in the same manner as known for the wild-type. |
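As an illustration of the trajectory analyses described above (radius of gyration, the Asp23-Lys28 salt-bridge distance, and bridge occupancy), the following minimal Python sketch shows how such quantities could be recomputed from the deposited multi-model PDB frames using the open-source MDAnalysis library. This is not the authors' own analysis pipeline: the file name abeta_frames.pdb, the 4 Å occupancy cut-off and the histogram binning are illustrative assumptions, and standard PDB atom names (OD1/OD2 for the Asp23 carboxylate, NZ for the Lys28 side-chain nitrogen) are assumed.

```python
# Minimal sketch (not the authors' pipeline): recompute per-frame radius of
# gyration and the Asp23-Lys28 salt-bridge distance from deposited PDB frames.
import numpy as np
import MDAnalysis as mda
from MDAnalysis.analysis import distances

# Hypothetical file name; a multi-model PDB is read as a trajectory by MDAnalysis.
u = mda.Universe("abeta_frames.pdb")

protein = u.select_atoms("protein")
asp23_oxygens = u.select_atoms("resid 23 and name OD1 OD2")  # carboxylate oxygens
lys28_nitrogen = u.select_atoms("resid 28 and name NZ")      # side-chain ammonium N

rg_values, bridge_distances = [], []
for ts in u.trajectory:
    rg_values.append(protein.radius_of_gyration())            # Rg in angstrom
    d = distances.distance_array(asp23_oxygens.positions,
                                 lys28_nitrogen.positions)
    bridge_distances.append(d.min())                           # closest O...N contact

rg_values = np.asarray(rg_values)
bridge_distances = np.asarray(bridge_distances)

print(f"mean Rg = {rg_values.mean():.1f} +/- {rg_values.std():.1f} A")

# Distance distribution comparable to Figure 8, plus a simple occupancy estimate
# (fraction of frames with the bridge 'formed' under an assumed 4 A cut-off).
counts, edges = np.histogram(bridge_distances, bins=np.arange(2.0, 30.5, 0.5))
occupancy = 100.0 * (bridge_distances < 4.0).mean()
print(f"Asp23-Lys28 occupancy (d < 4 A): {occupancy:.0f}%")
```

Counting the fraction of frames below a distance cut-off is one simple way to arrive at occupancy figures of the kind quoted for the Asp1-Arg5 bridge; the exact geometric criterion used to define a salt bridge will shift such numbers somewhat.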
557 | The bacillary and macrophage response to hypoxia in tuberculosis and the consequences for T cell antigen recognition | Since tuberculosis was declared a global health emergency in 1993 a number of important control efforts have led to a fall of TB-associated mortality and the saving of 45 million lives ."However, up to a third of the world's population is estimated latently infected with Mycobacterium tuberculosis, serving as a reservoir for many of the estimated 9·6 million people who developed TB worldwide in 2014, leading to 1·5 million deaths.Thus, TB now ranks alongside HIV as a leading cause of death worldwide, and the rate of HIV-TB co-infection worldwide in 2014 was 12% .Mtb is transmitted by the cough of an infected person and inhaled into the alveoli of a new host.This process can lead to three possible outcomes: i) a minority develop active primary progressive TB disease and develop a detectable but ineffective acquired immune response, ii) the majority develop latent TB infection that is contained throughout their life by an effective acquired immune response, and iii) a small proportion of those latently infected develop post-primary TB as a result of reactivation of their latent infection, which can be triggered by immune suppression such as HIV-1 infection .Latent Mtb infection is defined solely by evidence of immune sensitization by mycobacterial proteins: a positive result in either the tuberculin skin test or an in vitro interferon gamma release assay, in the absence of clinical signs and symptoms of active disease .However, TST and IGRA do not distinguish latent TB from active disease, and neither have high accuracy to predict subsequent active tuberculosis .Better understanding of the biology of Mtb and of LTBI is necessary in order to develop better diagnostic methods and treatment options.However, the interplay between Mtb and the human host is incompletely understood.Conventionally, LTBI is conceived as Mtb remaining in an inactive, stationary phase in the granuloma as a stable latent population of bacilli capable of surviving under stressful conditions generated by the host .Alternatively, viable non-replicating persistent Mtb reside within alveolar epithelial cells in the lung, with reactivation being associated with the upregulation of resuscitation promoting factors within MTB and the escape of newly dividing microorganisms into alveoli and bronchi .Recent advances in imaging technologies such as computed tomography combined with positron emission tomography have aided the evolution of a concept that LTBI encompasses a diverse range of individual states extending from sterilizing immunity in those who have completely cleared the infection via an effective acquired immune response, to subclinical active disease in those who harbor actively replicating bacteria in the absence of clinical symptoms, through to active TB disease with clinical symptoms .Thus, it has been proposed that Mtb infection may be better viewed as a continuous spectrum of immune responses, mycobacterial metabolic activity, and bacillary numbers.In this model the impact of HIV infection can be conceptualized as a shift towards poor immune control, higher mycobacterial metabolic activity, and greater organism load, with subsequently increased risk of progression to active disease .Direct measurement of lesional oxygen tension in rabbits , and indirect measurements in non-human primates and humans using hypoxia-sensitive probes demonstrate many TB lesions in vivo are hypoxic .Hypoxia is only 
one of the many different stresses Mtb encounters in the granuloma, and in vitro and animal models are limited in the extent to which they recapitulate the multifactorial environment created by the host to arrest mycobacterial growth. Nonetheless, many conceptual advances have been achieved in recent years in our understanding of mycobacterial physiology under low oxygen conditions, particularly in the areas of gene regulation, metabolism, and energy homeostasis. The existence of a coordinated and inducible response of Mtb to low oxygen conditions was initially revealed by Wayne and colleagues, culminating in the now widely employed in vitro “Wayne” model of hypoxia-induced dormancy. In this system, bacteria grown in liquid medium in sealed tubes with limited head space gradually deplete oxygen supplies, leading to a non-replicating state of persistence characterized by reduced metabolism and increased drug tolerance. In this state cellular viability can remain unchanged for weeks to months, with synchronized replication resuming following culture reaeration. The inferred similarities between bacteria grown in vitro under hypoxic conditions and clinical cases of latent infection have made the Wayne model a key tool for investigating the molecular basis of mycobacterial dormancy. A key caveat is that many of these studies were performed using laboratory strains of Mtb that have been passaged aerobically over many years; these findings therefore need to be revisited using recent clinical isolates. Early work on gene expression analysis of Mtb undergoing hypoxic challenge identified a suite of almost 50 genes that were significantly and consistently upregulated relative to aerobic controls. Further work identified that this regulon was controlled by a transcription factor subsequently named DosR, the activation of which was mediated through two classic two-component system-type transmembrane sensor histidine kinases, DosS and DosT. Activation of DosS and DosT in turn is still the subject of some debate; however, strong evidence suggests they sense cellular redox status and dissolved oxygen concentration, respectively, via their heme prosthetic groups. Genes within the DosR regulon are involved in multiple processes including central metabolism, energy generation and gene regulation; however, the majority are of unknown function. Interestingly, despite its dominance of gene expression under hypoxia, multiple studies have demonstrated that genetic inactivation of dosR results in a relatively mild loss of viability under hypoxia in vitro and varying responses in vivo in multiple animal models. Further evidence suggests these effects may be dependent on the exact hypoxia model, strain, animal model, and growth media used. Furthermore, upregulation of the DosR regulon is not specific to hypoxic challenge, and at later time points a second transcriptional program, the enduring hypoxic response (EHR), appears to dominate; however, both responses appear to interact at the regulatory level. The relevance of the EHR in overall bacterial adaptation to hypoxic conditions has yet to be determined. The roles of gene regulation at the posttranslational level have also been assessed in hypoxic Mtb, and both proteases and Serine/Threonine Protein Kinases have been found to play functional and essential roles. For example, a regulator of the Mtb Clp protease, Rv2745c, was identified as being required for re-adaptation of hypoxia-challenged Mtb to normoxic conditions: while viability under hypoxia of an Rv2745c null mutant was identical to wild-type, much lower viability of the mutant strain was
observed upon reaeration .This data complements other studies showing an enrichment of protease and chaperone related genes during reaeration, relative to those observed under hypoxic conditions .A reduction in net carbon flux is a hallmark of hypoxia-induced dormancy in Mtb, suggesting a need to conserve carbon and energy sources for prolonged survival and later resumption of growth when conditions improve.Consistent with the dormancy/hibernation programs of other organisms, Mtb accumulates intracellular triacylglycerides under hypoxia, which correlates with increased expression of the DosR-regulated TAG biosynthetic gene tgs .Switching metabolism towards lipid storage may be a major regulator of metabolic slowdown by forcing acetyl-coA flux away from the energy generating catabolic TCA cycle and into anabolic lipid biosynthesis, as evidenced by enhanced metabolic and replication rates of tgs mutants in the initial stages of hypoxia relative to wild-type bacteria .Also, as previously observed in Mtb grown in vivo, upregulation of the isocitrate lyase transcript, protein, and activity levels are also observed under hypoxia .The canonical metabolic role of Icl is to allow growth on fatty acids as the sole carbon source, suggesting a role for Icl in metabolism of stored TAGs as a carbon and energy source under these conditions.This is supported by upregulation of methylcitrate cycle and methylmalonyl CoA pathway genes, enzyme levels, and metabolic intermediates, and the mixed upregulation/essentiality of the gluconeogenic PfkA/B genes during hypoxia and reaeration, consistent with a predominantly fatty acid-based diet .However, under hypoxia Icl appears to have multiple roles, as Icl null mutants grown on glycolytic carbon sources are significantly growth impaired at low oxygen concentrations relative to WT strains .Icl may be involved in conservation of carbon units and/or maintaining optimal NADH/NAD + ratios under the reducing conditions of hypoxia by bypassing the two oxidative and CO2 releasing TCA cycle steps, or alternatively in the maintenance of the membrane potential and/or the proton motive force via secretion of Icl-produced succinate through a succinate/H+ symport system .Indeed, large amounts of succinate are found to accumulate extracellularly in Mtb grown anaerobically and Icl contributes significantly to this effect .Elsewhere, upregulation of other fatty-acid biosynthetic and catabolic genes have been observed in Mtb grown under hypoxia .Interestingly, like Icl, many of these genes are also induced upon NO stimulation and in vivo, in mouse lung infection, suggesting that a metabolic shift towards lipid metabolism is a general stress response rather than being specific to hypoxia or due to the nature of the provided/available carbon source .Hypoxia-challenged Mtb also substantially down regulate many genes involved in the oxidative direction of the TCA cycle and upregulate expression of several members of the reductive direction.This suggests a role for the reductive TCA cycle under hypoxia, with fumarate reduction as a fermentative endpoint, and provides an alternative explanation for the observed accumulation of succinate under these conditions .The relative contributions of Icl and fumarate reduction to the production of succinate under hypoxia is debated, but is likely influenced by the available carbon source.Icl catalysis also releases glyoxylate, a metabolite toxic to Mtb cells if left to accumulate.The canonical metabolic fate of glycoxylate is condensation with 
acetyl-CoA to form malate catalyzed by GlcB, however GlcB activity is down-regulated in hypoxic Mtb .Instead, glycine levels are seen to increase in an Icl-dependent manner , inferring subsequent reduction of glyoxylate to glycine as an alternative detoxification step under these conditions.Consistent with this hypothesis, expression and activity of glycine dehydrogenase increases substantially in hypoxic Mtb .Glyoxylate reduction is also a possible fermentative mechanism of regenerating oxidized cofactors during the reductive stress of hypoxia.Less is known about peripheral metabolic pathways and biosynthesis of other essential compounds and macromolecules under hypoxia.There appears to be growing evidence for shifts in nitrogen metabolism, particularly influenced by the large amount of nitrogen syphoned into the sequestration of glyoxylate, changes in glutamine biosynthesis , observations of aspartate secretion , polyglutamate/glutamine biosynthesis , as well as a possible assimilatory role for the DosR-regulated nitrite reductase .Upon entry into hypoxia mycobacteria experience significant decreases in ATP levels and increases in their NADH/NAD + ratio, indicative of a blocked electron transport system and consistent with depleted stores of terminal electron acceptors .However, ATP levels remain non-zero throughout hypoxic challenge, and de novo ATP synthesis via the ETS is a strict requirement for bacterial survival under these conditions .This suggests that, despite cessation of replication, Mtb maintains both an energized membrane and constitutive ATP production even in the absence of molecular oxygen.Interestingly, transcriptional changes under hypoxia demonstrate a functional switch to the use of less energy efficient respiratory complexes, including upregulation of the non-proton-translocating type II NADH dehydrogenase and cytochrome bd oxidase and down regulation of the proton-pumping type I NADH dehydrogenase.The survival benefit in uncoupling electron transport from generation of the proton motive force suggests that cofactor recycling is more important than ATP generation under these conditions, and/or that the PMF is already sufficiently maintained by alternative measures.Succinate dehydrogenase, which physically links the TCA cycle and ETS, has recently been shown to play a key but enigmatic role in mycobacterial adaptation to hypoxic conditions.Genetic deletion of succinate dehydrogenase 1, the major aerobic SDH, abolishes the ability of bacteria to regulate oxygen consumption when approaching hypoxia which subsequently led to increased bacterial death at later stages of anaerobiosis .However, other evidence suggests that Sdh-2 may have a key role during hypoxia, either as a canonical succinate dehydrogenase/fumarate reductase and/or in maintenance of the PMF.While certain proteinaceous modules of the ETS appear to differ between hypo- and normoxia, quinone electron carriers are indispensable across all conditions.Accordingly, inhibition of menaquinone biosynthesis is cidal to anaerobic bacteria .Intriguingly, menaquinone:menaquinol homeostasis under hypoxia may also play a larger regulatory role in addition to electron transport, including in activation of the DosS sensor kinase of the DosR system and regulation of SDH-1 catalytic activity .Also, total MQ pool sizes are reduced under hypoxia, and addition of exogenous MQs lowers cell viability , while the degree of saturation of the MQ isoprenyl tail also changes under low oxygen conditions .Deletion of the gene that 
reduces the MQ isoprenoid side chain results in reduction of efficiency of electron transport and compromised survival in macrophages.The reduced isoprenoid side chain seems highly unlikely to affect the intrinsic redox behavior of this cofactor suggesting that this modification tunes the two forms of MQ to interact with different redox partners and that these therefore have discrete biological functions .Recently, a polyketide synthase biosynthetic gene cluster was identified in M. smegmatis that was upregulated under hypoxia and coded for the production of novel benzoquinoid compounds.Genetic deletion led to lower viability under hypoxic conditions, which could be rescued upon addition of exogenous synthetic benzoquinones.It is unknown whether Mtb carries the same biosynthetic capabilities.The benefit of such alternative electron carriers under hypoxic conditions is unknown, but may be related to the lower potential difference between oxidized and reduced forms of the benzoquinone moiety relative to the napthoquinone bicyclic ring system of menaquinones .In the absence of molecular oxygen many facultative anaerobes can switch to alternative external TEAs to sustain respiration.Mtb contains all the genetic elements necessary for reduction of nitrate and nitrite, and both of these activities have been detected in growing cells .Nitrite production increases significantly in anaerobically grown Mtb, even though neither expression of the NarGHJI operon nor corresponding catalytic activity in whole cell extracts is significantly different between bacteria grown aerobically or anaerobically.The nitrate import/nitrite export NarK2X operon, however, is part of the DosR regulon and is strongly upregulated under hypoxia , suggesting that NarGHJI activity is modified post-translationally following activation of the nitrate import machinery.Interestingly, NarG null mutants display no fitness or viability cost compared to wild-type strains when grown under hypoxic conditions , casting doubt on the functional importance of nitrate reduction within the context of the ETS under low oxygen conditions.Similarly, the nitrite reductase NirBD only appears to be expressed and have physiological importance when nitrate or nitrite is supplied as the sole nitrogen source, whether under aerobic or anaerobic conditions .However, adding exogenous nitrate to the growth medium of anaerobic bacteria abolishes the aforementioned succinate secretion, restores ATP levels, lowers the NADH/NAD + ratio, and also buffers against the -cidal effects of mild acid challenge, but only in the presence of an intact NarGHJI operon .Therefore, nitrate reduction may occupy a non-essential but conditionally important role, independent of nitrogen assimilation, in mycobacterial survival of hypoxic challenge by aiding in maintenance of both the PMF and ATP levels.Macrophages undergo substantial phenotypic changes when exposed to reduced oxygen tension and several lines of evidence suggest that hypoxia modulates central effector functions of this key innate immune cell.The restriction of local oxygen supply was shown to lead to an increased formation of cytokines, chemokines proangiogenic factors but to a reduced eicosanoid synthesis by these cells .Human mononuclear cells and macrophages facing hypoxic conditions secrete significantly enhanced amounts of the major pro-inflammatory cytokines IL-1β and TNF .Various studies have shown that there is a hypoxia-mediated increase in innate immune cell migration into tumor tissue and other 
hypoxia-related disease settings such as rheumatoid arthritis and atherosclerosis .During migration into inflammatory tissue, monocytes/macrophages encounter a gradual decrease in oxygen availability.The increased migration may be due to a hypoxia-induced chemokine gradient or due to recently observed HIF-1α dependent, chemokine independent accelerated migratory capacity of macrophages, when oxygen tension drops below a certain value .HIF-1α plays a key role for macrophages to adapt to low oxygen tension.Cell-specific deletion of HIF-1α or transient gene silencing in macrophages reduces inflammatory responses with regard to macrophage motility and invasiveness, phagocytic capacity and most importantly bacterial killing .However also under normoxic conditions HIF-1α is induced upon bacterial infection .It plays an important role for the production of key immune effector molecules, including granule proteases, antimicrobial peptides, TNF and nitric oxide.The latter is of major importance since antibacterial immunity critically depends on NO production through Nitric Oxide Synthase-2 in macrophages of infected mice.The importance of HIF-1α for bacteria induced NOS2 expression has been also demonstrated in studies using macrophages stimulated with lipopolysaccharide , lipoteichoic acid and mycobacteria derived trehalose dimycolate .Notably, Mi et al. showed that pattern recognition receptor dependent stimulation of murine macrophages under hypoxia leads to enhanced NOS2 expression when compared to normoxic conditions , indicating that cell activation by conserved microbial structures is augmented under hypoxic conditions.Indeed, there is a close relationship between HIF-1α and a central transcriptional regulator for innate immunity and inflammatory processes the transcription factor NF-kappaB .It was shown that hypoxia itself activates NF-kB through decreased Prolyl hydroxylase-1-dependent hydroxylation of IkappaB kinase-beta .In addition TLR4 activation enhances HIF-1α transcript levels and thus promotes the expression of NF-kB -regulated cytokines in macrophages .The key role of HIF-1α for the production of central immune effector molecules is directly linked to reduction of cellular ATP levels .Under hypoxic conditions HIF-1α promotes the switch to glycolysis so that these cells can continue to produce ATP when oxygen is limited .This change in cellular energy metabolism is also observed in LPS-stimulated macrophages, similar to hypoxic conditions: leading to a metabolic shift towards glycolysis away from oxidative phosphorylation .This phenomenon of aerobic glycolysis in immune cells, resembling the Warburg effect in tumors , seems to be necessary for a vigorous and robust response upon classical activation of macrophages, though this metabolic transition results in an abating Krebs cycle which is coupled to a less efficient energy production.This reprogramming leads to an increased production of critical metabolites such as succinate, itaconic acid and nitric oxide, all of which have key effector functions during infections .During activation macrophages use other metabolic pathways to satisfy their need for precursor molecules.For example, murine macrophages use an aspartate-arginosuccinate shunt to maintain Interleukin-6 and NO production during M1 activation .Huang et al. 
showed that cell-intrinsic lysosomal lipolysis is essential for alternative activation of macrophages , further substantiating the link between inflammatory activation and metabolic reprogramming.These studies not only show that inflammatory activation modulates cellular metabolism, but also suggest that the metabolic pathways themselves alter macrophage effector functions dramatically .Intriguingly, the Krebs cycle metabolite succinate serves as an inflammatory signal in macrophages, enhancing IL-1β production by stabilizing HIF-1α .This study, and also the work of Haschemi et al. implicating the carbohydrate kinase-like protein CARKL as an immune modulator in macrophages, shows that metabolic reprogramming is required for full macrophage effector function .However, it also suggests that a manipulation of biosynthetic pathways or changes in metabolite levels may affect immune cell function, as shown for saturated and polyunsaturated fatty acids in dendritic cells .Mtb infects macrophages, dendritic cells and neutrophils, with macrophages most extensively studied.Infection with Mtb leads to a wide array of cellular responses, most of which have been studied under normoxia.The evolutionary success of virulent mycobacteria likely depends on cross-species-conserved mechanisms operative in infected cells , which allow bacillary replication and persistence by fine-tuning pro- and anti-inflammatory activity .Limited inflammation results in improper activation of macrophages, defective antimicrobial activity, and intracellular survival of the bacilli.Excessive inflammation promotes recruitment of additional Mtb-permissive cells, cell death, and extracellular replication of the bacilli .Most studies indicate that reduced tissue oxygen promotes innate immune cell functions.From a host perspective, by affecting the fine-tuned inflammatory balance within granulomas, hypoxia could do both, either improve the immunity against Mtb, or lead to an impaired growth restriction by causing excessive inflammation and immunopathology.Human monocyte derived macrophages cultured in 5% oxygen, corresponding to the physiological tissue concentration, permitted significantly less growth than those cultured at the 20% oxygen levels of ambient air .Meylan et al. 
concluded that macrophages cultured at low oxygen tension may differ from their counterparts cultured at a higher oxygen level in that their intracellular milieu is less supportive of mycobacterial growth.A low pO2, which is closer to tissue conditions, did not affect the growth of free-living bacteria but strikingly reduced the growth of intracellular mycobacteria.The growth inhibitory effect was not due to a putative differential response to IFN-γ or TNF-α at low oxygen conditions, but was associated with a shift from oxidative toward glycolytic metabolism, consistent with earlier work in which macrophages cultured at low pO2 showed a metabolic shift toward glycolysis .This was an early hint that metabolic changes contribute to Mtb growth control in macrophages.Recent data now show that glycolysis is involved in Mtb growth control in human and murine primary macrophages .A third study also clearly demonstrated significantly decreased growth of Mtb under hypoxia, when compared to human macrophages kept at 20% .Importantly, macrophage viability, phagocytosis of live Mtb bacteria and Mtb-induced cytokine release were not affected.It has been shown that hypoxia also leads to the induction of autophagy , an important mechanism known to limit the growth of intracellular pathogens including Mtb .However there are no data that imply a functional role for this anti mycobacterial effector mechanism under hypoxic conditions.Thus the molecular mechanisms limiting Mtb growth under hypoxic conditions are still incompletely understood.At the same time Mtb is thought to adapt to an intracellular lifestyle of non-replicating persistence in which it is largely resistant to known bactericidal mechanisms of macrophages and many antimicrobials .This hypoxia-mediated control of Mtb replication is at the same time associated with a significant metabolic reprogramming of its host cell.Human macrophages cultured for 24 h under hypoxia accumulate triacylglycerols in lipid droplets .The authors observed increased mRNA and protein levels of adipocyte differentiation-related protein also known adipophilin/perilipin 2, a key factor of lipid droplet formation .Exposure to hypoxia but also to conserved microbial structures decreased the rate of beta-oxidation, whereas the accumulation of triglycerides increased inside the host cell.This phenomenon has recently been attributed to a metabolic switch towards glycolysis by simultaneously decreasing lipolysis and fatty acid oxidation .It appears this metabolic shift leading to lipid droplet formation is exploited by Mtb.Daniel et al. 
observed that human peripheral blood monocyte-derived macrophages and THP-1 macrophages incubated under hypoxia accumulate Oil Red O-stained lipid droplets containing TAG .The authors were the first to study this effect in the context of Mtb infection.They demonstrated that inside hypoxic, lipid-laden macrophages, nearly half the Mtb population developed phenotypic tolerance to isoniazid, lost acid-fast staining and accumulated intracellular lipid droplets.The fatty acid composition of host and Mtb TAG were nearly identical suggesting that Mtb utilizes host TAG to accumulate intracellular TAG.Other groups suggested that Mtb actively induces this type of lipid-laden phenotype via targeted manipulation of host cellular metabolism resulting in the accumulation of lipid droplets in the macrophage .Mtb oxygenated mycolic acids trigger the differentiation of human monocyte-derived macrophages into foamy macrophages .Interestingly it has been observed that inhibition of autophagy leads to increased levels of TAG and lipid droplets, and pharmacological induction of autophagy leads to decreased levels of lipid droplets .This may be of functional relevance, since it was shown that Mtb uses a miRNA circuit to inhibit autophagy and promote fatty acid stores in lipid droplets to ensure its own intracellular survival .Lipid-loaded macrophages are found inside the hypoxic environment of the granuloma.They contain abundant stores of TAG and are thought to provide a lipid-rich microenvironment for Mtb .Numerous studies have demonstrated that Mtb relies on fatty acids and also cholesterol as important nutrients during infection, which are used for energy synthesis, virulence factor expression, cell wall and outer membrane construction; and to limit metabolic stress .Moreover, the development of the lipid-rich caseum in the human TB granuloma has been shown to correlate with a realignment in host lipid metabolism within the granuloma, suggesting a pathogen-driven response leading to the pathology necessary for Mtb transmission .The formation of granulomas is the hallmark of Mtb infection.A granuloma can be defined as an inflammatory mononuclear cell infiltrate that, while capable of limiting growth of Mtb, also provides a survival niche from which the bacteria may disseminate.The tuberculosis lesion is highly dynamic and shaped by both, immune response elements and the pathogen .During disease the formation of necrotic granuloma may occur.Necrotic granulomas have an outer lymphocyte cuff dominated by T and B cells and a macrophage-rich mid region that surrounds an amorphous center of caseous necrosis .In these characteristic lesions, mycobacteria often reside within necrotic tissue that has no obvious supply of oxygen .Indirect evidence links changes in oxygen tension with varying TB disease .Intriguingly tuberculosis infections preferentially occur in the most oxygen-rich sites in the human body .In line with these data is the observation that within the lungs of patients failing TB chemotherapy, histological examination of different lung lesions revealed heterogeneous morphology and distribution of acid-fast bacilli .Both studies suggest that reduced levels of O2 may limit Mtb growth in vivo.It is presumed that Mtb resides in these regions in a slow growing or non-replicating form, due to limited availability and supply of oxygen and nutrients .A number of animal model systems including mice, guinea pigs, rabbits, zebrafish and non-human primates are used to research aspects of granuloma immunopathology in 
mycobacterial infections.The low dose aerosol model of experimental TB infection in mice has been valuable to define immunological mechanisms of protection against infection, the virulence of mycobacterial strains, or validating novel chemotherapeutic strategies against TB .However mice infected with Mtb fail to produce highly organized caseous or necrotic lesions and do not develop hypoxic regions within their infected lungs suggesting that standard mouse models of persistent tuberculosis may not be suitable for the study of the hypoxic response in Mtb infection.In contrast to mice tuberculous granulomas in guinea pigs, rabbits, nonhuman primates , and zebrafish are hypoxic and are appropriate models to study the effect of low oxygen tension in Mtb infection.However three independent, recently developed mouse models may offer new opportunities to study these effects also in TB infected mice.Dermal TB infection of NOS-deficient mice results in development of classic human granuloma pathology when IFN-γ or TNF-α activity is blocked in vivo .Unlike BALB/c and C57Bl6 mice, C3HeB/FeJ mice infected with Mtb showed evidence of lesion hypoxia, fibrosis, liquefactive necrosis, and occasional cavity formation .Very recently aerosol Mtb infection of IL-13 overexpressing mice resulted in pulmonary centrally necrotizing granulomas with multinucleated giant cells, a hypoxic rim and a perinecrotic collagen capsule, with an adjacent zone of lipid-rich, acid-fast bacilli-containing foamy macrophages, thus strongly resembling the pathology in human post-primary TB .Thus the use of human tissues or an appropriate animal model to study the host granulomatous response to Mtb is of ultimate importance.What are the characteristic features of macrophages in hypoxic conditions within the granulomatous lesion?,Macrophages in granulomas are both antimycobacterial effector but also the host cell for Mtb.Detailed immunohistochemical analysis of granulomatous lesions from Mtb infected cynomolgus macaques, a non-human primate, using a combination of phenotypic and functional markers suggests that macrophages with anti-inflammatory phenotypes localized to outer regions of granulomas, whereas the inner regions were more likely to contain macrophages with proinflammatory, presumably bactericidal, phenotypes.Active lesions display a gradient of anti- and pro-inflammatory phenotypes, with anti-inflammatory CD163+ iNOS+ Arg1high macrophages on outer margins and proinflammatory CD11c+ CD68+ CD163dim iNOS+ eNOS+ Arg1low macrophages toward the center, thus making it possible to mount antibacterial responses safely away from uninvolved tissue.These data support the concept that granulomas have organized microenvironments that balance antimicrobial and anti-inflammatory responses to limit pathology in the lungs .This is consistent with a recent study demonstrating that inflammatory signaling in human tuberculosis granulomas is spatially organized .The authors applied laser-capture microdissection, mass spectrometry and confocal microscopy, to generate detailed molecular maps of human granulomas.It was observed that the centers of granulomas have a pro-inflammatory environment that is characterized by the presence of antimicrobial peptides, reactive oxygen species and proinflammatory eicosanoids.Conversely, the tissue surrounding the caseum has a comparatively anti-inflammatory signature.If one relates these data to the spatial distribution of local oxygen tension within TB granuloma, there is a nearly perfect concordance between 
areas of hypoxia, necrosis, and a high degree of proinflammatory activities.In other words the highly hypoxic center is the focus of greatest antimicrobial activity, which is surrounded by an area of reduced proinflammatory activity and gradually increasing oxygen tension.It is of particular interest that foamy macrophages, which are key participants in both sustaining persistent bacteria and contributing to tissue pathology are located mainly in the interface region surrounding central necrosis .As a result of the complex host pathogen interplay foamy macrophages in the interface region may reflect the perfect niche and prime location for Mtb to initiate a new round of infection.The development of hypoxia is also known to be a stimulus for vascularization .In TB it has been observed that cavitary TB patients presented patterns of low vascularization in the areas of peripheral infiltration, whereas tuberculoma lesions were always surrounded by highly vascularized tissue .This is consistent with the finding that progression to necrosis and caseation is associated with the formation of vascular epithelial growth factor by activated macrophages .Indeed VEGF, a primary mediator of host vascularization, has been found to be induced in human tuberculosis patients .In another smaller study VEGF was postulated as a host marker to differentiate active TB from latent TB infection .A recent study showed that vascularization of zebrafish granulomas was accompanied by macrophage expression of VEGF.Most importantly, treatment of infected animals with a VEGFR antagonist led to dramatic reductions in vascularization and bacterial burdens, demonstrating that a granuloma-induced VEGF-mediated angiogenic program is beneficial to mycobacteria .Taken together, while hypoxia seems host protective at first sight, Mtb may exploit the hypoxia-induced host response to ensure its survival and transmission.Understanding the host immune responses following infection with MTB is essential to help design effective vaccines and identify diagnostic and prognostic immune biomarkers.Antigen discovery efforts have been a core activity in mycobacterial research for several decades, facilitated by the availability of the genome sequence .Antigen discovery approaches include i) the use of algorithms for genome-based prediction of immunodominant epitopes, ii) evaluation of candidate antigens/epitopes for T cell recognition, and iii) understanding the relationship between epitope specificity and the phenotype of the responding T cells.All these approaches rely on the assumption that the antigens of interest are expressed, translated and presented by infected cells, where they are recognized by T cells.While the MTB genome consists of close to 4000 genes, little is known about the MTB antigen repertoire that is actually expressed by the bacilli during infection of human cells.Sequencing the genomes of 21 strains, representative of the global diversity of the MTB complex showed, that the majority of the experimentally confirmed human T cell epitopes had little sequence variation, suggesting they are evolutionarily hyperconserved, implying that MTB might benefit from recognition by human T cells .However, this knowledge is biased by the methods used to experimentally confirm the human T cell epitopes: using IFN-gamma production as a read-out.IFN-gamma is the most established readout of cell mediated immune response assays and a hallmark of the Th1 type cellular immunity .The importance of the Th1 type immunity in controlling MTB 
infection has been established both in mice and humans .However, it may be an incomplete representation of the cytokine repertoire and functional response of T cells to MTB antigens, and we still do not have a validated immune correlate of protection from TB disease to aid antigen discovery and identification of vaccine candidates.Thus, antigens activating immune cells other than CD4+ and/or CD8+ T cells, producing cytokines other than IFN-gamma are less widely explored .While a number of cytokines and chemokines are being evaluated as alternatives to IFN-gamma, data are still preliminary .Additionally, broadening antigen selection strategies is necessary, such as screening subdominant epitopes, which are not, or only weakly, recognized during natural immunity, but are able to induce immunity and protection against MTB challenge, as demonstrated in mouse models .As indicated above, Mtb can adapt transcriptionally to a wide variety of environmental conditions, such as nutrient depletion, shifts in pH and hypoxia in vivo.The hypothesis that genes highly induced under such conditions may also be expressed and available as potential T cell targets has led to the derivation of what are termed infection stage specific MTB genes and thus their cognate antigens.Amongst the first antigens to be investigated were those of the heat shock response: proteins induced under stress conditions, such as elevations of temperature causing denaturation of proteins during infection .Heat shock proteins assist the survival of MTB but also provide a signal to the immune response.The gene Rv0251c is induced most strongly by heat shock in MTB.It encodes Acr2, a member of the alpha-crystallin family of molecular chaperones.The expression of Acr2 increases within 1 h after infection of monocytes or macrophages, reaching a peak of 18- to 55-fold increase by 24 h of infection in vitro.However, a deletion mutant was unimpaired in log phase growth and persisted in IFN-gamma-activated human macrophages, suggesting that Rv0251c is dispensable.The protein Acr2 is strongly recognized by cattle with early primary Mycobacterium bovis infection and also by healthy MTB-sensitized people.Interestingly, within the latter group, those with recent exposure to infectious tuberculosis had higher frequencies of Acr2-specific IFN-gamma-secreting T cells than those with more remote exposure, suggesting infection stage-specific immunity to tuberculosis .Several studies evaluated the above candidate genes, and many were found to encode MTB antigens that induce strong immune responses.One of the most abundant upregulated proteins during hypoxia is the 16 kDa protein , also a DosR regulated antigen.Attributes of immunodominance, predominant expression during mycobacterial dormancy and species specificity made it a highly attractive candidate for the study of the immune response in humans.Further studies demonstrated it to be immunodominant in both the murine and human systems .The most permissively recognized region was found to be between amino acids 91–110, possibly due to its ability to bind multiple HLA-DR alleles .The finding that the IFN-gamma response to Rv2031c was higher in healthy BCG-vaccinated controls compared to those with extensive untreated tuberculosis led to the speculation that prolonged containment in humans may be contributed to by long-lived Rv2031c-specific cells, able to divide on re-challenge, and thus limit dissemination .This was further investigated by comparing T-cell responses against Rv2031c and the secreted 
MTB protein Ag85B in TB patients and various controls.Gamma interferon responses to Rv2031c were higher in MTB-exposed individuals, with no such differences found against the secreted Ag85B.The term ‘latency antigens’ was coined and suggested that subunit vaccines incorporating latency antigens, as well as recombinant BCG strains expressing latency antigens should be considered as new vaccines against TB .These findings prompted the investigation of the human immune response to other DosR regulon encoded genes, summarized in Ref. .Overall, DosR encoded immunodominant antigens have been termed ‘latency antigens’ due to preferential recognition shown by those with LTBI in terms of a higher IFN-γ response, when compared to those with active tuberculosis .In particular Rv1733c, Rv2029c, Rv2627c and Rv2628c induced strong IFN-gamma responses in skin test positive individuals, suggesting that immune responses against these antigens may contribute to the control of LTBI.The immunogenicity of these promising DosR regulon-encoded antigens by plasmid DNA vaccination was also assessed in mice.Strong immune responses could be induced against most, the strongest being Rv2031c and Rv2626c, providing proof-of-concept for studies in mice mimicking LTBI models and their extrapolation to humans for potential new vaccination strategies against TB .A number of comprehensive studies followed, partially summarized in Table 1, which is however by no means exhaustive.A detailed analysis of MTB genes that are upregulated during the latent stage of infection was considered a priority to identify new antigenic targets for vaccination strategies .Transcriptional analysis of the hypoxic response at later timepoints led to the identification of 230 genes induced between 4 and 7 days of hypoxia, that were named the enduring hypoxic response genes .Analysis of EHR encoded proteins could provide novel T cell targets, with the hypothesis that these genes may be expressed in vivo and thereby could be targets of the immune response .In order to relate what is expressed by the bacilli in vivo or in vitro, to what is recognized by human T cells as antigens, a combined bioinformatic and empirical approach was employed as a novel genome based strategy, to guide the discovery of potential antigens.The fold induction of the top 100 highly induced genes at 7 days of hypoxia, their transcript abundance, population specific MHC class II-peptide binding prediction, and a literature search was combined, leading to the selection of 26 candidate genes.Overlapping peptides were used in combination with two readout systems, ELISpot for IFN-γ as well as IL-2.Five novel immunodominant proteins: Rv1957, Rv1954c, Rv1955, Rv2022c and Rv1471, showed responses similar to the immunodominant antigens CFP-10 and ESAT-6 in both magnitude and frequency.These findings revealed that a number of hypoxia-induced genes are potent T-cell targets and therefore offers general support to the important role of hypoxia in the natural course of TB infection.Importantly however, only moderate evidence of infection stage specific recognition of antigens was observed .In light of the above findings, the hypoxia inducible MTB specific proteins absent from the BCG vaccine strains were also evaluated.One region of difference 2 and two RD11 encoded proteins were identified, that are absent from the commonly used BCG strains and all M. 
bovis strains including BCG, respectively. When compared to the immunodominant molecules ESAT-6 and CFP-10, IFN-gamma responses to the RD11 proteins were inferior in both aTB and LTBI groups. A strong IL-2 recall response to Rv1986 was found in LTBI, targeted at two epitopic regions, containing residues 61–80 and 161–180. These studies confirmed that genomic knowledge does aid antigen discovery, especially when it is complemented with population specific MHC class II-peptide prediction analysis, as also shown in a different study later. Additionally, these studies also confirmed that a number of EHR genes are expressed in vivo and are potent T-cell targets of the immune response. The results further our understanding of the biology of latent infection and offer general support to the hypoxia hypothesis and its relationship to the natural course of MTB infection. While some of these findings did not provide support to the hypothesis of infection stage specific antigen recognition, they support an overlapping immunological spectrum between those with latent and active TB disease, as suggested. Whilst hypoxia does characterize granulomas in tuberculosis infection, it is increasingly appreciated and accepted that even those with active TB disease have a spectrum of lesions, similar to those of the latently infected, and it is likely that hypoxic lesions are present in both clinical states. This has been shown in the cynomolgus macaque model: the fate of individual lesions varies substantially within the same host, suggesting that critical responses occur at the level of each individual lesion, to ultimately determine the clinical outcome of infection in the infected host. | Mycobacterium tuberculosis is a facultative anaerobe and its characteristic pathological hallmark, the granuloma, exhibits hypoxia in humans and in most experimental models. Thus the host and bacillary adaptation to hypoxia is of central importance in understanding pathogenesis and thereby in deriving new drug treatments and vaccines. |
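The genome-guided antigen selection strategy outlined above combines heterogeneous criteria (hypoxic fold-induction, transcript abundance and population-specific MHC class II binding predictions) into a shortlist of candidates. The snippet below is a purely illustrative sketch of how such criteria might be merged into a single ranking by min-max normalisation and unweighted averaging; the gene names are taken from the text, but the numerical values are invented placeholders and the equal weighting is an assumption, not a method taken from the cited studies.

```python
# Illustrative ranking only: placeholder values, equal weighting assumed.
candidate_genes = {
    # gene: (fold induction at 7 d hypoxia, transcript abundance, predicted MHC-II binding score)
    "Rv1957":  (42.0, 310.0, 0.81),
    "Rv1954c": (35.5, 250.0, 0.77),
    "Rv2022c": (28.0, 410.0, 0.69),
    "Rv1471":  (19.5, 520.0, 0.74),
}

def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

genes = list(candidate_genes)
criteria = list(zip(*candidate_genes.values()))          # one tuple per criterion
normalized = [min_max_normalize(col) for col in criteria]

# Composite score = unweighted mean of the normalized criteria (a modelling
# choice for this sketch, not something specified in the original studies).
composite = {gene: sum(norm[i] for norm in normalized) / len(normalized)
             for i, gene in enumerate(genes)}

for gene, score in sorted(composite.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{gene}: {score:.2f}")
```

In practice each criterion would come from its own pipeline (expression fold changes, epitope-binding predictors), and a literature-based filter such as the one described above would be applied on top of any numerical ranking before candidates are tested empirically.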
558 | A Prospective, non-intErventional Registry Study of PatiEnts initiating a Course of drug Therapy for overactIVE bladder (PERSPECTIVE): Rationale, design, and methodology | Overactive bladder is a syndrome characterized as urinary urgency, with or without urgency incontinence, and is usually accompanied by frequent urination and nocturia in the absence of a urinary tract infection or other obvious pathology .The worldwide prevalence of OAB is projected to be approximately 20% in 2018 .A national population survey conducted in adults in the United States estimated the overall prevalence of OAB to be 16.5% , whereas a Canadian study found a slightly lower overall prevalence of OAB among adults .OAB is noted to be more common in women than men, and OAB symptoms are noted to increase with age in both sexes .OAB represents a substantial health burden and is associated with several comorbidities and consequences, including urinary tract infection, certain skin infections, depression, sleep disturbances, and falls and fractures .The most widely used pharmacologic treatments for OAB are antimuscarinics.However, treatment is not always effective and is often associated with side effects that limit its clinical use .Mirabegron is a β3-adrenoceptor agonist that was approved for the treatment of OAB with symptoms of urgency urinary incontinence, urgency, and/or urinary frequency in the US in 2012, and in Canada in 2013 .Data from clinical trials in a variety of populations have demonstrated significant improvements in the efficacy of mirabegron versus placebo and comparative efficacy with antimuscarinics, with an adequate and well tolerated safety profile .Findings from studies utilizing patient-reported outcome measures have shown that improvements in symptoms and health-related quality of life were also higher in patients who took mirabegron versus placebo .However, these findings have not been confirmed in a real-world setting in North America.Antimuscarinic side effects may be responsible for lack of treatment persistence and, therefore, therapy using a drug with a different mechanism of action could lead to improved persistence.Retrospective claims database studies in both the US and Canada concluded that patients treated with mirabegron persisted longer on treatment than those treated with an antimuscarinic.Although treatment with mirabegron led to higher 12-month persistence and adherence rates , nearly one-third of mirabegron-treated patients discontinued treatment by 12 months, and reasons for doing so remain unclear.The purpose of this paper is to describe the objectives and methodology of PERSPECTIVE, a novel registry that was designed to provide real-world data on patients with OAB beginning a new course of either mirabegron or an antimuscarinic.Assessment of treatment patterns and persistence with OAB medication will provide insights into the current management of OAB while evaluation of treatment effectiveness and satisfaction from the patient perspective will help us better understand the reasons for nonadherence, discontinuation, and switching of OAB medications.PERSPECTIVE will also explore how patient characteristics may affect treatment utilization and OAB-related outcomes.PERSPECTIVE was a prospective, multicenter, non-interventional registry following adult patients diagnosed with OAB for at least 3 months who were starting a new course of pharmacotherapy in routine clinical practice in the US and Canada.The primary objective of the study was to identify factors associated 
with improved effectiveness of pharmacologic therapy for OAB from a patient perspective through selected PROs.The study also aimed 1) to compare persistence rates, reasons for discontinuation, and switching patterns among patients taking mirabegron compared with those taking antimuscarinic treatments for OAB, and 2) to understand how differences in drug selection, OAB treatment history, and comorbid conditions contribute to OAB symptom bother and HRQoL in the real world.To ensure a rigorous study design, implementation, and evaluation, an independent scientific advisory committee composed of clinicians and researchers with experience in the management of OAB in general practice/internal medicine, obstetrics and gynecology, urology, or uro-gynecology settings, was established.The SAC provided scientific review and guidance on registry development, including the choice of data elements, as well as conduct of the study.PERSPECTIVE was conducted in accordance with all applicable laws and regulations including, but not limited to Good Pharmacoepidemiology Practices, Good Pharmacovigilance Practices, the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance Guide on Methodological Standards in Pharmacoepidemiology, the ethical principles that have their origins in the Declaration of Helsinki and applicable privacy laws.The protocol and any associated documents for this study were approved by an appropriate Institutional Review Board or Independent Ethics Committee at each site prior to any patients being enrolled.Pertinent safety information is submitted to the relevant IECs during the course of the study in accordance with local regulations and requirements.The IRBs approved compensation to patients for time spent in the study via gift cards.Healthcare centers and sites with providers in medical specialties who are routinely involved in the care and treatment of OAB patients were targeted for recruitment.Enrolled patients initiated a new course of treatment with either mirabegron or an approved antimuscarinic medication for the treatment of OAB.Treatment-naïve patients as well as patients changing the class or brand of oral pharmacotherapy for OAB were eligible for enrollment.To ensure the recruitment of a representative patient population for the study, key characteristics were incorporated into the recruitment plan.Site-level enrollment caps were set to ensure enrollment of a geographically diverse population.Caps were also set to enroll 85% of patients from the US and 15% from Canada across approximately 100 centers.Patients with OAB treated with onabotulinumtoxinA, sacral neuromodulation, or percutaneous tibial nerve stimulation, and those with a history of external beam radiation, indwelling urinary stents, surgery, or intermittent catheterization prior to or at the time of enrollment were ineligible for enrollment into the study.Also ineligible were patients with neurologic conditions associated with OAB symptoms, pregnant or breastfeeding patients, and those residing in a nursing home.The study was conducted from January 2015 until August 2017.The overall study duration was 28 months; 16 months for recruitment plus an additional 12 months of follow-up.The registry protocol did not provide or recommend any specific treatment nor mandate patients to complete in-person follow-up visits, though all patients must have had a routine clinical care visit in person that qualified as a study baseline/initiation visit."The observational design of this registry allowed treatment 
decisions to be made at the discretion of the treating clinician in accordance with their usual practices, and all clinical assessments were performed at the time of a routine clinical encounter with additional data collected from the patients' medical records.It is important to note that all provinces in Canada, except British Columbia and Quebec, participated in a patient support program that ran from December 2014 through July 2015.In the province of Alberta, the voucher program ran from December 2014 to September 2015.The program paid up to 100% for mirabegron, including refills, resulting in reduced or no out-of-pocket costs for the patient.This type of program can impact short-term treatment decisions and, thus, was carefully considered in the analyses for this registry.No such support was provided in the US."Data important to assess use and effectiveness of OAB pharmacotherapies were captured from patients' medical records and electronically directly from the patients at baseline, and at 1, 3, 6, and 12 months of follow-up, as well as at any ad hoc time points at which patients either switched or discontinued their treatment. "Clinical information was collected from data routinely recorded in patients' medical records as well as prospectively by the investigators.Sites entered de-identified patient data into a secure internet-based data capture system via electronic case report forms."At baseline, patients' demographics, measures of health, diagnosis of and prior history of OAB symptoms and treatment, and relevant medical history were collected.Data collected at baseline and throughout follow-up, should any visits have occurred, included concomitant medications, use of complementary/supportive OAB therapies, new or current mirabegron/antimuscarinic use and changes in use, current non-pharmacologic OAB interventions, as well as frequency and type of medical care and utilization.In addition, sites were asked to provide reasons for patients discontinuing, switching, or adding on OAB medications as well as reasons why referrals to a specialist were made.Spontaneous reporting of safety events, including information on event type, severity, relationship to OAB treatment, and outcome, was also conducted throughout follow-up.The PROs were completed electronically by the patient via an email link at the specified time intervals, i.e., baseline: days 0 to 7, month 1: days 30 to 45, month 3: days 91 to 125, month 6: days 182 to 208, and month 12: days 336 to 377.The OAB-specific PROs were validated for use in OAB populations, and they included the following: OAB Questionnaire Short-Form , Patient Perception of Bladder Condition , and OAB Satisfaction Questionnaire ).The EuroQoL 5 dimensions, developed by the EuroQol Group, is a system of HRQoL states that has been shown to be able to detect changes in severity of disease among individuals with OAB .As only minor modifications were made to the format of the PROs when adapting the forms for electronic use, no additional validation was considered necessary ."De-identified PRO data were linked with a patient's clinical data entered by the sites for analysis.Patients received up to three email reminders for each PRO follow-up time point with links to the PRO instruments.Patients also completed questions on reasons for discontinuing OAB medication, if applicable.The PROs used in this study were selected to align with the concepts central to OAB-related symptomatology and HRQoL, and represent instruments that are validated for or could easily be 
adapted to electronic administration and which pose relatively little burden on patients.Additionally, they were available in English, Spanish, and Canadian French and have been used previously in OAB populations.This PRO assesses OAB symptoms and impact and consists of a six-item symptom bother scale and 13 HRQoL items that form three subscales.Patients rate each item using a Likert scale ranging from one to six for the symptom bother items and “none of the time” to “all of the time” for the HRQoL items.Patients rate the severity of their bladder-related problems on a scale from one to six.This instrument assesses OAB treatment satisfaction using five scales on OAB control expectations: impact on daily living with OAB, OAB control, OAB medication tolerability and satisfaction with control."In addition, there are five questions on overall assessments of the patient's fulfillment of OAB medication expectations, interruption of day-to-day life due to OAB, overall satisfaction with OAB medication, willingness to continue OAB medication, and improvement in day-to-day life attributed to OAB medications.The EQ-5D is a standardized instrument to assess quality of life that provides a descriptive profile and a single index value for health status that can be used in the clinical and economic evaluation of health care as well as in population health surveys.This instrument comprises five dimensions, which patients rate on a scale from one to five.The EQ-5D also includes a visual analog scale where patients rate their health on a scale from zero to 100.It can be used to generate Quality Adjusted Life Years, for cost-effectiveness analysis.The planned study size of 1500, including 600 mirabegron and 900 antimuscarinic patients, was determined to provide a sufficient level of sensitivity to identify individual characteristics that could be associated with differential effectiveness by the OAB-Q within each treatment cohort.The maximal power for a characteristic associated with a four-, five-, and six-point change in the OAB-Q symptom bother score with a standard deviation of 18.4 is 82%, 95%, and 99%, respectively.A change in the OAB-Q symptom bother domain over 12 weeks showed a historical standard deviation of approximately 18.4 .Comparison between the two cohorts with 600 and 900 patients per cohort to detect a three-point difference in the change from baseline for OAB-Q symptom bother was powered at 87%.Generalized linear regression models will be used to examine the influence of baseline covariates as well as initial treatment with mirabegron or antimuscarinic therapy on change in PRO response from baseline.The structure of the generalized linear regression model and generalized linear mixed model were chosen a priori.Univariate regression models will be used to determine factors that are predictive of outcomes, and these factors will then be used in the generalized linear regression and the generalized linear mixed models.Forward, backward or stepwise model variable selection techniques will not be used.For analyses at specific time points, analysis of covariance models with least square means will be used to model the change in OAB-Q-SF, PPBC, OAB-S, and EQ-5D while adjusting for key baseline covariates.Traditional multivariate adjustment and estimated propensity scores, i.e., the predicted probability of initiating mirabegron or antimuscarinic treatments, will be used to adjust for confounding.Significant predictors identified in the multivariate propensity score analysis can highlight 
important confounders in studies of OAB.Patients will be analyzed according to the cohort in which they initiated treatment at baseline, regardless of whether they switched therapies at some point during the study.Persistence with the initial OAB treatment will be analyzed through the use of the log-rank test with persistence rates estimated via Kaplan-Meier methods for all patients and for mirabegron and antimuscarinic medication initiators, separately.End of persistence is defined as treatment discontinuation, switch, or add-on of other OAB medication or selected therapies including onabotulinumtoxinA and neuromodulation.As a sensitivity analysis, multivariate Cox regression will be used to compare time to end of persistence.Additional analyses will address patterns of adding, switching, or discontinuing treatment.The patterns will be categorized based on the initial OAB treatment, second-line and third-line OAB medications or treatments, including onabotulinumtoxinA and neuromodulation, as well as discontinuation of any treatment.Reasons for discontinuation, switch, or adding on treatment as reported by patients and healthcare providers will be summarized for each specific medication as numbers and percentages.Counts and incidence proportions of spontaneous adverse events and serious adverse events following use of specific OAB medications will be summarized by the Medical Dictionary for Regulatory Activities, System Organ Class and preferred term, severity, fatal or non-fatal, and relationship to the OAB medication.Clopper-Pearson confidence intervals for the proportion will be constructed around the point estimate of the incidence.Characteristics of patients with serious adverse events and any specified treatment-emergent adverse events, which may include use of study medications and underlying medical conditions, will be examined.As comparisons are exploratory, there will be no formal hypothesis testing and data analyses will be based on descriptive statistics.The chi-square test will be used to analyze categorical variables and, depending on the data distribution, the Wilcoxon rank sum test or t-test will be used for continuous variables.Differences will be considered statistically significant with P-values < .05.No adjustment will be made for multiplicity.In general, missing data will not be imputed and the data will be analyzed as they are recorded.Missing items on the questionnaires will be handled according to questionnaire scoring guidelines for missing data.Large registry data on broader patient populations than are typically included in randomized clinical trials are needed to describe OAB treatment patterns and to assess OAB treatment effectiveness in the real world.PERSPECTIVE is the first observational study in the US and Canada that has enrolled over 1500 patients who initiated a new course of treatment with mirabegron or antimuscarinic medication.In clinical practice, effectiveness of OAB treatments can be assessed through measures such as micturition diaries, quality of life questionnaires, or via discontinuation rates, switching or add-on medications, as well as the initiation of non-pharmacologic interventions for OAB.Although useful in the setting of a clinical trial, micturition diaries are not commonly used in clinical practice and, in certain instances, may alter the behavior that they are measuring.Assessment of symptom bother and treatment effectiveness and satisfaction from the patient perspective through PROs is another important outcome measure."OAB symptoms 
are known to negatively impact patients' HRQoL including psychosocial and physical functionality domains . "The choice of an OAB treatment regimen depends on many factors including the severity and bother of symptoms and the extent to which the symptoms interfere with the patient's lifestyle. "The patient's satisfaction with treatment may also impact whether the patient continues or discontinues treatment.Real-world evidence of treatment effects and effectiveness in patients with OAB is needed, and information on prospectively collected PROs is especially important to assess patient experience, treatment satisfaction, change in symptoms and severity of OAB, and general HRQoL.This prospective, real-world information is not collected systematically in existing databases, and only a few PROs are collected in structured, regulatory, long-term clinical trials.Information on effectiveness from a patient perspective is essential to inform patient-centric policy and ultimately to improve quality of care and patient outcomes.While American Urological Association guidelines equally recommend mirabegron and antimuscarinics as second-line treatments to behavioral therapy and lifestyle interventions , a technical guidance issued in June 2013 in the United Kingdom by the National Institute for Health and Care Excellence recommended mirabegron as an option for treating the symptoms of OAB only for people in whom antimuscarinic drugs are contraindicated, clinically ineffective, or have unacceptable side effects .The guidance also noted the absence of real-world data on persistence with mirabegron, and that data from randomized clinical trials were unlikely to be representative of the persistence rates in clinical practice.As these recommendations are driven by clinical trial data, there is a need for real-world data to better guide clinicians in their pharmacologic treatment choices for patients with OAB."Data in PERSPECTIVE were collected from patients' medical records and directly from the patients to provide information on treatment persistence, discontinuation, and switching with mirabegron or antimuscarinics in routine clinical practice.The protocol did not mandate or recommend any specific treatments, although the possibility that participation in the study may have influenced treatment decisions or other actions that may have impacted study results cannot be ruled out.However, we have no reason to believe that any resulting effect would be different across treatment groups.PERSPECTIVE is also one of the first OAB studies to use PROs to acquire HRQoL data to evaluate OAB disease burden and treatment effectiveness and satisfaction from the patient perspective to guide clinical practice.Results of the registry will be presented in subsequent publications and are expected to add value to the OAB research landscape by producing real-world evidence on treatment utilization and of factors associated with improved effectiveness and persistence of pharmacologic therapy for OAB.The large sample size and the selection of a robust control group will allow for rigorous comparative effectiveness research."Furthermore, the registry's results are expected to aid future treatment decisions and inform policy by providing novel real-world and pragmatic insights into the comparative effectiveness of marketed OAB treatments. 
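As a sanity check on the power figures quoted in the statistical methods above (87% power to detect a three-point between-group difference in OAB-q symptom bother change with SD 18.4 and cohorts of 600 and 900), a two-sample normal approximation reproduces the number. The short Python sketch below illustrates that approximation only; it is not the registry's actual sample size calculation.

# Back-of-the-envelope check of the stated power: two-sample normal approximation
# for a 3-point difference in OAB-q symptom bother change, SD = 18.4,
# n = 600 vs n = 900, two-sided alpha = 0.05.
from math import sqrt
from scipy.stats import norm

sd, diff, n1, n2, alpha = 18.4, 3.0, 600, 900, 0.05
se = sd * sqrt(1 / n1 + 1 / n2)           # standard error of the difference in means
z_crit = norm.ppf(1 - alpha / 2)          # two-sided critical value
power = norm.cdf(diff / se - z_crit)      # approximate power
print(f"approximate power: {power:.2f}")  # prints ~0.87, in line with the 87% quoted above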
| Introduction: Pharmacotherapy of overactive bladder (OAB) typically involves treatment with an antimuscarinic or mirabegron, a β3-adrenoceptor agonist, but real-world evidence on their use, including treatment access, persistence, and switching, is limited. Here, we describe the design of a prospective, multicenter, non-interventional registry of patients beginning a new course of OAB pharmacological therapy in routine clinical practice. Methods: Adults with an OAB diagnosis for at least 3 months who either initiated a new course of mirabegron or antimuscarinic, or who switched therapy were enrolled into PERSPECTIVE (a Prospective, non-intErventional Registry Study of PatiEnts initiating a Course of drug Therapy for overactIVE bladder). The primary objective was to identify factors associated with improved OAB treatment effectiveness from a patient perspective. Secondary objectives were to compare persistence rates, reasons for discontinuation, and switching patterns between patients taking mirabegron or antimuscarinics. Healthcare centers and sites involving medical specialties who routinely participate in the care and treatment of patients with OAB (e.g., gynecology, urology, and primary care practices) were targeted for recruitment. Patient-reported outcomes (PROs), including quality of life, symptom bother, and treatment satisfaction from OAB-validated scales, were collected at baseline, months 1, 3, 6, and 12, and when patients switched or discontinued their current OAB medication. Conclusions: PERSPECTIVE is the first real-world observational study in the United States and Canada on clinical and patient perspectives in OAB management. Recruitment was reflective of centers where patients are treated for OAB to maximize generalizability to the real-world population. Trial registration: ClinicalTrials.gov, ID number NCT02386072 (date of registration March 6, 2015). |
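To illustrate how the persistence comparison planned in the registry above is typically carried out, the following sketch estimates Kaplan-Meier persistence curves by initial treatment cohort and applies a log-rank test. The data frame, values, and variable names are hypothetical, and the sketch assumes the lifelines package; it is not the registry's analysis code.

# Hypothetical example: time to end of persistence (discontinuation, switch, or
# add-on) in days, with event = 0 meaning the patient was still persistent
# (censored) at 12 months. Data are made up for illustration.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "cohort": ["mirabegron", "mirabegron", "antimuscarinic", "antimuscarinic"],
    "days":   [365, 120, 90, 365],
    "event":  [0, 1, 1, 0],
})
mira = df[df.cohort == "mirabegron"]
anti = df[df.cohort == "antimuscarinic"]

kmf = KaplanMeierFitter()
kmf.fit(mira["days"], event_observed=mira["event"], label="mirabegron")
print(kmf.survival_function_)            # estimated persistence curve for the cohort

result = logrank_test(mira["days"], anti["days"],
                      event_observed_A=mira["event"], event_observed_B=anti["event"])
print(f"log-rank p-value: {result.p_value:.3f}")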
559 | Is Chinese trade policy motivated by environmental concerns? | Since 2007, China has introduced export taxes and reduced export value-added tax rebates for a range of products."According to China's National Development and Reform Commission, the VAT rebate adjustments aim at controlling “exports of energy-intensive, pollution-intensive and resource-intensive products, so as to formulate an import and export structure favorable to promote a cleaner and optimal energy mix”.Statements which link VAT rebates and export taxes to environmental concerns appeared repeatedly in consecutive years.This paper investigates whether, in practice, Chinese trade policy reflects environmental motives.It is not obvious that the VAT rebate and export tax adjustments are driven by environmental concerns."Other potential motives include an attempt to manipulate the terms of trade in China's favour, a desire to attract downstream producers to China or lobbying pressure by different industries.The policy relevance of the motivation behind Chinese export restrictions manifests itself in the WTO dispute settlement cases on Chinese export restrictions for raw materials and rare earths.In both cases, China is a leading producer of the goods in question which are used as intermediate inputs into high-tech products.The complainants hold that China uses export restrictions to manipulate the world market price and to force intermediate producers to move to China where supply of these crucial inputs is stable."China, however, argues that the export restrictions are necessary to protect China's natural resources and the health of its citizens since the production is highly polluting.Even though China committed to cancelling its export taxes on most products in its Accession Protocol to the WTO, it argues that the export restrictions are justified under Article XX of the GATT.Article XX of the GATT allows an exemption from GATT rules for environmental objectives such as the protection of exhaustible natural resources and health considerations."The export VAT rebate and export tax reforms have to be assessed against the background of China's environmental agenda. "In the last decade, the Chinese government has launched an ambitious attempt to tackle the country's environmental problems.There are several reasons for this increasing focus on environmental policy."Firstly, the Chinese leadership realized that environmental problems might hamper China's growth in the long run.Secondly, public discontent concerning pollution has been growing.This has become obvious in mass protests in response to environmental degradation and in increasing participation in environmental NGOs.Gang argues that addressing environmental issues might be crucial for the government to consolidate its rule.Finally, international pressure on China to adopt stricter environmental policies is increasing.It is well-known that local environmental distortions are best internalized through the use of domestic policy instruments such as pollution taxes. Copeland, however, shows that a country which fails to implement optimal pollution regulation can use trade policy as a second-best instrument to reduce pollution.Arguably, the second-best scenario applies to China."The Chinese government's attempts to reduce pollution are reflected in ambitious targets for environmental protection in recent Chinese Five Year Plans. 
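Before turning to the details, the second-best logic borrowed from Copeland can be made concrete with a deliberately simple numerical toy: with a fixed resource and decreasing returns, an export tax on the dirtier of two goods shifts production towards the cleaner good and lowers total emissions even though no pollution tax is levied. The sketch below uses entirely hypothetical numbers and is not the model developed later in the paper.

# Toy illustration (hypothetical numbers): an export tax on the dirty good
# reallocates labour towards the clean good and reduces total emissions.
import numpy as np

L = 100.0                      # total labour endowment
e_dirty, e_clean = 2.0, 0.5    # emissions per unit of output
p_dirty = p_clean = 1.0        # world prices, taken as given here

def allocate(t_dirty: float, t_clean: float = 0.0):
    """Labour allocation equalising after-tax marginal value products
    for production functions q_i = sqrt(l_i)."""
    ratio = ((p_dirty * (1 - t_dirty)) / (p_clean * (1 - t_clean))) ** 2
    l_clean = L / (1 + ratio)
    l_dirty = L - l_clean
    q_dirty, q_clean = np.sqrt(l_dirty), np.sqrt(l_clean)
    emissions = e_dirty * q_dirty + e_clean * q_clean
    return q_dirty, q_clean, emissions

for t in (0.0, 0.2):
    q_d, q_c, z = allocate(t)
    print(f"export tax on dirty good = {t:.0%}: "
          f"dirty output {q_d:.2f}, clean output {q_c:.2f}, emissions {z:.2f}")
# Emissions fall from about 17.7 to about 16.4 once the 20% tax is imposed.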
"However, corruption and difficulties with the enforcement of environmental regulation limit the Chinese government's ability to use pollution levies in order to reduce domestic pollution.Trade policy instruments like export taxes and partial export VAT rebates can, thus, be used as second-best environmental policy instruments."The theoretical foundation for our analysis is an extension of Copeland's model to a large country which sets trade and environmental policy unilaterally. "We solve the model for the second-best export tax and find that the second-best export tax increases in a product's pollution intensity.The intuition behind this result is simple: The export tax reduces production and exports of a particular good.As a result, resources are reallocated to sectors which are subject to a lower export tax.If the export tax is largest for the most pollution intensive goods, more resources are allocated to the production of relatively clean goods and the pollution intensity of production declines."The prediction that the second-best export tax is positively correlated with a product's pollution intensity guides our empirical analysis.We investigate whether the export tax equivalent of partial export VAT rebates, henceforth called VAT tax, or the export tax are higher for products which are more pollution intensive along several dimensions.The analysis considers pollutants for which the Chinese government has specified emission reduction targets in its Five Year Plans.These include waste water, chemical oxygen demand, ammonium nitrogen, soot, SO2, solid waste and energy use.The dataset used for this analysis covers the years 2005–2009.Since Chinese officials repeatedly linked trade policy to environmental concerns from 2007 on, we are particularly interested in the relationship between trade policy variables and pollution intensities for the years 2007–2009.The data for the years 2005 and 2006 allow us to test whether there is a stronger link between trade policy variables and pollution intensities as a consequence of the VAT rebate and export tax reforms from 2007 on compared to the situation prior to 2007.Our empirical results suggest that the VAT tax is larger for industries with a higher water pollution intensity, SO2 intensity and energy intensity from 2007 on.This pattern is in line with the actions of a regulator who uses partial VAT rebates to reduce exports of water-pollution, SO2 and energy intensive products.Our analysis also reveals that the VAT tax is significantly higher for resource products, indicating that the VAT tax is used to curb exports of natural resources like wood, mineral and metal products as well as precious stones."There is little evidence for an environmental motive behind China's export taxes.However, the export tax is significantly higher for primary products.This could create an incentive for downstream producers to relocate to China in order to get access to raw materials at a lower price.The paper is structured as follows.A review of the related literature is followed by information on the policy background for our study.In the Environmental policy in China section, we argue that environmental problems play a prominent role on the Chinese policy agenda, but that the government struggles to implement and enforce effective domestic pollution taxes.The policy background section on export value-added tax rebates explains why partial export VAT rebates are similar to export taxes, followed by a section which provides background information on Chinese export 
taxes."In the Second-best export taxes as environmental policy section we derive a formula for a second-best export tax and show that it increases in a product's pollution intensity.This prediction is the foundation for our empirical analysis.We investigate whether the export tax and the VAT tax are higher for more polluting products.The precise empirical strategy is explained in the Empirical strategy section."The paper focuses on the determinants of the VAT tax but it also investigates whether there is an environmental motive behind China's export tax.Both the VAT tax and the export tax are analysed as a function of the pollution intensities and a set of control variables.The dataset section describes the dataset followed by a Summary statistics in Summary statistics section.The regression results with the VAT tax as dependent variable are presented in the Determinants of VAT taxes section.The subsequent Export tax as dependent variable section-briefly discussed of the determinants of the export tax However, the robustness checks in the Sensitivity analysis section focus on the determinants of the VAT tax.The last section of this paper concludes."Wang et al. provide an analysis of the environmental aspects behind China's trade policy. "Wang et al. calculate the implicit export carbon tax behind China's export taxes and export VAT rebates for 8 energy-intensive sectors and find that it differs considerably across sectors.The implicit export carbon tax ranges from 18 US$ per ton of CO2 emissions for basic chemicals to 764 US$ for chemical fibre.Yet, this finding does not imply that the VAT rebate and the export tax reforms are not based on environmental concerns."CO2 emissions may not have been the Chinese government's only concern when it designed the policy.We study the relationship between trade policy instruments and a range of pollutants.This reveals whether Chinese trade policy reflects concerns about air, water or solid waste pollution as well as energy use and indicates which pollutant has the largest impact on the respective trade policy instruments."As opposed to Wang et al. we consider the overall pollution generated in all stages of the production process and not only a sector's direct pollution emissions. "A description of the rational behind China's export restrictions can also be found in Karapinar, who highlights that Chinese export restrictions could be motivated by an attempt to protect downstream producers in China as well as environmental motives. 
"This is confirmed in Korinek and Kim's description of Chinese export restrictions for Molybdenum.Both Korinek and Kim and Pothen and Fink analyse export restrictions for Chinese rare earths.The potential objectives behind those export restrictions include the protection of downstream producers in China, the conservation of natural resources and environmental protection.OECD, Kim and Piermartini discuss objectives to introduce export taxes for a wide range of countries.OECD and Kim collect information on export restrictions and export duties from WTO Trade Policy reports and find that export duties and export restrictions are mostly applied to renewable resources such as forestry and fishery products, agricultural products as well as leather and hides and mineral and metal products.Fiscal revenues, the promotion of downstream producers as well as attempts to protect the environment and conserve resources are identified as the main drivers for the implementation of export restrictions and export duties in these sectors.The above-mentioned papers highlight important motives for the introduction of export taxes and restrictions.However, they are of purely descriptive nature and do not employ an empirical analysis which would allow them to capture the importance of different motives.Our analysis investigates whether the VAT taxes and export taxes are higher for polluting products and resources, controlling for a range of non-environmental motives such as export taxes for raw materials in order to protect downstream producers and a terms of trade motive.This approach allows us to single out the importance of environmental motives.Empirical work which explicitly takes the relationship between tariffs and pollution regulation or pollution intensity into consideration is very sparse.One exception is the paper by Ederington and Minier, which looks at pollution policy as a second-best trade barrier.If the regulator is constrained in setting import tariffs due to WTO membership, lenient environmental policy reduces production costs and can, thus, be used as a secondary means to protect the domestic industry.However, the empirical results show that tariffs are negatively correlated with environmental regulation.Our paper analyses the determinants of trade policy instruments.Just like Ederington and Minier, we refer to the literature on the political economy of protection for the choice of our control variables.Baldwin, Gawande and Krishna and Ethier provide excellent surveys of that literature.We refer to the relevant papers for our work in the section that explains the control variables.Three papers, which analyse Chinese trade policy, however, are worth describing in detail.Wang and Xie study the industry characteristics which determine the structure of Chinese VAT rebates in 2008.Wang and Xie find that industries with a higher export share have a higher rebate rate, since China supports an export-oriented strategy.The VAT rebate rate is also positively related to the share of national capital.This is interpreted as a sign that state-owned enterprises receive a favourable treatment.Moreover, Wang and Xie argue that the VAT rebate rates reflect an attempt to reduce adjustment cost and achieve social stability, as reflected by the fact that the rebate rate is higher for less profitable industries, industries with a lower labour productivity and industries with a lower ratio of value added.The results also show that industries with more assets, a large number of firms and a large presence of foreign 
capital receive higher VAT rebates. Chen and Feng analyse the determinants of Chinese tariffs and find that most of the variation in tariffs can be explained by information on industry size. The complainants in the WTO dispute settlement cases against China hold that China introduced export taxes on primary products in order to induce downstream producers to relocate to China. Garred's findings support this hypothesis by showing that the joint export tax equivalent of Chinese export taxes and VAT rebates increased faster for raw materials than for other products over a sample period from 2002 to 2012. Even though we focus on cross-sectional variation rather than variation over time, we also come to the conclusion that the export tax discourages exports of primary products. The papers by Wang and Xie, Garred and Chen and Feng highlight important determinants of Chinese trade policy. Their findings guide the choice of our control variables. However, our study differs considerably from Wang and Xie, Garred and Chen and Feng. While we are mostly interested in the relationship between VAT rebates or export taxes and pollution intensities as well as resource intensities, this aspect is completely neglected in Wang and Xie, Garred and Chen and Feng. In recent years, the Chinese government has launched an ambitious agenda to tackle the country's environmental problems. Firstly, it adopted ambitious targets to reduce pollution and energy consumption in recent Five Year Plans. During the sample period of our study, the Chinese government aimed at reducing water pollution, air pollution as well as the generation of solid waste. Moreover, China planned to considerably reduce energy consumption per unit of GDP. In addition to ambitious environmental targets, the Chinese government undertook substantial reforms of its administrative structure to give environmental protection a more prominent role in its political hierarchy. The State Environmental Protection Agency was elevated to the rank of a Ministry in 2008 and received a larger budget and more staff. Moreover, China set up a National Leading Group to Address Energy Saving, Emission Cutting and Climate Change. It is headed by the Premier and thus has a high rank in the political hierarchy. In addition, the government started to incentivize local leaders to protect the environment by linking the measure of local officials' performance not only to economic growth but also to environmental achievements.
"Even though China embraced ambitious attempts to protect the environment, the lack of enforcement is “the single biggest weakness in China[s environmental law”.The implementation of environmental regulation and the collection of levies is in the hands of provincial and municipal local environmental protection bureaus.3,Local EPBs often lack the power to implement the regulations for several reasons.Firstly, even though their personnel has grown, EPBs are still short of staff to guarantee satisfactory implementation.Secondly, local environmental agencies lack the skills and the technology to detect infringements of pollution standards.Thirdly, local firms are an important source of government revenue and are thus promoted by local authorities.Pollution emissions from these firms are often ignored or permitted.If local leaders consider a certain company important for the local economy, Environmental Protection Bureaus are often impeded from collecting levies.Endemic corruption exacerbates the weak enforcement power of local environmental agencies.There is ample anecdotal evidence that local authorities negotiate levy payments with firms.The precise environmental cost of corruption is difficult to estimate, but it is likely to be significant.According to a statement by Zhou Shengxian, the director of the State Environmental Protection Agency, in 2006, the government investigated pollution control approvals for construction projects and discovered violations in about 40% of the cases.Even if environmental regulation is enforced and levies are collected, environmental policy can only be effective if it induces producers to reduce pollution.The Chinese government has taken several steps to improve compliance with environmental regulations and standards such as increased criminal liability, higher fines for non-compliance in some cases and attempts to improve enforcement.However, the deterrent effect of those measures is limited.There is evidence that violating pollution standards and paying the levies is still cheaper than compliance.This is consistent with recent empirical evidence by Cole et al., who find that neither formal nor informal regulation have a deterrent effect on industry level emissions.Weak enforcement power by local environmental protection bureaus, corruption as well as a questionable deterrent effect of pollution levies make it difficult for the central government to tighten environmental policy in China.Hence, trade policy might be a second-best option to reduce domestic pollution.Since the mid 2000s, the Chinese government has adjusted VAT rebate rates on a frequent basis.The Chinese government quotes environmental protection as a main motivation for these adjustments."According to the Communication on China's Policies and Action for Addressing Climate Change4 these VAT rebate adjustments are geared towards reducing energy-intensive and polluting exports and can be considered part of China's climate policy.However, environmental concerns were not the only motive for the VAT rebate rate adjustments."The reforms were also meant to serve China's development strategy and foster the production of high-tech and high-value added exports.Most countries levy value added taxes.It is common practice to exempt exporters from VAT payments or refund the VAT that exporters paid for their intermediate inputs.In China, exporters only get partial VAT rebates.Partial export VAT rebates have similar effects as export taxes if the export destination levies VATs on its imports.In the absence of 
VAT rebates, producers face double taxation since they are taxed both in the country of origin and in the export destination.The partial refund is a comparative disadvantage for Chinese producers compared to producers from countries with full export VAT rebates.The lower the rebate rate, the higher the double taxation of exporters.Hence, a reduction in the VAT rebate rate has a similar effect as an increase in the export tax.This theoretical prediction is supported by empirical evidence, which shows that partial export VAT rebates curb Chinese exports considerably.According to Chandra and Long, an increase in the actual VAT rebate rate by one percentage point is estimated to raise exports by 13%.Gourdon et al. find that an increase in the VAT rebate rate by one percentage point leads to an increase in exports quantities by 6.5% for products which are eligible to VAT rebates.An analysis of the VAT rebate rate itself is not informative due to differences in the value-added tax across products.The VAT amounts to 17% for most goods and 13% for some agricultural products.A small range of products is not subject to a VAT.A VAT rebate of 5% generates a lower burden for an exporter whose final product is subject to a VAT of 13% rather than 17%.In order to asses the export tax equivalent of the partial VAT rebate policy on producers, it is necessary to use information on VATs.5,In its Accession Protocol to the WTO, China agreed to levy export taxes on no more than 84 product lines and the export taxes were not allowed to exceed a certain threshold.This threshold ranges between 20% and 40%, depending on the product.In practice, China introduced export taxes on far more products since 2007.The first row in Table 1 shows that only 29 products at the HS8 digit level in our sample were subject to an export tax in 2006.In 2009, the government levied export taxes on 231 products in our sample.Since the export taxes only affect a small fraction of a total of about 5700 HS8 digit product lines in our sample, the empirical analysis in the paper and the discussion of the results focus on the determinants of the VAT tax.China seems determined to reduce its emissions but struggles to enforce pollution regulation to internalise the environmental distortion.Under these circumstances, an export tax can be used as a second-best environmental policy instrument."Production of all non-numeraire goods causes domestic pollution z. Pollution reduces the consumer's utility but does not affect the productivity in other sectors.The model considers Np different pollutants.Firms face pollution taxes s per unit of pollution.The world in this model consists of China and the rest of the world.China is modelled as a large country.The Chinese regulator unilaterally chooses the optimal export tax in response to local pollution and in order to manipulate the terms of trade.We assume that the rest of the world does not adjust its trade and environmental policy in reaction to changes in Chinese trade policy.This assumption is not unusual in papers which analyse optimal policy and it is justified since the rest of the world consists of many countries which do not coordinate their trade and environmental policy to an extent that would warrant a model in which the rest of the world responds to changes in Chinese policy.6,The variable X in Eqs. and represents the net export vector which equals output minus compensated demand.Eq. 
is like a budget constraint for the economy.It states that consumer expenditure equals GNP plus the rebated pollution tax and export tax income.If the economy-wide budget constraint is satisfied, trade is balanced.Eq. does not allow for a trade deficit or surplus.Eq. pins down the equilibrium pollution level in the economy.The total differentials of equilibrium conditions– allow us to solve for the second-best export tax.The second-best export tax is the export tax that yields the highest level of utility u in a situation in which environmental policy cannot be altered.The second-best export tax features a terms of trade motive and an environmental motive.The terms of trade motive captures the incentive to introduce an export tax in order to reduce the supply of the good on the world market.As long as the country is a large supplier on the world market, this drives up the world market price and allows the country to get a higher price for its exports.The terms of trade motive equals the inverse of the foreign elasticity of import demand.The environmental motive reflects the attempt to reduce pollution through the use of export taxes.An export tax reduces production and hence reduces emissions which are generated during the production process.If the government increases the export tax for the most polluting product, the production of this pollution intensive product declines.The resources which are set free in the polluting sector are allocated to the production of cleaner goods.This way, the pollution intensity of production in the economy declines."The second-best export tax increases in a product's pollution intensity if the pollution tax is lower than marginal damage.The relationship between the second-best export tax and the pollution intensity should be stronger, the larger the gap between marginal damage and the pollution tax.This result from the theoretical model can be tested using Chinese data on VAT taxes, export taxes and pollution intensities.The Chinese government claims to use VAT rebates and export taxes to reduce “exports of energy-intensive, pollution-intensive and resource-intensive products”.Our empirical analysis tries to reveal whether, in practice, the VAT taxes and the export taxes reflect an attempt to protect the environment.If the VAT taxes or the export taxes are used as a secondary pollution policy, they should be higher for more polluting products as long as environmental policy is too lenient.Which pollutants should be considered in the analysis?, "The NDRC's announcement does not tell us which pollutants the Chinese government targets with its policy. 
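The decomposition just described can be written compactly. The expression below is a stylized reconstruction for readability, not the paper's exact formula: epsilon_i^* denotes the foreign elasticity of import demand for good i, e_ik the emissions of pollutant k per unit of good i, MD_k the marginal damage of pollutant k, s_k the pollution tax, and p_i the price of good i.

t_i^{SB} \;=\; \underbrace{\frac{1}{\epsilon_i^{*}}}_{\text{terms of trade motive}}
\;+\; \underbrace{\frac{1}{p_i}\sum_{k=1}^{N_p}\left(MD_k - s_k\right) e_{ik}}_{\text{environmental motive}}

Written this way, the environmental component disappears when pollution taxes fully internalise marginal damage (s_k = MD_k) and rises with the pollution intensity e_ik when they do not, which is the comparative static taken to the data in the following sections.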
"In order to gauge which pollutants play a prominent role in China's environmental agenda, we refer to the environmental section of the Chinese Five Year Plan, which sets out specific pollution reduction targets for major pollutants.We expect that the set of pollutants K which determine trade policy is similar to the set of pollutants for which the government specifies emission reduction targets in the FYP.The relevant FYPs for our sample period specify emission reduction targets for water pollution, air pollution as well as energy use per unit of GDP.Moreover, China aims at reducing the amount of solid waste generated and at increasing the ratio of solid waste that is recycled."This indicates that the Chinese regulator's objective function puts weight on the above-mentioned pollutants and suggests that our analysis focuses on precisely those pollution indicators.9",When we analyse the relationship between trade policy and the pollution intensity of exports, we are interested in the overall pollution generated during all stages of the production process taking place within China.In order to obtain the overall pollution content of exports, it is necessary to use input-output analysis.The precise construction of the overall pollution intensities is described in The dataset section.The theoretical model predicts that a higher pollution intensity has a stronger effect on the export tax if the gap between marginal damage and the pollution tax is larger.Hence, it is necessary to interact the pollution intensities with a measure for the gap between marginal damage and the pollution tax.This regulatory gap, or Reg_gap, is proxied by the share of emissions not meeting discharge standards.This measure varies at the industry level and over time.The share of emissions not meeting discharge standards is a useful measure of the regulatory gap since the national authority can set discharge standards such that they internalize marginal damage.The enforcement is left to local authorities.If local enforcement is lax, firms have few incentives to satisfy discharge standards.Hence, a low ratio of emissions meeting discharge standards means that regulation is ineffective since it does not have a strong deterrent effect on emissions.Data on the ratio of a pollutant meeting discharge standards is only available for waste water, soot and SO2 emissions.However, the Five Year Plans for the years 2000–2010 foresee reductions in the emissions of all pollutants in our analysis.The fact that the government intends to reduce emissions and sets targets for emission reductions indicates that the emissions are above the social optimum, which is equivalent to a situation in which the pollution tax is too lax.With the pollution tax not internalizing the distortion to the desired extent, we would expect to see a positive relationship between the VAT tax and the pollution intensity if the VAT tax is used as a second-best environmental policy.In other words, we expect the coefficients βk in Eq. 
to be positive.According to the NDRC, the VAT rebate adjustments are also geared towards reducing exports of resource-intensive products.In order to control for this aspect of policy setting, we introduce dummy variables which take the value of 1 if a product is resource intensive.We distinguish between four categories of resources: mineral products, wood products, precious stones and metal products.The resource dummy variables are constructed based on the HS classification.The dummy variable Mineral takes the value of 1 for all products in the HS2 digit categories 25–28.The range of products in these categories includes ores, mineral fuels and oil as well as rare earths.The dummy variable Wood takes the value of 1 for all wood products, articles of wood and wood charcoal.Stones is a dummy variable for all products in the HS2 digit category 71.This includes precious metals, precious stones, pearls and jewellery.The dummy variable Metal takes a value of 1 for all metal products in the HS2 digit categories 72–81, including iron and steel, copper, aluminium, lead, zinc, tin and articles thereof.We expect a positive relationship between the VAT tax and the resource dummy variables if the trade policy reforms are a substitute for resource conservation.10,The equation for the second-best export tax applies to the cross-section and therefore our identification comes from cross-sectional variation.Eq. demonstrates that the second-best export tax is higher for goods which are more pollution intensive at a particular point in time.A comparison of export taxes and pollution intensities between goods at one particular moment in time requires a cross-sectional analysis.Even though we have a panel dataset, we will not use product fixed effects, since the latter eliminate cross-sectional variation.The time-series dimension of the dataset is used to allow the coefficient estimates to vary across time.The Chinese government repeatedly linked VAT rebate adjustments to environmental concerns from 2007 onwards.Since the dataset spans the years 2005–2009, we have information on trade policy and pollution intensities both before and after the policy announcement.Hence, the panel dimension of the dataset allows us to test whether there was a policy change in the years 2007–2009 compared to the years 2005–2006.The dependent variable in our model varies at the HS8 digit level.However, data on pollution intensities are only observed at the industry level.In order to take account of the fact that our main explanatory variable varies at a higher level of aggregation, we cluster the standard errors at the industry level.11,The second-best export tax in Eq. 
features an environmental component as well as the terms of trade motive.This terms of trade motive depends on the foreign elasticity of import demand and is difficult to measure."However, the terms of trade motive reflects China's market size on the world market and is thus reflected in China's share in global exports.In order to control for the terms of trade motive, we use two dummy variables.The first dummy variable Exp share [5–15) takes the value of one if China exports at least 5% and less than 15% of global exports.About 28% of the observations in our sample fall into this category.The second dummy variable Exp share 15+ takes the value of one if China exports more than 15% of global exports.Another 32% of the observations in our sample fall into the latter category.Since the second-best export tax is larger for products for which China has market power, we expect the coefficient estimates for those dummy variables to be positive.The complainants in the WTO dispute settlement case on rare earths argue that China introduced export restrictions on raw materials in order to grant downstream producers in China protected access to those raw materials."In order to test for this hypothesis, we add a dummy variable to the regression which takes the value of one if a product is classified as a primary product according to the United Nation's Classification of Broad Economic Categories.12",A positive coefficient estimate for this variable would suggest that the export restrictions target primary products.This could provide an incentive for downstream producers to relocate to China where the primary products are available at a lower price."The literature on the political economy of protection highlights alternative motives that can drive the government's choice of trade policy.Corden, e.g., links trade policy to social concerns.Social considerations might be particularly important in China, since the Chinese government under president Hu Jintao emphasised the goal to build a “harmonious society” and reduce inequality within the country.13,In Corden, the regulator grants protection to industries which suffer from adverse economic shocks.14,This would suggest that the VAT tax or the export tax is lower in industries in which output growth is lower.Therefore, we control for output growth compared to the previous year.Even though the Chinese government does not face any elections, we can still assume that it tries to gain popular support for its policies in order to consolidate its power.15,Therefore, it may protect industries with a larger number of employees16 and firms and we control for both variables.Moreover, the government might grant more protection to industries with a larger share of state-owned enterprises.SOEs are likely to have links to the government which allow them to lobby for protection.Branstetter and Feenstra show that the Chinese government gave between four and seven times more weight to SOEs than to consumer welfare in the context of policies that facilitate foreign direct investment."This suggests that SOEs have a significant influence on government decisions which might also be reflected in China's trade policy.We, thus, control for the output share of state owned enterprises in an industry, measured as the output value of SOEs relative to the output value in the industry.In a similar vein, the Chinese government might try to grant foreign firms better treatment in order to enhance the investment climate in China.This would imply that industries with a higher share of 
foreign output get higher protection.Since foreign firms might use less polluting production technologies, it is necessary to control for the output share of foreign firms.The output share of foreign firms is measured as the output value of foreign firms relative to the overall output value in an industry.Several authors argue that protection should be higher in industries for which the country does not have a comparative advantage.As a country with an abundant labour supply, China is traditionally associated with a comparative advantage in labour-intensive industries.Hence, the VAT tax or export tax might be higher in labour-intensive industries.We, thus, control for labour intensity which is constructed as the number of employees over fixed assets in an industry.If the government protects industries in which it does not have a comparative advantage, the protection could also be expected to be higher in export-intensive industries.The export intensity is measured as the value of exports relative to the output value.The choice of the VAT rebate rate could also be driven by concerns about government revenue.Evenett et al. show that the expenses for VAT rebates constitute 8–10% of final government spending between 2007 and 2010."The theoretical model presented in the Second-best export taxes as environmental policy section incorporates the revenue generated by export taxes in the regulator's welfare maximization problem.Hence, there is no need to control for government revenue if we implement Eq.If the government is concerned about the effect of the policy on its budget, it might, however, take income taxes from firms into consideration.A higher VAT tax or export tax can be expected to lead to a contraction of the industry and reduce taxes that firms have to pay on their business.Hence, the government might set a lower VAT tax or export tax for industries which pay a higher tax on their principal business."We control for this motive using data on the firms' income tax.Some of the above-mentioned control variables may be endogenous if we use the contemporaneous value."It is possible that an increase in the export tax or a reduction in the VAT rebate rate reduce China's export share on the world market or the number of firms and employees in an industry. "Moreover, trade policy may affect output growth in an industry or the industry's profitability.In order to avoid reverse causality, we use the lagged value of the control variables.17,We also control for the export tax in order to avoid an omitted variable bias.The export tax is correlated with the VAT tax and it may also be correlated with the regressors in our model."The data reveal that the Chinese authorities set the maximum VAT tax before they introduce export taxes which violate China's obligations under the WTO Accession Protocol.The average rebate rate is zero for all but 15 products for which the export tax is positive and equal to or higher than the export tax allowed under the WTO Accession Protocol.Generally, the VAT rebates are zero for 92% of the products on which the Chinese government levies an export tax.The fact that an export tax goes along with the highest possible VAT tax in most cases suggest that the Chinese government exploits export VAT rebates as an instrument before it resorts to export taxes."Due to potential reverse causality between the export tax and the VAT tax, we use the maximum export tax allowed under China's WTO Accession Protocol as an instrument for the export tax and estimate Eq. 
using a 2SLS estimator.The maximum export tax which is allowed under the WTO Accession Protocol is arguably exogenous to the export tax in the mid 2000s, since the Accession Protocol was negotiated at the end of the 1990s and became effective in 2001."China's export taxes differ from the export taxes negotiated under the WTO Accession Protocol.In a few instances, the Chinese government reduced export taxes below those allowed under the Accession Protocol or introduced additional temporary export taxes.The correlation coefficient between the actual export tax and the export tax allowed under the WTO Accession Protocol is 0.47.We also control for the ad valorem tariff.Even though the government does not link the import tariff to environmental concerns, the tariff might be determined by similar political economy motives as the VAT tax.If protection on the import side is a substitute for the VAT tax, the tariff would be negatively correlated with the VAT tax.A complete list of variables used in the analysis as well as the variable definition and the expected sign of the coefficient estimates is available in Table 2.For the construction of the dependent variable VAT tax, we gather data on VAT and VAT rebate rates at the product level."The VAT data are from the China Customs homepage18 and from the Customs Import and Export Tariff of the People's Republic of China.19",Data on VAT rebate rates for the years 2005–2006 are from the China Customs homepage.From 2006 onwards, we only have information on changes in export VAT rebate rates.This information is used to update the VAT rebate rate schedule.A list of VAT rebate rate reforms and the data source can be found in Table 13 in Appendix A.In 2007 there was an international reclassification of HS tariff lines which also affects the Chinese tariff lines.To the best of our knowledge, there is no concordance table at the HS8 or HS10 digit level that relates Chinese tariff lines prior to 2007 to tariff lines from 2007 on.Concordance tables only exist at the HS6 digit level.Since we cannot link the tariff lines before and after the reclassification, we only use the VAT rebate rates for tariff lines which were not affected by the reclassification.This should not bias our results, since the HS reclassification at the HS6 digit level is undertaken by the World Customs Organization and not by the Chinese government itself.Information on export taxes at the product level for the years 2005–2007 is available on the China Customs homepage.The homepage of the Ministry of Finance provides export tax data at the HS8 digit level for the years 2008–2009.Data on applied most favoured nation import tariffs at the HS8 digit level are from the WITS TRAINS database.The China Statistical Yearbook on Environment provides information on water pollution, air pollution and solid waste at the industry level.Data on energy consumption at the industry level are from the China Statistical Yearbook.The Chinese industry level data distinguish between 40 industry sectors.These sectors include mining, manufacturing as well as production and supply of electricity, gas and water.However, there is no trade in the sectors Production and Supply of Gas and Production and Supply of Water.Hence, these sectors do not appear in our analysis.A list of industries in our dataset is available in Appendix A in Table 11."Emissions are scaled by an industry's output level in order to obtain the pollution intensities.Output data are from the Industry Chapter of the China Statistical Yearbook and we 
deflate the output value using the manufacturing producer price index from the China Statistical Yearbook."As suggested in the Empirical strategy section, it is necessary to work with the pollution embodied in China's exports rather than the pollution generated by each industry sector. "The pollution intensity of China's exports can be obtained using input-output analysis.The input-output table is slightly more disaggregated than the industry classification in the environmental data.Hence, it is necessary to aggregate the IO table to the industry level for which we have environmental data.We then calculate the input-output coefficients and the adjusted Leontief inverse for the aggregate table.Note that the input-output table includes information on agricultural and service sectors.However, there are no corresponding emission data for these sectors.Hence, we aggregate all of these sectors into one sector and delete the respective row and column from the IO table.Therefore, the overall pollution intensity represents the total pollution generated by a final product and all its manufacturing inputs.It does not include the pollution generated by intermediate outputs from agricultural and service sectors.The information on the number of employees, the number of firms in an industry, output growth compared to the previous year, the output share of SOEs and foreign firms, profits, the capital intensity and the income tax is from the industry chapter of the China Statistical Yearbook.The export intensity variable is constructed using data on the trade volume from the BACI trade database.Since the BACI trade data are denoted in US$, we use the average annual exchange rate from the China Statistical Yearbook to obtain the volume of exports in RMB.The export volume is divided by the output value to obtain the export intensity.Table 12 in Appendix A summarizes the data source for all variables which are used in the analysis.The trade policy data at the HS8 digit level are merged to the industry-level data using two concordance tables.The first table, which is available in the appendix of the 2007 Chinese input output table, links the HS8 digit tariff lines to the sectors in the Chinese input-output table for 2007.The second concordance table links the relevant sectors from the input-output table to the 40 industry sectors in our dataset.This table is constructed manually based on the subcategories of the Chinese industry classification system GB/T4754-2002.This section provides summary statistics for all trade policy variables, pollution intensities and the control variables.Table 3 provides means and standard deviations.Table 4 allows us to track the changes in the means of all variables over time.The summary statistics in Table 3 show that the average difference between the VAT and the export VAT rebate is 5.7%.The average VAT tax increases from 4.4% in 2005 to 7.9% in 2008 and falls to 6% in the aftermath of the economic and financial crisis.The average export tax increases from 0.09 in 2005 to 0.65 in 2009 due to an increase in the scope of the export tax.The last row of Table 1 shows that the average export tax for products which are subject to a positive export tax falls from 18.5% in 2005 to 16% in 2009.The pollution intensity for all pollutants declines over the sample period.In levels, waste water, solid waste and energy use increase in the course of the sample period.However, output grows faster than emissions.Prior to the analysis of our results, we look at the relationship between the VAT 
tax and the pollution intensities in the raw data.Fig. 1 plots the development of the average VAT tax from 2004 to 2009 for the two pollution-intensive industries Paper and Non-metallic Mineral Products against the VAT tax of the relatively clean industries Articles for Culture/Education and Manufacture of Communication Equipment.Paper Production is the most water, COD and ammonium nitrogen intensive industry in 2007.21,The average VAT tax for Paper Production is close to 15% points throughout the sample period.It is raised to about 16% points in 2008.The graph also shows the average VAT tax for Non-metallic Mineral Products.The latter is the second most soot, SO2 and energy intensive industry.The average VAT tax for Non-Metallic Mineral Products is 4% in 2004.From 2006 on it increases gradually and reaches 12% points in 2008.A surge in the VAT tax for a pollution intensive industry from a very low level could represent an adjustment towards an environmentally motivated VAT tax.The industries Articles for Culture/Education and Communication Equipment have low pollution intensities across almost all of the pollutants.From 2007 on, the average VAT tax for the two clean industries is lower than the VAT tax for the polluting industries.We would expect this pattern if the VAT tax was motivated by environmental concerns.22,As a consequence of the adjustments in the VAT tax, the incentive to produce relatively clean goods increases whereas the incentive to produce polluting goods declines.The VAT rebate adjustments could thus lead to a reallocation of resources from pollution intensive industries to clean industries.As a result, the overall pollution intensity of production could decline.Ex ante, there is little evidence that the Chinese government uses export taxes to reduce pollution along any other dimension than solid waste.The correlation coefficient between the export tax and most of the pollution intensities is negative.Only the solid waste intensity and the energy intensity are positively correlated with the export tax.However, the correlation between the export tax and the energy intensity is close to zero.This section presents our empirical results.The Determinants of VAT taxes section explains our findings for the determinants of the VAT tax.The results for regressions using the export tax as dependent variable are explained in the Export tax as dependent variable section."When we estimate the determinants of the VAT tax, we use the export tax allowed under China's WTO Accession Protocol as an instrument for the export tax.Prior to the discussion of the results, we assess the quality of our instrument and the necessity of an instrumental variable procedure.The F-statistic for the first-stage regression shows that the instruments are jointly highly significant and that the 2SLS results do not suffer from problems related to weak instruments.We also test for the exogeneity of the export tax using a test that allows for clustered standard errors and reject the null-hypothesis that the export tax is exogenous.23,Column 1 of Table 5 shows the relationship between the VAT tax and the pollution intensities as well as the control variables for the years 2005–2006.The result in Column 1 are used as a benchmark against which we assess the VAT rebate adjustments.The relationship between the pollution intensities and the VAT tax for the years 2007–2009 is displayed in Column 2 of Table 5.Since the Chinese government linked VAT rebates to environmental motives from 2007 on, we expect to see 
positive coefficient estimates for the pollution intensities in Column 2 of Table 5 and the discussion therefore focuses on the results in Column 2.The third column of the table indicates whether there is a statistically significant difference between the coefficient estimates for the 2005–2006 sample and the coefficient estimates for the 2007–2009 sample.In other words, Column 3 shows whether the interaction term between the respective regressor and the dummy variable D2007 is statistically significant."The results support the Chinese government's claim that VAT rebate rates are used for environmental motives.An F-test demonstrates that the coefficient estimates for the pollution intensities as well as the energy intensity are jointly significant in both sample periods.Hence, the pollution intensities are a significant determinant of the VAT tax even when we control for other motives to manipulate trade policy."Concerns about water pollution seem to be one reason for China's VAT rebate adjustments.The statistically significant positive coefficient estimate for Water Reg_gap*Int in Column 2 of Table 5 demonstrates that there is a positive relationship between the VAT tax and the overall waste water intensity from 2007 onwards.This is in line with our expectations if the VAT tax is motivated by concerns about waste water discharge from 2007 on.The coefficient estimate for the waste water intensity indicates that the VAT tax increases by 0.012% points as the waste water intensity increases by 1 ton per million yuan output."In order to asses whether the magnitude of the coefficient estimate is economically meaningful, we calculate the predicted change in the VAT tax that results from an increase in a product's pollution intensity with respect to pollutant k from the 25th percentile to the 75th percentile.The predicted change in the VAT tax is displayed in Table 6.Table 6 shows that a jump in the waste water intensity from the 25th to the 75th percentile would lead to an increase in the VAT tax of 0.92% points in the 2007–2009 sample.Moreover, the VAT tax is significantly higher for ammonium nitrogen intensive products in the 2007–2009 sample, hence discouraging exports of those products.The VAT tax is predicted to increase by 0.59% points as the ammonium nitrogen intensity increases from the 25th to the 75th percentile.Since ammonium nitrogen is a water pollutant, this gives further support to the notion that the VAT rebates are a second-best instrument to reduce water pollution.Despite the evidence that the VAT taxes discourage water pollution intensive exports, the COD intensity does not seem to be a significant determinant of the VAT tax in the 2007–2009 sample.This result could be due to the high correlation between the waste water intensity and the COD intensity.Table 14 in Appendix A shows a correlation coefficient of 0.92 between the two variables.It is, thus, possible that the COD intensity does not affect the VAT tax once we control for the waste water intensity.Considering the water scarcity and the severity of water pollution, it is not surprising that China uses trade policy to discourage waste water intensive exports.Per capita availability of water in China is only a quarter of the world average and the availability of water is unevenly distributed with the North being particularly water-scarce.Water scarcity is exacerbated by pollution.In 2004, only 40% of the monitored river sections and 29 of the monitored lakes and reservoirs were safe for human consumption after treatment 
and the situation has not improved since then. More than 300 million people in rural China lacked access to safe drinking water in the mid 2000s. The economic costs of water pollution in China are considerable. The World Bank estimates that water-pollution-related water scarcity imposes a cost of 147 billion RMB, or about 1% of GDP. The cost of ground water depletion amounts to 92 billion RMB. Moreover, water pollution affects the health of more than 300 million people who do not have access to safe drinking water in China. The cost of the resulting health damages is estimated to be between 0.3 and 1.9% of rural GDP. There is also some evidence to suggest that the VAT tax discourages exports of air pollution intensive products. The VAT tax is significantly higher for SO2 intensive products. An increase in the SO2 intensity, interacted with the share of emissions not meeting discharge standards, from the 25th to the 75th percentile is associated with an increase in the VAT tax by 0.49 percentage points. According to Xie et al., China is the country with the highest SO2 emissions globally. It did not meet its goal to reduce SO2 emissions during the 11th FYP, with emissions being 42% higher than the target. Therefore, the use of VAT rebate rates as a second-best way of reducing SO2 emissions would not be surprising. The VAT tax is not significantly correlated with the soot intensity. This may be due to the high correlation of 0.94 between the soot and SO2 intensities. Moreover, the Five Year Plan for the years 2005–2010 does not include a target to reduce soot emissions. If the government does not plan to reduce soot emissions, it is unlikely to use trade policy as a second-best instrument towards that end. Furthermore, the data support the Chinese authorities' claim that the VAT rebate adjustments aim at reducing energy consumption. Based on our coefficient estimates for the 2007–2009 sample, the VAT tax is 2.81 percentage points higher for a product with an energy intensity at the 75th percentile than for a product with an energy intensity at the 25th percentile, ceteris paribus. This difference is economically meaningful. Table 6 also shows that differences in the energy intensity can explain more of the difference in the VAT tax than differences in the pollution intensity with respect to any other pollutant k. The Chinese government seems to be most concerned about China's energy consumption when it chooses the VAT tax. This finding could be explained by two factors. Firstly, China was the world's second largest energy consumer in the mid 2000s. However, domestic oil, gas and coal can no longer satisfy the energy appetite of China's growing economy. In the early and mid-2000s China was struggling with energy shortages, increasing import dependence and price volatility. Secondly, 70% of China's energy supply results from the combustion of coal, which is a major source of air pollution. Xie et al.
attributes rising SO2 emissions to higher energy consumption and in particular to the high use of coal.An attempt to clean up the air should thus be accompanied by a reduced reliance on energy from pollution intensive coal-fired power plants.The results do not suggest that the VAT tax is used as a second-best policy instrument to address the generation of solid waste or increase recycling.The recycling ratio is not significantly correlated with the VAT tax.Moreover, the VAT tax is adjusted such that it declines in the solid waste intensity in the 2007–2009 sample, thus, encouraging exports of solid waste intensive products."The fact that the coefficient estimates do not reflect an attempt to encourage recycling or reduce the generation of solid waste is not startling if we look at China's environmental achievements in the 10th FYP.China planned to recycle 50% of its solid waste in the 10th FYP period."This target was overachieved with a recycling ratio of 56%.Moreover, the generation of industrial solid waste was meant to decline by 10%.In fact, China achieved a reduction in industrial solid waste of 48%.This indicates that domestic instruments might suffice to increase recycling and reduce the generation of solid waste.The use of trade policy as a second-best policy instrument does not seem to be necessary."Moreover, the results support the claim that the VAT rebate adjustments are meant to contribute to the conservation of China's natural resources.In the period from 2007 on, the VAT tax for mineral and metal products is estimated to be 5.7 and 4.5 percentage points higher than the VAT tax for other products, respectively.26,The VAT tax for wood and precious stones exceeds the VAT tax for other products by 6.6 and 4.1 percentage points respectively."The conservation of resources seems to be the most important motive behind China's VAT rebate rate adjustments especially when we compare the magnitude of the coefficient estimates for the resource dummy variables to the predicted changes in the VAT tax as a consequence of changes in the pollution intensities.As a large producer on the world market, China could introduce export taxes in order to manipulate the terms of trade in its favour.This would be reflected in positive coefficient estimates for the dummy variables L.Exp share [5–15) and L.Exp share 15+, which capture the share of Chinese exports in global exports in the previous period."The findings in Table 5, however, show that the export tax is not significantly related to China's share in global exports.Hence, there is no evidence that the VAT tax reflects an attempt to raise the world market price for Chinese exports.Moreover, there is no evidence that the VAT tax is set in a way that discourages exports of primary products.The sign of the coefficient estimates for the control variables is largely in line with our expectations.Industries with a larger output share of state-owned enterprises face a significantly lower VAT tax.The finding indicates that links between the government and SOEs lead to a preferential treatment of industries in which SOEs produce a large proportion of the output value.Furthermore, the Chinese government seems to grant more protection to large industries.The VAT tax is significantly lower for industries with a larger number of employees in the period from 2007 on.27,The results also reveal that a high export tax is accompanied by a high VAT tax.An increase in the export tax by one percentage point is associated with an increase in the VAT tax of 0.33 
percentage points in the 2007–2009 sample. This reflects the fact that the rebate rate is zero for almost all products for which the Chinese government levies an export tax, and suggests that the export tax is not set in a way which offsets the effect of the VAT tax. The tariff is not significantly correlated with the VAT tax in the 2007–2009 sample. To the best of our knowledge, the Chinese government does not link import tariffs to environmental concerns. While our results suggest that the VAT rebate rates reflect concerns about pollution, energy use and resource conservation, protection on the import side may be driven by other motives. This could explain why there is no statistically significant relationship between the tariff and the VAT tax. According to the WTO trade policy report, not only the VAT rebates but also China's export taxes are motivated by environmental concerns. This section examines whether the data support this claim. The export tax is modelled as a function of the pollution intensities, the resource dummies and the control variables as in Eq. In order to avoid an omitted variable bias, we also include the lag of the VAT tax as a control variable. As explained in the Control variables section, the Chinese government only implements export taxes which exceed the export tax allowed under the WTO Accession Protocol once it has exhausted the VAT tax as a policy instrument. This is similar to a situation in which the regulator chooses the VAT tax first and then chooses the export tax. Therefore, we are not concerned about reverse causality from the export tax to the VAT tax. Furthermore, using the lag of the VAT tax guarantees that the regressor is not influenced by the dependent variable. The model is estimated using an OLS regression. The estimated relationships between the export tax, the environmental variables and the control variables are shown in Table 7. Column 1 of Table 7 displays the results for an OLS regression using data for the years 2005–2006. Column 2 of the same table shows the results for a sample covering the years 2007–2009. The results show little evidence of an environmental motive behind China's export taxes. Neither the pollution intensities nor the control variables are significantly positively correlated with the export tax. One notable exception is the solid waste intensity. Table 7 reveals a statistically significant positive relationship between the solid waste intensity and the export tax throughout the sample period. The relationship between the waste intensity and the export tax is significantly larger from 2007 on, indicating that the introduction of export taxes could be geared towards a reduction in the generation of solid waste. However, even from 2007 on, the economic effect of a change in the solid waste intensity on the export tax is small. Table 8 displays the predicted change in the export tax as the pollution intensities increase from the 25th to the 75th percentile. Such an increase in the solid waste intensity is predicted to raise the export tax by no more than 0.24 percentage points prior to 2007 and by 0.71 percentage points in the 2007–2009 sample. The results indicate that the export tax is significantly lower for SO2 and energy intensive products as well as wood products and precious stones. This is contrary to the actions of a regulator who uses trade policy as a second-best instrument to reduce pollution and conserve resources. The finding that the export tax is not motivated by environmental concerns is not surprising if we bear in mind that the
Chinese government is restricted in its choice of export taxes due to its commitment under the WTO Accession Protocol.When we derive the equation of the second-best export tax in the Second-best export taxes as environmental policy section, we assume that the regulator can choose trade policy to its liking.According to its WTO Accession protocol, China is only allowed to levy export taxes on 84 products.Hence, its ability to use export taxes as secondary environmental policy instrument is limited.Discouraging exports of primary products seems to be a motivation for the introduction of export taxes.Prior to 2007, primary products are taxed 0.6 percentage points more than processed products.Between 2007 and 2009, the export tax is 1.25 percentage points higher for primary products.This suggests that the Chinese government may have introduced export taxes for a range of products in an attempt to attract downstream producers to China where they have access to primary products at a lower price.This section analyses the sensitivity of our results."Since China's WTO Accession protocol constrains the use of export taxes and the Chinese government only levies export taxes on less than 300 out of more than 5700 products, most of the variation in trade policy results from variation in the VAT rebate rates.The sensitivity analysis therefore focuses on regressions with the VAT rebate rate as dependent variable.Regression results for the same sample period with the export tax as dependent variable can be found in Table 17 in Appendix A.Moreover, the sensitivity analysis focuses on the time period from 2007 onwards, since the Chinese authorities link trade policy to environmental concerns during this time period.Our sample contains a broad spectrum of industries.In this section, we restrict the sample to manufacturing industries and exclude extractive industries like Mining and Washing of Coal, Extraction of Petroleum and Natural Gas and Mining and Processing of Metal and Non-Metal Ores.Moreover, the sectors Recycling and Disposal of Solid Waste and Production and Supply of Energy are removed from the sample.All of the extractive industries as well as power generation and supply are classified as resource intensive and Table 16 in Appendix A shows that they are amongst the most polluting industries.Therefore, we investigate whether the positive correlation between the VAT tax and the pollution intensities follows through in a sample without the above-mentioned resource- and pollution-intensive industries.The regression results are presented in Column 1 of Table 9.The results are very similar to those we obtained for the entire sample.The VAT tax is significantly higher for minerals, wood products and precious stones as well as waste water, ammonium nitrogen, SO2 and energy intensive products.The magnitude of the coefficient estimates is similar to the magnitude of the coefficient estimates in the baseline regression.The theoretical model shows that the second-best export tax is also driven by an incentive to manipulate the world market price of exports.This terms of trade motive is difficult to measure."In the main analysis we proxy the terms of trade motive using dummy variables for the lag of China's share in global exports.However, we want to scrutinize the terms of trade motive further.As a robustness check, we only look at observations for which China is a small producer in the world market.There is no incentive to set an export tax or a VAT tax in order to manipulate the terms of trade if the 
industry is small on the world market.Hence, we look at a sample of products for which China exports less than 15% of global exports.The results are presented in Column 2 of Table 9.They are similar to the results for the unrestricted sample.28,In our baseline regression, we interact the waste water intensity, the soot intensity and the SO2 intensity with the share of emissions meeting discharge standards.The latter variable is a proxy for the difference between marginal damage and the pollution tax in the theoretical model.In order to investigate whether this measure for the regulatory gap drives our results, we include the waste water, soot and SO2 intensities as regressors without interacting them with the measure for the regulatory gap.As mentioned above, the relevant Five Year Plans foresee reductions in SO2 and soot emissions as well as water consumption, indicating that the pollution tax is lower than marginal damage.Moreover, the share of emissions exceeding discharge standards is positive for all industries and all years in the sample.With the pollution tax not internalizing the environmental distortion to the desired extent, we would expect a positive relationship between soot, SO2 and waste water intensities and the VAT tax.The regression results for a model which does not interact the pollution intensities with a measure for the regulatory gap are displayed in Column 3 of Table 9.As in the baseline regression, the VAT tax is significantly higher for SO2 intensive products and not significantly related to the soot intensity.The coefficient estimate for the waste water intensity, however, is not statistically significant, indicating that the positive relationship between the waste water intensity and the VAT tax only holds if we consider differences in the industries’ compliance with discharge standards.This result highlights the importance of using economic theory to guide the empirical analysis.Based on the theory, the correct specification requires an interaction term between the pollution intensity and the regulatory gap.Omitting the regulatory gap can be considered a misspecification and the misspecified model would suggest that the Chinese government does not have reductions in waste water emissions in mind when it sets the VAT rebate rates.The sample period includes the years of the economic and financial crisis which had a large negative impact on Chinese exports in some product categories.When Chinese exports plummeted, the Chinese government tried to support some of its export industries via increases in VAT rebate rates.29,The summary statistics in Table 4 show that, as a consequence, the average VAT tax declined in 2009.In order to demonstrate that our results are not driven by VAT rebate adjustments in response to the economic and financial crisis, we restrict the sample to the year 2007 instead of using observations for the years 2007–2009.The results presented in Column 4 of Table 9 corroborate our findings.The VAT tax is significantly positively correlated with the waste water intensity, the SO2 intensity and the energy intensity and the coefficient estimates are of a similar magnitude as in the baseline regression, indicating that the VAT tax may aim at reducing pollution along those dimensions.The results suggest that the VAT tax and the export tax are driven by different factors, in particular with respect to the pollution intensities and the resource dummy variables.The Chinese government uses VAT rebate rates to discourage exports of natural resources, waste 
water, SO2 and energy intensive products. The export tax, on the other hand, is lower for SO2 and energy intensive products as well as wood products and precious stones in the 2007–2009 sample. The fact that the coefficient estimates for some of the environmental variables have opposite signs in the regressions using the VAT tax or the export tax as the dependent variable means that the export tax could potentially offset the effect of the VAT tax and vice versa. In order to investigate whether this poses a problem, we add the VAT tax and the export tax to generate a variable called "Overall export tax". This variable is used as the dependent variable in the regression presented in Column 5 of Table 9. A comparison between Column 1 of Table 9 and Column 2 of Table 5 reveals great similarities between the determinants of the overall export tax and the VAT tax. This is not surprising if we bear in mind that no more than 252 out of more than 5700 products in our sample were subject to an export tax. The export tax does not seem to offset the positive correlation between the pollution intensities and the VAT tax. The overall export tax is significantly higher for waste water intensive, ammonium nitrogen intensive and energy intensive products as well as natural resources. The results confirm the hypothesis that Chinese trade policy is motivated by environmental concerns. From 2007 on, the Chinese government repeatedly emphasised that it uses export taxes as well as VAT rebates as second-best environmental policy instruments. This paper investigates whether, in practice, concerns about pollution drive Chinese trade policy reforms. Environmental issues are of increasing importance on the Chinese policy agenda. However, the decentralized implementation and a lack of enforcement of pollution regulation pose a challenge to internalizing the environmental distortion. Given this constraint on the use of domestic pollution taxes, partial export VAT rebates and export taxes can be used as a second-best policy instrument to protect the environment. Extending Copeland's model to the large country case, we solve for the second-best export tax in a situation in which the regulator cannot adjust the pollution tax. Under certain assumptions, it is possible to show that the second-best export tax increases in a product's pollution intensity. This relationship guides our empirical analysis. This paper investigates whether the difference between the VAT and the VAT rebate and the export tax are positively correlated with a product's air, water, solid waste and energy intensity. The analysis is based on product-level trade policy data as well as data on Chinese pollution emissions and energy use spanning the years 2005–2009. The results presented in this paper lend support to the Chinese authorities' claim that the export VAT rebate adjustments are driven by environmental concerns. The VAT rebate rates are set in a way which discourages exports of waste water, ammonium nitrogen, SO2 and energy intensive products. Moreover, the conservation of natural resources such as minerals, metals, wood products and precious stones seems to be a key determinant of China's VAT rebate rates. However, there is no evidence that the export tax is used as a secondary instrument to reduce pollution or conserve natural resources. The export tax seems to be motivated by an attempt to protect downstream producers in China, since the export taxes are higher for primary products.
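The overall pollution intensities used throughout the analysis combine each industry's direct emissions with the emissions embodied in its manufacturing inputs via the adjusted Leontief inverse, as described in the Data section. The following is a minimal sketch of that computation; the flow matrix, output vector, emission figures and variable names are illustrative assumptions rather than the paper's data.

```python
import numpy as np

# Minimal sketch of the total (direct + indirect) pollution intensity
# computed through the Leontief inverse. Toy numbers, not the paper's data.

# Inter-industry flow matrix Z (n x n) and gross output vector x (n,)
Z = np.array([[10.0,  5.0],
              [ 4.0, 20.0]])
x = np.array([50.0, 80.0])

# Direct emissions per industry (e.g. tons of waste water), scaled by output
emissions = np.array([200.0, 40.0])
direct_intensity = emissions / x          # direct pollution per unit output

# Technical coefficients A (column j divided by output of industry j)
A = Z / x
leontief_inv = np.linalg.inv(np.eye(len(x)) - A)

# Total intensity: pollution embodied in one unit of final demand,
# including pollution generated by all upstream manufacturing inputs
total_intensity = direct_intensity @ leontief_inv
print(total_intensity)
```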
| This paper analyses whether China's export VAT rebates and export taxes are driven by environmental concerns. Since China struggles to enforce environmental regulation, trade policy can be used as a second-best environmental policy. In a general equilibrium model it is possible to show that the second-best export tax increases in a product's pollution intensity. The empirical analysis investigates whether the export tax equivalent of partial VAT rebates and export taxes are higher for products which are more pollution intensive along several dimensions. The results indicate that the VAT rebate rates are set in a way that discourages exports of water pollution intensive, SO2 intensive and energy intensive products from 2007 on. Moreover, the conservation of natural resources such as minerals, metals, wood products and precious stones seems to be a key determinant of China's export VAT rebate rates. There is little evidence that export taxes are motivated by environmental concerns. |
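As a complement to the estimation strategy summarized above, the sketch below illustrates a hand-rolled two-stage least squares regression of the VAT tax on pollution intensities in which the export tax is instrumented with the export tax bound under the WTO Accession Protocol. The data frame, column names and simulated values are hypothetical; the paper's actual specification includes many more controls and clustered standard errors.

```python
import numpy as np
import pandas as pd

# Hand-rolled 2SLS sketch; hypothetical columns and simulated data only.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "vat_tax":         rng.random(200),
    "export_tax":      rng.random(200),   # endogenous regressor
    "wto_bound_tax":   rng.random(200),   # instrument
    "waste_water_int": rng.random(200),   # exogenous controls
    "energy_int":      rng.random(200),
})

y = df["vat_tax"].to_numpy()
exog = np.column_stack([np.ones(len(df)),
                        df["waste_water_int"], df["energy_int"]])
endog = df["export_tax"].to_numpy()
instruments = np.column_stack([exog, df["wto_bound_tax"]])

# Stage 1: project the endogenous export tax on instruments + exogenous vars
b1, *_ = np.linalg.lstsq(instruments, endog, rcond=None)
endog_hat = instruments @ b1

# Stage 2: regress the VAT tax on the fitted export tax and the controls.
# (Second-stage OLS standard errors would still need the usual 2SLS correction.)
X2 = np.column_stack([exog, endog_hat])
b2, *_ = np.linalg.lstsq(X2, y, rcond=None)
print(dict(zip(["const", "waste_water_int", "energy_int", "export_tax"], b2)))
```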
560 | Acute neuropharmacological effects of atomoxetine on inhibitory control in ADHD children: A fNIRS study | Attention Deficit Hyperactivity Disorder is one of the most prevalent developmental disorders, affecting between 5 and 9% of school-aged children.ADHD is associated with a primary impairment in executive controls, including response inhibition and working memory.Symptoms of ADHD typically develop during early elementary school years, and, in most cases, progress to a chronic state during adulthood.Because of this, initiating appropriate treatment in youth upon early identification is important in order to confer long-term positive effects.Recommended treatments for ADHD children include both medication and behavioral therapy.The non-stimulant drug, atomoxetine as well as the stimulant drug, methylphenidate have been recommended as primary medications for the improvement of executive function in ADHD patients.Conventionally, MPH has stood as the mainstay of medication treatment of ADHD patients.MPH is a reuptake inhibitor of catecholamines, including dopamine and noradrenaline, which it does by blocking their transporters.The affinity that MPH has with each catecholamine transporter is different: While the dissociation constant value, or K, of MPH to the NA transporter is 339 nM, that to the DA transporter is 34 nM.Thus, MPH is considered to have by far a greater effect on the DA system.Conversely, ATX, the first approved non-stimulant ADHD medication treatment, has been considered a selective NA reuptake inhibitor.The affinity that ATX has with these catecholamine transporters is biased toward the NA system with the K of ATX to NA and DA transporters being 5 and 1451 nM, respectively.These profiles demonstrate that both MPH and ATX act as monoamine agonists to normalize brain function in ADHD patients, but that they do so in different manners.ADHD is considered to include dysfunction of the DA and NA systems.In many ADHD neuroimaging studies, MPH has been shown to upregulate hypofunction in the DA system at the prefrontal cortex and the striatum, improving inhibitory functions.On the other hand, it has been posited, based on findings from in vitro studies, that ATX acts on the NA system, mainly located in the locus coeruleus with axonal projections to the prefrontal and parietal cortices.However, there have not been any neuroimaging studies of the NA system in ADHD patients.Such a plausible functional difference might be reflected in differential neuropharmacological responses of ADHD children to MPH and ATX: there is a 30% non-responder rate for one or the other preferentially.Yet, the clinical therapeutic effects of these medications in ADHD children are not yet clearly understood.In addition, there is no evidenced-based method with objective markers for selecting effective medications.Furthermore, while these treatments have no symptomatic benefits in non-responders, their side effects remain present.Even patients who do respond must be appropriately monitored to prevent possible side effects such as headaches, stomachaches, nausea, abdominal pain, decreased appetite and vomiting.Preferably, the efficacy of either medication for ADHD children should be assessed both pre- and post-administration.One promising approach is the exploration of distinct biological markers and their testing with a non-invasive neuroimaging modality.A number of neuroimaging results for ADHD children, adolescents and adults have shown that right middle and inferior frontal hypoactivation is distinctly 
associated with response inhibitory dysfunction.This gives rise to the possibility that activation in the inferior and middle frontal gyri could be a characteristic candidate as a neuropharmacological biomarker for ADHD.Indeed, a growing body of neuroimaging research has started to explore the neural basis for the clinical effectiveness of MPH in ADHD patients.An increasing number of fMRI-based neuropharmacological studies of MPH effects have demonstrated acute functional upregulation and normalization of the right middle and inferior frontal gyri after MPH administration.Meanwhile, our previous fNIRS study assessed the pharmacological neuromodulation produced by MPH using a randomized, double-blind, placebo-controlled, crossover design.We reported that MPH normalized the hemodynamic responses in the right middle and inferior gyri during a motor-related inhibitory task using fNIRS on young ADHD children, which was in accordance with previous evidence from a study with adult ADHD patients and fMRI.As demonstrated in our previous studies, fNIRS offers robust advantages such as its compactness, affordable price, tolerance to body motion and accessibility, which, in addition, have allowed it to be applied to the clinical assessment of ADHD children.Conversely, it is often difficult to assess neuroactivation patterns during locomotor tasks with fMRI-based neuroimaging, and this can often cause problems in the neuro-functional assessment of school-aged ADHD children with hyperactivity.In fact, the rejection rate of fMRI studies is high: one study enrolling a relatively young sample of children rejected 50% of ADHD subjects and 30% of normal control subjects.The high exclusion rate for ADHD patient populations in fMRI studies is mainly due to motion and lack of compliance.According to the validation of our study and the fact that our drop rate has been 0% of a total 30 ADHD subjects, our fNIRS-based examination is favorable in particular for measurements of active subjects, such as patients with ADHD, and should be further extended to neuropharmacological assessment of ATX effects in ADHD children.Thus far, several fMRI studies on the effects of ATX have provided evidence of up-regulation of middle and inferior frontal gyrus activation in healthy control subjects, as with MPH.However, there are only three fMRI studies that have performed neuropharmacological assessments, utilizing double-blind, placebo-controlled designs, of the effects of ATX administration on inhibition function in ADHD patients including children, and no fNIRS studies had been performed until now.The lack of evidence associating a neuropharmacological mechanism with therapeutic improvement is tantamount to a missed opportunity for appreciating how ATX works, and such understanding is a vital step toward developing an objective, evidence-based neuropharmacological treatment for ADHD children.Thus we performed the current fNIRS study in order to assess acute neuropharmacological effects of ATX on inhibitory functions of ADHD children.In the current study, we enrolled sixteen ADHD children and age- and sex-matched control subjects, and examined the neuropharmacological effects of ATX on inhibition control, utilizing a within-subject, double-blind, placebo-controlled design.We hypothesized that the ADHD subjects would exhibit hypoactivation in the right middle and inferior frontal gyri in comparison with control subjects, and that ATX would normalize hemodynamic responses during a go/no-go task while a placebo would not.Sixteen 
clinically referred, right-handed Japanese children with a mean age of 8.9 years who met the Diagnostic and Statistical Manual of Mental Disorders-IV criteria for ADHD participated in the study.The Wechsler Intelligence Scale of Children — Third Edition full IQ scores of subjects were all over 70.Sixteen right-handed healthy control subjects were matched with the ADHD subjects according to age and gender.IQs of controls were significantly higher than those of ADHD subjects.All children and their parents gave oral consent for their participation in the study.Written consent was obtained from the parents of all subjects.The study was approved by the Ethics Committees of Jichi Medical University Hospital and the International University of Health and Welfare.The study was in accordance with the latest version of the Declaration of Helsinki.This study was registered to the University Hospital Medical Information Network Clinical Trials Registry as “Monitoring of acute effects of ATX on cerebral hemodynamics in ADHD children: an exploratory fNIRS study using a go/no-go task”.Fig. 1 summarizes the experimental procedure.We examined the effects of ATX in a randomized, double-blind, placebo-controlled, crossover study while the subjects performed a go/no-go task.We examined ADHD subjects twice, at least 2 days apart, but within 30 days.Control subjects only underwent a single, non-medicated session.On each examination day, ADHD subjects underwent two sessions, one before drug administration, and the other at 1.5 h after drug administration.Before each pre-administration session all ADHD subjects underwent a washout period of 2 days.We allowed subjects to take off the probe during waiting periods between the first and second sessions.Each session consisted of 6 block sets, each containing alternating go and go/no-go blocks.Each block lasted 24 s and was preceded by instructions displayed for 3 s, giving an overall block-set time of 54 s and a total session time of 6 min.In the go block, we presented subjects with a random sequence of two pictures and asked them to press a button for both pictures.In the go/no-go block, we presented subjects with a no-go picture 50% of the time, thus requiring subjects to respond to half the trials and inhibit their response to the other half.Specifically, the instructions read in Japanese, “You should press the button as quickly as you can.Remember you want to be quick but also accurate, so do not go too fast.,Participants responded using the forefinger of the right hand.A go/no-go ratio of 50% was selected as it has been most often used in former neuroimaging studies.We presented pictures sequentially for 800 ms with an inter-stimulus interval of 200 ms during go and go/no-go blocks.At the beginning of each block, we displayed instructions for 3 s to inform the subject about the new block.Each subject performed a practice block before any measurements to ensure their understanding of the instructions.After ADHD subjects performed the first session, either ATX or a placebo was administered orally.The experimental design was as previously described.All patients were pre-medicated with ATX as part of their regular medication regimen."Specific, acute, experimental doses were the same as the patient's regular dose as described in Table 1.We calculated the average reaction times for go trials, and accuracy rates for go and no-go trials in each go/no-go block for ADHD and control subjects.We averaged the accuracy and RTs across go/no-go blocks, and subjected the 
resulting values to statistical analyses as described in a subsequent section.We calculated mean RT for each participant by taking the average of RTs for correct go trials in the go/no-go block.We computed accuracy for go trials by dividing the number of correct responses by the total number of go trials for the go/no-go block.Similarly, we computed accuracy for no-go trials by dividing the number of correct inhibitions by the total number of no-go trials in the go/no-go block.We set the statistical threshold at 0.05 with the Bonferroni method for multiple-comparison error correction.We used the multichannel fNIRS system ETG-4000, utilizing two wavelengths of near-infrared light.We analyzed the optical data based on the modified Beer–Lambert Law as previously described.This method enabled us to calculate signals reflecting the oxygenated hemoglobin, deoxygenated hemoglobin, and total hemoglobin signal changes, obtained in units of millimolar·millimeter.For statistical analyses, we focused on the oxy-Hb signal because of its higher sensitivity to changes in cerebral blood flow than that of deoxy-Hb and total-Hb signals, its higher signal-to-noise ratio, and its higher retest reliability.We set the fNIRS probes so that they covered the lateral prefrontal cortices and inferior parietal lobe, referring to previous studies.Specifically, we used two sets of 3 × 5 multichannel probe holders that consisted of eight illuminating and seven detecting probes arranged alternately at an inter-probe distance of 3 cm.This resulted in 22 channels per set.We defined the midpoint of a pair of illuminating and detecting probes as a channel location.We attached the bilateral probe holders in the following manner: their upper anterior corners, where the left and right probe holders were connected by a belt, were symmetrically placed across the sagittal midline; the lower anterior corners of the probe holder were placed over the supraorbital prominence; and the lower edges of the probe holders were attached at the upper part of the auricles.For spatial profiling of fNIRS data, we adopted virtual registration for registering fNIRS data to MNI standard brain space."Briefly, this method enables us to place a virtual probe holder on the scalp based on a simulation of the holder's deformation and the registration of probes and channels onto reference brains in an MRI database.Specifically, we measured the positions of channels and reference points, consisting of the Nz, Cz and left and right preauricular points, with a 3D-digitizer in real-world space.We affine-transformed the RW reference points to the corresponding reference points in each entry in reference to the MRI database in MNI space.Adopting these same transformation parameters allowed us to obtain the MNI coordinates for the fNIRS channels and the most likely estimate of the locations of given channels for the group of subjects together with the spatial variability associated with the estimation."Finally, we estimated macroanatomical labels using a Matlab function that reads labeling information coded in a macroanatomical brain atlas, LBPA40 and Brodmann's atlas.We preprocessed individual timeline data for the oxy-Hb and deoxy-Hb signals of each channel with a first-degree polynominal fitting and high-pass filter using cut-off frequencies of 0.01 Hz to remove baseline drift, and a 0.8 Hz low-pass filter to remove heartbeat pulsations.Note that Hb signals analyzed in the current study do not directly represent cortical Hb concentration changes, but contain 
an unknown optical path length that cannot be measured.Direct comparison of Hb signals among different channels and regions should be avoided as optical path length is known to vary among cortical regions.Hence, we performed statistical analyses in a channel-wise manner.From the preprocessed time series data, we computed channel-wise and subject-wise contrasts by calculating the inter-trial mean of differences between the peak Hb signals and baseline periods.For the six go/no-go blocks, we visually inspected the motion of the subjects and removed the blocks with sudden, obvious, discontinuous noise.We subjected the resulting contrasts to second-level, random-effects group analyses.We statistically analyzed oxy-Hb signals in a channel-wise manner.Specifically, for control subjects, who were examined only once, we generated a target vs. baseline contrast for the session.For ADHD subjects, we generated the following contrasts: pre-medication contrasts: the target vs. baseline contrasts for pre-medication conditions for the first day exclusively; post-medication contrasts: the respective target vs. baseline contrasts for post-placebo and post-ATX conditions; intra-medication contrasts: differences between post- and pre-medication contrasts for each medication; and inter-medication contrasts: differences between ATXpost-pre and placebopost-pre contrasts.To screen the channels involved in go/no-go tasks in normal control subjects, we performed paired t-tests on target vs. baseline contrasts.We set the statistical threshold at 0.05 with Bonferroni correction for family-wise errors.For thus-screened channels, we performed comparisons between control and ADHD for the following three ADHD contrasts: pre-medication, post-placebo, and post-ATX.We performed independent two-sample t-tests on these contrasts with a statistical threshold of p < 0.05.To examine the medication effects on ADHD subjects, we performed paired t-tests with a statistical threshold of p < 0.05 for comparison between ATXpost-pre and placebopost-pre.We performed all statistical analyses with the PASW statistics software package.The average accuracy for go and no-go trials and RT for correct go trials in the go/no-go block for control and ADHD subjects and ADHD inter-medication comparisons are summarized in Tables 2 and 3.We found no significant differences in accuracy for go and no-go trials or in RT for correct trials between control and pre-medication, post-placebo and post-ATX ADHD subjects.The inter-medication contrast comparing the effect of ATX against the placebo revealed no significant differences in behavioral parameters between ADHD subjects.First, we screened for any fNIRS channels involved in the go/no-go task for control and ADHD contrasts."We found a significant oxy-Hb increase in the right CH 10 in control subjects. 
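The channel-wise screening described above can be illustrated with a short sketch: for each channel, subject-wise oxy-Hb contrasts from the go/no-go (target) blocks are compared against the go (baseline) blocks with a paired t-test, Bonferroni-corrected over channels. The arrays below are simulated placeholders, not the recorded data, and the channel count of 44 follows the probe layout described earlier.

```python
import numpy as np
from scipy.stats import ttest_rel

# Simulated subject-by-channel oxy-Hb contrasts (placeholder values only)
n_subjects, n_channels = 16, 44
rng = np.random.default_rng(0)
baseline = rng.normal(0.00, 0.05, (n_subjects, n_channels))  # go blocks
target = rng.normal(0.02, 0.05, (n_subjects, n_channels))    # go/no-go blocks

significant = []
for ch in range(n_channels):
    t, p = ttest_rel(target[:, ch], baseline[:, ch])  # paired t-test per channel
    p_bonf = min(p * n_channels, 1.0)                 # Bonferroni family-wise correction
    if p_bonf < 0.05 and t > 0:                       # keep only oxy-Hb increases
        significant.append(ch)

print("channels surviving Bonferroni correction:", significant)
```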
"Conversely, in ADHD conditions, only post-ATX exhibited a significant oxy-Hb increase in the right CH 10.Thus, we set the right CH 10 as a region-of-interest for the rest of the study.This channel was located in the border region between the right MFG and IFG: 50, 37, 33, MFG 68%, IFG 32%, Table 4) with reference to macroanatomical brain atlases."Comparison between oxy-Hb signals of the control and pre-medicated ADHD subjects revealed marginally significant activation of oxy-Hb signal in the right CH 10 in the control subjects.This indicates that the control subjects exhibited higher right prefrontal activation during go/no-go tasks than did the pre-medicated ADHD children.Then, we examined the effects of medication between control subjects and post-placebo-ADHD subjects, and between control subjects and post-ATX-ADHD subjects."Oxy-Hb signal in control subjects was significantly higher than in post-placebo ADHD subjects, while there was no significant difference between control subjects and post-ATX-ADHD subjects.This suggests that ATX administration normalized the impaired right prefrontal activation.Finally, we examined whether there was an ATX-induced, but not placebo-induced, right prefrontal activation in ADHD subjects."In the inter-medication contrast, we found the right CH 10 to be significantly different between conditions.This result demonstrates that ATX, but not the placebo, induced an oxy-Hb signal increase during the go/no-go task.Because we did not match the IQ of the ADHD and normal healthy control subjects, we additionally examined whether there was any possible effect of IQ.We performed correlation analyses for IQ and activation in the right CH 10 for ADHD subjects and control subjects, respectively."In ADHD subjects, Pearson's correlation coefficient was −0.043, while that in control subjects was −0.023: In neither analysis did we find any significant correlation with a meaningful effect size. 
"Further, we compared the two correlation coefficients, but did not find any significant difference.This led us to conclude that there was no correlation between IQ and the activation in the right CH 10 in either group.Our current study, using a double-blind, placebo-controlled, crossover design, provided the first fNIRS-based neuropharmacological evidence of the acute ATX effect on inhibitory control in school-aged ADHD children.Through assessing cortical activation data of ADHD and healthy control subjects performing a go/no-go task reflecting function of the motor-related inhibitory network, we revealed that the right IFG/MFG is a neural substrate of ATX effects in ADHD children based on the following findings.First, ADHD children exhibited reduced cortical activation in the right IFG/MFG during go/no-go task blocks compared to control subjects.Second, the reduction of right IFG/MFG activation was acutely normalized after ATX administration in ADHD children.Third, the ATX-induced right IFG/MFG activation was significantly greater than placebo-induced activation during go/no-go task blocks.The recovered right IFG/MFG activation in ADHD children detected by fNIRS measurements after ATX administration is consistent with our previous studies using MPH.These results suggest that normalized right IFG/MFG activation during a go/no-go task, as observed using fNIRS, may serve as a robust neurobiological marker for evaluating ATX effects on ADHD children as with evaluating MPH effects.One of the most commonly used experimental paradigms for evaluating response inhibition is the go/no-go task, in which subjects are generally required to inhibit a prepotent response when no-go stimuli are presented within a sequence of go stimuli.This is an essential cognitive function required in daily life, and impaired response inhibition is a potential biomarker candidate for ADHD in children.Because of this, a number of go/no-go paradigms have been widely adopted to explore the disinhibitory nature of ADHD in fMRI studies.In general, a go/no-go task allows the assessment of detailed aspects of inhibitory response controls reflected in a variety of parameters: Errors of omission are generally interpreted as a symptom of inattention; errors of commission and overly reduced reaction times with standard stimuli are commonly considered indicators of impulsivity.However, our current study did not show any significant differences in behavioral performance between ADHD children and control subjects.Thus far, we have observed inconsistency in behavioral data for ADHD children: our previous studies showed performance impairment in ADHD children compared with control subjects.However, our fNIRS studies have consistently exhibited hypoactivation in the MFG/IFG in pre-medicated ADHD children without corresponding behavioral effects.This tendency is reminiscent of an fMRI study by Smith et al. 
reporting that the go/no-go task parameters showed no difference between ADHD children and IQ- and age-matched healthy controls, while hypoactivation in the bilateral prefrontal and right parietal lobes was found in the ADHD patients.These inconsistencies among the results of both studies represent the difficulty in interpreting behavioral parameters compared with brain activation patterns for detecting cognitive dysfunction in ADHD children.In our current study, we detected brain activation in the right MFG/IFG during go/no-go task blocks in the healthy control subjects.This activation pattern is in accord with that found in previous fMRI studies, and this region is regarded as especially important for inhibitory control.This led us to conclude that our current fNIRS measurements robustly extracted concurrent activations for response inhibition in the right prefrontal cortex in control subjects.In ADHD conditions, ATX-induced normalization in the MFG/IFG, as identified using fNIRS, is consistent with former MPH-related studies.Also, these activation patterns are similar to the results of previous fMRI studies.In a different vein of studies using animals, both ATX and MPH led to increased NA and DA in the prefrontal cortex of mice and rats.Taken together, it would be natural to conclude that administration of either ATX or MPH increases NA and DA concentration in the prefrontal cortex, leading to normalization of inhibitory control in ADHD children.However, this does not necessarily suggest that both medications affect prefrontal functions via the same neuropharmacological mechanism.We must note here that ATX and MPH have an almost opposite affinity to DA and NA transporters.While MPH has a 10-fold higher affinity to DA than to NA transporters, ATX has a 300-fold higher affinity to NA than to DA transporters.According to this evidence, we speculate that MPH has by far larger effects on the DA system between the prefrontal and striatal regions, while ATX has far larger effects on the locus coeruleus NA system between the prefrontal and coeruleus areas.Thus, what appears as the similar activation patterns induced by ATX and MPH in the prefrontal cortex may reflect different neural substrates.In order to elucidate the precise neuropharmacological mechanism underlying the right prefrontal functional normalization by ATX and MPH, further investigation is necessary.In the present study, we selected a go/no-go task paradigm with alternating go blocks as baseline blocks and go/no-go blocks as target blocks without rest segments in between active task blocks.Tsujii et al. and Cui et al. also adopted a similar block designed for go/no-go tasks, and treated the go task period as the baseline for contrast with the go/no-go task period when analyzing fNIRS signals.This paradigm was set primarily because of the difficulty with ADHD patients staying still without performing any tasks, which may lead to unexpected movements or hyperactive behavior.In addition, we omitted rest blocks to save time, as a long experiment time would bore ADHD subjects.Furthermore, the go and go/no-go block design is commonly used in fMRI studies.Thus, considering comparisons across modalities, the use of the go/no-go task paradigm in the current study is appropriate.Another merit of the block-design paradigm is that the baseline blocks serve as a motor control for the target blocks.Schecklmann et al. 
used a weekday-reciting task as a baseline block and a word fluency task as a target block, and used fNIRS to analyze the difference in signal between the two tasks.In this paradigm, movement and muscle artifacts in the task condition are expected to be neutralized with the use of a control condition with a similar motor output.Similarly, we adopted the go task as the baseline task.As the physical movements made by children during the go task are similar to those of the go/no-go task, movement and muscle artifacts are expected to be ruled out.Accordingly, activation during the go/no-go task block is considered to reflect inhibitory control; thus, this paradigm is more appropriate than one using a rest block as the baseline.Although fNIRS studies often use a paradigm where rest and task blocks are alternately performed, we suggest that it would be more applicable for studies involving younger ADHD children to adopt the alternating go and go/no-go block design.Reminiscent of our study demonstrating the clinical utility of fNIRS-based assessment of the efficacy of an acute single dose of MPH to ADHD children, here ATX has been shown to be similarly effective: the current study demonstrates the utility of fNIRS-based assessment of the efficacy of an acute single dose of ATX administered to ADHD children.fNIRS-based assessment has a fundamental clinical importance as a diagnostic tool and for therapeutic encouragement.For the diagnostic aspect, we demonstrated that fNIRS-based measurement can reveal the effects of an acute single dose of ATX with higher sensitivity than can behavioral parameters."The moderately large effect size of the acute single dose of ATX as compared to that of the placebo demonstrates that fNIRS-based assessment can serve as a comparably effective diagnostic tool for the effect of ATX in ADHD children, especially those at elementary-school ages.Moreover, fNIRS-based measurement could provide therapeutic encouragement to ADHD children and their families.One major problem of medication treatment, which is common with both AXT and MPH, is the high discontinuation rate estimated at between 36 and 85%."Since guardians' subjective feelings about the efficacy of medication stand as a major cause for the discontinuation of medication treatment with ADHD children, encouragement of family members of ADHD children by demonstrating therapeutic success may facilitate successful ATX treatment.Objective demonstration of ATX effects as visualized with cortical activation observed with fNIRS-based measurements could act as an informative guide, encouraging ADHD children and their guardians to continue ATX treatment.As discussed above, the current study has demonstrated the ATX-effect assessment on inhibitory control in ADHD children using fNIRS.However, for adequate understanding of current findings, several issues need to be addressed.First, IQs of control children were significantly higher than those of ADHD children.IQ has been reported as having a negative correlation with ADHD scores.Since IQ is not independent of ADHD, IQ matching to control subjects could remove a disorder-related variance from the ADHD group.Further study with a larger sample size may have to be performed in order to explore the possible effects of IQ.The second limitation of this study is that controls were only tested once, while children with ADHD were tested a total of four times.The practice effect of multiple testing in ADHD children was controlled for by the counterbalanced design.Ethical limitations 
prevented us from testing healthy controls under stimulant medication, as well as from having them wait for 90 min to retest; however, we need to explore ways to eliminate potential training effects with appropriate experimental procedures.Since there are no studies on assessing order and learning effects of go/no-go tasks associated with fNIRS signals, this would be an interesting and essential area for future study.The current study examining the effects of a single acute dose of ATX on inhibitory control in ADHD children using a double-blind, placebo-controlled, crossover design, revealed the following findings.First, the activation foci, which are involved in inhibition control, were activated in control subjects performing a go/no-go task, but not in ADHD children.Second, the ATX-induced right IFG/MFG activation was significantly greater than placebo-induced activation during go/no-go task blocks.Third, the activation in the right IFG/MFG region was normalized after ATX administration.Taken together, these findings led us to conclude that the activation in the MFG/IFG could provide an objective neuro-functional biomarker that indicates the effects of ATX on inhibitory control in ADHD children.This fNIRS-based examination on the effect of ATX is applicable to ADHD children at elementary school ages including those as young as 6 years old.Thus, we believe that fNIRS-based examination is a promising clinical tool that could enable the early diagnosis and treatment of ADHD children. | The object of the current study is to explore the neural substrate for effects of atomoxetine (ATX) on inhibitory control in school-aged children with attention deficit hyperactivity disorder (ADHD) using functional near-infrared spectroscopy (fNIRS). We monitored the oxy-hemoglobin signal changes of sixteen ADHD children (6-14 years old) performing a go/no-go task before and 1.5 h after ATX or placebo administration, in a randomized, double-blind, placebo-controlled, crossover design. Sixteen age-and gender-matched normal controls without ATX administration were also monitored. In the control subjects, the go/no-go task recruited the right inferior and middle prefrontal gyri (IFG/MFG), and this activation was absent in pre-medicated ADHD children. The reduction of right IFG/MFG activation was acutely normalized after ATX administration but not placebo administration in ADHD children. These results are reminiscent of the neuropharmacological effects of methylphenidate to up-regulate reduced right IFG/MFG function in ADHD children during inhibitory tasks. As with methylphenidate, activation in the IFG/MFG could serve as an objective neuro-functional biomarker to indicate the effects of ATX on inhibitory control in ADHD children. This promising technique will enhance early clinical diagnosis and treatment of ADHD in children, especially in those with a hyperactivity/impulsivity phenotype. |
561 | Bridge safety is not for granted – A novel approach to bridge management | Many towns and cities are located upon rivers. When a community grows, connection and accessibility become critical aspects of urban development, and rivers critical crossing points. Modern societies are extremely reliant on bridges, not only because they facilitate movements of people and goods, but also because they carry utilities over otherwise impassable obstacles. River bridges are intrinsically highly exposed to flood-related hazards, more than any other infrastructural element. They are also vulnerable to man-made hazards, such as vessel or vehicle collision; however, these phenomena are outside the scope of this study. The high capital cost of bridges often results in few structures and limited redundancy in the system; thus, their failures can lead to cascading effects and disproportionately negative consequences for the community. The economic consequences of a bridge failure include loss of utility, repairs and public overreaction costs; the societal importance covers aspects of emergency management and post-disaster operations. Recent events have underlined the need to put resilience measures in place to mitigate the consequences of flooding on bridges and roads, particularly in the light of aging infrastructure and climate change. In the past two decades, progress has been made in the field of bridge engineering, especially in studying the damage mechanism of scour to bridges. However, these studies are limited either to theoretical aspects or to case studies of single bridges, due to the lack of homogeneous detailed information and the demanding computational processes involved. A few studies have investigated bridge vulnerability at larger scale, considering the systemic risk of a bridge as an element in the wider transport network. Investigating bridge vulnerability at larger scale is currently challenging because a complete picture of the bridge stock does not exist, which causes loss of control over the assets. Nevertheless, effective bridge management is based on organised and complete data on the bridge stock, thus more research is needed in order to develop better practice. This paper aims to set the scene for a holistic risk-based management system for bridges, based on a national bridge database. This study overviews current practice in bridge management, framing it within a risk-based approach. It also provides evidence for the need for more systematic and protocolled data collection, proposing a new taxonomy for bridges at risk of flooding. Finally, it discusses the development of a national bridge inventory for the UK. It is estimated that the UK has more than 160,000 bridges. The annual expenditure on maintenance and repair of national bridges in England alone is around £180 m, and the estimated maintenance backlog for local authority bridges is £590 m. Despite the high costs associated with bridges, the absence of a national bridge database makes these numbers quite unreliable. In 2009, six bridges collapsed and 16 were closed to traffic due to intense flooding in the Cumbria region; this flooding event caused £34 m of damage to the county's bridges and roads, and the collapse of one bridge killed a police officer. Bridge collapses happen due to both natural and human actions. Flooding is the cause of almost half of bridge failures, through a range of factors: scour at foundations; hydrodynamic loads and pressures on the deck, piers and/or foundations; overtopping; and debris accumulation. Since different types of bridges are sensitive to different failure mechanisms, deep knowledge of the bridge stock is the first step towards effective risk management of bridges. Globally, the definition of a univocal system of parameters for unequivocally classifying structures is advocated for defining criticalities and interventions, especially in the light of limited budgets. Currently, various countries are attempting to develop a national bridge database to improve asset management, such as France, Italy, Vietnam, Thailand, Iran, Taiwan and India. In the USA, following a major bridge collapse, the Federal-Aid Highway Act of 1968 required every member state to compile the National Bridge Inventory (NBI) with the specifications of any bridge longer than six meters and used for vehicular traffic. Currently, the NBI is a unified database used to analyse bridges and judge their condition, for safety and management purposes. A robust digital data protocol is used to automate the exchange of bridge information across the various activities of a bridge lifecycle. Although designed to be extended to local bridges, the inventory is currently limited to federal highway bridges. The NBI is adopted for in-depth national analyses and research studies. Climatic and socio-economic changes may also have exacerbated bridge conditions; thus, some bridges could have reached the end of their expected life span. UK bridges are owned and managed by various agencies, who use in-house management systems with various levels of sophistication. This distributed ownership leads to a "notorious lack of national data", which prevents reliable estimates from being drawn. For example, most UK bridges have similar characteristics to the collapsed bridges in Cumbria; however, there is currently no capability to identify and quantify them. The quality of records is also undermined by the outsourcing to contractors who create and maintain records but do not have access to a consistent structure file for all assets. This lack of knowledge of the bridge population is recognised as a major problem and a pressing issue for progressing informed decisions in the long term. The risk to an asset is usually described as a combination of exposure, hazard and vulnerability. Rating risk on structural characteristics or inspections alone has been shown to be insufficient; the absence of other factors could mislead evaluations of an asset. For example, the Shuang-Yuan Bridge collapsed in 2009 in Taiwan due to severe floods, despite being judged in good condition. Various bridge taxonomies can be found in the earthquake literature for the purpose of vulnerability assessment, accounting for the hazard intensity. Nevertheless, a similar classification is missing for bridges exposed to floods. This paper first provides an overview of traditional bridge management systems, as opposed to more holistic risk-based approaches; in particular, a case study illustrates the capabilities and limits of a local bridge dataset in the UK. Then, it proposes a new bridge taxonomy for bridges at risk of flooding, as a means of harmonising current datasets, producing homogeneous data and supporting decision-making. Finally, it draws implications and challenges regarding the practical implementation of a national bridge inventory. Bridge Management Systems (BMSs) are used to systematically control the bridge stock, and ensure both safety and performance. BMSs are functional at a range of levels for: collecting inventory data in a systematic and organised way; carrying out inspections and damage assessment; planning repair and maintenance schedules; and allocating funds. A traditional BMS structure includes four standard modules. The Inventory Module collects data regarding the bridge stock; the Inspection Module collects inspection data to classify the condition state; the Maintenance, Repair and Rehabilitation Module monitors short-term and long-term plans for intervention; finally, the Optimisation Module integrates the previous modules for budget-expenditure forecasts. The inventory is considered the most important part of a BMS, and most BMSs are limited to just this module. This limitation prevents many countries from adopting BMSs to make decisions on the risk state; for example, Belgium, France, Germany and Ireland base their decisions on engineering judgement. Nevertheless, the modular format of BMSs is flexible and could allow the introduction of additional modules according to users' needs. Some BMSs are national, others focus on a single city. Comprehensive reviews of BMSs and national models are offered, among others, by Flaig and Lark, Pellegrino et al. and Woodward et al. In the UK, a national BMS is missing. Highways England has developed a Structures Information Management System for their assets, containing basic inventory and inspection data. They have also published two Design Manuals for Roads and Bridges (DMRB) containing information about current standards, advice notes and other published documents relating to the design, assessment and operation of trunk roads. A new document is currently in preparation to update both manuals. Although local authorities refer to the DMRB for practice, a national database is seen as an essential requirement for giving a common core format to information and for the long-term management of the bridge stock. Further guidance on scour at bridges and other hydraulic structures is provided by the CIRIA Manual. This manual addresses scour problems affecting both new and existing structures; however, its uptake is judged practically difficult, especially by non-experts. The bridge engineering literature is more developed in the earthquake community, and the concept of a bridge taxonomy has already been advanced for risk assessment. Available classifications include main typological features and a measure of the hazard. Existing classifications vary with the geographic location and objectives of the study, and often focus on individual bridges or the most common type in a region. Given this limitation, existing taxonomies may not be appropriate to fully describe different areas or bridge types. The UK is not an earthquake-prone country; therefore, no classification is available in the seismic literature that refers to British bridges. Different types of classification underpin the Highways England database and other local databases; however, these were designed with ad-hoc architectures and non-standard core structures. The last decades have witnessed a shift from "fighting" natural hazards to "managing" the risk from them. These risk-based approaches provide a methodological framework formed by three elements: hazard, exposure and vulnerability. Such methods are particularly suitable for low-probability high-impact events, such as floods. All riverine bridges are subject to the risk of flooding, as they are naturally located upon rivers. Different consequences arise from floods depending on the hazard intensity and the vulnerability of the bridge. The vulnerability does not come from the type of structure alone, but includes a range of
influential factors, such as the catchment topography or the load intensity. It is of note that existing BMS modules differ from the "risk modules", probably because the notion of risk is relatively recent; environmental considerations are therefore missing from current management systems. The hazard module deals with simulating a range of flooding scenarios, where each event is defined by a specific Intensity Measure (IM), location and probability of occurrence based on historical data. The river flow is governed by rainfall duration and intensity, as well as by ground conditions; saturated ground can amplify impacts, especially in the case of storm clusters. The type of drainage and the catchment topography can deeply influence the flooding impact on structures; for example, steep catchments are characterised by high-velocity floods and debris, while open catchments are likely to be impacted more by inundation than by water velocity. The first step consists of estimating the hydrodynamic forces with hydraulic and hydrological models. Extensive literature is available regarding the assumptions and characteristics of the multiple conceptual, physically-based, and stochastic hydrological models developed so far. The second stage involves modelling the impact of the forces on the bridge, considering the asset characteristics; this stage is illustrated in Section 4.3. Optimal asset management starts with complete data on the assets, i.e. the exposure. The exposure contains details of the location, value and characteristics of the "assets at risk", i.e. bridges potentially subject to damage or disruption. Information can be derived from geo-information systems, inspections and other available datasets; these are objective properties, independent of the hazard. The US National Bridge Inventory is a good example of a modern database with a standard format. It is geocoded and helps governments to manage local and national bridges; for example, some US communities share information about local bridges on public platforms, giving the opportunity to comment and feed back. Citizens can also make decisions on this basis, e.g. commuters can make their own judgements about their route to work in case of a bridge disruption. In England, multiple authorities are responsible for the bridge stock: Network Rail, for railway bridges; Highways England, for most motorways and a few A road bridges; local authorities, for a few highway bridges, most A road bridges and local bridges at county level. Each authority has its own method of data collection and risk assessment; although some best practice is shared through national forums, the consistency and quality of records is not satisfactory. It is expected that the different datasets contain similar general data, although no common framework exists to guarantee the inter-operability of databases. Moreover, more specific information is scarce, particularly regarding foundation type, height above the river and material. Well-known relations, local knowledge and expert opinion can support assumptions to cover some gaps in the datasets; however, this type of reasoning is not always reliable and is generally time-consuming. The vulnerability is the susceptibility of exposed elements to being damaged by adverse events. The damage estimation consists of evaluating costs and losses under different hazard load conditions. Worldwide, Damage Functions (DFs) are recognised as the standard method for urban flood assessment, and a wide range of research is present in the literature. DFs relate hazard IMs to the damage experienced by the object at risk, representing its susceptibility to the hazardous event. Traditionally, DFs present the monetary damage for buildings affected by floods according to the building use and typology. Less research has been done for infrastructure; models such as HAZUS-MH compute physical damage to roads and bridges, while very limited research investigates their functionality loss. This area requires more research, but such development is out of the scope of this study. Identifying bridges with the same vulnerability is useful for preventing simultaneous failures, especially during extreme events and storm clusters. The vulnerability identification usually includes the development of a ranking. This is ongoing in many agencies, although limited to structural properties and not inclusive of environmental factors. Flood risk assessment for bridges is challenging. Vulnerability and exposure are dynamic entities, depending on temporal and spatial scales, as well as on a wide range of factors. Nevertheless, exposed elements can be detailed in inventories, which can support the monitoring of changes. In the UK, the method of data collection varies according to the responsible authority that manages the asset. Clear criteria for recording information are the starting point for harmonising data and producing useful inventories. A first taxonomy of bridges at flood risk is proposed in Section 5.2, as an effort to produce a protocolled method for gathering or creating uniform data over the country, based on the inter-operability between the databases of different owners. The Lancashire County Council (LCC) manages more than 1800 road bridges. Motorways and railways run from North to South as well as from East to West, therefore it can be considered a highly infrastructured area. It is in the top three counties for both absolute number of bridges and number of bridges per m2. Lancashire watercourses drain westwards from the Pennines into the Irish Sea, and include three major rivers and their tributaries. Lancashire is a flood-prone region where flooding caused by extreme rainfall has become a bigger issue over the last few years. During the 2009 floods, several major roads were flooded and impassable, and various bridges were closed due to concerns over structural integrity. These events led to the development of a bridge protection programme. The BMS of the LCC includes a bridge register that collects information regarding the bridge geometry, location, maintenance, road type, crossed obstacle and carried loads. Most of the Lancashire bridges are allowed to carry >40 t, and thus have no particular load restrictions; almost all the structures have a length <100 m. 88 structures are recognised as listed by Historic England, i.e. they are of special interest and must be preserved. For 30% and 36% of their structures, targeted and preventive maintenance is respectively planned. The LCC developed a preliminary risk rating for scour on the basis of the register, by weighting various factors with a score ranging from 5 to 30. The sum of all the scores gives the risk rating for a specific bridge. All structures with a score higher than a baseline value were classed as susceptible to scour; this baseline score was developed via expert opinion. A desktop study in 2010 reported 56 structures at major risk, considering scour only. Although the register is a remarkable example of local management, the LCC authorities underlined that the database includes manual tasks that are necessarily approximate. Furthermore, despite the considerable number of attributes, fundamental information for a complete flood risk assessment is missing. The structure of this database differs from that of Highways England or Network Rail. Finally, available flow and flood data are not integrated with the bridge register information. The register is in the process of being updated, and a common core format is sought by the authorities. Similarly to the SYNERG-Y work for seismic risk assessment, a detailed taxonomy is advanced for bridges prone to floods. The proposed bridge taxonomy is intended to be used by practitioners in the UK, where data collection is currently limited. It was developed following precise criteria: it should be relevant and comprehensive, including the fundamental features for evaluating bridge performance in the context of flood risk; intuitive and user-friendly, so that it can be handled by sub-contractors; and applicable to the UK context, where data collection is currently not advanced. The taxonomy was developed on the basis of data from the literature and manuals. 20 attributes have been considered for describing the characteristics of road and rail bridges in flood-prone areas; they are detailed in Table 2. Some attributes describe general features, while others refer to structural parameters and geometry. Topography, water depth and peak flow rate relate to the environmental conditions, and give information regarding the vulnerability of the bridge to floods. The flood design standard indicates whether any particular design has been considered. Past inundation and past maintenance are the attributes that keep a record of the history of the bridge. A taxonomy-based inventory allows multiple queries and searches over the recorded bridges. By including environmental parameters, the inventory is suitable for risk modelling and would enable the identification of the structures at higher risk. These analyses and results could be displayed via graphical tools for strategic mapping and planning. Furthermore, a taxonomy-based inventory would also facilitate collaboration and the growth of joint knowledge in the bridge community, allowing
comparison of risks across the country. This taxonomy would guarantee the inter-operability of the datasets of the various bridge owners and facilitate the development of a national bridge inventory. Further insights into, and the potential of, a national bridge inventory are discussed in Section 6. In an era of change and austerity, bridge owners need to know the risk level of their assets in order to prioritise resources. A national bridge inventory would support the identification of structures in need of mitigation measures, and thus the allocation of funding. If associated with flood forecasting models at national scale, the national bridge inventory could be used for probabilistic analysis, supporting the estimation of the likelihood, impact and location of severe events at country scale. If associated with water level gauges and flood forecasting models, it would also be an invaluable tool for developing early-warning systems and directing emergency operations. In addition to sharing best practice and joining up knowledge, analyses resulting from the data would also be comparable with countries overseas, for an evaluation of national standards and codes. At a later stage of its development, the national inventory should also be able to include photos, drawings, and various types of documents. Such an architecture will also be fundamental for recording the Big Data and "smart information" that are going to be produced in the coming decades. A complete bridge database is also a prerequisite for scientific and transportation progress, such as 3D mapping, BIM, Digital Twins, and real-time monitoring. A well-developed, advanced, comprehensive database produces data that is useful for society. The proposed taxonomy integrates structural data of bridges exposed to flood risk with environmental parameters. Cost-benefit analyses are sensitive to the exposure; the exposure may change over the asset lifetime, so this approach is particularly relevant in regions affected by climatic effects. This integration aims to transfer risk principles into current bridge management, shifting the focus of fund allocation from "defective bridges" to "vulnerable bridges", moving towards a new generation of BMSs. The presented taxonomy is UK-based and flood-focused, but could be adopted by other countries prone to other hazards. In fact, the taxonomy could be modified to accommodate local features, changes over time, and different environmental parameters. Moreover, it could also be updated to include future bridge design criteria and materials. UK bridges are owned by various agencies and managed with different methods; the proposed taxonomy could support dataset inter-operability, and ultimately lead to a coherent nation-wide database. The presented taxonomy is a preliminary proposal that should be refined through consultation and open discussion with bridge owners and experts. This discussion would enable the design of a system capable of accommodating agencies' preferences and needs. In order to facilitate progress towards more protocolled data collection, bridge-related authorities should formulate and advance a national strategy for the development of policies, functional for setting up the national bridge database. This would advise the Department for Transport in developing a roadmap to identify specific steps for drawing up and implementing protocols for data collection and dataset compilation. This would help in aligning current manual updates, defining a common regulation for data compilation and supporting uptake in practice. Further regulation would be needed to tackle issues over the ownership of the database, alongside its ongoing updating and maintenance. One possibility is that each agency would update and maintain its own data, following a shared architecture of rules, while the Government owns the whole database. All the agencies and governmental bodies involved should agree about its public accessibility. There is no doubt that the database development represents a substantial challenge for the Government and relevant agencies, considering the high number of bridges in the UK; however, it would play a crucial role in preserving public safety in future years, alongside supporting the allocation of resources and the review of existing standards. The next stage of this research will develop a pilot version of the national bridge database, considering several counties in England. This stage will investigate the availability of data and the issue of inter-operability between different datasets, while working more closely with county agencies and stakeholders. The pilot version of the database could be applied to: risk analysis for a set of flooding scenarios, by means of damage curves; economic appraisal of bridge disruption; and emergency planning. The current unavailability of high-quality data and the consequent lack of understanding of bridge performance jeopardise bridge safety, and hinder the ability to prioritise resources. The UK, like many other countries, should not take bridge safety for granted and should take precautionary preventative action to define a new programme for bridges at risk of floods. Within a risk-based approach, being aware of the exposure condition is fundamental to controlling and managing local and national infrastructure threatened by natural hazards. Currently, bridges are managed by a range of authorities whose in-house systems have different degrees of sophistication and different methods, preventing a clear and coherent picture from being drawn across the country. There is a consensus on advancing a consistent methodology, and a formal procedure, for conforming information, aiming at better analysis and assessment. In particular, the creation of a national bridge database would enable the meaningful identification and comparison of risks to bridges across the country, building a deep knowledge of the national bridge stock. This study presented a preliminary protocolled taxonomy for data collection on bridges, while illustrating the implications of a national bridge inventory in the UK. The national database could have the capability of being integrated with hydrological and transport models, providing advanced information for estimating failures and disruption. The paper set the scene for a unified bridge database, and advocated the engagement of national authorities in developing a roadmap of policies leading to it. The author is unable to provide access to data underpinning this study. Data was provided by Lancashire County Council under a data transfer agreement which prohibited data redistribution. | Bridges are crucial points of connection in the transport system, underpinning the economic vitality, social well-being and logistics of modern communities. Bridges also have strategic relevance, since they support access to emergency services (e.g. hospitals) and utilities (e.g. water supplies). Bridges are mostly exposed to natural hazards, in particular riverine bridges to flooding, and disruption could lead to widespread negative effects. Therefore, protecting bridges enhances the resilience of cities and communities. Currently, most countries are not able to identify bridges at higher risk of failure, due to the unavailability of high-quality data, the mixed ownership of the assets or the lack of a risk-based assessment. This paper introduces a risk-based approach to bridge management, alongside the gaps in current methodologies. Then, it presents a preliminary protocolled taxonomy for data collection on riverine bridges in flood-prone areas, while illustrating the implications of a national bridge inventory in the UK. This paper advocates the engagement of national authorities in developing a roadmap of policies leading to a unified bridge database functional for strategic risk assessment. |
562 | A graded tractographic parcellation of the temporal lobe | The temporal lobe is a complex region that supports multiple cognitive domains including language, semantic processing, memory, audition and vision. In order to understand its roles in these diverse cognitive functions, researchers have attempted to map the precise anatomical organisation within the temporal lobe, revealing an intricate functional architecture and regions of specialisation throughout the temporal cortex. For example, within the temporal lobe, antero-ventral and middle temporal areas have been found to be associated with semantic processing, while medial areas have long been implicated in episodic memory. One way to understand the functional organisation of a region is to understand its structural composition. Traditionally, the exploration and mapping of structural/functional subdivisions within the cortex has been based primarily on cytoarchitecture, but there has also been work on receptor distribution and other microarchitectural patterns. The laminar distribution of a given area, in conjunction with local microcircuitry and connectivity patterns, determines its functional processing capabilities. Indeed, the cortex does not exist as a detached entity, and regions such as the temporal lobe are highly interconnected both locally and to other areas throughout the brain via white matter fibre bundles. These structural connections are assumed to be a determinant of the functional capabilities of a cortical area, governing the nature and flow of information to and from an area, and can influence both its underlying neural architecture and its functioning. While there has been a great deal of research mapping function to structure within the temporal lobe and reconstructing the white matter fibre bundles that course through it, there has been relatively little exploration of the organising principles underlying connective similarity in the temporal lobe. Parcellation schemes identify core regions within the target area of interest where there is high intra-regional similarity in relation to some aspect of their anatomical or functional anatomy, but comparatively low similarity with areas outside the subregion. From these parcellations, researchers are able to delineate key regions of anatomical distinction and, by inference, areas of functional specialisation. However, there is evidence to suggest that such hard parcels may not always describe the true underlying nature of the data. Brodmann himself noted that "not all these regions are demarcated from each other by sharp borders but may undergo gradual transitions as, for example, in the temporal and parietal regions". In recent years, with the advent of modern imaging techniques, researchers have begun to explore different ways to parcellate the cortex based on its patterns of connectivity, described as connectivity-based parcellation. Three main types of algorithms have been used. The first two are k-means clustering and hierarchical clustering. The third approach utilises principles of spectral graph theory to perform the parcellation. The latter approach, often referred to as spectral reordering or the closely related Laplacian eigenmapping, allows for the investigation of the relationships between areas, whether these are graded or distinct, and hence is appropriate for investigating the organising principles of the temporal lobe. Our aims were twofold. First, we wanted to establish whether the data supported graded regions within the temporal lobe. Second, we wanted to explore how connectivity similarity varied across the cortex. In order to address these questions, the current study used spectral reordering, a data transformation technique, to explore the temporal cortex's connectivity. While not a clustering technique in the formal sense, the approach is well established in the literature and its results have been validated. We extended the method by projecting the reordered voxels into brain space. This allows one to elucidate the spatial pattern of connectivity across the cortex. We applied this technique to the temporal lobe and found that connectivity changes occur along a medial to lateral as well as an anteroventral to posterodorsal axis. The tracts that underlie these axes were then explored. We finally discuss the possible functional processes that these gradations underpin. Throughout this paper we have referred to the current approach as a 'graded' parcellation. It is important to note that this is not to imply a presupposition about the underlying anatomical structure, but to differentiate it from more traditional methods which impose the delineation of hard boundaries. A dataset containing structural and diffusion-weighted MR images from 24 healthy participants was used. All participants were right handed, as determined by the Edinburgh Handedness Inventory. The study was approved by the local ethics committee and all participants gave their informed consent. The images were acquired on a 3 T Philips Achieva scanner, using an 8 element SENSE head coil. Diffusion-weighted images were acquired with a pulsed gradient spin echo echo-planar sequence with TE=59 ms, TR ≈ 11884 ms (or using electrocardiography), Gmax=62 mT/m, half scan factor=0.679, 112×112 image matrix reconstructed to 128×128 using zero padding, reconstructed in-plane voxel resolution 1.875×1.875 mm2, slice thickness 2.1 mm, 60 contiguous slices, 61 non-collinear diffusion sensitization directions at b=1200 s/mm2, 1 at b=0, and SENSE acceleration factor=2.5. In order to correct susceptibility-related image distortions, two volumes were obtained for each diffusion gradient direction with inversed phase encode directions, with distortion correction carried out using the method described in Embleton et al. In order to obtain a qualitative indication of distortion correction accuracy, a co-localized T2-weighted turbo spin echo scan was obtained. A high resolution structural T1-weighted 3D turbo field echo inversion recovery scan was acquired in order to obtain high accuracy anatomical data on individual subjects, which were used to define individualised anatomical seed regions. A temporal lobe region of interest was created which included all voxels within the temporal lobe at the boundary between the grey matter and the white matter. To do this, each participant's skull-stripped T1-weighted image was co-registered to the distortion-corrected diffusion images using FSL's linear affine transformation. The interface between the grey and white matter of the co-registered T1 image was then obtained using FSL's FAST algorithm to produce a partial volume map of white matter, which was binarised with no threshold to ensure that the map overlapped the edge of the grey matter – the grey matter to white matter interface (GWI). The perimeter voxels of this map were extracted using an in-house MATLAB script. The GWI was then masked to include only those voxels within the temporal lobe. A temporal mask was first defined in MNI space using the MNI structural atlas within FSL. This mask was then normalised and co-registered to each participant's native diffusion space. In order to ensure full temporal lobe coverage, the original probabilistic temporal mask was leniently thresholded. This resulted in the region of interest encroaching into other lobes. In order to ensure that only the temporal lobe was used as a region of interest, the masks were all manually reviewed in native space. The corrected temporal mask was then used to mask the GWI to create the temporal GWI seed regions of interest used for tracking. Unconstrained probabilistic tractography was performed from every individual voxel in the temporal lobe GWI using the probabilistic index of connectivity algorithm, which sampled the voxel-wise diffusion probability distribution functions generated via the constrained spherical deconvolution and model-based residual bootstrapping method. During tracking, 10,000 streamlines were propagated from each seed voxel, with the step size for streamline propagation set to 0.5 mm. An exclusion mask was created and used to avoid path propagation through the grey matter and tracts anomalously jumping sulcal boundaries and gyri. The streamlines were set to stop if they hit the exclusion mask, if the path length of the streamline was greater than 500 mm, or if the curvature of the streamline was greater than 180°. For each individual seed voxel within the temporal lobe GWI, the number of streamlines originating from the seed which reached a given voxel in the brain was recorded, generating a tractographic connectivity profile for each temporal GWI seed voxel. The graded parcellation via spectral reordering was carried out based on the work by Johansen-Berg et al., as follows. The tractographic connectivity profiles of each individual participant's temporal GWI seed voxels were first normalised to a common group space using SPM's DARTEL. Each seed's connectivity profile was then thresholded at 0.05 percent of the maximum in order to remove noise, in keeping with a similar study by Devlin et al.
"The resulting 3D tractographic volumes for each seed voxel were downsampled by a factor of 2 due to the machine's memory constraints.The image was binarised and flattened into 1×m row vectors where the columns represented every point in the brain.The participant tractographic connectivity profiles in row vector form were concatenated into individual n×m matrices, where each row represented the connectivity profile of an individual temporal seed voxel with every other voxel in the brain.If a column contained all zero entries, it was removed to further reduce memory load.In order to determine which temporal lobe seed voxels shared similar connectivity profiles, a pairwise similarity algorithm was run on the temporal connectivity matrix to calculate the cosine of the angle between each pair of rows in the above matrix.This generated an n×n symmetric matrix with all of the temporal seeds plotted against each other, and values representing the degree of similarity between each pair of seed voxels in their patterns of tractographic connectivity.The spectral reordering algorithm was then applied to the matrix, permuting it and forcing seed voxels with strong similarity to be positioned close together and the supplementary materials from Johansen-Berg et al. for further details on the mathematical background of the algorithm).To obtain the final graded parcellation, the temporal seed voxels in the reordered matrix were projected back onto the brain, enabling reordered voxels within the similarity matrix to be visualised in reference to their anatomical location.To visually code the position of each seed voxel within the matrix a graded colour spectrum was used.This colour spectrum transitioned across the matrix from left to right such that, for example, those voxels which grouped together on the far left of the matrix were coloured blue and those clustered together on the far right coloured red."To perform the graded parcellation at the group level, each individual's tractographic connectivity profiles were mapped onto a group template GWI. "The group template GWI was created using the steps for creating the individual participant temporal lobe GWI as described above, but with the group averaged template brain produced by SPM's DARTEL. 
"Each voxel in every participant's temporal GWI was then mapped onto its nearest neighbour onto the group template.Since all individual GWIs had less voxels than the template GWI, some individual voxels were mapped onto more than one voxel on the template GWI.Once the mapping was complete, each voxel on the template GWI had 24 binarised tracts associated with it.These were averaged to produce a probabilistic tract across the 24 individuals.These tracts were then flattened to produce an n×m matrix, where each row represented the connectivity profile of an individual temporal seed voxel with every other voxel in the brain in the same way that the individuals’ data were processed.Every entry in this matrix, however, ranged from zero to one where zero denoted that no individual had a binarised tract visit that voxel and one denoted that all individuals had a binarised tract visit that voxel.The remainder of the procedure was carried out in the same manner as for the individual participant analyses.To determine whether the temporal cortex was characterised by gradations or whether clear cortical groupings were sufficient to describe the connectivity pattern, two methods were used.First, a qualitative visual inspection of the reordered matrix was performed to identify patterns and gradients within the spectrally reordered connectivity profiles.Second, a quantitative measure of gradation was employed which used information provided by the second smallest eigenvalue of the Laplacian.This eigenvalue will be close to zero if the similarity matrix contains groups of voxels which are strongly interrelated within but not between groups, while higher numbers are associated with greater levels of gradation between areas.In order to further explore the relationships between the graded parcellation and the underlying structural connectivity, tractographic connectivity profiles associated with different parts of the matrix were visualised.Each seed voxel in the matrix is associated with both an area of cortex and its underlying connectivity profile.As such, areas of the matrix that correspond to particular cortical regions of interest can be identified, and the connectivity profiles in those specific areas extracted, summed and visualised.In the current study, in line with the four main tracts of the temporal lobe, four tractographic connectivity profiles associated with different parts of the matrix were visualised.Two approaches for assessing the across-participant consistency were conducted."The first approach sought to assess how each individual participant's spectral reordering correlated with the group voxel orderings, using a leave-one-out cross-validation approach.To do this, a group dataset was created which included the connectivity profile data of every participant excluding one.A permutation vector was obtained for this group-minus-one dataset using the group analysis method described above, which defined how the group-minus-one data should be spectrally reordered.This was taken to be the predicted ordering for the individual test participant."The test participant's data were then permuted for a first time according to the predicted permutation vector, which reordered the participant's data to the group-minus-one predicted order. 
"This dataset was then reordered using the spectral reordering algorithm to obtain the test participant's individual ordering.This actual ordering was then correlated with the predicted ordering to provide a measure of consistency.This procedure was repeated for every participant.The second approach assessed which voxels showed the greatest variability across participants, in order to assess which areas of the ordering are most reliable.To do this, the group ordering for the complete group of participants was taken as the reference ordering."Each participant's data was then first permuted to the reference ordering and then spectrally reordered to obtain the actual ordering for each participant. "The absolute deviation of each voxel in a participant's ordering to that in the reference order was then computed.Finally, for each voxel, the mean absolute deviation across participants was calculated and plotted onto the brain.The results from the group level graded parcellation for both the left and right hemispheres are presented in Fig. 3.An examination of the individual level parcellations for each participant revealed a similar pattern of connectivity to that of the group level reordering across all 24 participants.An examination of the parcellations reveals that the structural connectivity of the temporal lobe is arranged along two main axes of organisation, one medial to lateral and the other from the anteroventral to posterodorsal temporal lobe.This organisational structure was observed for both the left and right hemispheres, which demonstrated very similar parcellation results.In addition to the two principal axes of organisation, further examination of the sorted matrix and its corresponding projection onto the cortex reveals areas of differentiation throughout the temporal lobe.The main cortical areas at the extreme ends of the matrix are associated with relatively well-defined and distinct core subregions.These two prominent subregions are located around the posterior superior and middle temporal gyrus, and in the parahippocampal gyrus.To explore the white matter architecture underlying these strong divisions in cortical connectivity, the connectivity profiles of voxels corresponding to the two regions were visualised.An examination of these underlying connections reveals distinct tracts which uniquely dominate the connectivity of the two core subregions, with the most ventromedial temporal regions dominated by connections coursing through the parahippocampal branch of the cingulum bundle, and the most posterodorsal temporal regions associated primarily with connections dominated by the arcuate fasciculus.In contrast to these two most differentiated regions, the remaining areas of the temporal cortex exhibit a pattern of more graded and transitional connective subdivisions.This is supported by examination of the second smallest eigenvalue, which was not close to zero in either hemisphere, indicating that the groupings were not well differentiated, and there were similarities between voxels across groups.This, along with a visual inspection of the matrix, points to a graded transition between the connective subregions.Looking at the transitions across the parcellation matrix, it can be seen that the ventral surface of the temporal cortex involving the more lateral anterior aspects of the temporal lobe along the fusiform gyrus and parts of the inferior temporal gyrus, demonstrates a pattern of graded yet spatially contiguous connectivity profiles.An examination of the underlying 
connective tracts within this region shows connections predominantly via the inferior longitudinal fasciculus.Moving further along the matrix, there appears to be a movement away from the more ventral regions, with the anterior superior temporal gyrus demonstrating a connectivity profile more similar to that of posterior superior and middle temporal areas.An examination of the underlying tracts in Fig. 3 reveals that this area is connected via the middle longitudinal fasciculus, a tract which runs along the anterior-posterior course of the superior temporal gyrus, and which overlaps with the arcuate fasciculus at its more posterior end.In an area of the matrix intermediary to these ventral and posterodorsal regions, there is a region involving the middle temporal gyrus/sulcus, which is associated with a mixed transitional profile, demonstrating connectivity similar to both the ventral surface and the anterior superior temporal gyrus.This transitional zone appears to be associated with fibre tracts which strongly overlap anatomically with the origin/termination areas of both the inferior and middle longitudinal fasciculi, driving the similarity between the connectivity profiles of these temporal subregions."The individual's orderings all highly correlated with their predicted orderings.All correlations were significant.This implies that the group average connective gradations predict the individual gradations very well.All voxels were highly consistent in their ordering with the most consistent voxels being in the medial temporal lobe and the dorso-lateral temporal lobes.This is expected as they are the most separable connectivity profiles.Therefore they will be consistently identified as distinct from other regions of the temporal lobe.The current study utilised a data-driven technique to produce a graded parcellation of temporal lobe based on its varying patterns of underlying structural connectivity.The parcellation approach allowed for the exploration of not only regions of distinct connectivity, but also the relationship between these subregions across the temporal lobe as a whole.The results revealed two key organisational principles underlying the connective architecture of this structurally and functionally complex region.First, the overarching patterns of connectivity across the temporal lobe are organised along two key structural axes.The first axis is characterised by a primary medial to lateral change of connectivity across the temporal lobe, driven primarily by the connective dominance of the cingulum bundle at the medial extreme and the arcuate fasciculus at the dorsolateral end.The second axis arose along an anteroventral to posterodorsal orientation, which reflected the dominance of connections via the inferior longitudinal fasciculus and temporo-frontal fibres for anterior ventral regions, and the arcuate fasciculus at the most posterior and dorsal end of the temporal lobe.The second key principle governing the connective organisation of the temporal lobe is the overall graded and transitional nature of the patterns and changes in connectivity.Early neuroanatomists attempting to delineate patterns of temporal subdivisions based on cytoarchitecture, found that while dissociable regions do exist, other boundaries are not clearly delimited but instead were characterised by a pattern of blending between neighbouring regions.Recent work has also focused on the importance of gradients within both structural and functional neural architecture.The current study reiterates these 
findings with respect to regions' connective architecture. We find that the temporal cortex is comprised of a number of core regions underpinned by unique and dissociable structural connectivity, each associated with specific underlying fibre tracts. However, the boundaries between these subregions are not sharp, and instead demonstrate transitional zones of graded similarity reflecting the influence and overlap of shared connective pathways. In the current study, two distinct core regions were identified, one involving the posterior superior and middle temporal gyri and the other the parahippocampal gyrus. These two regions reflected the most connectively disparate areas within the temporal lobe and, correspondingly, were associated with disparate and non-overlapping fibre tracts, namely the arcuate fasciculus and the cingulum bundle. Transitioning between these two extremes are subregions involving the ventral temporal surface, middle temporal gyral/sulcal areas, and the anterior superior temporal gyrus. While these less distinct subregions are also associated with differential white matter fibre bundles, they appear to comprise less spatially distinct pathways with strongly overlapping origin/termination areas. Indeed, the close similarity with neighbouring parahippocampal ventral areas may reflect the existence of abundant U-shaped fibres that connect adjacent areas of cortex within these ventral temporal regions. This is consistent with the finding that individual voxels in the brain may be connected to distant areas by more than one fibre bundle. The difference between the distinct core subregions and these transitional areas may also reflect the relative dominance of inter- versus intra-regional connections, with those areas demonstrating more spatially contiguous graded connectivity profiles found to demonstrate high within-lobe short-range temporal interconnectivity. The current results elucidated the principal axes along which changes in connectivity across the temporal lobe occur. Importantly, while other connectivity-based parcellation approaches may indicate different subregions in the medial, ventral and dorsal temporal lobe, the current approach was able to provide additional information about the structural relationships between these regions. The axes of structural organisation identified in the current study can be seen to mirror major functional subdivisions found within the temporal lobe. The medial to lateral shift in connectivity corresponds strongly to the functional divide between the medial temporal lobe's episodic memory, emotion and spatial navigation functions and the diverse functions of the lateral temporal lobe, including linguistic, semantic, auditory and visual processing. Episodic memory has long been associated with core regions within the medial temporal lobe and related connected areas, including the hippocampal and parahippocampal gyri and the posterior cingulate gyrus. These regions share strong connections through the cingulum bundle, which has been shown to be associated with episodic memory performance and impairment. In contrast, the lateral and ventral temporal cortex has been associated with a wide range of cognitive tasks. In the left hemisphere, language functioning within the temporal cortex has been found to be more distributed, but with a strong dominance of more lateral structures including Heschl's gyrus and the superior, middle and inferior temporal gyri, with key connectivity via the arcuate fasciculus. Bilaterally, it also underpins audition, vision
and semantic processing.The medial-lateral shift found in the current study appears to map onto the anatomical and connective patterns found between memory and other cognitive functions within the temporal lobe.The anteroventral-posterodorsal axis of connectivity found predominantly along the lateral surface also seems to correspond to key functional subdivisions within the temporal lobe.In the left hemisphere, it most clearly mirrors the division between phonological and semantic processing, or the dorsal and ventral ‘language’ pathways.While phonological processing within the temporal lobe has been found to be located within the more dorsal and posterior regions, particularly those connected to the dorsal language pathway via the arcuate fasciculus, semantic processing has been particularly associated with more anterior and ventral areas including the temporal pole, anterior fusiform and inferior temporal gyri, commonly implicated within the ventral ‘language’ pathway.As such, the principal axis of organisation along the lateral surface may reflect connectivity changes associated with the relative functional specialisation of the dorsal and ventral pathways.In support of this, studies have found that the middle longitudinal fasciculus, particularly associated with the mid anterior temporal subregion in the current parcellation, is implicated in both semantic and phonological processing networks.A similar pattern of graded connectivity was also observed for the right hemisphere, where it would seem less likely that such divisions were associated with linguistic functioning.However, it is important to note that in models of dorsal-ventral stream language processing, the ventral pathway is much more bilaterally organised than the dorsal pathway.In contrast, the dorsal pathway in the right hemisphere is more commonly associated with visuospatial processing, predominantly involving fronto-parietal areas, but with some evidence of a role of posterior temporal regions, particularly those around the temporoparietal junction.As such, the axis of organisation along the lateral surface seen in the right hemisphere may reflect the division between ventralsemantic processes and dorsal spatial processes.Additionally, in relation to linguistic functioning in the right hemisphere, the processing of speech prosody has generally been found to be right-lateralised, involving posterior temporal areas including the superior and middle temporal gyri.Interestingly, there is also evidence that this prosody network may be organised along a dorsal-ventral division, with an auditory-ventral pathway along the superior temporal lobe, and an auditory-motor dorsal pathway involving posterior temporal and inferior frontal/premotor areas.A final finding of note within the current study was the observation that unlike the more phonological- and memory-based temporal regions, those implicated in semantic processing were associated with high levels of graded connectivity between the areas involved.Previous proposals have suggested that the anterior temporal lobe plays a crucial role in conceptual representations.More recent functional imaging and connectivity studies have found that the anterior temporal function may be more graded in nature with partial specialisations arising from the differential patterns of connectivity.Thus, the further ventro-anteriorly along the temporal lobe processing moves, the less specific the areas become to a particular sensory modality, instead becoming increasingly transmodal.The 
transitional gradations along the anterior-posterior extent of the temporal cortex observed in the current connectivity parcellation study are consistent with these graded shifts and convergence of information along the temporal lobe."Both dissection and tractography suffer from a heavy dependence on the researcher's anatomical knowledge in order to gain meaningful results. "The approach used in this paper draws its strength from being data-driven and hence, has very little reliance on the user's potential prior biases.A common criticism of tractography is that it generates many false positive as well as false negative connections."While this is true for all tractography studies, the impact of this limitation is mitigated in tractographic parcellation since accurate tracing is not essential to look for similarities and differences in a particular voxel's tractographic fingerprint.Clearly, errors in the quality of the underlying data can influence the parcellation profile but in a potentially less dramatic way than false positive and negative tracts affect traditional tractography experiments.Related to this, it could also be argued that the gradations shown in the current study are simply an artefact of the imprecision of tractography and an inability of the method to demarcate clear boundaries.While it is possible that a degree of the gradation found may be due to error in the tracking process, we believe this error is highly unlikely to be the sole cause of the graded boundaries observed since the results match with well-known fibre bundles and functional subregions.Histological studies of fibre pathway terminations have observed patterns of interdigitating termination points for many regions throughout the brain, which, alongside the cytoarchitectonic evidence from Brodmann and his contemporaries, suggest that the graded nature of the areal boundaries identified in the current study underlie a fundamental organisational principle of cortical architecture.Connectivity based cortical parcellations have, to date, focused primarily on the delineation and segmentation of clear and distinct independent regions, that is, hard parcellations.However, from the early days of parcellation it was understood that while some areas of the brain were in fact distinct regions, other zones showed more graded differences to one another.Indeed, more recent studies have provided additional support for the potential graded nature of anatomical boundaries, finding cytoarchitectural and connective gradations in areas such as the insula.The results of the current study also emphasise this finding, with an examination of the second smallest eigenvalue, indicating that the current connectivity data did not form well-defined and well-clustered elements.This is not to negate the idea that sharp boundaries do exist within the brain, or that gradation may be the only way in which the assumption of distinct anatomical homogeneity between parcellated regions may be violated.Heterogeneity may occur within a region when defined by one anatomical architecture or method but not others.Additionally, interdigiation of connections rather than gradation is known to occur in some relatively structurally homologous regions, such as motor-cortical projections within dorsal striatal sites.What the current results regarding the connectivity of the temporal lobe stress is that the strong divisions between cortical areas delineated by classic parcellation approaches may not fully reflect the true underlying nature of the cortex, and 
new approaches which enable these important architectural characteristics to be revealed are needed.The current study implemented such an approach using spectral reordering.There have been several papers in the recent literature that have investigated different methods of connectivity-based parcellation of the cortex.Spectral reordering was first introduced by Johansen-Berg et al.It is a technique derived from spectral graph theory that uses the Fiedler vector to reorder the data at hand in such a way that points that are similar to one another are forced together within the ordering.A focus on reordering has the advantage of being able to probe a dataset without first assuming that, the data fall into neat clusters and second, without the need for determining an a priori number of clusters.This may be an advantage when not enough knowledge is known about the underlying architecture of a given region to inform clustering approaches, when examining a region where there is no obvious number of clusters that the cortex can be parcellated into, or when the region displays gradations in connectivity.Despite these advantages, spectral reordering may only show the most predominant connectivity gradients in the cortex and may miss finer-grained details in localised zones.It is hence important to state that one must not over-interpret the details of the parcellation but focus primarily on the overall pattern of connectivity it produces.Additionally, the technique may not be able to delineate some architectural structures, and may fail to elucidate some complex regional organisations such as interdigitated islets embedded within a region.It is also important to note the data reduction limitations of the approach that visualises a three dimensional structure in only one dimension.However, despite its limitations, spectral reordering is a useful technique to elucidate the main gradations in structural connectivity of an area.There is clear possibility for additional future work, such as the embedding of the graph into a three dimensional plane and visualisation of more finessed gradations found.This paper explored an approach to extract the major cortical gradient in the temporal lobe based on its patterns of structural connectivity.Two key results have been described in this paper.First, the connective organisation of the temporal lobe is graded and transitional.While core regions with unique connectivity exist, the boundaries between these sub-regions may not always be sharp.They demonstrate zones of graded connectivity reflecting the influence and overlap of shared connective pathways.Second, the overarching patterns of connectivity across the temporal lobe are organised along two key structural axes: medial to lateral as well as anteroventral to posterodorsal.The structural gradients mirror known functional findings in the literature.It is hoped that this work will serve as a reminder of the caveat that Brodmann stressed in his landmark work of 1909.Although cortical regions differ from one another, these differences are not as distinct as the reading of modern neuroscience literature may lead one to believe.In the midst of an increasing number of studies attempting to ‘hard parcellate’ the brain, we must remember that the true underlying structure of our data may often be graded. | The temporal lobe has been implicated in multiple cognitive domains through lesion studies as well as cognitive neuroimaging research. 
There has recently been increased interest in the structural and connective architecture that underlies these functions. However, there has not yet been a comprehensive exploration of the patterns of connectivity that appear across the temporal lobe. This article uses a data-driven, spectral reordering approach in order to understand the general axes of structural connectivity within the temporal lobe. Two important findings emerge from the study. Firstly, the temporal lobe's overarching patterns of connectivity are organised along two key structural axes: medial to lateral and anteroventral to posterodorsal, mirroring findings in the functional literature. Secondly, the connective organisation of the temporal lobe is graded and transitional; this is reminiscent of the original work of 19th-century neuroanatomists, who posited the existence of some regions which transitioned between one another in a graded fashion. While regions with unique connectivity exist, the boundaries between these are not always sharp. Instead, there are zones of graded connectivity reflecting the influence and overlap of shared connectivity. |
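For readers who wish to experiment with the kind of reordering procedure described in this article, the following is a minimal sketch, not the authors' code, of spectral reordering via the Fiedler vector together with a simplified leave-one-out consistency check. The toy data, variable names and the use of Spearman rank correlation on synthetic connectivity profiles are illustrative assumptions; the study's actual pipeline operated on probabilistic tractography matrices and included a permutation-to-reference step that is omitted here.

```python
# Minimal sketch of spectral reordering and a leave-one-out consistency
# check, loosely following the procedure described above. Toy data and
# names are illustrative assumptions, not the authors' implementation.
import numpy as np
from scipy.stats import spearmanr


def spectral_reorder(similarity):
    """Order voxels by the Fiedler vector of the graph Laplacian built
    from a symmetric voxel-by-voxel similarity (cross-correlation) matrix."""
    W = (similarity + similarity.T) / 2.0          # enforce symmetry
    W = W - W.min()                                # non-negative edge weights
    L = np.diag(W.sum(axis=1)) - W                 # unnormalised graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]                        # vector for 2nd smallest eigenvalue
    return np.argsort(fiedler), eigvals[1]


# Toy data: per-participant connectivity profiles (participants x voxels x targets)
rng = np.random.default_rng(0)
n_subj, n_vox, n_targets = 24, 60, 200
profiles = rng.random((n_subj, n_vox, n_targets))

consistencies = []
for s in range(n_subj):
    # group matrix built from all participants except the test participant
    group = profiles[np.arange(n_subj) != s].mean(axis=0)
    pred_order, _ = spectral_reorder(np.corrcoef(group))
    actual_order, _ = spectral_reorder(np.corrcoef(profiles[s]))
    # compare the rank each voxel receives in the predicted vs actual ordering
    pred_rank = np.argsort(pred_order)
    actual_rank = np.argsort(actual_order)
    rho, _ = spearmanr(pred_rank, actual_rank)
    consistencies.append(abs(rho))                 # Fiedler vector sign is arbitrary

print("mean leave-one-out consistency:", np.mean(consistencies))
```

Because the sign of the Fiedler vector is arbitrary, the sketch takes the absolute correlation; a near-zero second eigenvalue would indicate well-separated clusters, whereas larger values are consistent with the graded organisation reported above.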
563 | Uncovering the challenges of domestic energy access in the context of weather and climate extremes in Somalia | Somalia has been devastated by a 20-year long civil war in which the population has suffered from a near-total absence of a functioning national state, frequent natural hazards and a degraded natural resource base. The country's pastoralists and agro-pastoralists are highly vulnerable to weather and climate extremes. For example, a functional safety net in times of food scarcity is to sell livestock in order to purchase food and grains from smallholder communities. This is widely practiced by pastoral communities dependent upon rain-fed agriculture in Somalia. Extreme weather patterns potentially remove this coping mechanism, worsening communities' capacity to absorb shocks, as drought leads both to crop failure and to a reduced number of livestock, which deepens poverty, loss of assets, loss of livelihood opportunities and the threat of imminent famine. Somalia's National Adaptation Programme of Action identifies four major climate hazards based on extensive consultations with communities throughout the country: drought, extreme flooding events, increasing temperatures and strong winds. In Somalia, drought negatively impacts livelihoods, decreases agricultural and livestock productivity and has forced people to migrate to urban areas or IDP camps, while causing a shift in livelihood strategies from agro-pastoralism to unsustainable short-term income-generating activities such as charcoal production. Extreme flooding events in the country decrease the productivity of agricultural land due to the waterlogging of soils, leading to loss of fertile topsoil and deforestation. High temperatures have led to failed crop harvests due to increased evapotranspiration rates, reduced availability of water and increased outbreaks of pests. Strong winds have also increased the loss of fertile topsoil through soil erosion, which in turn affects land productivity. As natural resources become increasingly scarce, the conflicts over natural resource ownership and utilization that arise over time have further exacerbated internal conflicts and the displacement of people. Furthermore, unsustainable extraction practices intensify negative impacts on the existing natural resource base, already weakened by extreme weather patterns. Since traditional biomass, such as firewood and charcoal, accounts for 82% of Somalia's total energy consumption, there are important linkages between the occurrence of natural hazard events and the availability of energy for the majority of the population. Rural women, particularly among displaced populations, face tremendous challenges when collecting and using woodfuels. These encompass health, nutrition, safety and protection risks. Furthermore, the production of charcoal is a risky and unsustainable livelihood activity practiced primarily by the poorest and most marginalized parts of the Somali population. In order to address this precarious situation, an important initial step is to gain an improved understanding of the context-specific challenges in areas where the impacts of climate change and conflict converge, such as the IDP camps and host communities found in Somaliland and South-Central Somalia. The objective of this paper is to understand the risks and challenges faced by vulnerable populations exposed to weather and climate extremes and conflicts in Somalia, in particular women and IDPs who collect and use traditional biomass to satisfy domestic energy
needs.The second objective is to provide a set of recommendations to policymakers, development organizations and humanitarian actors on ways to address the specific challenges presented.The theoretical underpinnings for the analysis in this paper include the framing of energy as a multi-sectoral issue, which transcends its mere use as a fuel for cooking, processing and other fuel utilization activities.The analysis is also built on recent discussions about the links between natural resource depletion and the characteristics of fragile states which tends to exacerbate unsustainable utilization practices that worsen the fragile endowments of natural capital, already scarce due to the impact of extreme weather and climatic events.Fig. 1 provides an overview of cascading impacts of weather and climate extremes and unsustainable use of natural resources.Weather and climate extremes have severe impacts on arid and semi-arid lands in the country, which are already fragile, as well as on local communities who are dependent on natural resources for their livelihoods.With a high degree of spatial and temporal variability of rainfall determined by the North and South movement of Inter-Tropical Convergence Zone, there are two distinct rainfall seasons known as the “Gu” from mid-March to June that passes though the North, and the “Deyr” from mid-September to November that passes through the South.Since the variability of these rainy seasons is detrimental to all aspects of life in Somalia changes in temperature and precipitation and occurrence of extreme weather events will have an impact on species survival, forest structure and prevalence of pest and diseases.It is expected that with increasing temperature, climate related hazards will be more frequent and intense.Frequent droughts and floods have had disastrous impacts on communities in Somalia.Droughts have occurred in 1964, 1969, 1974, 1987, 1988, 2000, 2001, 2004, 2008 and 2011 while major flooding events occurred in 1997, 2000 and 2006.Frequent weather and climate extremes is also one of the causes of conflict over natural resources in which customary law cannot be relied upon anymore to settle the growing number of conflicts that are becoming increasingly complex and virulent.Unresolved land-based conflicts due in part from competition over warranted claims to resource access and usage, have shown to have weakened the customary management systems and heightened exploitation of natural resources, in a pattern that is on the verge of worsening the existing resource scarcity and humanitarian crisis.The situation is reducing communities’ resilience and adaptation in the face of extreme climate events."Dehérez argues that the country's history of nationalizing land has allowed the state to have more control and power to share land among influential clan members thus constraining acquisition processes that have only benefited a few individuals rather than larger communities.This has disrupted centuries of traditional order between clans and has increased armed clashes while marginalizing vulnerable groups such as IDPs.The situation has also given rise to illicit and unsustainable production of charcoal which is exponentially driven by profit incentives, disregarding the current fragile state of the environment.The nationalization of land increased the frequency of land grabs and forced evictions by war lords and local jihadists, who entered the lucrative charcoal business in order to finance their activities, thus perpetuating terrorism and forced 
displacement of people."Charcoal is increasing the rate of deforestation and land degradation, crippling the landscape's ability to absorb or withstand natural hazards, thus aggravating the impact of disasters such as floods, sand storms and droughts.Energy access is key to ensuring food security, particularly in humanitarian settings driven by frequent occurrence of weather and climate extremes.Energy is indirectly linked to food security in humanitarian settings since the large-scale displacement and resettlement of people, whether due to weather and climate extremes or conflicts, often causes significant deforestation and forest degradation in areas surrounding displacement camps due to the demand for wood energy.The combined demand for fuel from both displaced and host populations often causes unchecked cutting of fuelwood and the production of charcoal which puts an increased strain on the local environment and can contribute to soil erosion, desertification, increased exposure to natural hazards such as droughts and floods and to the loss of agricultural livelihoods.These factors can have a long-term impact on the availability of food as a result of the disruption of agricultural livelihoods and food production.Furthermore, deforestation and forest degradation can also have a significant effect on the availability of wild foods and other non-timber forest products on which many crisis-affected people depend.Energy is also a gender-related issue since women are nearly always tasked with the collection of firewood and cooking.Women often spend many hours walking long distances to collect firewood during which they may be exposed to gender-based violence on top of a tremendous work burden which takes time away from child care, income-generating activities and leisure.Livelihoods in crisis settings, including forced displacement contexts, are often reliant on woodfuel-intensive activities such as the production of charcoal and selling of woodfuel.These risks are all highly present in the crisis-affected areas of Somalia.Data for this paper was collected in three districts: Burco, Owdweyne and Doolow.The first two districts are located in the Togdheer Region of Somaliland, while the latter is situated in the Gedo Region of South Central Somalia.Both regions are classified as a tropical and sub-tropical desert climate area according to the Köppen climate classification1, with average precipitation for the year between 193 mm and 281 mm.Weather and climate extremes have a profound impact on local communities in the Arid and Semi-Arid Lands who are dependent on natural resources for their livelihoods and food security.The primary climate extreme is drought, resulting from poor or insufficient rainfall which negatively impacts livelihoods, decreases agricultural and livestock productivity, and forces people to migrate on a seasonal or permanent basis.This situation may exacerbate conflicts between various social groups, such as settled farmers and livestock herders, over competing use of resources.Drought has also led to an increased reliance on charcoal production as an important source of income.The second major threat is extreme flooding events which cause decreased productivity of agricultural land.These floods, mostly affecting areas located in gorges, come as a result of rivers overflowing their banks.The districts of Burco and Owdweyne are in vicinity of the Togdheer River making these areas prone to water degradation.There is a pressing need to ensure sustainable management of natural 
resources in Somalia."The country's natural resource base has been degraded due to over-exploitation for personal or clan-based economic gains, which has progressively worsened since the country's civil war as communities and clans compete for access to grazing lands, watering holes and fishery resources.The removal of stands of trees has increased over the years and is no longer exclusive to populated areas, while overfishing is increasing as both offshore- and near shore marine species are selectively targeted.The resulting effects have exacerbated desertification, soil erosion and the depletion of water supplies.This situation affects the vast majority of the population, particularly in rural areas where people depend upon their surrounding environment and natural resources for their livelihoods.The vegetation in Somalia is predominantly dry deciduous bushland and thicket, which is dominated by species of Acacia and Commiphora.These forest and woodland areas have been significantly impacted by recurrent droughts, unregulated tree cutting and the presence of lawlessness and chaos largely driven by a relatively lucrative charcoal production venture.Irrespective of the continuing export of charcoal, existing resources are hardly able to meet the local demand for fuelwood, charcoal, building materials, feed, furniture and other uses.According to FAO, forest cover declined from 9,050,000 ha in 1980 to 6,363,501 ha in 2015.In addition, Prosopis spp. has dominated large areas, particularly along the coast.Figs. 2 and 3 show the types and causes of land degradation in the country.Doolow district is affected by reduced vegetation cover, while Burco and Owdweyne districts are faced with extensive extraction of fuel wood, timber and other construction material.This paper is based on the collection and analysis of primary data in Somalia during an FAO mission to support the Resilience Programme and the subsequent analysis of secondary data.An initial review of literature provided the basis for developing qualitative and quantitative field assessment tools.Primary data was then collected in various communities including IDP camps, host communities, rural settings and urban settings in Hargheisa District, Somaliland and in Doolow District, Gedo Region in South Central Somalia.Data on specific energy needs and related challenges was collected through the use of a mixed methods approach.The field methods used included a short, structured questionnaire and two types of Participatory Rural Appraisal techniques: Focus Group Discussions and Venn diagrams.In order to gain an in-depth understanding of the specific energy-related challenges faced by households living in both IDP camps and host communities, the qualitative PRA tools were used in order to collect information about the linkages between energy needs and a range of factors, including the depletion of natural resources, gender-based violence, tension and conflict over the use of forest resources, cooking practices, nutritional, health, sources of income, use of cooking technologies and the perceived presence of NGOs and other stakeholders.Focus Group Discussions focused on questions that helped understand the circumstances behind displacement caused by either conflict or climate hazards, and provided information on the nutritional status of households, coping strategies linked to the lack of cooking fuel, as well as current cooking stove technologies.To understand the conflicts that may arise when sharing a common resource base, respondents were 
first asked about the relationship between their community and other communities and any challenges they face.Respondents were also asked about protection risks related to the collection of fuelwood.The aim of the Venn diagram was to map the presence of external organizations, such as government agencies and NGOs, and how these relate to communities in terms of their role in enabling or constraining access to fuel.For example, using a flip chart paper, a group member was assigned to draw diagram circles with different sizes and distances in relation to each other, which depicted the perceptions the community had on the strength and influence of organizations.Quantitative primary data was collected using a short and structured questionnaire.The questionnaire covered the following topics: household information, livelihoods, income sources, sources of fuel, charcoal production, fuelwood consumption, collection of fuelwood from forest areas, cooking technologies and wood fuel provision/availability, including tree planting activities.The questionnaire survey was carried out over the course of 15 days in Doolow District in South Central Somalia.A total of 74 households were interviewed.Figs. 4 and 5 show the sampling sites chosen in Doolow, Burco and Owdweyne districts.The sites were selected based on where other activities under the FAO Somalia Resilience Programme were being implemented.The data collected was analysed and, following an initial review of key emerging issues, categorized according to the most relevant sectors and topics linked to the energy-related challenges highlighted by key informants and respondents.In Somalia, it is common for disputes to be resolved using customary laws and traditions which form part of common land use systems involving elders in arbitration and mediation practices.This is a functional arrangement which has for centuries been utilized to minimize tensions among the various clans.However, unresolved land-based conflicts due in part from competition over claims to resource access and usage, have been shown to weaken customary management systems and to increase exploitation of natural resources, in a pattern that is worsening the existing resource scarcity and exacerbating the humanitarian crisis.Table 1 presents information provided by respondents on the main reason for their displacement, and challenges they face in their current settlements.In addition to the current unregulated exploitation of resources - mainly for illicit charcoal production for export purposes, the traditional system of common land use rights has weakened over the years to the detriment of both pastoralists and farmers living in fragile landscapes and ecosystems.Although respondents in IDP camps claim to feel safe within the confinements of their new settlements, their presence in the area has further heightened pressure on woody vegetation surrounding the camps.The production and export of charcoal from Somalia has been in practice since pre-colonial times to meet local and regional energy requirements and to provide livelihood opportunities for rural households.However, the last two decades have witnessed a stark increase in the exploitation of forest and range resources for charcoal production.The current patterns of producing, trading and using charcoal are highly unsustainable, which can be attributed to a range of factors, including the breakdown of state institutions in 1991, protracted conflict and illegal imports of huge quantities of Somali charcoal by neighbouring 
countries in the region.The increasing realization that the charcoal trade in Somalia was becoming a threat to the security and stability of the country, as well as an obstacle in the peace process, prompted the UN Security Council to issue a ban on the export of charcoal from Somalia.While the export of charcoal has continued there has been an overall reduction in the export of charcoal from southern Somalia and a reduction in the revenue gained by Al-Shabaab from the trade.As extreme weather events and climate negatively affect the fragile natural resource base, reduced agricultural and livestock productivity tend to exacerbate human conflict over scarce resources.Climate change impacts, such as drought, can act as threat multipliers that negatively impact both natural resource availability and food security, which in turn may lead to migration, resource competition and conflict.This scenario has played out in Somalia where human conflict has also disrupted traditional clan structures, rendering them unable to solve land disputes effectively and prevent the interruption of seasonal traditional migration routes used by herders and farmers as an adaptation and coping mechanism.The systematic displacement of people has enabled a profit-driven charcoal venture to flourish, which has led to extensive exploitation of wood/vegetative cover resulting in further land degradation and energy access problems.With lack of enforcement of environmental policies and legislation, the vegetative cover is on the verge of being depleted without having a chance to rehabilitate and regenerate itself.Somalian IDPs are now forced to settle in remote areas where they face hardships in terms of lack of vegetation cover required for their household wood energy needs.Vegetation cover is still utilized as an important source of energy for the preparation of food to ensure optimal nutritional intake which also helps to prevent malnutrition, contamination and diseases.This section will showcase how energy scarcity resulting from natural capital depletion leads to the adoption of negative coping mechanisms by households.Respondents in the sites visited reported on the types of fuels and cooking technologies they use most frequently.Table 2 shows the results of the data collection.In these locations, the 3 stone fire is by far the most commonly used cooking method and fuelwood is the predominant fuel used for cooking.Cooking on 3 stone fires using fuelwood is associated with a myriad of environmental, protection, health and safety risks which will be explored further in the following sections.In addition to the household level, public institutions, e.g. 
schools and hospitals, also depend heavily on fuelwood for their cooking needs.When fuel is not readily available, this can have a considerable impact on food security and nutrition.The main food security risks associated with a lack of cooking fuel include the undercooking of food, which increases the risk of foodborne illnesses, the skipping of meals which causes malnutrition especially in children, the insufficient boiling of water which may result in the consumption of contaminated water and poorly prepared food as well as selling or trading food for the purpose of obtaining cooking fuel, which leaves vulnerable households with less food.The primary data collected in Somalia confirms the presence of these risks.For example, IDPs in the Ahaya and Kansahley camps reported that the acute shortage of cooking fuel causes their food to be undercooked.In both Haraf and Abaaso villages, respondents reported that lack of water and fuelwood causes food to be undercooked.Table 3 presents information on the main types of food cooked in the sites visited.In the IDP camps visited in Somaliland, households rely predominantly on cooking “Laxoox”- a sourdough-risen flatbread with a spongy texture which is traditionally made out of teff flour - and rice while in villages in Somaliland respondents, in addition to these food items, also consume pasta.In the Kabasa IDP camp the main food cooked is rice and this is consumed at lunch time as the only meal of the day.Respondents in Kabasa also noted that the lack of fuel increases the time needed to cook significantly.Respondents in the Ahaya IDP camp reported that there are very few “coping fuels” used in times of hardship.However, a number of coping strategies were mentioned by respondents such as using wooden fencing, small twigs found nearby and branches from Prosopis shrubs as fuel.Prosopis juliflora is a shrub native to Mexico, South America and the Caribbean which has become established as an invasive weed in Africa.However, these mainly constitute “last resort” strategies.Respondents in all sites stated that they did not practice communal cooking in order to reduce fuelwood consumption.The reason given was that in some cases the practice is poorly aligned with cultural norms and practices.Based on these results it is clear that interventions should address both the supply of fuel for cooking and the technologies needed to ensure that food is cooked properly and that nutrition- and health risks related to cooking are reduced.The promotion of fuel-efficient stoves and a sustainable supply of cooking fuel can contribute to ensuring that food is cooked properly, meals are not skipped, people maintain diverse and nutritious diets rather than switching to less nutritious foods.The fuel needs of an increasing population can become a key driver of environmental degradation, as the collection of fuelwood puts pressure on scarce wood resources.As previously indicated, the negative consequences of weather and climate extremes, including frequent droughts, erratic rainfall and floods, can exacerbate environmental degradation.This also leads women and children to travel ever greater distances to obtain the necessary fuelwood they need for cooking meals for their families.Gender-specific security and protection concerns disproportionally impact IDPs and urban migrants in Somalia.Both in these settings and more generally in rural areas, the task of collecting fuelwood primarily falls upon women and children.When women walk very long distances to gather firewood, 
they are often exposed to gender-based violence, harassment, assault and rape.Wildlife, including venomous snakes, also pose a serious threat to collectors of fuelwood which could result in loss of life.Fuelwood collection is an arduous and time consuming task that reduces time for women to engage in other productive activities or child/family care.Table 4 presents data collected from various locations in Somaliland and Doolow, on the time spent collecting fuelwood and associated risks and coping strategies.The primary data collected shows that women and girls are tasked with the collection of fuelwood.The frequency of collection trips ranges from every 2 days to every 5 days while the time spent collecting ranges from 3 h in the Ahaya IDP camp to a full day in the Kabasa IDP camp.Respondents reported that the collection of fuelwood takes time away from other activities and causes exhaustion, thirst, hunger, accidents as well as exposing women to attacks from wild animals and psychologically unstable men.Women also reported having to sit continuously to watch and manage the fire while cooking."Improving women's access to cleaner and more fuel-efficient cooking technologies can partly address many of these challenges.However, efforts should also focus on identifying ways in which to provide a sustainable source of fuel closer to camps, settlements and communities.Efforts to reduce the reliance of women on woodfuel intensive livelihoods such as selling firewood and charcoal by promoting appropriate and context-specific alternative livelihood options should also be strengthened.The overexploitation of land and excessive harvesting of trees for charcoal and other commodities have led to environmental degradation and increasing desertification which reduces the availability of fertile land, a key requirement for a primarily pastoral-based economy.Consequently, the unchecked extraction of indigenous Acacia trees for the production of charcoal has been the cause of conflict between pastoralists and charcoal producers.This is because Acacia trees serve important social and environmental functions including their use as shade for people and livestock, the provision of livestock fodder and as landmarks and windbreaks.Primary data collected from respondents in the IDP camps visited confirmed that the collection of firewood has caused tension between displaced households and host communities.In the Kansahley IDP camp for example, respondents mentioned that violent clashes between IDPs and both pastoralist communities and farmers have taken place over the issue of fuelwood collection.Men who accompany women to collect firewood are beaten while in general women are let go.Respondents in the Kabasa IDP camp in Doolow reported that they have good relations with host communities but are chased away when they cut live firewood.When attacks by the host community do occur they report it to the local authorities.Respondents in the Digaale IDP camp reported that the main source of conflict with host communities is the harvesting of fuelwood because women from the IDP camp collect it in the woodlands located around the host communities.This land was formerly communally owned and used but all the trees have been overexploited.Currently, IDPs in Digaale are forced to go and collect fuelwood on the land of agro-pastoralist communities which is causing significant tension and has prompted attacks.As a result, women are being assaulted on a daily basis.Conversely, respondents in the Haraf and Abaaso villages in 
Somaliland noted that the community in general has good relations with other communities.The relationship between communities is complex.While clashes may occur on a regular basis over the extraction of firewood, for example in Digaale, IDPs often maintain good relations with host communities when it comes to trading agricultural commodities.In Digaale, women from the host communities sell milk while men sell animals and meat to the IDPs.In turn, people from the host communities come to the camp to buy food and clothes from the IDPs.These economic exchanges may provide an entry point for improving relations between the two communities.One option would be the exchange, either through bartering or selling, of fuel or energy-efficient cooking technologies between the IDP community and the host community.An improved conflict mitigation mechanism could support such an initiative, ideally building on existing or traditional mechanisms.A number of communities have established community-based Elder Groups which serve the important function of managing and reducing conflict with other communities.Hence, assessing the replicability of the Elder Groups would provide important insights.Based on the Venn diagram exercise conducted in both host villages and IDP camps there were commonalities when discussing issues concerning the natural resource base.Agro-pastoralists within both groups have expressed the problem of not being able to rely on their drought coping mechanism in utilizing, for instance, drought tolerant vegetation species for their own communities and livestock as these are being depleted due to growing demands for charcoal production.With traditional coping strategies being undermined and as natural resources continue to be unsustainably extracted, community resilience is currently reduced to levels which require protracted humanitarian assistance during and after natural hazard events have occurred.It is estimated that over five million Somalis were affected during the 2010 drought, which has impacted the livestock population in proportional numbers due to the loss of important coping mechanisms.Humanitarian assistance has been more significant within IDP camps for the provision of basic needs such as proper shelters, water and food replenishment, education, sanitation and health services.Some IDP camps were also provided with entrepreneurial skills in setting up small business ventures as well as with solar energy for lighting purposes.Within host villages, humanitarian assistance has been geared more towards farming support, livestock restocking, and health and education services.Unlike IDP camps, host villages have stronger internal community committees organized by groups that mostly deal with social affairs and settling land disputes.In both IDP camps and host villages there were no external environmental interventions supporting communities to reduce their consumption of fuelwood, e.g. 
through the provision of fuel-efficient stoves.However, there is currently an FAO-supported initiative focusing on the establishment of a community tree nursery in Bantal village with the aim of growing trees in woodlots for supplying fuelwood and other tree products.Both IDPs and host community households stated their disdain for the invasive tree species, Prosopis, the encroachment of which has spread into important areas of land meant for livestock and agriculture production.Despite the use of Prosopis for certain purposes, such as fuel and fencing, there is no interest from both communities in domesticating and utilizing Prosopis as a way to restore and recover important rangelands, reduce reliance on important native tree species for fuelwood, or to create sustainable employment and business ventures through product transformation.Awareness-raising on these alternative livelihood options can support efforts to find solutions to the energy, food security and biodiversity problems faced by agro-pastoralists in Somalia, which could eventually help communities strengthen resilience to shocks.The challenges facing Somalia are complex, multi-faceted and differ according to various political, social and regional contexts.Environmental degradation and energy access challenges recurrently emerge in key policy documents relating to Somalia."Somalia's Intended Nationally Determined Contributions have been developed in line with the UN Framework Convention on Climate Change and the decision of the “Lima Call for Action” to formulate its policy, plans and mitigation and adaptation projects intended to achieve the objectives of the INDCs. "Such policies and planned projects proposed are based on the status of the environment in the country, existing and planned policies for sustainable sector based developments and Somalia's Compact and New Deal.Within the framework of the INDCs, a series of adaptation and mitigations programmes have been proposed, including plans to promote alternative sources of energy to reduce local charcoal consumption, provide alternative livelihood options to households and communities dependent on charcoal production and trade and reforestation and afforestation for the rehabilitation of degraded lands.Awareness creation of alternative livelihood activities, all ranging from the management and utilization of trees and shrubs, while encouraging the establishment of woodlots that include economically important indigenous tree species, could provide important alternative sources of income in both sustainable production and marketing of various products that have good potential for international export.Furthermore, the Somali diaspora sends an estimated US$ 1 billion in remittances per year which exceeds Official Development Assistance.Hence the role of the diaspora in sustaining livelihoods is also considered of key importance and provides an important opportunity for harnessing and channeling these funds into interventions that contribute to job creation, food security and increased incomes, particularly in rural areas of Somalia.Furthermore, these interventions could also help in minimizing pressure on the existing natural capital base and support its recovery.This paper highlights various challenges faced by vulnerable populations related to energy in the context of weather and climate extremes and conflict.It draws on the case of Somalia, through the findings of field work in Hargheisa District, Somaliland and in Doolow District, Gedo Region in South Central Somalia.The 
participatory techniques were utilized in various contexts in these locations including IDP camps, host communities, rural and urban settings generated a rich body of evidence and knowledge on the energy challenges faced by vulnerable households, particularly women.Competition over natural resources is a key driver of conflict in Somalia which is exacerbated by weather and climate extremes.Hence, in approaching work related to energy in a humanitarian settings driven by weather and climate extremes and conflicts, it is crucial to also address issues relating to social cohesion, trust-building and conflict mitigation.In the case of Somalia, this entails understanding the context of land rights systems in order to help shed light on what needs to be done to shift the business as usual and demand-driven exploitation of wood for energy purposes towards sustainable forest management.In planning and implementing energy interventions, it is important to follow a people-centred approach that ensures full participation of beneficiaries from programme design to implementation."This promotes an underlying tenet that people affected by crises are end users and stakeholders rather than “beneficiaries” of humanitarian assistance, that they have a fundamental right to shape efforts to assist them, and that humanitarian actors have a duty to respond to people's expressions of their rights and needs.In the context of Somalia this may appear to be a challenge due to the remote management of many interventions and extensive use of implementing partners.However, development agencies such as FAO have been able to set up mechanisms to ensure that targeted communities have appropriate ways to provide feedback and obtain information about projects.2,Furthermore, efforts should be made to incorporate conflict-sensitivity policies in order to address these key linkages with impacts of weather and climate extremes.It is also important to recall two recent global events that are shaping the way the international community is supporting the rapidly rising numbers of vulnerable people as a result of crises and climate-related disasters.These events generated a number of outcomes several of which are relevant for work on energy in emergencies.The World Humanitarian Summit called by the United Nations Secretary-General in May 2016 in Istanbul, Turkey marked a shift towards more decisive and deliberate efforts to reduce needs, anchored in political will and leadership to prevent and end conflict and to bridge the divide between efforts across humanitarian, development, human rights, peace and security interventions.A recurring theme was the importance of humanitarian principles and contributing to the protection of individuals from gender-based violence.Amongst other things, through the SAFE approach, FAO promotes the use of fuel-efficient cooking practices to reduce the need for fuelwood, and in turn diminishing the protection risks women and girls face when collecting firewood, particularly in displacement due to weather and climate extremes and conflicts.In September 2016, the UN General Assembly convened a high-level plenary meeting on addressing large movements of refugees and migrants.This Global Migration Summit culminated in the New York Declaration for Refugees and Migrants, expressing the political will of world leaders to protect the rights of refugees and migrants, to save lives and share responsibilities.Once again, protection and gender equality were central to the discussion and relevant to the SAFE 
approach. The New York Declaration states that Member States will ensure that responses to large movements of refugees and migrants mainstream a gender perspective, promote gender equality and the empowerment of all women and girls, and fully respect and protect the human rights of women and girls. In addition, it recognizes the significant contribution and leadership of women in refugee and migrant communities, and commits to ensure their full, equal and meaningful participation in the development of local solutions and opportunities. Furthermore, it is argued that focusing on energy can be instrumental in contributing to efforts to sustain peace by reducing the risk of potential conflict between communities who compete for scarce natural resources, including wood for fuel purposes. A number of key themes and guiding principles have emerged which will be of use to policymakers, development organizations and humanitarian actors in addressing the specific challenges presented. Partnerships and collaboration are essential in order to respond to the energy needs of populations affected by climate-related hazards and crises. These need to be developed and fostered between UN agencies, development and emergency organizations, civil society organizations and NGOs, national partners, government bodies, academic institutions and the private sector. Leveraging the comparative advantages and knowledge of the various actors will ensure that interventions are streamlined, truly inclusive and holistic. From the perspective of the United Nations, several UN agencies have come together to facilitate a more effective and coordinated response to the energy needs of populations affected by weather and climate extremes and conflict through the inter-agency Safe Access to Fuel and Energy (SAFE) initiative and working group. This means working closely with partners and local governments to harmonize approaches and ensure synergies. Very importantly, the communities themselves need to be consulted and engaged to maximize accountability, inclusiveness and participation in addressing the challenges of energy access in the event of weather and climate-related disasters and conflicts. The authors state that no conflict of interest was involved in the production of this paper. | In Somalia, challenges related to energy access are influenced by both weather and climate extremes and associated conflict. The objective of this article is to gain an improved understanding of these risks and challenges, which are faced by the most vulnerable populations in the country. In particular, cooking energy-related challenges faced by households affected by weather and climate extremes and conflicts include protection risks, malnutrition, health risks, environmental degradation and heightened tension and conflict between social groups. Interventions to address these issues should focus on both fuel supply and fuel demand, as well as on improving the livelihoods of affected populations. In the aftermath of an extreme weather event, it is recommended that assessments of the energy needs of all affected populations, including both hosts and Internally Displaced People (IDPs), be conducted. Post-disaster support should include the promotion of energy-efficient technologies for cooking as well as alternative sources of fuel where available, including non-wood based renewable energy.
The implementation of a field inventory to assess the status of natural resources in areas vulnerable to climate impacts could help to determine woody biomass trends and enable the development of ecosystem restoration plans. These could include provisions for the establishment of woodlots and agro-forestry, thus building resilience to environmental degradation while maintaining woody biomass resources in and around displacement camps. Interventions should also be designed jointly with partners, and activities should be conflict-sensitive to ensure an enhanced state of resiliency and preparedness among vulnerable populations. |
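As a companion to the household questionnaire methodology described earlier (74 households surveyed on fuel types, stove technologies and fuelwood collection times), the following is a minimal, hypothetical sketch of how such responses could be tabulated by survey site. The file name and column names are placeholders introduced for illustration and do not reflect the study's actual data schema.

```python
# Hypothetical tabulation of household questionnaire responses by site.
# File name and column names are illustrative placeholders only.
import pandas as pd

# Assumed export of the household questionnaires to a flat CSV file
df = pd.read_csv("doolow_household_survey.csv")

# Share of households per site relying on each main cooking fuel
fuel_share = (
    df.groupby("site")["main_cooking_fuel"]
      .value_counts(normalize=True)
      .rename("share")
      .reset_index()
)

# Average hours per fuelwood collection trip and trip spacing by site
collection = df.groupby("site").agg(
    mean_hours_per_trip=("collection_hours", "mean"),
    mean_days_between_trips=("days_between_trips", "mean"),
    households=("household_id", "count"),
)

print(fuel_share)
print(collection)
```

Summaries of this kind would correspond to the per-site figures reported in Tables 2 and 4, such as the predominance of three-stone fires and collection trips ranging from roughly three hours to a full day.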
564 | Status of Pratylenchus coffeae in banana-growing areas of Tanzania | Banana is one of the most important food crops in the Great Lakes region of Africa and this region has the greatest level of banana consumption worldwide .Banana is grown and consumed all over Tanzania, and the importance of production locally varies depending on the importance of the crop to the specific area.For example, banana is grown as staple food in Kagera and Kilimanjaro regions in a coffee–banana field system and hence is widely grown and consumed ; however, in areas like Mbeya, most of the bananas grown are sold in the cities of Mbeya and Dar es Salaam .Tanzania is divided into two parts: mainland Tanzania and the islands of Zanzibar in the Indian Ocean.Banana production is high in the cool highland areas of the mainland such as Kagera, Arusha, Kilimanjaro and Mbeya where banana is staple food and the main source of daily consumed carbohydrate.However, most of the areas in the Zanzibar islands of Unguja and Pemba grow banana and plantain in small gardens for the purpose of producing fried snacks.Those who grow banana as the main staple food, especially in mainland Tanzania, sell the surplus for cash in nearby towns and cities or process the crop into banana beer or wine .Rapid growth of urban and informal towns, especially in the mainland, and changes in food behaviour may increase demand for banana as food and fruit .Moreover, improvement to technology and value addition of banana products such as banana biscuits, flours, bread, doughnuts and wine increase demand for improved production.According to FAOSTAT , average annual banana productivity during 2005–2014 was less than 7 t/ha.However, banana has the potential to produce 30 t/ha, which could be achieved in Tanzania with improved management practices .Productivity of bananas in the Great Lakes region of Africa has greatly declined since the 1970s, and is now 7–42% of its potential .Some of the reasons for reduced banana yields are poor soil fertility and pests and diseases .In particular, plant parasitic nematodes are extremely damaging, causing yield losses of more than 40% across all banana crops in Africa , and 20% worldwide .The main nematode species known to affect banana crops worldwide are Pratylenchus goodeyi, Radopholus similis and P. coffeae.Of these, P. goodeyi and R. similis have been previously reported in Tanzania .The former is thought to be indigenous to Tanzania but the latter is introduced and confined to the humid lowlands .However, the current situation of these species in Tanzania in relation to localisation and level of pathogenicity is unknown.This is mainly a result of different challenges, some of which were reviewed by De Waele and Elsen , which includes lack of adequately equipped nematology laboratories, trained taxonomists, routine nematode monitoring surveys and financial support.Generally, little research has been conducted on banana nematodes in Tanzania and thus there is scant information on the status of some nematodes.Pratylenchus coffeae Filpjev and Schuurmans-Stekhoven is one of the few Pratylenchus species known to be a pathogen of banana, and causes root lesions .It is a widespread pest that causes serious damage to banana plants in Latin America, but has not been previously documented in mainland Tanzania.This may be due to both the lack of resources to collect information throughout the major banana-growing areas and its absence at that time from the few surveyed areas .However, P. 
coffeae was first reported by Rajab et al. in the Zanzibar islands, a small isolated part of Tanzania.The roots samples collected from all regions of Unguja were extracted to get nematodes where the population density of P. coffeae was 74/100 g of fresh roots .Apart from banana, P. coffeae has been known to cause quantity and quality losses to food, cash crops and spices.In Japan, P. coffeae can cause serious losses of sweetpotato , the crop which ranks fourth in importance among food crops in Tanzania and is grown in all areas where banana is grown .Coffee is a good host for P. coffeae and is the leading cash crop in Kagera, Kilimanjaro and part of Ruvuma in mainland Tanzania, areas where they also grow banana for staple food and sometimes intercropped with coffee .The spices ginger and turmeric are susceptible to P. coffeae and play major roles in the economy of Zanzibar.Thus, increases in P. coffeae could directly affect other important crops in the country.The presence and distribution of different nematodes in the area varies with time and is likely due to movement of plant material through the common practice of exchange between farmers or introduction from one country to another .Banana is vegetatively propagated and thus farmers collect materials/corms from neighbours or bordering countries and this can introduce new nematode species .Therefore, information on the specific type and abundance of nematodes is required to assess potential nematode damage in any new banana production area.This study was conducted to assess the status of P. coffeae in the banana production systems of Zanzibar and mainland Tanzania.Information on status of P. coffeae will be useful for development of strategic nematode management and improving banana production in Tanzania.The study was conducted in 10 major banana-growing areas across three agro-ecological zones of mainland Tanzania and one from Zanzibar.The zones and regions for mainland Tanzania follow: the Lake Zone, the Southern Highlands Zone and the Northern Zone.The five regions in Zanzibar were North Pemba, South Pemba, North Unguja, South Unguja and West Unguja.The survey time for one agro-ecological zone was about five days but this was in different months of 2015 for each zone.The month chosen for each zone was according to availability of moisture in soil, which attracts nematodes to the rhizosphere from where soil and root samples were collected.The agro-ecological zones differ in terms of weather conditions including the rainy season and temperature.The maximum and minimum temperatures for three months prior to sampling for each zone are shown in Table 1.In this survey of the 10 regions, we randomly collected 314 composite samples of each of soil and roots to make a total of 628.The samples were collected following a previously described procedure .A 20 × 20 × 20 cm hole was dug adjacent to the corm of the banana plant, and banana roots and soil were collected and placed in labelled plastic bags.From each field, samples were taken from five plants selected at random and pooled to form composite root and soil samples.From the composite sample, about 500 g of soil and 10 roots were packed into labelled plastic bags and stored in cool boxes ready for transfer to the nematology laboratory at the Sugarcane Research Institute, Kibaha, Tanzania.Data on soil pH, texture, nitrogen, phosphorus and potassium for the regions surveyed were collected, based on previous research and a database, for comparison with nematode data.Geographical location data 
were also collected from the surveyed areas.Nematodes from soil and roots were extracted by the modified Baermann technique using 100 mL of soil and 5 g of roots as described by Hooper et al. .Soil and macerated roots were incubated for 48 and 24 h, respectively.Microscopy was used for morphological identification of structures on P. coffeae that distinguished it from other Pratylenchus species.The morphological features and specific details for P. coffeae used were compared to the information provided by Castillo et al. .Specific features illustrated for P. coffeae are presence of two annuli on the labial region, round to oblong shape of stylet basal knobs and truncated or hemispherical tail shape .Nematode extracts were counted using a 2-mL aliquot on a counting slide under a Leica 2500 compound microscope at ×20 magnification.Using the same microscope, the nematodes were clearly identified with support of immersion oil at ×100 magnification and photos were captured at ×40 magnification.PCR was conducted to confirm the morphological identification.Twenty nematodes from each agro-ecological zone were used for amplification of the ITS and 28S regions of the rDNA.The DNA extraction was conducted according to protocol illustrated by Harris et al. with few modifications.One or two nematodes from the same sample were handpicked and placed on a glass slide with 10 μL of extraction buffer.The mixture was ground and 40 μL of extraction buffer added to make 50 μL of solution.The solution was transferred into a 1.5-mL Eppendorf tube; 1% sodium dodecyl sulphate and 0.5 μL of proteinase K were added, and the solution incubated at 65 °C for 30 min.The lysate was then extracted with an equal volume of phenol/chloroform/isoamyl at the ratio 25:24:1.The solution was mixed well and centrifuged at 4 °C for 5 min at 20,000 × g relative centrifugal force.The upper aqueous layer was transferred into a new tube, then 5 μL of cold 3 M NaAc and 2× volume of 98% cold ethanol was added, and then samples were precipitated at −20 °C for 1 h.The mixture was centrifuged at 4 °C for 15 min at 20,000 × g RCF.The DNA pellets were washed with cold 75% ethanol and re-suspended in 10 μL of sterilised double distilled water, then stored at −20 °C for further analysis.The extracted nematode DNA was used as a template for PCR amplification.The PCR was performed using two primer pairs which were designed by authors at Mikocheni Agricultural Research Institute: PC7ITSF and PC7ITSR that amplifies the ITS1, 5.8S and ITS2 regions; and PC11LSUF and PC11LSUR that amplifies the 28S of rDNA.These primers are specific to Pratylenchus spp.The primers were designed from the internal transcribed spacer1 and 2 and the 28 subunit of the ribosomal DNA region of nematodes using the primer design tool of the National Centre for Biotechnology Information as described by Ye et al. 
.The PCR conditions were denatured at 94 °C for 3 min followed by 35 cycles of denaturation at 94 °C with 30 s, annealing at 61 °C for 45 s and extension at 72 °C for 2 min.A final extension was performed at 72 °C for 10 min.The amplicons were analysed on 1% agarose gel electrophoresis, photographed over a trans-illuminator and cleaned by ExoSap-IT.The cleaned PCR products were added with their respective sequencing primer pairs and sequenced directly by Bioneer Corp, South Korea.The obtained molecular sequences were compared with other nematode species sequences available in GenBank through NCBI BLASTN homology search.The DNA sequences were analysed by molecular evolutionary genetics analysis version 7.0 software .The sequences were edited on Bioedit software ver.7.0 and aligned by multiple sequence comparison by log-expectation in MEGA 7 software.Phylogenetic trees based on internal transcribed spacers and 28S rDNA sequences were constructed using the maximum parsimony method.All data on nematode counts were subjected to analysis of variance using the GenStat statistical package.If required, the density of nematodes was square-root transformed.The means were compared using least significant difference at P < 0.05.Morphology of P. coffeae found in samples was confirmed using the tail shape of female P. coffeae, which was truncated.Moreover, we also observed body annulation lying from side to side as described by Castillo et al. .We also took measurements of some key parameters specific for P. coffeae: for females, length and vulva; and for males, length and testis length.The PCR amplification using primer pairs PC7ITSF/PC7ITSR and PC11LSUF/PC11LSUR showed the correct fragment sizes for all nematode samples.Molecular characterisation confirmed the morphological and morphometric characterisation of P. coffeae nematodes through PCR and sequencing.The sequencing and phylogenetic relationship of the ITS and 28S rDNA sequences of P. coffeae populations identified in this study were clustered together with other P. coffeae sequences from GenBank and were more closely related compared to other Pratylenchus spp.The incidence of P. coffeae in root samples was highest in Mbeya and Ruvuma regions followed by North Unguja and South Unguja.The three regions of Unguja showed high incidence of P. coffeae, whereas there was low incidence in Kilimanjaro and Arusha.In addition, the range in incidence between regions was wide: 0% in Arusha to 70% in Mbeya.Of the regions, 30% had more than 50% incidence.Nematode counts in the soil samples were high in the three regions of Unguja.The highest counts were from North Unguja, followed by South Unguja and West Unguja.In contrast, nothing was found in Arusha, Kilimanjaro, Pemba South and Pemba North.The density of nematodes in root samples was highest in Mbeya and Ruvuma but these values did not significantly differ to those from South Unguja.The lowest nematode counts in root samples were from Arusha and North Pemba.The highest nematode infections were for medium altitudes of 500–1000 m above sea level; in comparison there were significantly fewer roots infected with nematodes at elevations below 500 and above 1000 m a.s.l. Abundance of P. coffeae was highest in soil of elevations 0–500 m a.s.l. and lowest above 500 m a.s.l.The survey results indicated the presence of P. 
coffeae in most regions of mainland Tanzania.The morphology and molecular identification confirmed its presence.The results on sequence and phylogenetic analysis demonstrated high relationship of these nematodes to other Pratylenchus coffeae from the database.Moreover, the data show that P. coffeae was spreading in most banana-growing areas in the Zanzibar islands of Unguja and Pemba.The nematodes survive in the warm areas of these islands, where air temperatures can reach 38 °C , and in the cooler areas of Mbeya with maximum temperatures of around 23 °C .Information on temperature conditions, especially for when the samples were collected, is important because previous reports indicated the influence of temperature on abundance of some nematode species such as P. goodeyi .However, the results for P. coffeae collected from different altitudes and temperature showed no significant effect of temperature on abundance of P. coffeae.This is supported by results by Radewald et al. , who found maximum reproduction of P. coffeae at 29 °C but infection occurring at a range of 4–32 °C.In addition to temperature, previous data indicate that soil pH, rainfall and humidity affect the nematode population , which may have contributed to the distribution of P. coffeae in Tanzania.The effect of altitude on nematode populations was considered due to its relationship to temperature, because altitude is usually negatively correlated with temperature .Although we did not find a direct relationship between temperature and nematode abundance, this nematode survived in medium to lower altitudes where temperatures are usually warm.These warm conditions might be one factor enhancing the population, by allowing a shorter life-cycle compared with cool conditions at high altitudes.Previous surveys in Tanzania for banana nematodes reported P. coffeae only in Zanzibar .The current survey showed that P. coffeae was in either roots or soils of the banana samples from surveyed regions with the exception of two regions, implying that one source of its invasion of the mainland may be the Zanzibar islands through movement of plant material.For the past two decades, P. coffeae has been newly reported across banana-growing areas in Africa.For example, the presence of P. coffeae was first reported on banana in Uganda after a nematode survey conducted in 1993 .Moreover, P. coffeae was reported to cause 24% losses in bunch weight and the toppling of banana plants .Similarly, a study conducted in Ghana showed that P. coffeae caused severe symptoms of dead roots, root necrosis and sucker corm lesions in bananas .Together, these studies indicate that the presence of this nematode species on banana contributes to production losses.The results of our study are an alert to banana stakeholders to take such actions as developing effective management methods, sending awareness messages to banana growers and strengthening skills of crop inspectors before the problem in Tanzania becomes serious.The current study revealed the presence of P. 
coffeae in about 80% of the samples collected, but there are no research data on the effect of this nematode on banana production.The results indicate a high chance of this nematode species surviving in a wide range of conditions where banana is grown if no effort is made to control movement of plant materials and increase farmer awareness.Traditionally, farmers source new vegetative materials from existing fields and neighbours .In areas such as Unguja, where banana is left without recommended agronomic management and in which all studied soils contained P. coffeae, strategic management measures should be considered that must include cultural practices such as thinning of banana mats, sanitation and hot water treatment of planting material.When assessing the banana fields and collecting samples in Zanzibar, we noted that most banana mats contained 4–20 plants and were left to increase after planting the first crop – this high plant density might be one reason for high nematode incidences.The number of other crops intercropped was high, we found mixtures of taro, sweetpotato, cassava, eggplant, pumpkin, coconut, and orange cardamom and weed control was poor.Weeds such as Bermuda grass, nut grass and black nightshade are also host to different nematodes and can support them even when the crop is not in the field .The danger of P. coffeae to banana production has been reported by a number of researchers and the considerable damage to banana roots by P. coffeae has been reviewed in detail .However, compared to the density of nematodes reported by Rajab et al. , in our survey the density had increased in Unguja regions and decreased in Pemba.These two islands are isolated by water and the elapsed time of about two decades may have caused some differences in crops grown, microbial communities and farming activities.In addition, the nature of soil can enhance abundance and adaptability of P. coffeae.This study indicates that incidence and density of nematodes were high in North and South Unguja and no nematodes were found in Arusha.The variation in nematode abundance can be caused by the nature of varieties grown in the two areas.Traditionally, Arusha has extensively grown East African highland banana which is an unique cooking diploid variety called mchare , whereas Unguja is dominated by variation of triploid varieties which include mzuzu, sukari ndizi and mkono wa tembo .Moreover, according to the soil properties of the surveyed regions, the soils in North and South Unguja were slightly acidic, loamy/slightly silty and of low nitrogen, whereas, that of Arusha was alkaline, clay and of high nitrogen content.The soil condition in Arusha might have hindered survival of P. coffeae in this soil and hence limited its availability.Soil properties and their relationship to incidence and density of P. coffeae imply that agronomic practices necessary for improvement of soil properties may significantly affect management of this nematode.The effects of clay texture and high soil nitrogen in hindering and reducing nematodes have been reported in different studies .The information of our survey suggests that nematologists and other banana stakeholders should work together to develop effective measures such as cultural practices, biological agents and rotations to tackle P. coffeae to aid small-scale banana growers in Tanzania.This study indicated that P. 
coffeae was widespread in banana-growing areas of Tanzania, from the mainland to the Zanzibar islands, regardless of climatic conditions. The results showed that important production areas were infested, and further research should be concentrated in these regions to minimise the problem. Based on the survey results, we suggest that pathogenicity studies be conducted in Tanzania to improve understanding of the relationship between P. coffeae and banana and of the level of damage caused by this nematode. A knowledge and awareness programme is needed to help farmers apply available nematode management practices for improved banana production. | Pratylenchus coffeae is among the plant parasitic nematodes contributing to yield losses of banana. To determine the status of P. coffeae, a survey was conducted in banana-growing regions of Tanzania and samples were collected. The results indicated that in 2015 there was an increase in total counts of P. coffeae extracted from roots compared to those reported in 1999 in Unguja West, North and South. Moreover, we noted its presence for the first time in mainland Tanzania. Generally, the densities of P. coffeae were high on banana roots collected at 500–1000 m above sea level. This information on the status of P. coffeae is important in planning the management of nematodes in Tanzania. |
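The statistical treatment described in the preceding study (square-root transformation of the nematode counts where required, analysis of variance, and comparison of means by least significant difference at P < 0.05) was performed in GenStat. The short Python sketch below is only an illustrative equivalent of that workflow: the region names and counts are hypothetical placeholders rather than the survey data, and the LSD is computed with the standard equal-replication formula LSD = t(alpha/2, df_error) * sqrt(2 * MS_error / n).

    import numpy as np
    from scipy import stats

    # Hypothetical P. coffeae counts (per 100 g of fresh roots) for three regions
    counts = {
        "Region A": [820, 760, 910, 675, 880],
        "Region B": [540, 610, 480, 590, 525],
        "Region C": [300, 260, 340, 310, 295],
    }

    # Square-root transformation to stabilise the variance of the count data
    groups = [np.sqrt(np.array(v, dtype=float)) for v in counts.values()]

    # One-way analysis of variance on the transformed counts
    f_stat, p_value = stats.f_oneway(*groups)

    # Least significant difference at P < 0.05 for equal group sizes
    n = len(groups[0])                                    # replicates per region
    df_error = sum(len(g) for g in groups) - len(groups)  # error degrees of freedom
    ms_error = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_error
    lsd = stats.t.ppf(0.975, df_error) * np.sqrt(2.0 * ms_error / n)

    print(f"F = {f_stat:.2f}, P = {p_value:.4f}, LSD(0.05) = {lsd:.3f}")
    # Two region means (on the transformed scale) differ significantly
    # when their absolute difference exceeds the LSD value.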
565 | Identification of 5,6,8-Trihydroxy-7-methoxy-2-(4-methoxyphenyl)-4H-chromen-4-one with antimicrobial activity from Dodonaea viscosa var. angustifolia | The development of multi-drug resistant bacterial strains in recent history has resulted in a marked decrease in the efficacy of a vast number of medicines. The key element in the discovery of new bioactive ingredients is access to diverse molecular groups, and the potential of plants as a source of novel active entities is far from exhausted. Dodonaea viscosa var. angustifolia (DVA) is one plant that has shown potential as a source of antimicrobial compounds. D. viscosa var. angustifolia, a dicotyledon of the Sapindaceae family, is a medicinal plant traditionally used to treat various illnesses. Leaves of DVA are used for colds, flu, stomach trouble, measles and skin rash. It has proven analgesic and antipyretic properties as well as anti-HIV activity. The use of DVA for oral infections has also been explored. In vitro studies have shown that at high concentrations DVA can inhibit the growth of Candida albicans, and at subinhibitory concentrations it can inhibit biofilm and hyphae formation and adherence to oral epithelial cells. In addition, DVA can inhibit the virulence abilities of Streptococcus mutans and Porphyromonas gingivalis, which are responsible for the development of dental caries and periodontal diseases, respectively. Nevertheless, most of these DVA antibacterial studies have been limited to the use of crude extracts. This study focused on the isolation and elucidation of a chemical constituent from D. viscosa var. angustifolia, and we report for the first time the isolation and identification of a flavone not previously described from this plant. An investigation of the effect of this identified major beneficial compound on a range of pathogens was also conducted. Plant material was collected from the Pypeklipberg, Mkhunyane Eco Reserve in Mpumalanga province of South Africa. The plant was positively identified by a taxonomist from the Herbarium at the University of the Witwatersrand as D. viscosa var. angustifolia Benth, which belongs to the Sapindaceae family. Voucher specimen number J 94882 was previously deposited at this Herbarium. Cultures were provided by the Infection Control laboratory, National Health Laboratory Services, Johannesburg, South Africa. The Gram-negative organisms used were Escherichia coli, Klebsiella pneumoniae, Enterobacter aerogenes, Pseudomonas aeruginosa and Salmonella typhimurium; one Gram-positive organism, Staphylococcus aureus, and three Candida species, C. albicans, Candida parapsilosis and Candida tropicalis, were also included. Ethics clearance was obtained from the Institutional Ethics Committee. Dried D.
viscosa var.angustifolia leaves were milled to a powder and crude extract was prepared using a method described by Eloff.Sixty grams of powder was mixed with 600 ml of methanol in a closed container, agitated for 72 h and centrifuged at 5000 rpm for 20 min.The supernatant was collected in pre-weighed 100 ml beakers.The procedure was repeated three times with the same powder.All the three supernatants were pooled together in the same beaker and the methanol was allowed to evaporate under a cold air stream.A yield of 6.8 g was obtained.Sequential fractionation was carried out using a chromatography column packed with a slurry of activated silica gel.The slurry was prepared using 100% hexane.The methanol extract was dissolved in a 50% v/v mixture of ethyl acetate and hexane.The sample was then loaded onto the silica gel column.Elution was started with hexane.This was followed by increasing gradient polarity through the gradual addition of ethyl acetate up to 100% gradient composition and lastly a 70:30 ratio of ethyl acetate: methanol.The eluent was collected in 8 ml bottles and pooled to give 6 fractions according to their thin layer chromatography profiles, and these were dried and the yield was measured.Ten microlitres of crude extract and the 6 fractions were spotted on silica gel TLC plates at 10 mg/ml concentrations.Development of the TLC plates was done in a saturated environment using a method described by Kotze and Eloff with modifications.Three solvent systems were used; toluene/ethanol/ammonia hydroxide , dichloromethane/ethyl acetate/formic acid , ethyl acetate/methanol/water .The TLC plates were developed by spraying with 30% v/v sulphuric acid solution after which they were heated at 100 °C for color development.Gas chromatography–mass spectrometry analysis was carried out using an Agilent Technologies 5190-2293 gas chromatography instrument with an HP-5 ms ultra inert column interfaced with the Agilent Technologies 19091S-433UI mass spectrometer.The initial temperature was set at 50 °C and increased at a rate of 5 °C per minute up to the set limit of 310 °C.The split ratio was set at 1:50 and helium was used as the carrier gas.The auxiliary transfer port was set at 280 °C.The ion source temperature was set at 230 °C with fixed electron energy set at 70 eV.Solvent delay was set at 3 min.The Cheetah™ MP 100 Flash purification system supplied by Bonna-Agela Technologies was used for flash chromatography.Two milliliters of sample was injected for each run.Equilibration of the column was achieved by using a solvent ratio of 90% Hexane: Ethyl acetate 10%.The flow rate was set at 8 ml/min.The peaks were viewed at 270 nm.IR data were obtained using the Perkin Elmer FT-IR 100 spectrometer.The 1H NMR and 13C NMR data were obtained using the Bruker AVANCE 500 spectrophotometer.All spectra were recorded in DMSOd6.UV–Vis data were obtained using Shimadzu 1800 spectrophotometer.The minimum inhibitory concentration of fractions on the pathogens was determined using a method described by Eloff.The DVA crude extract and fractions were first dried and reconstituted with dimethyl sulphoxide to obtain concentrations of 50 mg/ml.Hundred microliters of double dilutions of constituent plant extracts in DMSO were added to each well of a 96-well round bottom microtitre plate in varying initial concentrations.In addition, 40 μl of 0.2 mg/ml INT dissolved in water was added into each of the wells of the microtitre plates and incubated for 30 min to obtain minimum inhibitory concentration.After 
incubation, the lowest concentration that showed no red color was recorded as the MIC.The lowest concentration with no growth was recorded as MBC/MFC.Dimethyl sulphoxide was used as a solvent and therefore as a control vehicle.Augmentin was used as a positive control for all the bacterial cultures.Chlorhexidine gluconate was used as a positive control for the Candida cultures.The total activity for the fractions was calculated by dividing the yield with MIC values.Cytotoxicity tests were carried out to investigate the effect of crude extract and the newly identified compound on human embryonic kidney cells."Cells were grown in Dulbeco's modified eagle's medium supplemented with fetal calf serum, l-glutamine and penicillin/streptomycin at 37 °C under CO2.Cytotoxicity tests were done in triplicate using 96 well microtitre plates.A hundred microlitres of media only was added to form blank test control wells.Two percent Triton-X was used as a positive control.Dimethyl sulphoxide was used to dissolve the crude extract and subfraction 5.1 hence a DMSO control was also included.Cell culture controls were also included by pipetting 100 μl of the cell solution.A 100 μl of cell suspension was then pipetted into applicable wells to form the sample test wells.The microtitre plates were then incubated for 24 h at 37 °C under 5% carbon dioxide.Wells were viewed for cell growth under the microscope at × 1000, media was removed from the plates and replaced with 100 μl of fresh medium.Serial dilutions of the crude extract and newly identified compound were prepared using DMSO diluted with PBS to make 1% DMSO.Hundred microlitres of each sample dilution was added in triplicate to the test wells.Each sample well had 0.5% DMSO.The concentrations of the crude extract in the test wells ranged from 25 mg/ml to 0.0125 mg/ml whereas concentrations of newly identified compound were 0.25 mg/ml to 0.002 mg/ml.The plates were then incubated for 24 h at 37 °C and 5% carbon dioxide.The media with the test samples and controls was removed and replaced with 100 μl of the MTT reagent.The plates were further incubated for 3 h at 37 °C and 5% carbon dioxide.The MTT reagent was pipetted out and replaced with 100% DMSO to dissolve the formazan crystals.After 30 min of incubation, the plates were read with a spectrophotometer at 570 nm.The preliminary screening yielded 6 distinct fractions from the crude extract with different quantities and color.These fractions were eluted on the basis of their polarity.A gradient based method was employed in the fractionation of the crude extract and the least polar fractions were eluted first while the most polar were obtained last.A mixture of phenolic compounds was obtained as expected.Each fraction displayed a distinct color.It was evident from the antibacterial screening study that Fraction 5 showed potential.For this reason, flash chromatography was used to separate the subfractions in Fraction 5 to yield subfractions 5.1 and 5.2 respectively.The separation of Fraction 5 further by flash chromatography was carried out at a flow rate of 8 ml/min and in a solvent gradient phase comprised of hexane and ethyl acetate scheduled at gradients of increasing polarity.Normal phase silica gel columns were employed for flash chromatography.Flash chromatography of fraction 5 resulted in the isolation of subfractions 5.1 and 5.2 with retention times of 23.7 min and 8.6 min respectively.Subfraction 5.1 was obtained in greater quantities compared to its 5.2 counterpart.Median MIC values were calculated 
and presented just for the comparison between the fractions.The MIC values of the crude extract and fractions ranged from 0.39 to 25 mg/ml and the total activity ranges from 19 to 2000.Fraction 5 showed the highest antimicrobial property against S. aureus, P. aeroginosa, C. pararpsilosis and C. tropicalis.Fraction 3 had the highest total activity.For the further identification, fraction F5 was selected due to its broad spectrum antimicrobial activity.On further fractionation of fraction 5, two subfractions F5.1 and F5.2 were separated.The MIC values of subfraction F5.1 were 0.2 to 1.56 mg/ml whereas subfraction F5.2 produced MIC values of 0.78 to 3.12.Total activity of F5.1 subfraction was higher than F5.2.Due to the higher MIC values and total activity, subfraction F5.1 was pursued for identification.DMSO has no effect on any of the cultures.Augmentin killed all the bacteria and chlorhexidine killed all the Candida species.Several phytoconstituents were determined to be present in the crude extract in line with some research studies conducted.The presence of these phytoconstituents was evidenced from UV–Vis spectra obtained for the fractions obtained from the DVA crude mixture shown in Fig. 2.The obtained fractions were mostly mixtures of complex phenolic compounds which could possibly include flavanones, flavonols, flavones as well as chlorophylls.Subfraction F5.1 was isolated, characterized and identified by using techniques such as gas chromatography–mass spectrometry, nuclear magnetic resonance spectroscopy, UV–Vis spectroscopy and infra-red spectroscopy.The GC–MS results showed a molecular ion peak of m/z 326.9 − 3 which is close to the molecular weight of identified compound 330.28 amu.The UV–Vis absorption spectrum of the subfraction F5.1 further confirmed that the obtained sample was a flavone with the two major identifying peaks observed at 338 nm and at 273 nm as observed in Fig. 
3. These flavone peaks appeared in the regions cited as identifying regions in the literature, where band A is expected in the range of 310–350 nm while band B is in the range 250–290 nm. The different analyses conducted identified subfraction 5.1 as 5,6,8-Trihydroxy-7,4′-dimethoxyflavone (5,6,8-Trihydroxy-7-methoxy-2-(4-methoxyphenyl)-4H-chromen-4-one). The isolated and identified compound has the molecular formula C17H14O7 and a corresponding molecular weight of 330.28 amu. FT-IR and NMR analysis of subfraction F5.1 yielded the bands and peaks of the groups and protons, respectively, in their expected positions. IR (cm−1): 3230–3550, 2880–2998, 1744, 1469–1651, 1358, 1282–1358, 1000–1168, 806. 1H NMR: δ ppm: 12.78, 10.72, 10.26, 7.99–9.92, 6.98–6.92, 6.55, 3.79, 3.76. 13C NMR: δ ppm: 178.64, 160.63, 157.76, 156.14, 152.87, 152.02, 137.75, 131.59, 130.58, 121.07, 116.10, 105.09, 94.43, 60.44, 60.14. Percentage cell growth inhibition was found to be concentration dependent. For the crude extract it ranged from 96.6% to 14.1%, whereas for the newly identified compound it ranged from 98.58% to 9.02%. The crude extract concentration that reduced the absorbance of treated cells by nearly 50% in relation to untreated cells was 0.09 mg/ml, with a calculated value of 51.88% cell growth inhibition. The IC50 of the newly identified compound was 0.03 mg/ml, with a calculated value of 48.35% cell growth inhibition. The absorbance of the DMSO well was 0.508 and that of the untreated cells was 0.513, which suggests that DMSO had no effect on the cells. The absorbances of the Triton X and media-without-cells wells were 0.087 and 0.073, respectively. This study has identified a new flavone present in DVA which has antibacterial activities against some of the common bacterial and fungal pathogens. This is not surprising because plants accumulate flavones as phytoalexins in response to microbial attack. Similar flavones have been identified from DVA, showing antibacterial activities comparable to those in this study. The results of this study also showed little difference in activity towards Gram-positive bacteria, Gram-negative bacteria and unicellular eukaryotic fungi, which suggests that this flavone has no effect on the cell wall. The antibacterial effects of flavones are generally due to interference with DNA synthesis in the pathogens. Wang et al.
suggested that the hydroxyl group at position C-5 is important for antibacterial activity.In addition, flavones particularly the flavonolignan derivative inhibits multidrug resistance efflux pumps which are membrane proteins that expel toxic substances including antibiotics.Flavones are a class of flavonoids and the classes depend on the backbone of 2-phenylchromen-4-one, also called 2-pheny-1-benzopyran-4-one.They are found in most fruits and vegetables and they are medically important due to their anti-oxidant, anti-cancer, anti-diabetes and anti-inflammatory activities in addition to the antimicrobial activities.However, in the plant material, the quantities of flavones are small as also seen in this study.Therefore flavone derivatives and synthetic flavones have been developed and studied.These synthetic compounds can be produced in large quantities.In addition, their efficacy can be improved by modifying the structure.Total activity calculated for the test fractions showed that fraction F3 was present in slightly greater quantities in the crude extract compared to the fraction F5.This implies that if these bioactive chemicals are extracted from F3 for the pharmaceutical purposes, the yield would be higher.Nevertheless, F5 total activity was also relatively good and the MIC values were better than F3, therefore, it was decided to purify and identify F5.In this case, if these bioactive chemicals are extracted from F5 for the pharmaceutical purposes, the yield would also be good.In addition, it also produced the best MIC values with a broad spectrum activity against a variety of test pathogens.In the present study, the MIC values for F5.1 ranged from 0.2 mg/ml to 1.56 mg/ml which can be synthetically produced and the activity can be improved.For example, at the 4′-position in ring-B, having substituents like F, OCH3 and NO2 can increase the antibacterial activity.Furthermore, it has been suggested by Ullah Mughal et al. that antibacterial activity can increase as the electronegativity of the halogen atoms on ring-A increase and incorporation of sulfur and nitrogen atoms also improves the antibacterial activity of flavone derivatives.Since the crude extract has shown promising results against S. mutans, a cariogenic bacteria and C. albicans, this newly identified compound can be further studied for its beneficial effect in dental caries and oral candidiasis.Matsuo et al. investigated the cytotoxicity effect of flavonoids including the flavone 4,5,7-Trihydroxyflavone on human lung embryonic fibroblasts and human umbilical vein endothelial cells.The flavone apigenin had an IC50 value of 0.03 mg/ml on both types of cells.The same IC50 result was reported in this study for the isolated flavone, 5,6,8-Trihydroxy-7,41-dimethoxyflavone.In conclusion, the compound 5,6,8-Trihydroxy-7-methoxy-2--4H-chromen-4-one was for the first time isolated, characterized and identified from D. viscosa var.angustifolia crude extract.These new findings will contribute to the pool of compounds that have been isolated and identified from this plant.This newly identified flavone also showed increased antibacterial activity in comparison to the crude and all the fractions obtained from the crude extract and its cytotoxicity was established.Therefore it has the potential to be developed into an antimicrobial agent. | The aim of this study was to isolate a chemical constituent from Dodonaea viscosa var. angustifolia, investigate its effect on a range of pathogens and identify the major beneficial compound. 
Leaves were dried, milled and a methanol crude extract was prepared. Fractions were obtained using column chromatography. Antimicrobial activity of these fractions against various microorganisms was determined using a microdilution technique. The most effective fraction was purified by flash chromatography, and characterized and identified using GC–MS, NMR, UV–Vis and FT-IR. Cytotoxicity tests were carried out on human embryonic kidney cells. Six fractions (F1–F6) were separated. Fraction 5 had the most antimicrobial activity (MIC 0.39 mg/ml). On further fractionation of Fraction 5, two subfractions (F5.1 and F5.2) were obtained. F5.1 showed better antimicrobial activity (0.2 mg/ml) compared to the F5.2. F5.1 was identified as new compound, 5,6,8-Trihydroxy-7,4-dimethoxyflavone (5,6,8-Trihydroxy-7-methoxy-2-(4-methoxyphenyl)-4H-chromen-4-one) with a molecular formula C17H14O and molecular weight 330.28 amu. The concentrations of newly isolated, elucidated and identified compound that inhibited cells by 50% (IC50) was 0.03 mg/ml. In conclusion, DVA leaves contain 5,6,8-Trihydroxy-7,4-dimethoxyflavone which has antimicrobial activities against common pathogens and unicellular fungi and it proved to be safe. Therefore it has the potential to be developed into a therapeutic agent. |
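Two of the quantities reported in the preceding study lend themselves to a compact calculation: the total activity of a fraction, defined there as the yield divided by the MIC, and the percentage cell-growth inhibition derived from the MTT absorbance readings. The Python sketch below illustrates both under clearly stated assumptions: the yields, MICs and treated-well absorbances are invented placeholders (only the blank and untreated-cell absorbances echo values quoted above), and the blank-corrected inhibition formula is one common convention rather than a formula stated in the study.

    import numpy as np

    # Total activity = yield of the fraction (mg) / MIC (mg/ml); the result is,
    # in effect, the volume to which the extracted amount could be diluted and
    # still inhibit growth. Inputs below are placeholders, not measured values.
    yields_mg = {"F3": 500.0, "F5": 310.0}
    mic_mg_per_ml = {"F3": 1.56, "F5": 0.39}
    total_activity = {f: yields_mg[f] / mic_mg_per_ml[f] for f in yields_mg}
    print(total_activity)                      # e.g. {'F3': 320.5, 'F5': 794.9}

    # Percentage cell-growth inhibition from MTT absorbances at 570 nm,
    # blank-corrected and expressed relative to the untreated cell control:
    # inhibition (%) = 100 * (1 - (A_sample - A_blank) / (A_untreated - A_blank))
    a_blank, a_untreated = 0.073, 0.513        # media-only and untreated-cell wells
    a_sample = np.array([0.080, 0.200, 0.300, 0.480])   # invented treated-well readings

    inhibition = 100.0 * (1.0 - (a_sample - a_blank) / (a_untreated - a_blank))
    print(np.round(inhibition, 1))             # concentration-dependent inhibition (%)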
566 | Antileishmanial activity of selected South African plant species | Leishmaniasis is a major neglected tropical disease that causes a wide spectrum of chronic to deadly infectious parasitic illnesses in humans.The etiological agents responsible for its manifestation are protozoan species of the genus Leishmania, which are transmitted by female sandflies belonging to the genera Phlebotomus and Lutzomyia.Depending on the causative species and the immune response of the host, the infection can clinically manifest as cutaneous, mucocutaneous, diffuse and visceral leishmaniasis.Visceral leishmaniasis which is caused by L. donovani is the most fatal form of the disease and intermittent epidemics of the disease have been linked to high morbidity as well as mortality in some East African countries.Additionally, co-infection of leishmaniasis with HIV is further exacerbating the situation and the disease is currently expanding into the sub-Saharan Africa.Globally, there is an estimated 1.3 million new cases annually and over 20,000 deaths occur as a direct result of the disease.The chemotherapy of leishmaniasis is strongly reliant on pentavalent antimonials, miltefosine, amphothericin B and paromomycin.However, increasing treatment failure rates, high cost and significant side effects are the major drawbacks compromising their effectiveness.Novel antileishmanial drugs that could possibly lessen the burden of the disease in endemic countries are highly needed.Secondary plant metabolism has played a significant role in the discovery of novel chemotherapeutic agents against protozoal infections and could therefore yield novel and selective antileishmanial compounds.South African plant biodiversity has not been extensively explored for antileishmanial plant leads.Previously, we demonstrated that most of the non-polar extracts from indigenous plants used in the treatment of malaria present relatively significant antiplasmodial activity.Given that malaria and leishmaniasis are both protozoal infections and share some unique metabolic pathways, the main objective of the current study was to screen nineteen antimalarial South African plant species against L. donovani parasitic strain.Nineteen indigenous plant species were collected at various locations in Mutale Municipality, Limpopo Province, South Africa.Voucher specimens of the harvested plant species were identified and deposited at the H.G.W.J. 
Schweickerdt Herbarium of the University of Pretoria.For each collected plant sample, 20 g of ground plant material was extracted in DCM:50% methanol.The mixture was homogenized for 10 min using a blender, sonicated for 10 min in an ultrasonic waterbath and then filtered.The filtrate was transferred to a separating funnel and yielded two layers of different polarities, which were then separated.Residual plant material was collected and the extraction procedure was repeated.Methanol in the polar fractions was vaporized with a rotary evaporator at 40 °C and the resulting aqueous extracts were freeze-dried using a bench top manifold freeze dryer.Non-polar fractions were concentrated under vacuum at 30 °C and the acquired plant extracts were subjected to in vitro screening.Both root and stem extracts were analysed from one of the investigated plant species.The inhibitory effects of the forty crude plant extracts were evaluated against axenically grown amastigote forms of Leishmania donovani following the resazurin assay protocol.Each extract was dissolved in 10% DMSO to afford a stock solution with a concentration of 10 mg/ml.All tests were performed in 96-well microtiter plates, conducted in duplicates and repeated twice."Amastigotes were cultivated in Schneider's medium supplemented with 10% heat-inactivated FBS under an atmosphere of 5% CO2 in air.Plates were incubated for 48 h and resazurin solution was then added to each well in order to assess the viability of Leishmania parasites.The absorbance was read on a Spectramax Gemini XS microplate fluorometer with an excitation and emission wavelength of 536 nm and 588 nm, respectively.The IC50 values were calculated by linear regression based on dose response curves.Miltefosine was used as positive control.The inhibition of mammalian cell growth was assessed in vitro by cultivating rat skeletal myoblast L6 cells in the presence of different extracts covering a concentration range in 96 well culture plates.Podophyllotoxin was used as a positive control.Tests were conducted in duplicates and repeated twice.After 70 h of incubation the plates were inspected under an inverted microscope to assure growth of the controls and sterile conditions.Alamar Blue was then added to each well and the plates were incubated for another 2 h.The plates were then read with a Spectramax Gemini XS microplate fluorometer using an excitation wavelength of 536 nm and an emission wavelength of 588 nm.The IC50 values were calculated by linear regression from the sigmoidal dose inhibition curves using SoftmaxPro software.In the present study, in vitro antileishmanial activity of forty extracts prepared from selected South African plant species was evaluated against the amastigote forms of Leishmania donovani.Plant extracts were also subjected to an antiproliferative bioassay in an attempt to assess their safe therapeutic application against mammalian cells.Leishmanicidal activity of the plant extracts was considered to be significant when the concentration that inhibits 50% of parasite growth was ≤ 5 μg/ml and an SI value of ≥ 10 was established.The results of the antileishmanial activity, in vitro cytotoxicity as well as the selective index values of the plant extracts tested are presented in Table 1.Of all the forty plant extracts assayed, only 10% exhibited significant antileishmanial activity with acceptable selective indices.Generally, there was no notable leishmanicidal activity detected from the polar plant extracts investigated in the study.The lowest IC50 value 
of the polar plant extracts was found to be 42.3 and the highest selectivity index value was 2, indicating low selectivity for the tested parasite.It is well documented that the majority of lipophilic extracts and compounds tend to exhibit comparatively significant antiprotozoal activity.This trend could possibly be explained by the lack of polyphenols, polysaccharides and other polar molecules, which have been shown to have less antiprotozoal activity.Non-polar root extract of Bridelia mollis exhibited the highest leishmanicidal activity.To our knowledge, no antileishmanial property of B. mollis has been reported and there is no significant data documented on its phytochemical constituents.Nonetheless, methylene chloride leaf extract of B. ferruginea from Ivory Coast exhibited moderate antileishmanial activity.Further phytochemical studies conducted on other members of the genus Bridelia led to the isolation as well as identification of a diverse number of polyphenols, triterpenes, glycosides and lignans, some of which may possibly be attributed to the observed antileishmanial activity.Dichloromethane extracts of Vangueria infausta subsp. infausta, Syzygium cordatum and Xylopia parviflora displayed high inhibitory effects on the growth of amastigote forms of L. donovani with IC50 values of 4.51, 4.95 and 5.01 μg/ml, respectively.Their SI values ranged from 10 to 13, therefore displaying selectivity of the assayed extracts against the parasitic species investigated.While phytochemical analyses of V. infausta subsp. infausta revealed the presence of terpenoids and glycosides, literature information about the inhibitory effects of the plant species and other members of the same genus on L. donovani is still lacking.Detailed biochemical investigations on the antileishmanial activity of chemical constituents from V. infausta subsp. infausta are underway and may lead to identification of their antileishmanial principles.There are no previous scientific reports that account for the significant leishmanicidal activity of S. cordatum and X. parviflora.Nevertheless, phytochemical studies conducted on members of the genus Syzygium have led to the isolation of terpenoids, phenylpropanoids, flavonoids, hydrolysable tannins, lignan derivatives and alkyl phloroglucinol derivatives.Essential oils extracted from aerial parts of a ‘sister’ species, S. cumini, showed significant antileishmanial activity when tested against L. amozonensis promastigotes.Previous chemical investigations of X. parviflora led to the isolation and identification of numerous bioactive components such as isoquinoline alkaloids, acetogenins, terpenes and essential oils, some of which are known to have antiprotozoal properties.Additional chemical studies are needed to determine the specific secondary metabolites which could be attributed to the detected leishmanicidal efficacy of X. parviflora and their mode of action.The nonpolar root extract of Ximenia caffra exhibited significant leishmanicidal effect against the axenically grown amastigotes of L. donovani, however, it also inhibited the proliferation of the mammalian skeletal myoblast cells tested.Lipophilic extracts prepared from the other plants species exhibited good to moderate antileishmanial activity with SI values ranging from 0.1 to 6.The observed lack of selectivity against L. 
donovani is an indication of inadequate efficacy for consideration of the plant extracts as candidates for further studies involving antileishmanial activity.The results of the present study substantiate the rationale of bioprospecting South African plant biodiversity in search for novel antileishmanial agents.Further phytochemical investigations are currently underway in an attempt to isolate and identify the bioactive chemical entities responsible for the leishmanicidal activity observed in the more active plant species.Subsequently, the identified active principles will be screened from renewable plant parts or be synthesized in the laboratory for future studies. | In vitro screening of forty extracts prepared from selected South African plant species was conducted against Leishmania donovani (MHOM-ET-67/L82). Crude plant extracts were also subjected to an antiproliferative bioassay in an attempt to determine their potential lethality or safe therapeutic application against rat skeletal myoblast L6 cells. Of all the tested plant species, only 10% exhibited significant leishmanicidal activity with acceptable SI values (SI ≥ 10). The current study is the first scientific account on the significant antileishmanial activity (IC50 ≤ 5 μg/ml) of Bridelia mollis (Phyllanthaceae), Vangueria infausta subsp. infausta (Rubiaceae), Syzygium cordatum (Myrtaceae) and Xylopia parviflora (Annonaceae). Further phytochemical investigations are currently underway in an attempt to isolate and identify the chemical constituents that may be attributable to the leishmanicidal efficacy observed in the study. |
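The screening criterion applied in the preceding study, namely significant activity when the antileishmanial IC50 is ≤ 5 μg/ml together with an acceptable selectivity index (SI ≥ 10), can be expressed in a few lines. The Python sketch below is a minimal illustration under stated assumptions: the dose-response values and the cytotoxic IC50 are invented rather than taken from Table 1, the IC50 is estimated by simple log-linear interpolation as a stand-in for the linear-regression fit performed in SoftmaxPro, and the selectivity index is taken here as the cytotoxic IC50 on L6 cells divided by the antileishmanial IC50.

    import numpy as np

    def ic50_from_dose_response(conc_ug_ml, percent_inhibition):
        # Interpolate 50% inhibition on a log10(concentration) scale; a simple
        # stand-in for regression on the full sigmoidal dose-response curve.
        inhib = np.asarray(percent_inhibition, dtype=float)
        logc = np.log10(np.asarray(conc_ug_ml, dtype=float))
        order = np.argsort(inhib)
        return 10 ** np.interp(50.0, inhib[order], logc[order])

    # Invented dose-response data for one extract (concentration in ug/ml
    # versus % inhibition of amastigote growth in the resazurin assay)
    conc = [0.8, 1.6, 3.1, 6.3, 12.5, 25.0]
    inhib = [12.0, 28.0, 47.0, 68.0, 85.0, 95.0]
    leish_ic50 = ic50_from_dose_response(conc, inhib)

    # Selectivity index = cytotoxic IC50 (L6 cells) / antileishmanial IC50
    cytotox_ic50 = 52.0                        # invented value, ug/ml
    si = cytotox_ic50 / leish_ic50
    significant = (leish_ic50 <= 5.0) and (si >= 10.0)
    print(f"IC50 = {leish_ic50:.2f} ug/ml, SI = {si:.1f}, significant = {significant}")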
567 | Voxelization modelling based finite element simulation and process parameter optimization for Fused Filament Fabrication | The precision and efficiency of Fused Filament Fabrication largely depend on the selection of key process parameters, such as the scanning speed, nozzle temperature, filling density, and molding chamber temperature .Those parameters affect the temperature and stress field distributions during part printing, resulting in potential defects of thermal warpage and shrinkage in the parts .In the past, various experimental methods were designed to investigate the mechanical properties or thermodynamic coupling characteristics of FFF parts influenced by the process parameters based on the analysis of physical phenomena of melting/solidification and heat transfer and mass transfer .The methods, however, are often costly and could not accurately grasp the temperature and stress conditions of FFF parts.To overcome the limit, Finite Element based simulation on FFF parts has been recently becoming a more popular research method .In the meantime, the related FE methods have been also applied to some new materials research fields, such as the research of sensing materials .Nevertheless, the related research still focused on analysis of relatively simple models with regular shapes.In practice, the structures of FFF parts are often complicated, making the simulation and analysis difficult.Aiming to address the above deficiencies, this paper presents a new approach of voxelization modelling-based FE simulation and process parameter optimization for FFF.The approach uses voxelized modelling and sorting algorithms in supporting FE analysis on complicated models.In the approach, based on Acrylonitrile Butadiene Styrene-based parts, the impacts of key process parameters of FFF on the temperature field, including the scanning speed, molding chamber temperature and nozzle temperature, are analyzed using the FE simulation.In the research, an experimental platform of the FFF process with a molding chamber and adjustable temperatures was set up to validate the developed approach.Voxelization refers to converting a three-dimensional model originally described by surface or volume information, wherein voxels are used to fit the shape of the model.Voxelization modelling is suitable to support FE based simulation for complicated models.Thus, the voxelization-based FE modelling processes are adopted in this paper, and relevant steps are as the following steps.Numerical models like STL use triangular facets to fit the surface of a model.Thus, voxelization consists of two steps when processing such a model: the voxelization of the surface of the model, followed by the voxelization of the interior of the model.The procedure of the voxelization is illustrated in Fig. 1.The results of the voxelization of a bunny model in different resolutions are shown in Fig. 
2.After the model is voxelized, all the voxel elements need to be sorted to simulate the process of extruding the material along the scan path in the FFF process.Currently, there are two types of scanning filling methods commonly used in the FFF process: Parallel scanning filling, in which scanning line is parallel to the others); Offset scan filling, in which the scanning path of each layer is parallel to the outer contour of the slice layer of the model).Since the parallel scan filling method is relatively simple and easy to implement and control, it is the most widely used method.Hence, this paper uses this scanning method as the primary method to simulate the FFF process.Meanwhile, in the process of parallel scanning filling, since the fibers in each segment are parallel to each other, the shrinkage direction of the fibers is also the same.When the number of printed layers is large, the accumulation of the shrinkage from each layer will cause the excessive overall warping and deformation of the part.To solve this problem, the orthogonal parallel scanning filling method, which is a combination of horizontal and vertical parallel scanning path, is adopted in this paper.During the actual printing, to reduce the invalid journey distance and improve the efficiency of printing, the starting point of the scan path of the next printing layer is controlled as close as possible to the end point of the scan path of the current printing layer.This paper implements this measure by minimizing the distance between the starting and ending voxel elements.The algorithm is shown in Fig. 4.After the voxel elements are sorted according to the printing path, the voxel model needs to be converted into an FE based simulation model.In the FE model, each element and the nodes that connect the elements are directly used in the calculation.To achieve the conversion from the voxel model to the FE model, we need to complete the transformation of the coordinate information of the voxel model to the element and node information of the FE model.The element types suitable for thermal analysis in the ANSYS software include SOLID70, SOLID87 and SOLID90.The three element types are all applicable to a 3D based steady state or transient thermal analysis.Among them, SOLID87 is of a tetrahedron element type.The finite element model in this research is converted from a voxel model, so that the hexahedral element is needed and SOLID87 is not suitable.On the other hand, both SOLID70 and SOLID90 are of hexahedron element types.SOLID90, which is a higher order version of SOLID70, is suitable for models with curved boundaries.However, SOLID90 involves intensive computations in comparison with SOLID70.Since voxel models are all regular hexahedrons, SOLID70 is sufficient to meet the analysis requirements in this research.This type of element has three directions of thermal conductivity, and each node has a temperature degree of freedom, enabling uniform heat flow.The material used in this study is Acrylonitrile Butadiene Styrene, which is a popular thermoplastic polymer.The ABS material has good properties in thermoplasticity and impact strength, so that it has become preferred engineering plastics for 3D printing through FFF process.At present, ABS is mainly pre-fabricated into filament and powder for 3D printing usage.The thermophysical properties of ABS filament used in this research are shown in Table 1.A miniature ashtray model with a bottom diameter of 10 mm is used as an example to illustrate the above methods proposed in this 
paper, as shown in Fig. 4.The diameter of most commonly used nozzles on 3D printers is 0.4 mm, and the fibers extruded from such nozzles are in the shape of fine cylinders with a diameter of approximately 0.5 mm.Therefore, in this paper, when building the FE model in ANSYS, the shape of the element is set to a cube with a side length of 0.5 mm.The result of converting the three-dimensional model of the miniature ashtray shown in Fig. 5 into an FE model shown in Fig. 5.The FE model includes 1124 elements and 1903 nodes.The FE analysis of the temperature field involves the element birth and death technology and the moving heat source technology .The adoption of the ANSYS Parametric Design Language is beneficial to repeated analyses with the model.APDL can be used for the establishment of a model, the definition of boundary conditions and calculation.It is the main tool for adaptive mesh generation and optimal design .In the temperature field analysis, the position of the heat source needs to be set according to the position of the nozzle, and the element at the position of the nozzle is activated.After that, the simulation calculation is performed.Then, the heat source is removed from the previous step and is added at the new position, the result of the previous step is used as the initial condition of the current step, and the simulation calculation is conducted.Assume that the scanning speed is 50 mm/s, the temperature of the molding chamber is 25 °C, the nozzle temperature is 210 °C, and the time step is 0.01 s.The temperature field distribution at different times in the printing process is shown in Fig. 6.Fig. 6 shows the temperature field distribution at the beginning of the printing of the part.The maximum temperature is the set nozzle temperature.Since the elapsed cooling time is short, the minimum temperature of the part is only slightly lower than the nozzle temperature.Fig. 6 shows the temperature field distribution of the part during the bottom printing process.Because the layer is printed in the vertical parallel scanning path, the temperature field distribution also shows a vertical distribution.The upper layer is printed in the horizontal parallel scanning path, and the temperature field shows a horizontal distribution.Fig. 6 shows the printing process of the ashtray wall.Due to expansion in the heat-affected zone at this time, the elements transfer heat to each other, resulting in the lowest temperature of the part being only slightly higher than the temperature of the molding chamber.Fig. 6 shows the temperature field distribution of the part at the end of the printing.The lowest temperature is at the bottom of the ashtray because the elements on the bottom are printed first, and the cooling time these elements experienced is also the longest.The above analysis shows that the FE simulation results for the temperature field of the model are consistent with the changes during the actual printing process.The FE simulation of the temperature field of the FFF process is carried out using different scanning speeds, molding chamber temperatures and nozzle temperatures.The maximum temperature gradient in the parts at each time point is analyzed.Based on this analysis, the influence of the various process parameters on the temperature field of the part is studied.A comparison of the maximum temperature gradient of the part under different process parameters is shown in Figs. 7–9.Fig. 
7 shows that when the heat source moves to the same position of the part, the temperature gradient at a low printing speed is larger than that at a high printing speed. Since the scanning method adopted is orthogonal parallel scanning, the heat source passes through the vicinity of a specific position several times. When the heat source passes near a particular position for a second time at a slower scanning speed, the difference between the temperature of the spot, which has cooled for a long period, and the temperature of the heat source is large. When using a high printing speed, the time interval between the two passes of the heat source through this point is short and the temperature does not drop significantly, so the temperature gradient is small. Therefore, the temperature gradient of the part during high-speed printing is small, and the temperature field distribution is more uniform than that under low-speed printing. Fig. 8 shows that the change trends of the maximum temperature gradients under the three different molding chamber temperatures are consistent; however, the overall maximum temperature gradient when the ambient temperature is high is less than that when the ambient temperature is low. Therefore, appropriately increasing the ambient temperature can reduce the magnitude of the temperature fluctuations in the molding process. Fig. 9 shows that the higher the nozzle temperature is, the larger the corresponding maximum temperature gradient and the greater the temperature fluctuation. Hence, the stress fluctuations when printing at higher nozzle temperatures are likely greater than those when printing at lower nozzle temperatures. In the figures, it can be clearly observed that these curves have some peaks. Analysis shows that these peaks occur when the heat source passes the edge of the model. During the process, the temperature at the edge of the upper layer reaches its lowest value after cooling. When the heat source passes by, the temperature difference is large, so the temperature gradient reaches its maximum value, as shown in Fig. 10. Since the scanning method adopted is orthogonal parallel scanning, the heat source passes through the edge a number of times, producing the ripples in the curves. To use the sequential coupling method to study the stress fields during the FFF process, the thermal analysis element needs to be converted to a structural analysis element. Here, the SOLID70 element is converted to the SOLID185 element. After the constraints are set, the calculation results of the temperature field are applied as loads to the FE model according to the procedure of the sequential coupling method, and the model is then solved. To study the residual stress and deformation distribution in the part after cooling, a cooling period of 20 s was set in the simulation. Figs. 11 and 12 show the residual stress distribution and residual deformation distribution in the part when the scanning speed is 50 mm/s, the molding chamber temperature is 25 °C, and the nozzle temperature is 210 °C. Fig. 13 shows the warping deformation of the part from the side view. Fig. 12 shows that the minimum deformation of the part is at the central part of the bottom surface of the ashtray, and the maximum deformation of the part is located on the wall of the ashtray. Fig.
13 shows that the bottom surface of the part has warped, and the deformation shows a tendency of increasing along the direction from the center to the edge of the bottom surface.This finding is consistent with the deformation characteristics of the actual printed parts, indicating that the results of the FE simulation are accurate.Tables 2, 3, and 4 show the residual stress and residual deformation of the part at different scanning speeds, different molding chamber temperatures, and different nozzle temperatures.Table 2 shows that the faster the scanning speed is, the smaller the residual stress of the part and the smaller the warpage of the final molded part.Table 3 shows that the higher the temperature of the molding chamber is, the smaller the residual stress of the part and the smaller the residual deformation.Table 4 shows that the higher the nozzle temperature is, the greater the residual stress and deformation of the part.The above analysis shows that increasing the scanning speed and the molding chamber temperature and reducing the nozzle temperature are beneficial to reducing the internal stress and the warping deformation of the part; however, changes in these process parameters should be within a range that enables the parts to be successfully formed.To verify the correctness of the FE simulation method proposed in this paper, an actual printing experiment was used to analyze the effects of the scanning speed, molding chamber temperature and nozzle temperature on the warpage of FFF parts.The equipment used in the printing experiment is an independently researched and developed FFF-3D printing experimental platform with an adjustable molding chamber temperature.The molding chamber temperature is adjustable below 110 °C and can be kept constant.Table 5 shows the basic parameters of this experimental platform.The experimental platform, shown in Fig. 14, integrates a printing mechanism inside an incubator.Due to the addition of the temperature adjustment function of the molding chamber, the internal part selection and structural design of the experimental platform take into account the temperature resistance performance.The experimental platform uses a delta structure and a 32-bit Smoothie firmware control system.The printing material used was a white ABS filament with a diameter of 1.75 mm and a molding temperature range of 180–250 °C.Three parameters, the scanning speed, molding chamber temperature and nozzle temperature, are used for testing, and each parameter has three levels.The factor levels of the three parameters are shown in Table 6.Orthogonal experimental design was used to arrange experiments according to L9 orthogonal tables, and 9 combinations of representative factors were selected to conduct the experiments.To facilitate the measurement of the warpage of the part, a rectangular parallelepiped model was used for the printing experiment in this paper.The model dimensions were 100 mm × 100 mm × 5 mm, and the warpage of the four corners of the part was measured after printing.Fig. 15 shows the nine molded prototypes.Tables 7, Table 8 and Fig. 
16 show the warpage values of the four corner points and the results after data processing.R indicates the range of warpage values of the molded parts at different levels.As shown in Table 8, according to the range, the order of the degrees of influence of the three factors on the warpage values of the four corner points can be judged.Considering the influence of the three factors on the warpage of the four corner points, factor B, i.e., the temperature of the molding chamber, has the greatest influence on the warpage value of the formed part.Increasing the molding chamber temperature can reduce the warpage and improve the molding accuracy.Fig. 16 shows that the level 3 of A is the optimal level of factor A. Similarly, we can see that the level 3 of B is the optimal level of factor B, and the level 1 of C is the optimal level of factor C. Hence, for the warping deformation of corner A, the optimal combination of the three factor levels is A3B3C1, i.e., a scanning speed of 50 mm/s, a molding chamber temperature of 80 °C, and a nozzle temperature of 180 °C.This group of optimal factor levels did not appear in the orthogonal test.To verify the correctness of the data analysis results, this set of parameters was used for printing experiments, and the warpage values of the four corner points were measured.The measured warpage values at points A, B, C, and D were 0.96 mm, 0.58 mm, 0.84 mm, and 0.68 mm, respectively.The experimental results show that the warping deformation value can be minimized by using the optimal combination of factor levels, and the correctness of the FE simulation results for the FFF process is also verified.In this paper, a voxelization modelling based FE simulation for FFF is developed.Based on this approach, FE simulations of the temperature field, stress field and displacement field during the FFF process are performed using the APDL and element birth and death technology.The influence of key process parameters on the temperature field is analyzed with the simulation.Meanwhile, an FFF experimental platform with an adjustable molding chamber temperature is established to verify the approach.The results show that among the three test parameters, the molding chamber temperature has the most significant effect on the warping deformation of the molded parts.The optimal combination of parameters for an FFF process with ABS under the analyzed conditions is a scanning speed of 50 mm/s, a molding chamber temperature of 80 °C, and a nozzle temperature of 180 °C.The experimental results also show that the FE simulation method of the FFF process proposed in this paper is feasible for complex models.The proposed method can also provide a reference for FE simulations of other additive technologies.The raw/processed data required to reproduce the findings in this paper will be made available on request.Yong Zhou: Conceptualization, Investigation, Formal analysis, Writing - original draft.Han Lu: Methodology, Validation, Resources, Writing - review & editing.Gongxian Wang: Conceptualization, Resources, Writing - review & editing, Project administration.Junfeng Wang: Validation, Writing - review & editing.Weidong Li: Validation, Writing - review & editing. | In this paper, a novel approach of voxelization modelling-based Finite Element (FE) simulation and process parameter optimization for Fused Filament Fabrication (FFF) is presented. In the approach, firstly, a general meshing method based on voxelization modelling and automatic voxel element sorting is developed. 
Then, FE-based simulation of the FFF process is conducted by combining the ANSYS Parametric Design Language (APDL) with the element birth and death technique. During the simulation, the influence of key process parameters on the temperature field, including scanning speed, molding chamber temperature and nozzle temperature, is analyzed in detail. Furthermore, an experimental platform with an adjustable molding chamber temperature for the FFF parts is established. Case studies for making Acrylonitrile Butadiene Styrene (ABS)-based parts were carried out to validate the approach. Results showed that among the process parameters, the molding chamber temperature had the most significant effect on the warping deformation of the FFF parts. The optimal parameters for the FFF process with ABS under the analyzed conditions were 50 mm/s for the scanning speed, 80 °C for the molding chamber temperature, and 180 °C for the nozzle temperature, respectively. |
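To make the voxel sorting step of the FFF study above more concrete, the sketch below orders voxel index triples along an orthogonal parallel scan path: lines run along x on one layer and along y on the next, lines within a layer are serpentined, and each new layer is started near the end point of the previous one, in the spirit of the distance-minimization measure described in the text. This is an illustrative reconstruction, not the authors' implementation; the function name and the representation of voxels as (x, y, z) index triples are assumptions.

```python
from collections import defaultdict

def sort_voxels_for_printing(voxels):
    """Order voxel index triples (x, y, z) along an orthogonal parallel scan path.

    Even layers are rastered with scan lines parallel to x, odd layers with
    lines parallel to y; lines are serpentined within a layer, and each new
    layer is started close to the end point of the previous one.
    """
    layers = defaultdict(list)
    for v in voxels:                      # group voxels by layer index z
        layers[v[2]].append(v)

    path = []
    for k, z in enumerate(sorted(layers)):
        # alternate the scan direction between consecutive layers
        primary, secondary = (1, 0) if k % 2 == 0 else (0, 1)
        lines = defaultdict(list)
        for v in layers[z]:               # group the layer's voxels into scan lines
            lines[v[primary]].append(v)

        layer_path = []
        for i, key in enumerate(sorted(lines)):
            line = sorted(lines[key], key=lambda v: v[secondary])
            if i % 2 == 1:                # serpentine: reverse every other line
                line.reverse()
            layer_path.extend(line)

        # start the new layer near the end point of the previous layer
        if path and layer_path:
            d_first = sum((a - b) ** 2 for a, b in zip(path[-1], layer_path[0]))
            d_last = sum((a - b) ** 2 for a, b in zip(path[-1], layer_path[-1]))
            if d_last < d_first:
                layer_path.reverse()
        path.extend(layer_path)
    return path
```

Each entry of the resulting ordered list would then be mapped to one hexahedral (SOLID70) element and activated in turn with the element birth and death procedure during the transient thermal analysis.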
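The range analysis used to rank the influence of the three process factors in the L9 orthogonal experiment can likewise be sketched as follows. The design matrix is the standard L9(3^4) assignment of three factors to three levels; the warpage values here are placeholders, not the measurements from Tables 7 and 8.

```python
import numpy as np

# Standard L9(3^4) assignment of three factors (A, B, C) to three levels each:
# A = scanning speed, B = molding chamber temperature, C = nozzle temperature.
design = [(1, 1, 1), (1, 2, 2), (1, 3, 3),
          (2, 1, 2), (2, 2, 3), (2, 3, 1),
          (3, 1, 3), (3, 2, 1), (3, 3, 2)]
warpage = np.array([1.9, 1.4, 1.6, 1.8, 1.2, 1.1, 1.5, 0.9, 1.3])  # placeholder values

for f, name in enumerate("ABC"):
    # mean warpage measured at each of the three levels of this factor
    means = [warpage[[row[f] == lvl for row in design]].mean() for lvl in (1, 2, 3)]
    R = max(means) - min(means)           # range: a larger R means a stronger influence
    best = means.index(min(means)) + 1    # level giving the smallest mean warpage
    print(f"factor {name}: level means {np.round(means, 3)}, R = {R:.3f}, best level = {best}")
```

The factor with the largest range R is judged to have the strongest influence on warpage, which in the experiments above was the molding chamber temperature.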
568 | Deep Learning Neural Networks Highly Predict Very Early Onset of Pluripotent Stem Cell Differentiation | Major advances in artificial intelligence have occurred in recent years.New hardware with significantly increased calculus capacity and new software for easier application of complex algorithms allow now to apply powerful predictions in many fields.Neural networks have particularly benefited from this progress.With proper design, these algorithms are highly efficient for machine learning classification tasks.The term deep learning has been coined for these neural networks with extremely high amount of calculations.DL has proved to be particularly useful in computer vision, where it allows image recognition by learning visual patterns through the use of the so-called convolutional neural networks.Roughly, a CNN processes all numbers composing a digital image and identifies the relationship between them.These relations are different according to the different objects found in the image, and in particular at the edges of these objects.The process of finding the optimal weights that makes these predictions is a key step in CNN training.This task is performed through the application of very large amounts of weighted regressions, which can take very high computational requirements, a long time, and a significant number of images.However, once trained, applying the neural network training to get predictions is relatively fast and allows almost instant image recognition and classification.For example, powerful CNN training now allows tasks as diverse as autonomous car driving and face recognition in live images.The expansion of CNNs to biomedicine and cell biology is foreseen in the near future.Several recent reports highlight the possible application of DL in cell and molecular biology.Fluorescent staining prediction, bacterial resistance, or super-resolution microscopy improvement are some of the successful applications that have been described.Based on what has been developed so far using deep learning, the experimental assays where visual pattern recognition is necessary may soon be substantially transformed.One of the areas that could benefit from the advances in DL is the field of mammalian pluripotent stem cells.These cells have the remarkable capability to differentiate to all the cell types of the organism, which has made them gain a lot of attention in areas such as regenerative medicine, disease modeling, drug testing and embryonic development research.There are two main types of PSCs: embryonic stem cells, which are derived from the inner cell mass of peri-implantation blastocysts, and induced PSCs, which are similar to ESCs, but originate through cell reprogramming of adult terminally differentiated cells by overexpressing core pluripotency transcription factors.PSC differentiation is a highly dynamic process in which epigenetic, transcriptional, and metabolic changes eventually lead to new cell identities.These changes occur within hours to days, and even months, and are generally identified by measuring gene expression changes and protein markers.These assays are time consuming and expensive, and normally require cell fixation or lysis, thus limiting their uses as quality-control evaluations necessary for direct application of these cells to the clinic.In addition to these molecular changes, PSC differentiation is followed by an important morphological transformation, in which the highly compact PSCs colonies give rise to more loosely organized cell structures.Although 
these morphological changes can be quite evident to the trained human eye, they are inherently subjective and thus are not used as a standard and quantitative measurement of cell differentiation. In this paper we test the hypothesis that CNNs are able to accurately predict the early onset of PSC differentiation in plain images obtained from transmitted light microscopy. For this purpose, we used a model in which mouse ESCs (mESCs) maintained in the ground state of pluripotency were differentiated to epiblast-like cells (EpiLCs), which are in the formative state of pluripotency. This experimental system, which recapitulates early events that occur during embryonic development, is very efficient and is completed in only 24–48 h. By applying CNN training at different times from the onset of differentiation, we show that the trained CNN can identify differentiating cells only minutes after the differentiation stimulus. We show that CNNs can also be trained to distinguish mESCs in the ground state of pluripotency from mESCs maintained in serum and leukemia inhibitory factor (LIF), a culture condition routinely used to maintain mESCs in the naive undifferentiated state but that displays higher cell heterogeneity and increased expression of differentiation markers. Furthermore, CNNs were also able to accurately classify undifferentiated human iPSCs from early differentiating mesodermal cells. We believe that accurate cellular morphology recognition in a simple microscopy setup may have a significant impact on how cell assays are performed in the near future. Early after the onset of differentiation, mESCs rapidly changed their morphology. By 24 h, they had acquired a substantial volume of cytoplasm, some cells had detached from each other, and colonies had spread with a spindle-shaped form. We initially took images at 0, 2, 6, 12, and 24 h and trained a CNN based on the ResNet50 architecture, a well-known CNN architecture with proven efficacy. Approximately 800 images per group were provided to the network for each condition, plus 200 per group for testing during training, and 50 images per group for final, independent validation. As expected, at 0 h the training accuracy was compatible with a random state of prediction between the two states, although some training effect is seen as the epoch cycles progress. Thereafter, the trained network was able to predict with a high level of accuracy in both training and validation samples. Independent test accuracy was 1 at 6, 12, and 24 h, and 0.97 at 2 h after the onset of differentiation. After getting this encouraging level of accuracy, we collected more measurements to improve the network's performance and took a new set of images at 1 and 2 h of differentiation. Also, we had previously observed that, with the initial cell density, there were many image slices with very few cell colonies, and hence we hypothesized the CNN may not extract enough features for proper training, in particular at early time points. Therefore, we increased the initial cell seeding number up to 60 × 10 cells/cm. We also increased the number of images fed to the CNN to approximately 1,000 per group. We also assessed variants in the architecture of the CNN. We tried different numbers of hidden layers in ResNet, and another deep neural network with a different architecture. Finally, we compared different approaches to preprocess images in order to increase the model performance, a process known as image augmentation. Figure 2 shows the results at 1
h after the onset of differentiation. We achieved a very high training accuracy with all trained networks. We noticed that increasing image preprocessing did not necessarily increase accuracy. Also, increasing the depth of the network or its complexity may not improve results. In our stem cell model, the best performance was achieved with ResNet50 with no or simple image augmentation and with DenseNet with simple augmentation. Of note, DenseNet with simple augmentation and without augmentation had the lowest validation loss, a measure of training performance. We then used the successful networks ResNet50-SA and DenseNet-SA to train on images taken at 2 h from the onset of differentiation, and accuracy was again 100%. Importantly, we found similar results when using a different mESC cell line, the E14-derived Ainv15 cells. At 1 h, training reached approximately 85% accuracy, peaking at more than 99% after 8 h of differentiation to EpiLCs. The slight difference in accuracy at 1 h with respect to the 46C mESCs might be due to the fact that Ainv15 cells grow more loosely attached to the plate, forming tridimensional colonies that thus take longer to change in morphology even when visually inspected. These results further validate the applicability of CNN classification to early-differentiating mESCs, and also highlight that variability between different cell lines can be efficiently quantified. All tested CNNs are very deep in terms of number of layers, with a significant number of hidden layers and a huge calculation burden. To test whether another architecture with fewer layers was able to train to a similar extent, we ran the same analysis with VGG16, a shallower network, adding simple image augmentation. However, the training of this neural network was unsuccessful with our image set, as it became stuck in futile training. Eventually, if left running, VGG16 might train on the images, but it would take much more time and resources. The learning rate (LR) is critical for training neural networks. The LR adjusts, at every training cycle, the rate at which the network weights are modified in order to find the minimum and best loss. Several algorithms that adjust LR decay during training have been developed. In all previous analyses we used the Adam algorithm, but several others have been proposed. We compared them in a limited training of 40 epochs using ResNet50. We found that Adam, Adamax, and Adagrad were equally good, as opposed to Nadam and RMSprop, which both trained at a lower speed. We were unable to train the neural network using stochastic gradient descent, although we cannot rule out that, with proper adjustment, this algorithm would eventually train on the image set. Figure S3B shows how the LR adjusts itself as epochs progress to the end of training. By the end of training, the LR is a small fraction of the initial one, allowing the minimal loss to be found. Several options can then be used to find the proper LR to train the model. DL neural networks require a significant amount of information to identify features, and thus many images are usually needed. How many images are indeed needed for optimal training is usually unknown and hard to predict, and may change significantly between experiments depending on the sort of images. For our 1-h training of mESCs, we used 2,120 images. We then successively trained ResNet50-SA with fewer images to identify the minimal number needed to train. The results show that, as the number of images decreased, all parameters of training efficiency also decreased. Underfitting, represented by a much lower
validation accuracy than training accuracy, was observed when 1,400 images or fewer were used. A progressive improvement is seen as the number of images is increased up to the full number available. Validation accuracy and loss reach the highest level with the full set of images. Of note, independent tests in these analyses showed accuracy values over 0.9 in all trainings, except those with very low image numbers. These analyses suggest that a careful decision should be made when choosing the number of images needed, as a lower number can produce acceptable results that are nevertheless underfitted and short of what training could achieve. Training a neural network involves a huge amount of calculation in a series of so-called "hidden" layers. The intermediate calculation values in the hidden layers can be obtained and used to build up intermediate images. It is then possible to plot what the CNN is actually doing by translating the activation layers into pixels and, hence, to get an insight into how the CNN sees an image and how it performs its classification task. ResNet50-SA has a total of 168 layers, with 49 of them containing activations. Figure 3 shows the representation of the activations in some of the hidden layers of images trained with ResNet50-SA. At the top of the panel, original images from both 2i + LIF and differentiating cells are seen. The dimensions of these original images are as given to the CNN: 480 rows by 640 columns by 3 layers. The last dimension corresponds to the red, green, and blue channels. Of note, we fed the CNN with images in greyscale, although with the three color layers. When CNNs were trained in greyscale, no differences in accuracy or loss were seen. This original size is immediately reduced to 240 × 320 at the entry of the neural network. As the CNN deepens, the activation layers are progressively smaller in the first two dimensions and bigger in the last one. By the end of the network, the final activation layer has a small size but high depth. The weights of this last activation layer are fed to a binary sigmoid function for prediction. Hence, 80 pixels in 2,048 channels by 256 possible values gives almost 42 × 10⁶ possible pixel variations for each image. The repetitive relation of these values in all images fed to the CNN provides the patterns used for image identification. DenseNet has a different architecture, with 140 total layers and 39 activation layers. An example of some activation layers in DenseNet is shown in Figure S5. In this network, the final layers are bigger with lower depth. Depth expands and then contracts in DenseNet, as opposed to ResNet50. Representations of 2i + LIF cells and differentiating cells show that activations change with different cell morphologies; in 2i + LIF they are rounder than in differentiating cells. Given the high accuracy of predictions made in such a short time after the onset of differentiation, we next decided to evaluate what biological changes could be detected at these early time points. The activation of the MEK/ERK signaling pathway is one of the key events that leads to mESC differentiation. Thus, we analyzed whether this pathway was already activated after 1 h of differentiation by assessing the levels of phospho-ERK. Interestingly, most of the early-differentiating cells already displayed an increased level of nuclear and cytoplasmic phospho-ERK. On the contrary, for mESCs in 2i + LIF, only cells in the M phase of the cell cycle showed a phospho-ERK signal, as reported previously. These results corroborate that differentiation signals are rapidly
transduced into cells.We next wondered whether the activation of the differentiation signals led to the modification of the transcriptional profile of the cells at this short time.We assessed the expression of several naive and primed pluripotency markers at 1, 2, 24, and 48 h of differentiation.As expected, the naive pluripotency markers Klf4, Nanog, Esrrb, and Tbx3 were significantly downregulated at 24 and 48 h, while the primed markers FGF5, Oct6, Dnmt3A, and Otx2 were upregulated.Interestingly, we found that during the first 2 h of differentiation there were minor but significant changes in the expression of the naive markers Klf4 and Nanog, as well as in the differentiation marker Oct6.Consistent with our results, KLF4 has recently been shown to be phosphorylated by phospho-ERK, which induces its exit from the nucleus affecting its own transcription very early in the differentiation process.The behavior of Oct6 is also supported by our previous work, where we showed that Oct6 is rapidly induced during exit from ground state pluripotency in another mESCs cell line.The slight but consistent transitory upregulation of Nanog during the first hour is intriguing, and we believe this might be a consequence of a re-organization of regulatory elements in its promoter region, although further research needs to be done.Overall, these results indicate that within this short frame of time mESCs begin to modify their transcriptional profile.It is thus evident that there are several molecular signatures already present at 1 h from the onset of the differentiation stimuli.However, due to the nature of the images used to train the CNN, the morphological transformation of cell colonies is the only parameter that the neural network can detect and use as input for making predictions.As we have previously mentioned, it has been described that CNNs specifically recognize shape borders of the object in the images.To further study these changes at the molecular level, we analyzed the organization of the actin cytoskeleton by staining cells with phalloidin.Interestingly, fluorescent images clearly show that differentiating cells rapidly re-organize the distribution of the actin filaments, with many cells displaying minor spindles protruding from their surface.To get more insight into the morphological differences between the two conditions under study, we finally analyzed the morphological properties of hundreds of colonies in the undifferentiated state or subjected to 1 h of differentiation.We focused on parameters such as colony area, perimeter, circularity, and solidity, the latter being a measurement of how “ruffled” the border of the object is.Compatible with our visual inspection of the images, we quantitatively show that differentiating colonies were less circular, with more ruffled borders and increased perimeter size.A small non-significant increase in colony area was also observed.We thus believe that these features, along with others that may also take into account the pixel intensities within the colonies, may be important for the CNN to be able to display such high predictive power.Of note, all these morphological changes are relatively small, as shown in the density plots of Figure 4C.We then analyzed the performance of the networks in independent biological samples.Once trained, a CNN can be easily used for prediction and run on a simple central processing unit, without the computational requirements of a graphic processor unit.We independently tested two of the successful networks in three 
more mESC differentiation experiments, completely unrelated from the previous ones.In 1,116 images, CNN were highly accurate.Overall, ResNet50-SA wrongly identified 4 images of 560 as differentiating, when in fact they were in the 2i + LIF group, and one image as pluripotent when in fact it was differentiating.When DenseNet-SA was used, the misidentification was only 2 of 560 in the 2i + LIF group.These independent results confirmed the high accuracy that both models reached in identifying morphological cell changes at a very early stage of differentiation.Table 1 shows the classification report of the prediction of the three replicates.All independent tests showed high precision and recall, with no significant differences between models.When the CNN classifies each image it outputs two probabilities: one for cells in 2i + LIF, and the other for differentiating EpiLCs.Both probabilities sum up to 1, and the call will be for the higher one.To get an insight of the individual probabilities within all the images in the training experiment, we plotted individual probabilities for both networks.Both CNNs can easily identify differentiating cells, with a very high probability for each image.All of them, except for a few, are extremely close to 1.However, probabilities for identifying 2i + LIF cells were less high in both CNNs.We think that this is because CNN performs image recognition by identifying object borders.The morphological changes of differentiating cells, with protrusions and spindles, may offer an advantage in this case.Finally, the 2i + LIF prediction was significantly more precise with ResNet50-SA, based on higher individual probabilities assigned for each image.A possible inference from these results is that ResNet50-SA is able to extract more features than DenseNet-SA.We then wondered if the neural network would be able to correctly classify differentiation at earlier time points.We calculated the classification accuracy on images taken every 10 min from the onset of differentiation, but without re-training the network, i.e., using the 1-h training.We found that the accuracy in these earlier points was still high, reaching a value higher than 0.8 at 20 min.As expected, at earlier time points the CNN tended to classify differentiating cells as being in the 2i + LIF category possibly because colonies did not yet acquire morphological differences.For this reason, the recall of the classification for differentiating images and the precision of classification of 2i + LIF images, respectively, increased with time.Video S1 shows the progressive flattening and morphological changes in the cell colonies.These changes are observed as soon as 10 to 20 min from the differentiation stimulus.Of note, we cannot rule out that accuracy would be higher if prediction were based on a neural network trained at earlier time points.We previously showed that CNNs can be efficiently trained to identify early stages of mESCs differentiation toward EpiLCs.We next decided to explore the applicability of DL into the analysis of morphological differences of PSCs in other experimental setups.First, we assessed whether it was possible to train a CNN to classify mESCs cultured in different conditions.As we previously mentioned, mESCs can be maintained in the ground state of pluripotency when cultured in defined media in the presence of LIF and inhibitors of the MEK/Erk and GSK3 differentiation pathways.Up until the development of these defined conditions, mESCs were routinely cultured in FBS-containing medium in 
the presence of LIF alone, where they remain in a naive pluripotent state but display high population heterogeneity and increased expression of differentiation markers, among other differences. Interestingly, mESCs in these two naive-supporting conditions also display morphological differences. We thus assessed whether it was possible to train a CNN to identify the culture condition, and found that the trained CNN reached a very high level of accuracy in predicting which medium was used. Finally, we decided to analyze whether a CNN was capable of identifying a completely different type of PSC and an associated differentiated cell type. As we previously mentioned, terminally differentiated cells can be reprogrammed into iPSCs, which holds great promise in the field of regenerative medicine. We thus decided to differentiate a previously obtained hiPSC line derived in our lab to an early mesodermal progenitor. To this end, we cultured the hiPSCs in the presence of Activin A, BMP4, and vascular endothelial growth factor (VEGF) for 24 h, and trained a CNN to classify undifferentiated cells and early mesodermal progenitors. Again, training images using ResNet50 resulted in a very high level of accuracy of classification of independent images. All these data confirm the high capability of CNNs to identify minor, early changes in stem cell differentiation irrespective of the protocol or cell used. In this paper we show that current deep CNNs can be trained with a relatively large series of images taken in a simple transmitted light microscope and then correctly classify images with minor morphological changes in independent, new samples. Close to 100% accuracy was reached in most cases. The neural networks proved to be very sensitive to the morphological changes: in a model of mouse PSCs, the CNN detected morphological changes in most of the images only 20 to 40 min after the onset of differentiation. We also demonstrated its efficacy in several settings of pluripotent stem cell culture, including EpiLC differentiation in other mouse PSCs and early mesoderm differentiation of hiPSCs. Time-lapse imaging shows that these changes are readily observed by the human eye. However, the changes are minimal and entail subtle variations in the cell surface. At these early time points and with the proper cell imaging settings, CNNs were able to at least emulate human visual recognition. We did not compare these results with human prediction. We think this would be misleading, since humans are not necessarily trained to detect such minimal changes, and if they were, we believe that they would or should recognize morphological changes as effectively as the CNN. There are other advantages of a neural network applied to cell models, such as continuous, automatic, real-time detection with high precision. Altogether, such powerful systems will soon surpass human capacity. The simplicity of the application of DL in this work should be emphasized. We used plain, phase contrast images taken in a transmitted light microscope with a 10× objective. There was no need to process the images in any form or to apply complex protocols for differentiation. Moreover, detection of the morphological changes was performed within a very short time from the beginning of the assay. Training a network has also become simpler with the development of high-level software frameworks, such as Keras or PyTorch. Finally, the use of a GPU allows many images with good definition to be processed in a relatively short time. Without GPU support, in fact, this training would
not have been possible in a sensible amount of time. All these factors make CNNs a field on which many image applications will base their analyses in the coming years. We believe that several conditions allowed us to reach such a high accuracy with the trained neural network. First, cell seeding at a high density was important to provide enough information to the algorithm. We cannot rule out, however, that with more training and a different setup a CNN would reach a high accuracy with just one cell colony. Second, the size of the starting images was 480 × 640 pixels, increasing the calculation burden but providing enough detail to the CNN. Third, we trained very deep CNNs, with dozens of hidden layers. A shallower network proved unable to train on our set of images. Fourth, we made use of image preprocessing, which artificially increases the number of images provided to the CNNs. We found, however, that too much image preprocessing was detrimental to the accuracy and loss. Hence, the most effective trainings were reached when image preprocessing was limited to flipping the image in both directions. However, we also divided each original image into four, which may be seen as zooming into the four quadrants. We believe that subtle image augmentation, such as blurring, contrast, or brightness enhancements, could eventually improve performance in other settings, but more technical work is needed to confirm this. DL predictions applied to a live imaging setup will be one of the most exciting applications in the next few years. Therefore, we were interested in how the trained network would work on images taken at earlier times. We found that the high accuracy starts approximately 30 min from the onset of differentiation, although a moderate accuracy is already seen at 20 min. This experiment anticipates the future use of neural networks for real-time prediction in cell culture experiments. Generalization to each specific context will be critical for the applicability of DL techniques. A few papers are now reporting the use of DL training in the field of cell biology. Some papers processing highly complex images have been published, and DL has been shown to provide a great advantage in this setting. Hay and Parthasarathy used a self-developed CNN to identify bacteria in 3D microscope images, reaching approximately 90% accuracy. Pärnamaa and Parts also used a shallow network to identify subcellular structures, with an accuracy of approximately 90%. Eulenberg et al. identified the cell-cycle phases in Jurkat cells with high accuracy. However, not many papers have tried to classify cells based on simple images taken in a transmission light microscope. Recently, Kusumoto et al.
used DL to classify PSCs versus PSC-derived endothelial cells after 6 days of differentiation. These authors used two shallow CNNs, LeNet and AlexNet. These networks yielded between 80% and 90% accuracy in positive identification of the cell population. Although encouraging, these results were far from optimal. The use of these shallow networks may be appealing because of lower computational needs, but they proved not to be accurate compared with the deeper networks used in our paper. Even though our results are at the top of the possible accuracy, some caveats should be mentioned. First, we applied these CNNs to a limited set of stem cell differentiation assays. To what extent these results translate to other settings remains to be established. We demonstrated the internal validity of the trained network by applying it multiple times, but an external validation remains to be assessed. However, we think that the capability of CNNs is such that they should be able to classify cell images in many different contexts. Second, the field is growing fast. There are many other CNN architectures and strategies that deserve attention. We did not try any of them as we got excellent results with our strategies. However, it may be possible to apply them and achieve a better performance, such as reducing training time or reducing the number of images needed to train. Finally, although effective, we kept our work simple. We only compared two groups using a 10× objective and we did not use any fluorescence labeling. Any modification of our experimental setting should be extensively tested, but we believe that the strength of the application of neural networks for image recognition in this setting is proven. In conclusion, we trained a CNN to distinguish PSCs from very early differentiating PSCs. The trained network allowed a very high rate of prediction, close to 1. Moreover, differentiation may be detected as early as 20 min after its onset. It is hard to think of any other cell assay that can confirm differentiation in such a short time with such precision and at such a low cost. We believe that DL and convolutional neural networks will change how cell assays are performed in the near future. Mouse ESCs were grown in defined conditions that support the ground state of pluripotency. In brief, 46C mESCs were grown in the chemically defined medium N2B27 supplemented with 1,000 U/mL human LIF, 1 μM PD0325901, and 3 μM CHIR99021, hereafter called "2i + LIF medium". The N2B27 medium formulation is described in detail elsewhere. Cells were grown at 37°C in a 5% CO2 incubator on 0.1% gelatin-coated dishes and were passaged every 2–3 days using TrypLE. To induce EpiLC differentiation, mESCs were plated the day before in 2i + LIF medium at a density of 30 × 10 or 60 × 10 cells/cm. The following day, cells were washed two times with 1× PBS and differentiated in N2B27 medium containing 1% KSR, 12 ng/mL basic fibroblast growth factor, and 20 ng/mL Activin, hereafter called "EpiLCs medium". For control cells, fresh 2i + LIF medium was added. To analyze the morphology of cells in the presence of FBS and LIF, cells were grown in DMEM supplemented with 15% FBS, 100 mM minimum essential medium nonessential amino acids, 0.5 mM 2-mercaptoethanol, and 2 mM GlutaMax, with the addition of 1,000 U/mL of LIF, all reagents purchased from Gibco. Cells were seeded at 60 × 10 cells/cm. The 46C cell line used throughout this work was a kind gift of Austin Smith. E14-derived Ainv15 mESCs used in Figure S2, purchased from ATCC, were also seeded at 60 × 10
cells/cm. Human induced PSCs were generated previously in our lab. We regularly grow them in E8-Flex on Geltrex- or Vitronectin-coated plates. For early mesoderm differentiation, we replaced E8-Flex medium with StemPro-34 medium supplemented with BMP4, Activin A, and VEGF for 24 h. Random images were taken at consecutive hours post differentiation in an EVOS microscope. Cells were plated at the indicated cell densities in 12-well plates, and cells were seeded approximately 24 h before imaging. We used a 10× objective with light transmission. Light intensity was set at 40%. Image files were saved in jpg format. The standard output of the EVOS images is 960 × 1,280 pixels in three channels. Each picture was then sliced to get images of 480 × 640 pixels by applying the Python script ImageSlicer. These dimensions were downsized to 240 × 320 at the time of training. For the images taken in the 24-h experiment, we took images from three biological replicates with two identical wells in each condition, running control and differentiation in parallel. The final number of images was between 300 and 400. For the experiments with 1 and 2 h of differentiation, 4 biological replicates were done and between 70 and 100 images were taken from each condition. We then fed the network with 2,134 images for training and 400 for validation. One hundred images were reserved for independent prediction after training. Independent replicates were run and prepared in the same way. For immunofluorescence experiments, cells were grown on Lab-Tek 8-well chamber slides previously coated for 30 min with Geltrex. Cells were fixed for 20 min with 4% paraformaldehyde, permeabilized with 0.1% Triton X-100 in PBS and blocked with 3% normal donkey serum in PBST. Primary antibody against phospho-p44/42 MAPK was added in block solution, incubated at 4°C overnight, and then washed three times in PBST for 30 min. Texas Red-X Phalloidin, secondary antibody and DAPI were incubated in block solution at room temperature for 30 min. Samples were washed as before, mounted, and imaged on an EVOS fluorescence microscope. Gene expression was analyzed as described previously. In brief, total RNA was extracted with TRI Reagent following the manufacturer's instructions, treated with DNAse, and reverse transcribed using MMLV reverse transcriptase. Quantitative PCR was performed in a StepOne Real-Time PCR system. Gene expression was normalized to the geometric mean of the GAPDH and PGK1 housekeeping genes; data were then log transformed and relativized to the average of the biological replicates for the 2i + LIF condition. Primer sequences were reported previously. Statistical significance for qPCR data was analyzed by randomized block design ANOVA. Comparisons between means were assessed using Tukey's test. Neural network trainings were performed on a p2.xlarge instance from Amazon Web Services. This instance provides cloud computing with 4 CPUs, 61 GB of RAM, and one NVIDIA K80 GPU. Computing was done in a preconfigured environment for DL based on Ubuntu. Training was performed in Keras, with TensorFlow as the backend. A code example is available on GitHub. Detailed information about CNN training can be found in the Supplemental Experimental Procedures. Morphological analyses of cell colonies were performed using FIJI/ImageJ and custom R scripts. For more information, see the Supplemental Experimental Procedures. A.W., A.L.G., and S.G.M. conceived the experiments. A.W., A.L.G., A.M.B., G.N., A.S., and N.L.S.V. performed the cell experiments. A.W.
performed the cell morphology analysis.A.W., M.A.S., and A.M.M. performed the gene expression experiments.A.W., L.N.M., C.L., G.L.S., and A.S.G. discussed the experiments and manuscript.G.E.S. and S.G.M. provided the funding for this paper.S.G.M. performed deep learning training and wrote the manuscript. | Deep learning is a significant step forward for developing autonomous tasks. One of its branches, computer vision, allows image recognition with high accuracy thanks to the use of convolutional neural networks (CNNs). Our goal was to train a CNN with transmitted light microscopy images to distinguish pluripotent stem cells from early differentiating cells. We induced differentiation of mouse embryonic stem cells to epiblast-like cells and took images at several time points from the initial stimulus. We found that the networks can be trained to recognize undifferentiated cells from differentiating cells with an accuracy higher than 99%. Successful prediction started just 20 min after the onset of differentiation. Furthermore, CNNs displayed great performance in several similar pluripotent stem cell (PSC) settings, including mesoderm differentiation in human induced PSCs. Accurate cellular morphology recognition in a simple microscopic set up may have a significant impact on how cell assays are performed in the near future. In this article, Miriuka and colleagues show that deep learning convolutional neural networks can be trained to accurately classify light microscopy images of pluripotent stem cells from those of early differentiating cells, only minutes after the differentiation stimulus. These algorithms thus provide novel tools to quantitatively characterize subtle changes in cell morphology. |
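As a concrete illustration of the image preparation described in the deep learning study above, the following sketch cuts each 960 × 1,280 EVOS image into the four 480 × 640 quadrant tiles used for training. It is a minimal stand-in for the authors' ImageSlicer script, which is not reproduced here; the directory layout and file naming are assumptions.

```python
from pathlib import Path
from PIL import Image

def slice_into_quadrants(src_dir, dst_dir):
    """Cut each 960 x 1,280 phase-contrast image into four 480 x 640 tiles."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).glob("*.jpg")):
        img = Image.open(path)
        w, h = img.size                   # expected (1280, 960) for EVOS output
        corners = [(0, 0), (w // 2, 0), (0, h // 2), (w // 2, h // 2)]
        for i, (left, top) in enumerate(corners):
            tile = img.crop((left, top, left + w // 2, top + h // 2))
            tile.save(dst / f"{path.stem}_q{i}.jpg")
```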
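The authors note that a code example is available on their GitHub; the sketch below is an independent, minimal Keras/TensorFlow pipeline of the kind described in the study above: a ResNet50 backbone on 240 × 320 inputs, "simple augmentation" limited to horizontal and vertical flips, a single sigmoid output for the binary 2i + LIF versus differentiating call, and the Adam optimizer. The directory paths, batch size, epoch count and random (non-pretrained) weights are illustrative assumptions and may differ from the original training.

```python
from tensorflow import keras
from tensorflow.keras import layers

IMG_SHAPE = (240, 320, 3)   # images are downsized to 240 x 320 at the network input

# "simple augmentation": flipping the images in both directions only
train_gen = keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255, horizontal_flip=True, vertical_flip=True,
).flow_from_directory("data/train", target_size=IMG_SHAPE[:2],
                      batch_size=16, class_mode="binary")
val_gen = keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255,
).flow_from_directory("data/val", target_size=IMG_SHAPE[:2],
                      batch_size=16, class_mode="binary")

backbone = keras.applications.ResNet50(include_top=False, weights=None,
                                       input_shape=IMG_SHAPE, pooling="avg")
model = keras.Sequential([
    backbone,
    layers.Dense(1, activation="sigmoid"),   # binary call: 2i + LIF vs. differentiating
])
model.compile(optimizer=keras.optimizers.Adam(),   # Adam, Adamax and Adagrad trained equally well
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_gen, validation_data=val_gen, epochs=40)
```

A model trained in this way can then run inference on a CPU, consistent with the observation above that prediction, unlike training, does not require a GPU.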
569 | Homoarsenocholine – A novel arsenic compound detected for the first time in nature | Macrofungi are well known for their ability to accumulate enormous amounts of various elements, depending on the fungal species.One of these elements is arsenic, where more than 1000 mg kg−1 dry mass can be taken up by certain species .In addition to this, macrofungi are known to be able to contain a remarkable variety of organoarsenicals.Terrestrial organisms usually only contain inorganic arsenic, methylarsonic acid and/or dimethylarsinic acid , but in macrofungi, arsenic species typically attributed to the marine environment can be found as well.The most prominent is arsenobetaine, which is, like iAs and DMA, the major arsenic compound in many macrofungi .Trimethylarsine oxide, arsenocholine, the tetramethylarsonium ion and arsenosugars have been detected in macrofungi as well, but usually only at low or trace concentrations .Up to now, it is unclear why the arsenic speciation in macrofungi can vary so much between different species.It is also unknown if the macrofungi are metabolizing arsenic to the different compounds themselves, if it is induced by microorganisms or if they are just accumulating it from the surrounding environment.In vitro studies by Nearing et al. have shown that AB is not present during the vegetative life stage of Agaricus spp., but can be found in all parts of the fungi during fruit-body formation, including the mycelium .Another important, yet unanswered question is, why terrestrial macrofungi contain these various arsenic species.Concerning AB, it has been speculated that it might serve as an osmolyte and help maintaining the structure of the fruit-bodies .In a recent publication it has been shown that AB can protect against osmotic and temperature-induced stress, similar to its nitrogen-analogue, glycine betaine .One unusually looking group of terrestrial macrofungi are the so-called clavarioid fungi of the genus Ramaria.Some species, like Ramaria flava, are edible, but there are also Ramaria fungi that can be poisonous to animals .Until now, total arsenic concentrations have been investigated in very few individual samples and range from 0.2 to 11 mg As kg−1 dm .Only one sample of Ramaria pallida has been investigated for its arsenic speciation so far .The main arsenic species was AB, followed by 13% AC and small amounts of DMA, MA and iAs.In our study, we aimed to look into the arsenic speciation of these bizarre mushrooms to broaden the current knowledge and understanding of arsenic speciation in the environment.We investigated the arsenic speciation of six collections of the genus Ramaria.Three samples were collected and identified by J. Borovička, and three samples were collected and identified by W. 
Goessler in Austria in 2017. In order to characterize the collections, we performed ITS rDNA sequencing; the sequences were submitted to the GenBank database under the accession numbers MH366531–MH366536. For storage, the samples were freeze-dried. Sample preparation and determination of the total arsenic concentration as well as of the most common water-soluble arsenic species in aqueous extracts is described in detail elsewhere. Briefly, the freeze-dried fungal samples were digested with nitric acid and then investigated with inductively coupled plasma triple quadrupole mass spectrometry for the determination of total arsenic concentrations. The Standard Reference Materials® 1573a and SRM® 1640a were prepared and measured together with the samples for quality control. The results were in good accordance with the certificates. For speciation analysis, dried fungal samples were extracted with ultrapure water and then investigated with high performance liquid chromatography coupled to ICPQQQMS. Anion-exchange and cation-exchange chromatography were used to detect and quantify arsenate, MA, DMA, AB, TMAO, AC and TETRA. A Q-Exactive Hybrid Quadrupole-Orbitrap MS was used for high-resolution electrospray ionization mass spectrometry measurements. It was coupled to an HPLC with an LC cation-exchange column and 30 mM ammonium formate, pH 2.3, and 8% methanol as mobile phase. The flow rate of 1.5 mL min−1 was split with a T-piece after the column to reduce the input to the MS. The total arsenic concentrations in the six investigated samples ranged from 1.7 to 61 mg kg−1 dm, with a median of 18 mg kg−1 dm. Extraction with water resulted in an extraction efficiency of 90 ± 10% and a column recovery of 93 ± 5%. The main arsenic species in the extracts was unambiguously AB, accounting for 84 ± 9% of the extracted arsenic. We also detected small amounts of As, MA, DMA, TMAO, AC, TETRA, trimethylarsoniopropanate and dimethylarsinoylacetate in all six samples. Their identity was confirmed with spiking experiments and co-chromatography. The most important results are given in Table 1. Concentrations of all detected arsenic species can be found in Appendix A, Table S4. TMAP and DMAA are known compounds from the marine environment, and DMAA has been identified as a urinary metabolite of arsenosugars, but they have never been found in natural terrestrial samples before. Further, we found several unassigned peaks in the anion- and also the cation-exchange chromatograms. With spiking experiments, we excluded dimethylarsinoylethanol, dimethylarsinoylpropionate, dimethylarsinoylbutanate and the glycerol-, phosphate-, sulfate- and sulfonate-arseno-riboses as possible candidates. Oxidation experiments proved that no known thio-arsenic compounds were present. One of the detected unknown compounds attracted our attention, because it eluted from the cation-exchange column very late, even after the permanent cation TETRA. Thus, UNK A was isolated by injecting an aqueous fungal extract multiple times onto the cation-exchange column and collecting the respective fractions. The mobile phase was removed by freeze-drying, and the residue was dissolved in a small amount of ultrapure water. The presence and concentration of UNK A was controlled with HPLC-ICPMS. Next, the isolate was subjected to HPLC single quadrupole ES-MS to get an idea of the molecular mass of the compound. At the elution time of UNK A, we found a signal with m/z 179. With this information, we started the investigation of UNK A with HR ES-MS. We were able to detect a molecule
with an exact m/z of 179.0411 and a sum formula of C6H16OAs. Fragmentation experiments revealed characteristic fragments of m/z 161, 121, 105 and 59, as shown in Fig. 2. The molecular mass of 179 and the corresponding fragments have already been reported by McSheehy et al. There, the authors subjected a solution of inorganic arsenic and acetic acid to UV irradiation, and then investigated the solutions with ES-MS. They found several products, including a molecule with m/z 179. We agree with them that m/z 161 represents a water loss, m/z 121 is protonated Me3As, and m/z 105 is Me2As. m/z 59 is the protonated allyl alcohol corresponding to the loss of Me3As. Thus, we identified UNK A as the trimethylarsonium ion, a homologue of AC, which we called homoarsenocholine (AC2). For verification, AC2 bromide was prepared according to an updated literature procedure. Its purity and structure were confirmed with NMR experiments. Further, a solution of the pure compound was subjected to cation-exchange HPLC-ICPMS and HR ES-MS. The results were in accordance with our findings for UNK A. Successful spiking of UNK A with AC2 on HPLC-ICPMS was our final confirmation that UNK A is indeed AC2. This species has never been reported in a natural sample before, and the already discussed paper by McSheehy et al. is the only one that mentions the finding of AC2 in a lab experiment. Homocholine, which is the nitrogen-analogue of AC2, is only occasionally investigated and hardly ever discussed as a naturally occurring compound. According to the existing proposed biotransformation pathways of arsenic, AC is thought to be a precursor of AB. Early investigations with rat liver cells and a recent study on the function of AB indeed showed that AC can be converted to AB. Interestingly, the oldest of these works found AB aldehyde as an intermediate. This has not been reported in any other publication since then. In analogy to AC and AB, one could assume that AC2 serves as a precursor for TMAP, a compound that is also present in our investigated Ramaria samples. Still, when taking a closer look at the different hypothesized biotransformation mechanisms for arsenic, the existence of AC2 cannot be explained easily. Alternatively, DMAP, which is present in marine organisms, but not in our fungal samples, or TMAP could be regarded as precursors for AC2, but proof for this does not yet exist and would have to be found through appropriate experiments. This is the first report of DMAA and TMAP in the terrestrial environment and the overall first report of the natural occurrence of AC2. Our findings give fresh input to the attempts to understand the geo-bio-chemical pathways of arsenic compounds. Future work should deal with the identification of other small arsenic compounds in environmental samples, which could help to complete the hypothesized arsenic biotransformation pathways. Possible candidates are the aldehydes of AB and TMAP or a reduced form of DMAP. Finally, it has to be noted that it is very likely that AC2 was found but not identified in other macrofungi before. When comparing published data, especially chromatograms, two possible candidates from the fungal kingdom are Amanita muscaria and Cortinarius coalescens. It will be interesting to verify this surmise and show that AC2 is not only present in fungi of the genus Ramaria. | The arsenic speciation was determined in macrofungi of the Ramaria genus with HPLC coupled to inductively coupled plasma mass spectrometry.
Besides arsenic species already known from macrofungi, such as arsenobetaine and arsenocholine, two compounds previously known only from marine samples (trimethylarsoniopropanate and dimethylarsinoylacetate) were found for the first time in a terrestrial sample. An unknown arsenical was isolated and identified as homoarsenocholine. This could be a key intermediate for further elucidation of the biotransformation mechanisms of arsenic. |
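The fragment assignments reported for homoarsenocholine above (m/z 179.0411 for C6H16OAs+, with fragments at m/z 161, 121, 105 and 59) can be checked arithmetically. The short Python sketch below recomputes the exact monoisotopic masses of the proposed ions from standard atomic masses; the script, its constants and the ion labels are illustrative additions rather than material from the study itself.

```python
# Illustrative check (not from the paper): recompute the exact masses behind the
# reported m/z 179.0411 for C6H16OAs+ and its fragments, using standard
# monoisotopic atomic masses.
MONO = {"C": 12.0, "H": 1.00782503, "O": 15.99491462, "As": 74.92159457}
ELECTRON = 0.00054858

def cation_mz(formula: dict) -> float:
    """Monoisotopic m/z of a singly charged cation with the given composition."""
    return sum(MONO[el] * n for el, n in formula.items()) - ELECTRON

ions = {
    "homoarsenocholine, Me3As+(CH2)3OH (C6H16OAs+)": {"C": 6, "H": 16, "O": 1, "As": 1},
    "water loss, [M-H2O]+ (C6H14As+)":               {"C": 6, "H": 14, "As": 1},
    "protonated Me3As (C3H10As+)":                   {"C": 3, "H": 10, "As": 1},
    "Me2As+ (C2H6As+)":                              {"C": 2, "H": 6, "As": 1},
    "protonated allyl alcohol (C3H7O+)":             {"C": 3, "H": 7, "O": 1},
}

for name, formula in ions.items():
    print(f"{name}: m/z {cation_mz(formula):.4f}")
# Computed values (rounded): 179.0412, 161.0306, 120.9993, 104.9680, 59.0491,
# consistent with the reported m/z 179.0411 and the nominal fragments 161, 121, 105, 59.
```

Agreement at this level supports, but does not by itself prove, the structural assignment; as described above, the spiking experiments with synthesized AC2 provided the final confirmation.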
570 | Gamification for health and wellbeing: A systematic review of the literature | The major health challenges facing the world today are shifting from traditional, pre-modern risks like malnutrition, poor water quality and indoor air pollution to challenges generated by the modern world itself.Today, the leading global risks for mortality and chronic diseases – high blood pressure, tobacco use, high blood glucose, physical inactivity, obesity, high cholesterol – are immediately linked to a modern lifestyle characterized by sedentary living, chronic stress, and high intake of energy-dense foods and recreational drugs."In addition, following calls from the World Health Organization's inclusive conception of health, researchers, civil society, and politicians have been pushing to extend policy goals from preventing and reducing disease towards promoting people's holistic physical, mental, and social well-being. "Practically all modern lifestyle health risks are directly affected by people's individual health behaviours — be it physical activity, diet, recreational drug use, medication adherence, or preventive and rehabilitative exercises.By one estimate, three quarters of all health care costs in the US are attributable to chronic diseases caused by poor health behaviours, the effective management of which again requires patients to change their behaviours.Similarly, research indicates that well-being can be significantly improved through small individual behaviours.Behaviour change has therefore become one of the most important and frequently targeted levers for reducing the burden of preventable disease and death and increasing well-being."A main factor driving behaviour change is the individual's motivation.Even if different theories contain different motivational constructs, “the processes that direct and energize behaviour” feature prominently across health behaviour change theories.Motives are a core target of a wide range of established behaviour change techniques.However, following self-determination theory, a well-established motivation theory, not all forms of motivation are equal.A crucial consideration is whether behaviour is intrinsically or extrinsically motivated.Intrinsic motivation describes activities done ‘for their own sake,’ which satisfy basic psychological needs for autonomy, competence, and relatedness, giving rise to the experience of volition, willingness, and enjoyment.Extrinsically motivated activity is done for an outcome separable from the activity itself, like rewards or punishments, which thwarts autonomy need satisfaction and gives rise to experiences of unwillingness, tension, and coercion.In recent years, SDT has become a key framework for health behaviour interventions and studies.A large number of studies have demonstrated advantages of intrinsic over extrinsic motivation with regard to health behaviours.Not only is intrinsically motivated behaviour change more sustainable than extrinsically motivated change: satisfying the psychological needs that intrinsically motivate behaviour also directly contributes to mental and social well-being."In short, in our modern life world, health and well-being strongly depend on the individual's health behaviours, motivation is a major factor of health behaviour change, and intrinsically motivated behaviour change is desirable as it is both sustained and directly contributes to well-being.This raises the immediate question what kind of interventions are best positioned to intrinsically motivate health behaviour 
change.The last two decades have seen the rapid ascent of computing technology for health behaviour change and well-being, with common labels like persuasive technology or positive computing."This includes a broad range of consumer applications for monitoring and managing one's own health and well-being, such as the recent slew of “quantified self” or “personal informatics” tools for collecting and reflecting on information about the self.One important sector is serious games for health, games used to drive health-related outcomes.The majority of these are “health behaviour change games” or “health games” affecting the health behaviours of health care receivers.Applications and research have mainly targeted physical activity, nutrition, and stroke rehabilitation, with an about equal share of “exergames” or “active video games” directly requiring physical activity as input, behavioural games focusing specific behaviours, rehabilitation games guiding rehabilitative movements, and educational games targeting belief and attitude change as a precondition to behaviour change.Like serious games in general, health games have seen rapid growth, with numerous systematic reviews assessing their effectiveness.A main rationale for using games for serious purposes like health is their ability to motivate: Games are systems purpose-built for enjoyment and engagement.Research has confirmed that well-designed games are enjoyable and engaging because playing them provides basic need satisfaction.Turning health communication or health behaviour change programs into games might thus be a good way to intrinsically motivate users to expose themselves to and continually engage with these programs.However, the broad adoption of health games has faced major hurdles.One is their high cost of production and design complexity: Health games are typically bespoke interventions for a small target health behaviour and population, and game development is a cost- and time-intensive process, especially if one desires to compete with the degree of “polish” of professional, big studio entertainment games.Thus, there is no developed market and business model for health games, wherefore the entertainment game and the health industries have by and large not moved into the space.A second adoption hurdle is that most health games are delivered through a dedicated device like a game console, and require users to create committed spaces and times in their life for gameplay."This demand often clashes with people's varied access to technology, their daily routines and rituals, as well as busy and constantly shifting schedules.One possible way of overcoming these hurdles is presented by gamification, which is defined as “the use of game design elements in non-game contexts”.The underlying idea of gamification is to use the specific design features or “motivational affordances” of entertainment games in other systems to make engagement with these more motivating.1,Appealing to established theories of intrinsic motivation, gamified systems commonly employ motivational features like immediate success feedback, continuous progress feedback, or goal-setting through interface elements like point scores, badges, levels, or challenges and competitions; relatedness support, social feedback, recognition, and comparison through leaderboards, teams, or communication functions; and autonomy support through customizable avatars and environments, user choice in goals and activities, or narratives providing emotional and value-based rationales for an 
activity.Since its emergence around 2010, gamification has seen a groundswell of interest in industry and academia, easily outstripping persuasive technology in publication volume.By one estimate, the gamification market is poised to reach 2.8 billion US dollars by 2016.It is little wonder, then, that several scholars have pointed to health gamification as a promising new approach to health behaviour change.Popular examples are Nike+2, a system of activity trackers and applications that translate measured physical exertion into so-called “NikeFuel points” which then become enrolled in competitions with friends, the unlocking of achievements, or social sharing; Zombies, Run!3,a mobile application that motivates running through wrapping runs into an audio-delivered story of surviving a Zombie apocalypse; or SuperBetter4, a web platform that helps people achieve their health goals by building psychological resilience, breaking goals into smaller achievable tasks and wrapping these into layers of narrative and social support.Conceptually, health gamification sits at the intersection of persuasive technology, serious games, and personal informatics: Like persuasive technology, it revolves around the application of specific design principles or features that drive targeted behaviours and experiences.Several authors have in fact suggested that many game design elements can be mapped to established behaviour change techniques.Like serious games, gamification aims to drive these behaviours through the intrinsically motivating qualities of well-designed games.Like personal informatics, gamification usually revolves around the tracking of individual behaviours, only that these are then not only displayed to the user, but enrolled in some form of goal-setting and progress feedback.Indeed, many applications commonly classified as gamification are also labelled personal informatics, and gamification is seen as a way to sustain engagement with personal informatics applications.The reasons why gamification is potentially relevant to health behaviour change today, and the shortcomings of other digital health and well-being interventions include:Intrinsic motivation.Like games, gamified systems can intrinsically motivate the initiation and continued performance of health and well-being behaviours.In contrast, personal informatics can lack sustained appeal, and persuasive technologies often employ extrinsic motivators like social pressure or overt rewards.Broad accessibility through mobile technology and ubiquitous sensors.Activity trackers and mobile phones, equipped with powerful sensing, processing, storage, and display capacities, are excellent and widely available platforms to extend a game layer to everyday health behaviours, making gamified applications potentially more accessible than health games which rely on bespoke gaming devices.Broad appeal.As wider and wider audiences play games, games and game design elements become approachable and appealing to wider populations.Broad applicability.Current health gamification domains cover all major chronic health risks: physical activity, diet and weight management, medication adherence, rehabilitation, mental well-being, drug use, patient activation around chronic diseases like Diabetes, cancer, or asthma.Cost-benefit efficiency.Retro-fitting existing health systems and enhancing new ones with an engaging “game layer” may be faster, most cost-benefit efficient, and more scalable than the development of full-fledged health games.Everyday life fit.Gamified 
systems using mobile phones or activity trackers can encompass practically all trackable everyday activity, unlike health games requiring people to add dedicated time and space to their life."Whereas standard health games typically try to fit another additional activity into people's schedules, gamification aims to reorganise already-ongoing everyday conduct in a more well-being conducive manner.Supporting well-being.Beyond motivating health behaviours, engaging with gamified applications can directly contribute to well-being by generating positive experiences of basic psychological need satisfaction as well as other elements of well-being like positive emotions, engagement, relationships, meaning, and accomplishment.In short, gamification may realize what games for health doyen Ben Sawyer dubbed the “new model for health” games should pursue: sensor-based, data-driven, “seductive, ubiquitous, lifelong health interfaces” for well-being self-care.Promising as gamification for health and well-being may be, the essential question remains whether gamified interventions are effective in driving behaviour change, health, and well-being, and more specifically, whether they manage to do so via intrinsic motivation.These questions are especially relevant as general-purpose literature reviews on gamification have flagged the lack of high-quality effect studies on gamification, and critics have objected that gamification often effectively entails standard behavioural reinforcement techniques and reward systems that are extrinsically motivating, not emulating the intrinsically motivating features of well-designed games.To our knowledge, there is no systematic review on the effectiveness and quality of health and well-being gamification applications available.Existing reviews include a survey spanning several application domains which identified four health-related papers, a review of gamification features in commercially available health and fitness applications, a topical review on the use of games, gamification, and virtual environments for diabetes self-management, which identified three studies on gamified applications, a review focused specifically on the use of reward systems in health-related gamified applications and a review on the persuasion context of gamified health behaviour support systems.While these reviews offer important and valuable insights, none have examined gamification for both health and well-being nor the effectiveness of gamification.Additionally, existing reviews do not directly consider and evaluate the quality of evidence underlying the conclusions drawn.We therefore conducted a systematic literature review of peer-reviewed papers examining the effectiveness of gamified applications for health and well-being, assessing the quality of evidence provided by studies.We developed four guiding research questions:RQ1.What evidence is there for the effectiveness of gamification applied to health and wellbeing?,What is the number and quality of available effect studies?,This follows the observation that gamification research is lacking high-quality effect studies.What effects are reported?,This follows the question whether health gamification is indeed effective.RQ2.How is gamification being applied to health and wellbeing applications?,What game design elements are used and tested?,These questions follow whether health gamification drives outcomes through the same processes of intrinsic motivation that make games engaging, and whether directly supporting well-being through 
positive experiences.What delivery platforms are used and tested?,This probes whether current health gamification does make good on the promise of greater accessibility, pervasiveness, and everyday life fit through mobile phones or multiple platforms.Which theories of motivation are used and tested?,This explores to what extent health gamification explicitly draws on motivational theory and whether designs incorporating these theories lead to better outcomes.RQ3.What audiences are targeted?,What effect differences between audiences are observed?,These questions probe whether current applications indeed target a broad range of audiences with equal success, or whether they only target presumed gaming-affinitive audiences or show less success with non-gaming-affinitive audiences.Is gamification shown to be more effective with gaming-affinitive audiences?,This assesses whether the benefits of gamification are limited to audiences already familiar with or drawn to game elements as engaging and motivating.Have the benefits of health gamification been shown to extend to audiences that are not already intrinsically motivated?,This explores whether there is evidence of gamification working when users are not already intrinsically motivated to perform the target activity.RQ4.What health and well-being domains are targeted?,Beyond a general scoping of the field, this tests whether the claimed broad applicability of gamification indeed holds.The protocol for the review was developed and agreed by the authors prior to commencement.It followed all aspects recommended in the reporting of systematic reviews, namely the PRISMA Checklist and MOOSE Guidelines.All studies that explored the association between gamification and health were considered for this review.“Gamification” was defined and operationalised as “the use of game design elements in non-game contexts”.“Health” and “well-being” were collectively defined and operationalised using the World Health Organization's inclusive definition of health as “a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity”.The electronic databases in this review were searched on November 19th, 2015 and included those identified as relevant to information technology, social science, psychology and health: Ebscohost; ProQuest; Association for Computing Machinery, ACM; IEEE Xplore; Web of Science; Scopus; Science Direct and PubMed.Three additional studies were identified through a manual search of the reference lists of key studies, including existing gamification reviews, identified during the database search process.Based on prior practice in systematic reviews on gamification and health and well-being, we used full and truncated search terms capturing gamification, health outcomes, and well-being in the following search string:Gamif* AND.Mental health related search terms were added as initial searches failed to capture some expected results.Our review focused on high quality scholarly work reporting original research on the impact and effectiveness of gamification for health and wellbeing.From this focus, we developed the following inclusion criteria:Peer-reviewed,Full papers,Empirical research,Explicitly stated and described gamification as research subject,Clearly described gamification elements,Effect reported in terms of:Impact, and/or,User experience — any subjective measure of experience while using the gamified or non-gamified version of the intervention,Clearly described outcomes
related to health and well-being,Criteria 1–4 were chosen to ensure focus on high-quality work reporting original research.Criteria 3, 4, and 7 were also included to enable assessment of quality of evidence.Criteria 5–6 ensured the paper reported on gamification, not serious games or persuasive technology mislabeled as gamification.Criteria 7–8 were chosen to assess reported health and well-being outcomes and potential mediators, with user experience included given its prevalence as an outcome measure in gamification research.Our exclusion criteria mirror the focus on high quality scholarly work that reports the impact and effectiveness of gamification for health and well-being.They were particularly framed to exclude duplicate reporting of earlier versions of studies fully reported later.We excluded papers with the following features:Extended abstracts or ‘work-in-progress’ papers,Covers complete games not gamification,Gamification is mentioned but not evaluated,Criteria 1–2 exclude peer-reviewed yet early and incomplete versions of studies.Criteria 3–4 exclude studies that mislabel serious games as gamification or fail to report the concrete intervention in sufficient detail to assess whether it constituted gamification.We used the quality assessment method presented by Connolly et al.The tool was explicitly developed to assess the strength of evidence of a total body of work relative to a particular review question.Connolly et al. used the tool to assess the overall weight of empirical evidence for positive impact and outcomes of games.We applied the tool to our more focused interest in the empirical evidence for the effectiveness of gamification in the health and wellbeing domain.Each final paper included in the review was read and given a score of 1–3 across the following five criteria:How appropriate is the research design for addressing the research questions of this review,High — 3 RCT,Medium — 2, quasi-experimental controlled study,Low — 1, case study, single subject-experimental, pre-test/post-test design,How appropriate are the methods and analysis?,How generalizable are the findings of this study to the target population with respect to the size and representativeness of the sample?,To what extent would the findings be relevant across age groups, gender, ethnicity, etc.How relevant is the particular focus of the study for addressing the question of this review?,To what extent can the study findings be trusted in answering the study question?,The total weight of evidence for each paper is calculated by adding the scores of all five dimensions, with a range from 5 to 15."Connolly et al.'s analysis of the empirical evidence regarding games and serious games found a mean rating of 8.56 and a mode of 9, which gave us a baseline to evaluate gamification studies against.Connolly et al.found 70 of 129 or 54% of studies to be above the mode, constituting “stronger evidence”.We elected to categorise in slightly more detail, with papers with a rating 8 or below categorised as “weaker evidence”, papers with a rating above 8 to 12 as “moderate evidence”, and papers with a rating above 12 as “stronger evidence”.Based on an initial survey, we categorised delivery modalities as mobile, website, social network application, analog, or bespoke device.Given the lack of consensus in the literature regarding definitions and categorizations, game design elements were coded using an adaptation of the systemisation provided by Hamari, Koivisto and Sarsa.Hamari and colleagues identified the following 
typology: points, leaderboards, achievements/badges, levels, story/theme, clear goals, feedback, rewards, progress and challenge.In the current review, we elected to combine points and badges with other digital rewards into a single category labelled ‘rewards’.Additionally, we also coded for the inclusion of an ‘avatar’ or ‘social interaction,’ as these were found to be commonly employed game design elements in the reviewed papers.We categorised health and well-being effects as relating to affect, behaviour, or cognition.These categories were chosen based on the three-component model of attitudes with the primary adaptation being the inclusion of knowledge of the target domain as part of the cognition category.In addition, multiple studies also assessed user experience, which we coded separately.Furthermore, we coded effects as positive, negative, or mixed/neutral, the latter meaning that results were inconclusive or positive for one group and negative for another.If a study assessed health and well-being impacts for multiple dimensions, these were counted separately.For example, a study that finds positive effects on stress and life satisfaction would be counted as two positive impacts on cognition.In contrast, a study that finds a positive impact on life satisfaction for one group of users and negative impact for another would be coded as one neutral/mixed impact on cognition.All studies were independently coded by a second reviewer.Inter-rater reliability was determined by the intra-class correlation coefficient.This statistic allows for the appropriate calculation of weighted values of rater agreement and accounts for proximity, rather than equality of ratings.A two-way mixed effects, average measures model with absolute agreement was utilized.Independent ratings demonstrated an excellent level of inter-rater reliability.Our search identified 365 papers.After removing duplicates 221 papers remained.Of these 191 were removed based on screening of title and abstract.The remaining 30 articles were considered and assessed as full texts.Of them eleven did not pass the inclusion and exclusion criteria.Nineteen final eligible studies remained and were individually assessed for this review.The study selection process is reported as recommended by the PRISMA group in Fig. 1.The final 19 articles eligible for review were then rated for quality of evidence.Following Connolly et al. we calculated the mean and mode as a means of determining which papers provided relatively weaker or stronger evidence.However, we departed from the approach taken by Connolly and colleagues who assigned papers to two categories and instead categorised papers into three categories.This decision was made as an equal number of papers fell above and below the mode of 10.5, which in turn meant that classifying papers with the modal/median score as either weaker or stronger evidence arbitrarily resulted in that category appearing as a majority.Based on this, 8 papers were categorised as providing weaker evidence, 3 papers were categorised as providing moderate evidence and 8 papers were categorised as providing stronger evidence.See Fig. 
2 for a histogram displaying quality of evidence ratings.A closer look into methodologies helps unpack these ratings.The majority of studies collected data at multiple timepoints from multiple groups or conditions; 6 studies collected data from a single group at multiple timepoints, two from a single group at a single time point.Notably, more than half of the studies did not compare gamified and non-gamified versions of the interventions studied.Sample sizes ranged from 5 to 251, sampling methods included both convenient and systematic.Chief modalities employed were mobile applications and websites, with several studies offering an intervention across both.Two studies each used analog techniques, social networking sites, or bespoke devices, namely a modified fork and a Wii console and Wii Fit board.Game design elements included avatars, challenges, feedback, leaderboards, levels, progress indicators, rewards and story/theme and social interaction.A total of 46 instances of implemented gamification elements were found across the 19 papers.The most commonly employed elements were rewards, leaderboards and avatars.There was a broad variety without discernible patterns in outcome measures, target audiences, or contexts, including medical settings, home recovery, self-assessment, health monitoring, stress management, improving eating behaviours, and increasing physical activity.Overall, positive effects of gamified interventions were reported in the majority of cases, with a significant proportion of neutral or mixed effects and no purely negative effects reported.The majority of assessed outcomes were behavioural or cognitive.Affect was rarely assessed.Beyond health and well-being impacts, 12 studies assessed user experience impacts, with 5 reporting positive, 5 reporting mixed and 2 reporting negative impacts.For the most part, gamification has been well received; it has been shown to foster positive impacts on affect, behaviour, cognition and user experience.The majority of studies reported gamification had a positive influence on health and well-being.In those cases where gamification had mixed or negative effects, the primary issues seemed to be: 1) the context in which gamification was used, 2) the manner in which gamification was applied, or 3) a mismatch between the gamification techniques used and the target audience.We assessed evidence based on the number, quality and the reported effects of available studies.We identified a total of 19 studies assessing the effects of gamified health and wellbeing interventions published since 2012.The most comparable serious games for health meta-analysis in terms of inclusion and exclusion criteria is DeSmet et al., which found 53 studies published between 1989 and 2013.This provides evidence that health gamification research like gamification research in general is progressing at a fast pace.Quality of evidence ratings of existing research conducted by two raters, indicated an equal number of papers were of weak or strong quality, and the remainder were of moderate quality.This suggests that health and wellbeing research is approximately in line with the low evidence quality of gamification research in general or perhaps slightly better.It is also consistent with the quality of research found in game research in general: our study found a mean quality rating of 10.3.In comparison, Connolly et al. 
reported a mean rating of 8.56.While the number of studies included in the current review precludes any firm conclusions, the slightly higher mean quality score found in the current study could indicate the quality of evidence for empirical effectiveness is slightly higher in gamification in health and wellbeing than the broader serious games literature.More broadly, it is worth noting that the small number and low quality ratings of studies included in this review reflect the relative infancy of the gamification field and the formative nature of research conducted to date.It should also be noted that this analysis of quality of evidence is not intended as a critique of the peer review the selected papers underwent.The papers were categorised as providing lower, moderate or stronger evidence solely with respect to the weight of empirical evidence for health and well-being effects; studies may well be considered differently based on other aims and criteria.The impact of gamified interventions on health and well-being was found to be predominantly positive.However, a significant portion of studies reported mixed or neutral effects.More specifically, findings were largely positive for behavioural impacts, whereas the evidence for cognitive outcomes is less clear-cut, with an approximately equal number of reported positive and mixed/neutral impacts.Notably, no direct negative impacts on health and wellbeing were reported, although 2 of 12 studies that additionally assessed user experience reported negative impacts on the latter.This picture is more positive than comparable general gamification reviews.Current results suggest gamification of health and wellbeing interventions can lead to positive impacts, particularly for behaviours, and is unlikely to produce negative impacts.That being said, gamification should be used with caution when the user experience is critical, e.g. where users can voluntarily opt in and out of the intervention.For example, Spillers and Asimakopoulos documented user complaints about the poor usability of gamified running apps, which resulted in individual users ceasing to use them.Boendermaker et al. similarly suggest that gamification may detract from usability and user experience by adding task demands to the interface.The majority of papers explored mobile devices or websites as the delivery platform."Positive effects were also found outside the digital domain including a gamified physical display in the classroom and a sensor-equipped fork designed to influence children's eating habits.This is in line with the identified promises of everyday life fit and broad accessibility of gamification through mobile and ubiquitous sensor technology.That being said, there are few studies directly testing the differences and effects of everyday life fit and accessibility in mobile/ubiquitous versus PC/bespoke device-based interventions.Boendermaker et al. 
found no difference in effectiveness between a web-based and mobile gamified cognitive bias modification training for alcohol use, but did not explicitly design and control for everyday life fit and accessibility as independent variables.Although the assessed studies included a broad range of game design elements, there was a clear focus on rewards, constituting 16 of a total of 46 instantiations of game design elements across studies, followed by leaderboards and avatars.A notable 84% of all individual studies involved rewards in some form.Not a single included study captured effects of game design elements on intrinsic motivation as a direct outcome or mediator for other health and wellbeing outcomes.Taken together with the fact that the majority of studies focused purely behavioural outcomes, this indicates that the dominant theoretical and practical logic of the studied health and wellbeing gamification interventions is positive reinforcement.In other words, the promise of intrinsically motivating health behaviour by taking learnings from game design is currently neither explored nor tested.Eighteen of the 19 included studies implemented multiple game elements, and no study tested for the independent effects of individual elements.This makes it difficult to attribute effects clearly to individual game elements, and again underlines the need for more rigorously designed studies.With this caveat, the strongest evidence available does support that rewards5 drive health behaviours: Hamari and Koivisto found rewards in the form of points and achievements to be associated with improvements in desire to exercise.Thorsteinsen et al. saw points to contribute significantly to increased physical activity.Chen and Pu similarly found that rewards and leaderboards led to an increase in physical activity among dyads working cooperatively, but not among dyads working competitively.Allam et al. found that rewards were associated with increased physical activity and sense of empowerment as well as decreased health care utilization among Rheumatoid Arthritis patients.Cafazzo et al. saw rewards to contribute to the frequency of blood glucose measurement among individuals with type 1 diabetes.Riva et al. similarly found a positive impact of points on outcomes related to chronic back pain, including reduced medication misuse, lowered pain burden, and increased exercise."With a group of highly trait-anxious participants, Dennis and O'Toole found rewards associated with reduced subjective anxiety and stress reactivity.In contrast to these positive outcomes, Maher et al. report mixed results: rewards led to a short-term increase in moderate to vigorous physical activity, but no long-term effects.Similarly, they found no impact of gamification on self-reported general or mental quality of life.Studying a mobile application designed to increase routine walking, Zuckerman and Gal-Oz similarly found no differences between gamified and non-gamified versions.Relatedly, in a qualitative study of gamified mobile running applications, Spillers and Asimakopoulos observed poor usability of gamified applications leading to users stopping to use them.Avatars are commonly employed as a gamification technique to represent the user in the application context.Again, the majority of studies found avatars were associated with positive outcomes.Kuramoto et al. 
developed an application with an avatar that ‘grew stronger’ the longer users were standing instead of sitting on public transport.They found evidence for increased motivation to stand."Dennis and O'Toole compared a gamified mobile attention-bias modification training for anxiety using virtual characters with a placebo training and found it to significantly reduce subjective anxiety and stress reactivity.In a series of two studies, Jones et al. found that avatars led to increased fruit and vegetable consumption among children.Assessing the effectiveness of a gamified application designed to moderate alcohol use, Boendermaker et al. observed a positive impact on motivation to train; however, participants reported greater task demand associated with the gamified version of the application.Social Interaction was also commonly employed as a means to engage users and was found to increase user experiences of fun and motivation in the context of moderating alcohol consumption, to have a positive influence on physical activity and flourishing mental health.Less commonly employed gamed design elements across studies included levels, progress, story/theme, challenges and feedback.With respect to theories of motivation, very few studies provide insight regarding the extent to which gamification that draws on relevant theory is more effective.Only a minority of studies explicitly discuss motivational theory and very few studies are conducted in a manner that assesses whether a motivational construct is associated with positive outcomes.Most commonly, self-determination theory and intrinsic/extrinsic motivation were the theories discussed in relation to health gamification.Other theories that were considered include design strategies to reduce attrition and guides for behaviour change, empowerment and the transtheoretical model of behaviour change.As discussed above, most studies considered multiple gamification elements simultaneously making it difficult to isolate the effects of individual elements.In some cases, this also makes it more difficult to consider the impact of specific theories of motivation.Hamari and Koivisto found a positive impact of social norms and recognition providing support for self-determination theory in terms of relatedness of social influence."Similarly, although mixed evidence was found for the impact of the gamification elements used, Zuckerman and Gal-Oz interpret their results as confirming the value of Nicholson's concept of ‘meaningful’ gamification and the self-determination driven ideas of informational feedback and customizable elements.Further affirming the notion of ‘meaningful’ gamification, Ahtinen et al. 
discuss how their findings highlight the importance of meaningful experiences rather than rewards.A broad range of audiences were targeted throughout the research reviewed.While some studies focussed on younger participants to adolescents, the majority of studies were conducted with adults.Regardless, positive outcomes have been found for children, adolescents and young adults."A small number of studies focussed on specific audiences, such primary school teachers, participants with specific health issues like chronic back pain Riva et al., 2014, rheumatoid arthritis, or high levels of trait anxiety.It is not immediately clear from the reviewed studies what relationship exists between existing gaming affinity or expertise and the effectiveness of gamification as previous experience with digital games is not commonly reported."Beyond demographics, factors relevant to the potential effectiveness of gamification seem to include the users' personality, as well as their level of knowledge, expertise, abilities, and basic motivation to engage in the target activity initially.In a study where 15 first-time Wii Fit users were asked to use a Wii balance board to increase their fitness, findings about the effectiveness of gamification were mixed.Only beginners responded positively to gamified elements incorporated into the exercise activities, while these same features had a negative effect on experienced fitness users, leading them to abandon the system as a fitness tool.Non-beginners reported that gamified features slowed down the pace of the exercise, leading to their disengagement, and feedback was disliked, as praising was considered exaggerated.Importantly, the studies reviewed suggest that the benefits of health gamification extend beyond audiences who have pre-existing motivations to engage in the target activity.Although many of the studies involved participants who were likely to have pre-existing motivation, of the studies conducted with participants without existing motivations, the majority showed some positive results.Positive impacts of gamification were found with young children around eating behaviours; university students regarding alcohol consumption; commuters with respect to standing Kuramoto et al., 2013 and teachers in relation to positive psychology training.Furthermore, when comparing beginners and experts, Reynolds and colleagues found positive impacts of gamification on exercise behaviour only for the beginners.Across fields, the most popular and successful context for the application of gamification is physical health and more specifically, its use for motivating individuals to increase their physical activity, or to engage in self-monitoring of fitness levels.Notably, a positive impact of gamification on physical activity related outcomes are observed in 8 of the 10 studies with mixed effects observed by Maher et al. 
and Spillers and Asimakopoulos.Motivation to exercise is increased largely through “fun” activities, through cooperating, competing, and sharing a common goal with peers or exercise buddies, or through various other social incentives.There is evidence that gamification features may be more motivating than exercise alone.Some elements can stimulate increased exercise and reduce physical fatigue.There is also evidence to suggest that social influence may play a key role in the influence of gamification on willingness to exercise.While gamified elements can provide motivation to maintain or increase physical activity, such outcomes may not be sustained over time; these responses are not necessarily consistent for all types of users; and not all types of elements help users achieve their fitness goals or positively impact user adoption.Nevertheless, these studies combined lend support to the use of gamification as a viable intervention strategy in fitness contexts.Outside of activity, within the domain of physical health a positive influence of gamification was also found in three studies of nutrition.The remaining studies exploring the impact of gamification within the domain of physical health examined illness related issues.Gamification was found to have a positive influence on healthcare utilization, the reduction of medication misuse and blood glucose monitoring.In two studies these changes were also associated with a positive influence on patient empowerment."In the domain of mental health, gamification has been shown to have positive effects on wellbeing, personal growth and flourishing as well as stress and anxiety.This supports the identified promise of gamification to directly support wellbeing.More mixed results were found with respect to substance use, with evidence of an increased motivation to train with a gamified version of a tool, alongside evidence of lowered ease of use.However, in a study of mental wellness training, which involved concentration, relaxation and other techniques to encourage changes in thoughts and negative beliefs, gamification was received with skepticism by just over half of the users.Participants suggested that points, rewards and achievements were a poor fit in the context of mental wellness and mindfulness.However, it is not clear to what extent this point of view is related to the specific types of gamification used in the study and whether the finding would extend to a broader sample.As noted throughout the discussion, the small number and wide variability in the design, quality and health behaviour targets of the gamification studies included in this review limits the conclusions which can be made.There is a need for more well-designed studies comparing gamified and non-gamified interventions: we need randomized controlled trials and double-blind experiments that tease out the effect of individual game design elements on mediators like user experience or motivation and health and wellbeing outcomes, with adequately powered sample sizes, control groups and long-term follow up assessments of outcomes.The studies included in this review typically conflated the assessment of multiple game design elements at once, often involved small sample sizes, did not feature control groups, or only focused on user experience outcomes.Additionally, very few studies have explored the long-term or sustained effects of gamified products, which means that current support for gamification may in part reflect its novelty.Finally, the heuristic used in the current review 
to evaluate impact, was considered appropriate given the heterogeneity of included studies.However, once more studies on individual gaming elements are completed, future reviews should consider using a more complex heuristic to evaluate impact.As the main contributors to health and wellbeing have shifted towards personal health behaviours, policymakers and health care providers are increasingly looking for interventions that motivate positive health behaviour change, particularly interventions leveraging the capabilities of computing technology.Compared to existing approaches like serious games for health or persuasive technology, gamification has been framed as a promising new alternative that embodies a “new model for health”: “seductive, ubiquitous, lifelong health interfaces” for well-being self-care."More specifically, proponents of gamification for health and wellbeing have highlighted seven potential advantages of gamification: supporting intrinsic motivation, broad accessibility through mobile technology and ubiquitous sensors, broad appeal across audiences, broad applicability across health and wellbeing risks and factors, cost-benefit efficiency of enhancing existing systems, everyday life fit, direct wellbeing support.That being said, little is known whether and how effectively gamification can drive positive health and wellbeing outcomes, let alone deliver on these promises.In response, we conducted a systematic literature review, identifying 19 papers that report empirical evidence on the effect of gamification on health and wellbeing.Just over half of the studies reported positive effects, whereas 41% reported mixed or neutral effects.This suggests that gamification could have a positive effect on health and wellbeing, especially when applied in a skilled way.The evidence is strongest for the use of gamification to target behavioural outcomes, particularly physical activity, and weakest for its impact on cognitions.There is also initial support for gamification as a tool to support other physical health related outcomes including nutrition and medication use as well as mental health outcomes including wellbeing, personal growth, flourishing, stress and anxiety.However, evidence for the impact of gamification on the user experience, was mixed.Further research that isolates the impacts of gamification is needed to determine its effectiveness in the health and wellbeing domain.In terms of the highlighted promises, little can be said conclusively.No intervention examined intrinsic motivation support, as the majority of studies subscribed to a behaviorist reinforcement paradigm.Most studies did employ mobile and/or ubiquitous technology, yet no study directly assessed whether they differed in accessibility compared to stationary delivery modes.The range of participant samples employed across studies suggests likely broad appeal across audiences and the wide range of health and wellbeing issues addressed across studies does support broad applicability in principle.None of the studies included assessed cost-benefit efficiency or everyday life fit."On a positive note, multiple studies found evidence that gamified interventions did directly support participants' wellbeing. | Background Compared to traditional persuasive technology and health games, gamification is posited to offer several advantages for motivating behaviour change for health and well-being, and increasingly used. Yet little is known about its effectiveness. 
Aims We aimed to assess the amount and quality of empirical support for the advantages and effectiveness of gamification applied to health and well-being. Methods We identified seven potential advantages of gamification from existing research and conducted a systematic literature review of empirical studies on gamification for health and well-being, assessing quality of evidence, effect type, and application domain. Results We identified 19 papers that report empirical evidence on the effect of gamification on health and well-being. 59% reported positive and 41% mixed effects, with mostly moderate or lower quality of evidence provided. Results were clear for health-related behaviours, but mixed for cognitive outcomes. Conclusions The current state of evidence supports that gamification can have a positive impact on health and wellbeing, particularly for health behaviours. However, several studies reported mixed or neutral effects. Findings need to be interpreted with caution due to the relatively small number of studies and the methodological limitations of many of them (e.g., a lack of comparison of gamified interventions to non-gamified versions of the intervention). |
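The weight-of-evidence heuristic used in the gamification review above (five dimensions each rated 1 to 3, summed to a 5 to 15 total and binned into weaker, moderate or stronger evidence) is simple enough to express directly in code. The Python sketch below is an illustrative rendering of that scoring scheme; the dimension keys, helper names and example ratings are assumptions made for the example, not data taken from the review.

```python
# Minimal sketch (assumptions noted above): each paper is rated 1-3 on five
# dimensions adapted from Connolly et al., the ratings are summed to a 5-15
# total, and the total is binned into weaker (<=8), moderate (>8 to 12),
# or stronger (>12) evidence.
from statistics import mean

DIMENSIONS = (
    "design",           # appropriateness of research design (RCT=3 ... pre/post=1)
    "methods",          # appropriateness of methods and analysis
    "generalisability",  # sample size / representativeness of the sample
    "relevance",        # relevance of the study focus to the review question
    "trustworthiness",  # extent to which findings can be trusted
)

def total_score(ratings: dict) -> int:
    """Sum the five 1-3 ratings into a 5-15 weight-of-evidence total."""
    assert set(ratings) == set(DIMENSIONS) and all(1 <= v <= 3 for v in ratings.values())
    return sum(ratings[d] for d in DIMENSIONS)

def evidence_category(total: int) -> str:
    """Bin a 5-15 total into the review's three evidence categories."""
    if total <= 8:
        return "weaker evidence"
    if total <= 12:
        return "moderate evidence"
    return "stronger evidence"

# Hypothetical worked example: a quasi-experimental study with a small sample.
paper = {"design": 2, "methods": 3, "generalisability": 1, "relevance": 3, "trustworthiness": 2}
t = total_score(paper)
print(t, evidence_category(t))  # 11 moderate evidence

# Hypothetical totals for five papers; the review's own summary statistics
# (e.g. its mean rating of 10.3) were computed from per-paper totals like these.
totals = [7, 8, 11, 13, 10]
print(round(mean(totals), 1), [evidence_category(x) for x in totals])
```

A per-paper total computed in this way is what underlies the mean rating of 10.3 and the weaker/moderate/stronger split reported in the review.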
571 | Middle East respiratory syndrome coronavirus vaccines: current status and novel approaches | Coronaviruses are the largest positive sense single stranded RNA viruses.There are six human coronaviruses to date; HCoV-229E, HCoV-OC43, HCoV-NL63, HCoV-HKU1, severe acute respiratory syndrome-CoV, and Middle East respiratory syndrome-CoV.Prior to the SARS-CoV epidemic in 2002–2003, CoVs were known to cause mild respiratory infections in humans.SARS-CoV, on the other hand, infected around 8000 cases causing severe respiratory disease with a 10% fatality rate .Ten years later, MERS-CoV emerged in the human population also to cause severe respiratory infection .In contrast to the SARS-CoV epidemic, which was contained within one year, MERS-CoV still continues to cause outbreaks with increasing geographical distribution, four years after its first identification.As of March 2nd 2017, 1905 cases in 27 countries have been reported to the WHO with 677 deaths accounting for a 35% case fatality rate.Like SARS-CoV, MERS-CoV emerged as a result of zoonotic introduction to the human population.Despite its close genome similarity with bat coronavirus HKU4 and HKU5 , accumulating serological and molecular evidence pointed to dromedary camels as the most probable reservoir for MERS-CoV .This poses a continuous risk of virus spill-over to people in contact with camels, such as those working in slaughter houses and animal farms, evidenced by the presence of MERS-CoV antibodies in sera of those individuals .Nosocomial transmission, however, accounts for the majority of MERS-CoV cases reported in outbreaks , although a substantial part of infections that occur result in unrecognized asymptomatic or mild illnesses .Thus, in addition to camel contacts, other highly-at-risk groups are healthcare workers and patient household contacts .Considering the ongoing MERS-CoV outbreaks, it is crucial to develop intervention measures among which vaccines play an important role.Despite the fact that the emergence of MERS-CoV and SARS-CoV has dramatically changed the way we view CoVs, there is no licensed CoV vaccine or therapeutic drug available to date .A cornerstone for rational vaccine design is defining the determinants of immune protection.Accumulating data from studies done so far on MERS-CoV and other coronaviruses revealed that a combination of both virus-specific humoral and cellular immune responses is required for full protection against coronaviruses.Especially neutralizing antibodies are considered key players in the protective immunity against CoVs.Neutralizing monoclonal antibodies reduced viral loads in MERS-CoV receptor-transduced mice, rabbits and macaques .Similarly, convalescent camel sera increased virus clearance and decreased lung pathological outcomes in mice with an efficacy directly proportional to anti-MERS-CoV-neutralizing antibody titers .Also polyclonal sera produced in transchromosomic bovines protected mice against MERS-CoV challenge .Evidence for the protective role of antibodies also comes from recent studies analyzing immune responses in patients that survived or succumbed to MERS-CoV.Although neutralizing antibodies were only weakly inversely correlated to viral loads, serum antibody responses were higher in survivors compared to fatal cases but viral RNA was not eliminated from the lungs .Administration of convalescent sera, however, did not lead to significant reduction in viral loads .The presence of mucosal IgA Abs, on the other hand, was found to influence infectious virus isolation 
.Besides humoral immunity, cellular immune responses are also considered to play a crucial role in protection against coronaviruses.While B-cell deficient mice were able to clear MERS-CoV, those lacking T-cells failed to eliminate the virus, pointing out the crucial role of T-cells in viral clearance .This is supported by the observation that T-cells were able to protect aged mice against SARS-CoV infection and the fact that a reduced T-cell count was associated with enhanced disease severity in SARS patients .Along with other studies, these data highlight the importance of T-cells for virus clearance and protection against MERS-CoV and SARS-CoV .It is also noteworthy to mention that while neither antibodies nor memory B cells were detectable 6-years post-infection , SARS-CoV-specific memory T-cells, despite being low in frequency, persisted up to 11 years post-recovery .Nonetheless, the protective capacity of such memory response is not known.Hence, taking into account the waning of virus-specific humoral responses, generating a long-lived memory T cell response through vaccination could be favorable, but as proper B- and T-cell immune responses are required for efficient protection, vaccination should target the induction of both.At the moment we lack information concerning the longevity of anamnestic immune responses following MERS-CoV infection, except for a recent study showing that antibody responses, albeit reduced, persisted up to 34 months post-infection .The role of immune responses in protection is also in line with the observed increased fatality among the aged population following MERS-CoV infection.Retrospective studies on MERS-CoV patients from Saudia Arabia and South Korea have found a significant correlation between old age and mortality , a pattern that has been also reported for other respiratory viruses such as SARS-CoV and influenza virus .This is most likely caused by immunosenescence; a failure to produce protective immune response to new pathogens in elderly due to impaired antigen presentation, altered function of TLRs, and a reduced naïve B and T cell repertoire .This age-related increase in mortality was also reported in SARS-CoV laboratory-infected animals, that is, mice and nonhuman primates , and was associated with low neutralizing antibodies and poor T-cell responses .Several factors that play a role in T-cell activation were also found to be dysregulated in an age-related manner.Age-related increase in phospholipase A2 group IID, and prostaglandinD2 in the lungs contributed to a diminished T-cell response and severe lung damage through diminishing respiratory dendritic cell migration .Likewise, adoptive transfer of T-cells to mice enhanced viral clearance and survival , highlighting the contribution of a reduced T-cell response in severe disease outcome.These observations also highlight the need for more effective preventive measures for the elderly.In this sense, induction of a potent airway T-cell response may be crucial to protect against CoVs .Thus, a promising approach to protect against MERS-CoV-induced fatality is to enhance virus-specific tissue resident memory T-cell responses through intranasal vaccination.Although the MERS-CoV genome encodes for 16 non-structural proteins and four structural proteins, the spike, envelope, membrane, and nucleocapsid , the viral structural proteins, S and N, show the highest immunogenicity .While both S and N proteins can induce T-cell responses, neutralizing antibodies are almost solely directed against the S 
protein, with the receptor binding domain being the major immunodominant region .Thus, current MERS-CoV vaccine candidates mainly employ the spike protein or the gene coding for this glycoprotein.These MERS-CoV vaccines candidates were developed using a wide variety of platforms, including whole virus vaccines, vectored-virus vaccines, DNA vaccines, and protein-based vaccines.Although live attenuated vaccines produce the most robust immune responses, they pose a risk from reversion to virulence.Inactivated virus vaccines may cause harm due to incomplete attenuation or the capacity to induce lung immunopathology .Viral-vector-based vaccines, on the other hand, provide a safer alternative and have been developed using modified vaccinia virus Ankara , adenovirus , measles virus , rabies virus , and Venezuelan equine encephalitis replicons , all expressing MERS-S/S1 proteins.Additionally, VRP expressing the N protein have also been developed .A major hurdle facing these viral-vector-based platforms is preexisting immunity in the host which potentially can impair the vaccine efficacy.However, this can be prevented by using virus strains not circulating in the targeted population or immunization strategies involving heterologous prime-boost immunization, for example, MVA and AdV. Although plasmid DNA vaccines are considered to be of low immunogenicty in humans, current versions developed seem to induce potent immune responses.DNA-based vaccines directed at inducing anti S responses were also shown to exert protection in NHPs .Noteworthy to mention is that a combination of DNA and recombinant protein in a heterologous prime-boost immunization strategy induced higher immune response compared to each component alone .Additionally, protein-based vaccines were developed in various platforms as virus-like particles , nanoparticles , peptide-based , and subunit vaccines directed against various regions of the spike protein S1 , the N-terminal domain , and the RBD .Those vaccines have the highest safety profile among vaccine platforms but confer variable degrees of immunogenicity which need adjustment for the dose, adjuvants, and site of administration to get optimal protective responses.Adjuvants influence the type and magnitude of immune response produced by vaccines, and thus the doses used .Additionally, the route of administration is a determining factor for the type of vaccine-induced immune response produced in the host.While intranasal vaccination with SARS-N produced a protective airway T-cell response against SARS-CoV in mice, subcutaneous vaccination, inducing systemic T-cell responses, did not .Likewise, i.n. vaccination with MERS-RBD induced a significantly higher neutralizing and IgA antibody responses in the mice airways compared to s.c. 
vaccination .This is important because mucosal immunity and airway memory T-cell responses are crucial players in protection against respiratory viruses, since these areas are the first to encounter the virus.Therefore, along with selecting antigens for a vaccine, the route of vaccination and adjuvants are key players that cannot be neglected in vaccine design.Because the spike protein and more specifically the S1 domain, is highly divergent among different CoVs, neutralizing antibodies only provide homotypic protection.Thus far, the variability in the amino acid sequence of the spike protein observed among MERS-CoV strains is low , and circulating MERS-CoV strains did not show any significant variation in the serological reactivity , implying that the development of a vaccine that is effective against one strain is likely to be protective against all circulating strains.Another risk posed from the development of antibody escape variants is still present , although this is not likely to happen as mutations in two epitopes may be required, and mutants that develop may have reduced viral fitness .While the RBD is considered an ideal vaccine candidate for MERS-CoV, the spike S2 domain and N protein are more conserved, and thus adaptive immune response directed against these proteins can potentially lay the basis for a more broadly acting coronavirus vaccine.However, evidence for cross reactive immune responses against different CoVs is limited to a few studies.Convalescent SARS-CoV patient sera weakly neutralized MERS-CoV and SARS-S reactive antisera showed low level neutralization of MERS-CoV .Extra-RBD S1 or S2 epitopes could be responsible for this effect, as some neutralizing epitopes have been identified in these regions of the S protein .These may not be as immunodominant as the RBD epitopes but could provide a rationale for the development of a cross protective CoV epitope-focused vaccine.A recent study also demonstrated the potential role of adaptive response against N protein in protection against MERS-CoV infection as this vaccine candidate produced a protective T-cell response against MERS-CoV challenge which was also partially protective against SARS-CoV .Moreover, infection of mice with SARS-CoV reduced MERS-CoV titers 5 days p.i. 
upon challenge suggesting the development of a cross reactive T-cell response .Thus, mapping and focusing the immune response towards these critical neutralizing and T-cell epitopes, which could be subdominant, may provide a way to induce immune responses with a broader activity against different CoVs.Immune focusing may also be beneficial for the generation of a robust virus-specific immune response.As during vaccine preparation, some epitopes which are normally hidden in the full length protein structure get exposed.Some epitopes could be immunodominant and have a negative contribution on the overall neutralization capacity produced by the vaccine .This also holds true for some non-neutralizing immunodominant epitopes, as S1-based vaccines induced slightly higher neutralization than whole S ectodomain-based ones .Additionally, the RBD induced higher neutralizing antibodies compared to an S1 subunit vaccine , and shorter regions of RBD induced even higher neutralization responses , indicating that additional regions inducing non-neutralizing antibodies may contribute negatively to the overall neutralization response produced.Additionally, antibody-dependent enhancement of the viral infection by non-neutralizing antibodies , despite not being reported so far for MERS-CoV, needs also to be taken into consideration when developing a coronavirus vaccine.One approach to enhance the efficacy of subunit vaccines is to mask those negatively-contributing epitopes through glycosylation .Other approaches are immunefocusing and epitope-based vaccines, all aiming at narrowing the immune response to target only critical or beneficial epitopes to produce a stronger protective response.A prerequisite to reach that goal is to map epitopes targeted by the immune system and identify their biological role as being neutralizing, non-neutralizing, infection enhancing, containing a T-cell epitope, and so on.This can be achieved by analyzing the activity and fine specificity of convalescent patient sera, infected animal polyclonal sera, monoclonal antibodies, animal and human PBMCs.Subsequently the predicted epitopes can also provide a basis for potential vaccine candidates when produced as nanoparticles or VLPs.Further characterization of the immune responses induced by these vaccine candidates when evaluated in an animal model may be utilized to optimize the vaccines for efficacy.This epitope-focused vaccine approach may allow for targeting less immunodominant B- and T-cell epitopes having broader protection, avoid eliciting immune responses against epitopes playing no role in protection or having a negative or harmful role.In addition to better targeting of protective immunodominant epitopes, a combination of those epitopes, B- and T-cell epitopes targeting different viral proteins, could be used to produce a broader and stronger protective immune response for both strain-specific and universal CoV vaccines.Next to the choice of the MERS-CoV vaccine candidate, it is also important to take into account the target population that needs to be protected through vaccination.Populations at risk of MERS-CoV infection include camel contacts, healthcare workers and patient contacts.The latter two groups could benefit from the rapid onset of immunity though passive immunization using Mabs or convalescent sera, provided that it is given in time.Another alternative strategy for short-term protection is the use of vaccines capable of rapidly inducing high titers of neutralizing antibodies.This will provide a short-term 
immunity beneficial to protect those highly-at-risk groups when a new case is identified, to prevent outbreaks.To prevent virus infection of primary cases, vaccination of the dromedary camels may also be considered.So far, among the available vaccine candidates, only two have been tested in dromedary camels, pVax1-S and MVA-S.pVax1-S, a DNA-based vaccine, induced neutralizing antibodies in two of three camels tested so far, but has not been tested for efficacy .The other candidate MVA-S, a viral-vector-based vaccine, induced systemic neutralizing antibodies and mucosal immunity which conferred protection against MERS-CoV challenge and reduced virus shedding in vaccinated camels Therefore, this vaccine candidate may provide a means to prevent zoonotic transmission of the virus to the human population.For camel contacts and healthcare workers in endemic areas, being at a continuous risk of MERS-CoV infection either from infected camels or patients, respectively, it would be beneficial to induce a longer-term protection.While these could be rewarding approaches to stop MERS-CoV outbreaks, it is still worthwhile to develop platforms and vaccines that aim to induce more broad protection against different related CoVs, that could potentially cause future outbreaks.Learning from previous epidemics, the WHO issued a list of priority pathogens posing a risk to the human population and requiring urgent research and development for intervention measures, among which MERS-CoV and highly pathogenic CoVs are of high priority.The lack of intervention measures along with the increase in geographical area and ongoing MERS-CoV cases, raise worries for the future occurrence of larger epidemics as a result of virus adaptation in the human population and more efficient human-to human transmission.Further development of MERS-CoV and other CoV vaccines thus needs proactive collaborative efforts from researchers filling knowledge gaps, and market stakeholders providing funding for this costly process.The latter can be insufficient and/or unsustainable, therefore hindering development of even some promising candidates.In an initiative aiming at accelerating vaccine R&D process by providing sustained funding to be prepared for future epidemics, the World Economic Forum launched the Coalition for Epidemic Preparedness Innovations .CEPI is an international non-profit association aiming at removing barriers facing vaccine development for epidemic infections and getting ready for future epidemics, including MERS-CoV.However, we still face a number of challenges despite the fact that various promising MERS-CoV vaccine candidates are currently available.There is a lack of animal models mimicking the disease in humans in which vaccine platforms can be tested prior to human use.We need to take into account the populations to target with vaccination, with camels and camel handlers being the most relevant ones.The lack of full understanding of the pathogenesis and immune responses to the virus in humans and camels, which is crucial for vaccine development, also needs further investments.In addition, the longevity of immune responses post-vaccination has not been evaluated for vaccine candidates, which is important for the vaccination scheme development and for the choice of the best candidates for further development.Lastly, most of the vaccine candidates are developed against the highly variable spike protein and thus may not be able to provide protection against CoV strains evolving in the future.A more targeted 
vaccination approach aiming at conserved epitopes should be considered for the development of a more broadly-acting CoV vaccine.Given the propensity of CoVs to jump the species barrier, current efforts to develop a MERS-CoV vaccine may also be of benefit to prepare for potential novel CoVs that may emerge in the future. | Middle East respiratory syndrome coronavirus (MERS-CoV) is a cause of severe respiratory infection in humans, specifically the elderly and people with comorbidities.The re-emergence of lethal coronaviruses calls for international collaboration to produce coronavirus vaccines, which are still lacking to date.Ongoing efforts to develop MERS-CoV vaccines should consider the different target populations (dromedary camels and humans) and the correlates of protection.Extending on our current knowledge of MERS, vaccination of dromedary camels to induce mucosal immunity could be a promising approach to diminish MERS-CoV transmission to humans.In addition, it is equally important to develop vaccines for humans that induce broader reactivity against various coronaviruses to be prepared for a potential next CoV outbreak. |
572 | Discussion of laser interferometric vibrometry for the determination of heat release fluctuations in an unconfined swirl-stabilized flame | Reducing noise and thermo-acoustic instabilities has become increasingly important in thermal turbomachinery used in aero-engines and ground based gas turbines for power plant operation.This is due to EC legislation and needs to improve the reliability of gas turbines.In aero-engines, steady improvements of fan, compressor and turbine have reduced the noise level.On the other side, next generation low-emission combustion modes burn more unsteadily, resulting in increased noise from combustion .Additionally, in thermal turbomachinery the flame is stabilized by a swirling flow, responding to incoming disturbances nonlinearly .Not only is the direct noise of turbulent flames of importance, but also indirect combustion noise from hot spots convected by the flow and vorticity fluctuations caused by unsteady combustion .Direct combustion noise is caused by volumetric expansion and contraction due to fluctuations in heat release.In experimental research, the heat release rate is commonly measured by OH* chemiluminescence or by OH Planar Laser Induced Fluorescence.Ayoola et al. give a detailed discussion of these techniques for experimental investigations of turbulent premixed flames.He and his co-authors conclude, that for perfectly premixed flames global fluctuations in heat release rate can be recorded by OH* and CH* chemiluminescence in amplitude and phase.Other authors come to the same conclusion but question the adequacy of chemiluminescence as a measure for spatially resolved heat release rates and recommend interpreting chemiluminescence data with care, based on the observation that mixture gradients and strain rate can influence the local emissions.Heat release fluctuations, hot and cold spots convected by the flow, and sound waves are related to density fluctuations, a number detected by interferometry.Chemiluminescence and interferometry are both line-of-sight methods and therefore need optical access and tomographic reconstruction, except symmetry can be assumed .Throughout the last decades, Laser Interferometric Vibrometry became a powerful tool in the visualization and detection of sound fields , as well as in the detection of turbulent vortices and boundary layer transition .This development had been made possible by the wide availability of laser vibrometers, used to detect surface vibrations from machinery .Modern systems record fluctuations in the optical path length along the laser beam with picometre resolution.These changes in optical path can then be related to changes in the refractive index field or density fluctuations.Giuliani et al. first reported the visualization of density fluctuations in a reacting flow.Köberl et al. demonstrated the local detection of density fluctuations by two correlated vibrometer signals in an unconfined methane-jet diffusion flame of 7.5 kW power.In a cooperation between TU Graz and École Centrale Paris the link between density and heat release fluctuations was employed for an unconfined, acoustically excited and premixed laminar methane flame, including a first comparison to data recorded by chemiluminescence with additional work on this burner published by Li et al. 
.Finally, a feasibility study was performed at TU Munich, investigating a confined swirl-stabilized methane/air flame in perfectly and technically premixed mode at 40–55 kW power .In contrast to these previous investigations, the objective of this work is a detailed discussion of LIV as an alternative or extension to chemiluminescence when detecting global and local heat release fluctuations.Therefore, an estimation of uncertainties due to methodology, instrumentation and confounding effects is carried out, with a detailed discussion of the influence of local densities, Gladstone-Dale constant and ratios of heat capacities.Additionally, convection of heat in the non-reacting flow field and sound waves radiated by the flame are investigated.For this purpose, LIV and chemiluminescence were used to record a 3.4 kW unconfined methane/air swirl-stabilized flame at ambient conditions and perfect pre-mixing.A siren modulated the axial airflow before entering the burner, improving the signal-to-noise ratio for all measurement techniques.Using Eq., LIV is applied to the detection of acoustic fields around machines or music instruments .In Section 6.3 it is shown that it is also possible to visualize the sound field which is radiated by the flame.Sound power in the far field of a turbulent swirl-stabilized flame can be calculated directly if density fluctuations in the flame are known as a function of time and space .For measurement data, the projection h is not given as a function but in the form of a digital array, which makes a numerical solution necessary.Additionally, the function f must be zero at the projection limit at radial position R.The algorithms used for inversion are provided by the software package IDEA and are discussed by Pretzler et al. .An error estimation of the numerical method was done by reconstructing the projected data from the Abel-transformed data and comparing it to the original one.For the Abel transform, a Fourier-series-like expansion of the unknown distribution function f was used with 10 cubic polynomials for a field size of 1024 × 768 pixel with a flame diameter of app.219 pixel.In case of asymmetry, multiple projections must be recorded and a tomographic algorithm applied.In such a case we used the convolution technique implemented in IDEA, with 6 projections interpolated to 180 projections, a Hanning window parameter of 0.54 and 40 as length unit for reconstruction within a 1201 × 1201 pixel field.In a swirl-stabilized flame G might vary due to changes in molecular fractions between reactants and products, but also due to periodic variations of the equivalence ratio ϕ.The latter is not the case in this perfectly premixed flame.The influence of the molecular fractions of reactants and products on G is discussed in Table 2 using data from Gardiner et al. and GASEQ software.Table 2 indicates that the error in G due to mixing burnt and unburned gases is in the order of 2 percent for lean combustion when an average Gladstone-Dale constant is used in the combustion zone.Even a slight variation of ϕ is within this error margin.While for the reaction and convection zones a uniform G was used, the values of G at the flame boundary were linearly interpolated until they reached the value for ambient humid air.The uniformity of the equivalence ratio in the reaction zone was tested by the local OH*/CH* ratio.To test LIV in a turbomachinery-related flame, an unconfined swirl-stabilized methane-fired burner was operated at ambient conditions.Giuliani et al.
discusses the details of the burner, Peterleithner et al. presents the adaptation to rotational symmetry and first LIV measurements in the flame field.Figure 1 shows the working principle of this burner.Methane was injected into the air feed-line far upstream the burner to ensure uniform mixing.A siren developed by Giuliani et al. modulated the axial air/methane mixture before entering the burner, blocking or releasing the full cross-section of the flow by a rotating disc with rectangular shaped teeth.At the operation point investigated, the siren modulated the burner exit velocity by approximately 10 %.Since the whole air mass flow was already mixed with methane before entering the siren or combustor it turned out that in the reaction zone the equivalence ratio had a uniform value.The siren helped to improve the signal-to-noise ratio when detecting heat release fluctuations, the best flame response was found at a siren frequency of 212 Hz.All experiments were carried out with this frequency at a fixed operation point with a global equivalence ratio of 0.7 and a thermal power of 3.4 kW.The tangential air/fuel mixture then entered the main axial flow through 32 cylindrical bores, aligned tangentially around the burner axis.According to the definition given by Candel et al. this tangential flow generated a swirl number of 0.52.Through a perforated metal plate on top of the burner, additional cooling air was added to the flow field.To record fuel and air mass flow rate, caloric mass flow meters were used.A previously performed characterization of the burner by a flame transfer function showed flow velocities in the range of 2-6 m/s.To scan the flame, the burner was mounted on a traverse while the vibrometer was fixed.For LIV, a laser vibrometer based on a small Mach-Zehnder interferometer including a Bragg-cell was used .This brand of vibrometers record velocity rather than displacement, providing a higher signal-to-noise ratio at high frequencies.The laser beam from the interferometer head was collimated to a diameter of 2 mm by a lens with -40 mm focal length and reflected by a surface mirror on a heavy steel block.The distance between interferometer and mirror was kept as small as possible in order not to lose the signal, since the density gradients in the flame deflect the laser beam slightly.Before recording fluctuations in heat release rate, structural vibrations in the setup were excited to check that no vibrational resonances were present above 150 Hz.Since the flame is axisymmetric, only half of the flame was scanned with a grid size of 2 mm and a total number of 58 points in height and 16 points in width, resulting in a field of 114 × 30 mm².To avoid acoustic reflections or other noise sources, two layers of curtains were used to cover the 3 × 3 × 2.5 m³ test section.As reference for the LIV measurement, OH* and CH* chemiluminescence images were recorded by an Intensified Charge Coupled Device.For the OH* emission a 310 nm and for the CH* emission a 430 nm band-pass filter were used.For full-field data, an ICCD camera detected the chemiluminescence together with above filters.The camera characteristic and the corner shading was carefully calibrated using neutral density filters for visible and UV radiation.For all measurements background images were recorded and subtracted.From the ICCD OH* and CH* emission intensity data, the local equivalence ratios were calculated using the tables from Panoutsos et al. 
and Lauer .Since these relations depend on the flame geometry we chose the tables closely related to our geometry.In Fig. 2, a spectrograph is seen opposite to the ICCD camera.Since narrowband emissions of OH* and CH* are superimposed by broad-band emission from CO2, the corresponding intensities must be corrected for precise measurements.Therefore, the flame image was projected onto the entrance slit of the spectrograph using above UV lens.The entrance slit was oriented horizontally so that it captured the full diameter of the flame.This setup resulted in spectra for each radial line for different axial heights by scanning the combustion field by traversing the flame in x-axis direction.The ICCD camera mentioned above, recorded these spectra through the spectrograph.1200 of such spectra were averaged for each axial height scanned resulting in line-of-sight time-averaged spectral data.Spectra for some positions along the x-axis at y = 0 are plotted in Fig. 3.Together with the LIV and the chemiluminescence signals, a photomultiplier and a microphone signal were detected.Due to sensor directionality, both microphone and photomultiplier were traversed with the flame to keep them in the same position relative to the flame.All signals were referenced or triggered by the siren."The photomultiplier's field of view was tested by recording data at different distances first.The shearography technique used for this measurement is discussed in detail by Pretzler et al. .It uses a Schlieren type setup shown in Fig. 2 including a 20 mW He-Ne laser, a microscope lens, two spherical mirrors and a small Mach-Zehnder interferometer instead of the Schlieren stop.The small and rigidly fixed interferometer shears the wave front against itself and superimposes a parallel carrier fringe system to the interferograms recorded by a high-speed camera.To estimate the thermal radiation of the flame and therefore the systematic errors in density fluctuation measurements with LIV), measurements with a calibrated thermopile were performed.For a better understanding of the differences between local fluctuations in heat release rate detected by LIV and OH* chemiluminescence, velocity data recorded by a frequency modulated Doppler-global-velocimeter were used.This technique measures the Doppler frequency shift of laser light, scattered at seeding particles, moving with the flow.The applied system uses a high power continuous wave laser, and records all three components of the flow velocity simultaneously.The scattered light is detected through three absorption cells filled with caesium gas.Due to the frequency dependent absorption line of caesium, the Doppler shift of the light frequency results in a change of the light intensity behind the cell, which is detected on an array of eight fibre-coupled avalanche photodiodes for each cell.The measurement points are linearly aligned along the incoming laser beam with a spatial distance of 0.88 mm.Since the detected light intensity depends not only on the light frequency, but also on the intensity of the incident laser light, a sinusoidal laser frequency modulation is applied, which results in a modulation of the light intensity behind the absorption cell.Due to the non-linear transmission of the Cs-cell the modulated intensity exhibits a first and second harmonic and the ratio of the harmonics amplitude depends only on the light frequency.All chemiluminescence ICCD images were recorded with DaVis software, with further post processing done in MATLAB R2015a.For the 212 Hz flame 
oscillation, the corresponding period was divided in 16 equidistant instants in time or phase.For each of these 16 instants in time 6400 frames were averaged, resulting in a series of 16 phase-averaged images.Gating time of one frame was 100 µs.The gate was set symmetrical around the trigger, meaning that for the frequency of 212 Hz approximately ± 3.8° of one period were averaged for each of the 16 phase images.To correct dark noise, background images were recorded for OH* and CH* chemiluminescence and subtracted.Also the broadband emission of CO2 was subtracted from OH* and CH* images using the local spectra discussed above.All line-of-sight data were transformed into local data by Abel transform.From the equivalence ratio the molecular fractions and the Gladstone-Dale constant were derived).Figure 4 plots the line-of-sight and the local distribution of the time average value for the total OH* chemiluminescence signal, including background subtraction and correction for the CO2 spectral signal.Additionally, the distribution of the equivalence ratio can be seen in plot.Assuming complete combustion, it was possible to calculate a scaling factor for the chemiluminescence images with a dimension of W/counts.Finally, the fluctuations in heat release rate for the 16 different phases of one excitation cycle were obtained by subtracting the averaged OH* data field from each of the 16 phase resolved fields.Time resolved global fluctuations in heat release rate were then calculated by integrating the intensities in each of the 16 line-of-sight OH* fields.It has to be considered, that besides the dominant excitation frequency also higher harmonics were recorded with the ICCD.To reduce the data to the fundamental frequency, a fast Fourier transform was performed with the 16 single global fluctuation values at different time steps, to get the phase resolved amplitude at 212 Hz.The OH* emissions reached up to 110 mm in axial and 25 mm in radial direction.The size of the combustion field had to be considered when defining the LIV measurement grid, which was set to a size of 30 mm in radial direction and 119 mm in height, starting 5 mm above the burner exit.Due to the low amplitudes above a height of x = 80 mm in the chemiluminescence figures as well as in the LIV recordings, data was only plotted up to this distance above the burner exit.To relate local refractive index to local density, the local Gladstone-Dale constant G must be known for shearography and LIV.The burner was operated at perfectly premixed conditions with only slight changes of the local equivalence ratio.According to the discussion in Section 2.2.and Table 2, G varies only slightly in the reaction zone due to mixing of burnt and unburnt gases, but G changes significantly between combustion zone and ambient air.For the reaction and convection zones we used a Gaverage = 2.5 10−4 m3/kg, and for ambient air with a humidity of 50 % G = 2.23 10−4 m3/kg.At the flame boundaries the values of G were linearly interpolated between these two values.The result is shown in Fig. 
5.A temperature of approximately 1200 °C was measured in the central combustion zone where the adiabatic flame temperature for this kind of flame should be in the range of 1560 °C.There are two effects which may explain the differences.First, the radiation of the flame was measured with 180 W, which is approximately five percent of the total thermal power.Second, the combustion zone was diluted with cold ambient air, since the burner was operated without confinement, this additionally cooled down the flame.To verify the results obtained from shearography, thermocouple measurements were done in the combustion zone, showing the same peak temperatures.A detailed discussion on the accuracy of this technique is given by .According to Eqs., and the local fluctuations of heat release rate were computed from local density fluctuations.This local field of amplitudes of fluctuations in heat release rate at 212 Hz were then back-transformed to a line-of-sight data field.The phase lag Δφ between the LIV signal for each position scanned and the siren were then combined with the LOS amplitude maps.Summing up the complex LIV data resulted in a global amplitude and phase for the 212 Hz disturbance.To compare these data with chemiluminescence, 16 equidistant phase delays between 0° and 360° were added to obtain the corresponding amplitudes for one excitation cycle.For the discussion and visualization of local fluctuations of the heat release rate, the real part of the complex function was Abel transformed again to plot 16 time steps of the heat release rate fluctuations for one cycle at 212 Hz.The same procedure using FFT and cross-correlation was performed for the photomultiplier OH* chemiluminescence data, using a trigger signal provided by the siren as reference.As for the chemiluminescence images the calibration was performed by using the time average value of the photomultiplier signal and the total thermal power of the combustor.By detecting the Doppler shift of the laser frequency using the FM-DGV system described in chapter 4, the flow velocity can be evaluated based on a calibration of the system.With a modulation frequency of 100 kHz and a time averaging procedure applied to the data, the resulting cut-off frequency for the detection of velocities is 2 kHz with a temporal resolution of 250 ms. Note that the flame radiation does not interfere with the FM-DGV measurement, because the measured laser intensity signals are modulated with 100 kHz and 200 kHz.The FM-DGV standard uncertainty is below 0.02 m/s when considered the amplitude of velocity oscillations .Further details can be found in .The objective of this work is a discussion of the possibilities, limitations and uncertainties in the application of LIV in combustion diagnostics by comparing it against chemiluminescence measurements of a perfectly premixed flame.Since local chemiluminescence emission in swirl-stabilized flames does not only dependent on heat release rate but also on the flow field, local data are difficult to compare.On the other side global data – integrated over the whole flame volume – can be compared for perfectly premixed flames .Figure 7 shows a comparison between LIV data, and OH* chemiluminescence data recorded by the photomultiplier and the ICCD.For the evaluation of LIV data, first global data for density, ratio of heat capacities and Gladstone-Dale constant were used, then the local data were considered.Table 3 gives the amplitudes of global fluctuations of heat release rate at 212 Hz in Watt.Comparing Figs. 
6 and 7, uniform values for ρ, γ and G provide different peak amplitudes of local fluctuations, compared to the fully corrected field.When integrating the local fluctuations to obtain the global ones, one must keep in mind that the summation was done with respect to the local phase of each position.Since the wavelength of the heat release fluctuations in the flame at 212 Hz was only 21 mm, high local amplitudes in a field larger than this wavelength cancel out, resulting in only low variations of the global values for heat release rate.For the fully corrected LIV evaluation, the amplitude of the global fluctuation of heat release rate at 212 Hz is app.3% less than the value recorded by the photomultiplier.This difference might be caused by the assumptions which led to Eq., since a thermal radiation with 180 W power was recorded for all frequencies for the operating point using the thermopile."This energy is lost and doesn't contribute to the density changes recorded by LIV.Beside the systematic errors introduced by the assumption leading to Eq. or the assumption on constant density, Gladstone-Dale constant or ratio of heat capacities, there are random errors affecting the result.While these errors are low for the FFT process, due to the high number of sample points, the Abel transform or tomographic reconstruction strongly depends on the parameters chosen.In all these algorithms we implemented the possibility to reconstruct the input data fields from the output data fields, e.g. the projection data from the local data.This provides an error estimate for the local data or the fringe evaluation.These errors are indicated as error bars in the fully corrected LIV data in Fig. 7.The OH* chemiluminescence data recorded by the ICCD underestimates the result and is shifted in phase compared to the photomultiplier signal.This effect is nearly gone when the field of view was corrected for the shading of corners.For this correction a reference image from a uniformly illuminated surface was taken.This comparison shows that the global values detected by LIV are practically identical to those obtained from OH* chemiluminescence.Since OH* chemiluminescence is widely accepted in literature as a measuring method for global fluctuations of heat release rate it can be seen as a benchmark for LIV in this work.Contrary to this, local heat release data from chemiluminescence measurements is erroneous in this type of flame , an effect discussed in the next chapter.Figure 8 compares the local heat releases recorded by LIV and OH* chemiluminescence and the vorticity recorded by FM-DGV for four phase delays.In order to compare OH* and LIV data, also the AC components at higher harmonics must be considered.For LIV data the complex values of heat release rate fluctuations at the base and the harmonic frequencies were superimposed.To better discuss the results, the corresponding vorticity recorded by local FM-DGV is also plotted in Fig. 
8.The vorticity is dominant along the inner shear layer of the conical swirl-jet.In the outer shear layer, a counter-rotating second vortex can be found.For an uncertainty assessment one must keep in mind that the evaluation of LIV data is done in two steps.The first step links the refractive index fluctuations to fluctuations in density by the Gladstone-Dale constant which varies slightly between reactants and products.In the reaction zone an average value has to be taken due to the permanent mixing of fresh and exhaust gases, resulting in a ± 2 % error margin.The error margin is even less when measuring the sound field surrounding the unconfined flame with LIV since ambient conditions prevail.The convection zone above the flame is more complex, here hot products and cold ambient air are mixing with a maximal difference of 9 % in the Gladstone-Dale constant.So, in a worst case - having only hot reactants or cold ambient air along line-of-sight during the average process – the density fluctuation amplitudes might be 9 % off from one scanning position to the next.,The second step of evaluation links density fluctuations to heat release rate fluctuations in the reaction zone according to Eqs. and.To verify the assumption of Eq. also for siren excitation, we recorded the density fluctuations for 212 Hz using LIV in the reaction zone with and without flame but with operating siren excitation.It turned out that the density fluctuations induced by the siren were three orders of magnitude smaller than that measured at the same position in hot flow.The uncertainty assessment due to the local changes of the ratios of heat capacities is more complex and was performed in chapter 2.1.Here an average value between reactants and products was used, which was corrected for each scanning position by the temperatures derived from the density field.This reduces the error margin to ± 1 %.Again, this error is smaller in the sound field and larger in the convection zone.On the other hand, the local chemiluminescence signal in a highly-turbulent swirl-stabilized flame is influenced by turbulence and strain .Steinberg and Driscoll discusses how a counter rotating vortex pair affects an initially undisturbed flame front.By curling up the surface, a strain is induced in the now curved flame front, ahead of and behind the vortex pair.This pair of strain rate structures is spatially separated from the vortical structures and exert a positive strain in the leading strain-rate structure and a negative strain in the trailing strain-rate structure.The area with high positive strain in front of the vortex pair is marked by a white circle in Fig. 8 for the phase delay of 112.5°.Lauer et al. show how the strain impacts the local chemiluminescence in lean and highly turbulent flames.Small chemiluminescence signals will then emerge from regions with strong heat release and vice versa, leading to erroneous information about local heat release.Comparison of LIV, chemiluminescence and FM-DGV data in Fig. 8 reveals how strong local chemiluminescence data from swirl-stabilized flames are affected by this effect.Since LIV data are unaffected by turbulence and strain, a tomographic reconstruction from six equally spaced directions was performed and is given in Fig. 
9 for a cross-sectional cut in the x-z plane showing a nearly perfect rotational symmetric distribution.While refractive index fluctuations caused by heat release fluctuations are of high amplitude, refractive index fluctuations caused by sound waves emitted by the flame show significantly smaller amplitudes.Using Eqs. and, the ideal gas equation, 212 Hz excitation frequency and a geometrical path length of 0.1 m, the minimum detectable pressure fluctuation can be estimated with 0.03 Pa or 63 dB from the picometer resolution of the LIV used.In 300 mm distance from the flame axis 70 dB were recorded when the siren caused a 212 Hz flame oscillation.This means that beyond fluctuations in heat release in the reaction zone and heat convected by hot spots downstream of the reaction zone, acoustic waves emitted by the flame can also be detected by LIV but as LOS data only.This is due to the fact that for tomographic reconstructions the function f in Eq. must have zero values at the projection limit.This is never the case for an acoustic wave.One also has to consider, that the amplitude is underestimated when close to the center of emission, since in this position oscillations along the sampling laser beam average out.LOS density fluctuations in and around the flame are shown in Fig. 10 for the different zones in the flame field.To better discuss low amplitude fluctuations, a square root presentation of data was chosen.In the left lower corner, heat release fluctuations in the combustion zone can be seen.Above the flame, density fluctuations caused by convection of hot spots are present.From 50 mm to 150 mm in radial direction pressure fluctuations of the sound waves are the only one to alter density.The velocity of the fluctuations in heat release rate can be estimated from the wavelength of app.21 mm and the frequency of 212 Hz, the sound wave is much faster, and the entropy waves are convected with the flow velocity downstream the flame.Considering this, pressure and heat contributions in the convection zone can be separated by spatio-temporal correlations with respect to the local phase.It can also be seen, that the sound waves radiated from the reaction zone are strongly reflected by the base plate underneath the flame and deflected upwards with a sound power maximum between 60° and 70° from the base plate."This finding is confirmed by a previously published research, investigating Strahle's assumption .LIV is capable to provide data of heat release fluctuations by recording refractive index fluctuations, which are linked to density fluctuations by the Gladstone-Dale constant.The relation between both quantities is valid under the assumption that no molecular weight changes occur during combustion in the regions investigated, the radiation loss is small, and that there is no or little conduction to a combustion chamber wall.When density fluctuations caused by pressure fluctuations are significantly smaller than that caused by heat release rate in the reaction zone, a direct relation between density and heat release fluctuation is possible.At first the LIV data are line-of-sight data, but with methods of optical tomography local data can be derived.Under above assumptions, precise data can be obtained when the local temperature or density field is recorded and used for the data reduction and when corrections for the local ratio of heat capacities and the Gladstone-Dale constant are performed.Outside the flame density fluctuations are caused only by acoustic waves, emitted by the oscillating 
flame.Modern LIV systems provide sensitivity high enough to visualize the acoustic field with precision and accuracy.In the convection zone above the flame, both pressure and hot spots contribute to the signal, but while hot spots are convected with flow velocity, acoustic waves propagate with the local speed of sound.Unlike chemiluminescence, LIV does not depend on turbulence and strain.This means that essential errors present in local chemiluminescence are not present in LIV measurements in swirl-stabilized flames and therefore it is possible to calculate accurate local data.Chemiluminescence can still be used to determine global variations in heat release.Outside the combustion zone, LIV also provides information on the convection of heat and, most importantly, sound waves emitted by the flame.Current work focuses on the development of a camera-based full-field LIV system, which captures the whole combustion field with a single measurement in high resolution.In addition to density fluctuations, the system is capable of measuring the mean local density.Via spatio-temporal correlations, a local phase field can be obtained, which is used to calculate a local, phase-averaged velocity field. | Fluctuations in heat release, hot and cold spots convected by the flow and sound waves are all related to density fluctuations, a number easily detected by interferometry.Same as chemiluminescence, interferometry is a line-of-sight method and therefore needs optical access and tomographic reconstruction of density fields.In this work we discuss the use of Laser Interferometric Vibrometry (LIV) as an alternative or extension to chemiluminescence when detecting heat release fluctuations in the flame, convection of hot and cold spots in the non-reacting flow field and sound waves emitted from a 3.4 kW turbulent, unconfined swirl-stabilized methane/air flame under perfectly premixed conditions.For this discussion local temperatures, local ratios of heat release and local Gladstone-Dale constants are determined and their influence on the result discussed.Global and local fluctuations in heat release rate recorded by LIV and chemiluminescence are compared, resulting in the same global values if local temperature fields and local equivalence ratios are used for LIV data evaluation.In contrast, local fluctuations in OH* chemiluminescence emission are influenced by turbulence and strain in this type of flame. |
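The entry above describes an error estimate for the Abel inversion in which the inverted radial profile is projected forward again and compared with the original line-of-sight data. Below is a minimal numerical sketch of the forward Abel transform used in such a consistency check; the Gaussian test profile, grid and NumPy implementation are illustrative assumptions and do not reproduce the Fourier-series-based algorithm of the IDEA package.

```python
import numpy as np

def forward_abel(f, r):
    """Project a radial profile f(r) to line-of-sight data h(y):
    h(y) = 2 * integral_y^R f(r) * r / sqrt(r^2 - y^2) dr."""
    h = np.zeros_like(f)
    for i, y in enumerate(r[:-1]):
        rr = r[i + 1:]                      # integrate from just above y up to R
        integrand = f[i + 1:] * rr / np.sqrt(rr**2 - y**2)
        h[i] = 2.0 * np.trapz(integrand, rr)
    return h

# Assumed test case: Gaussian radial profile that is practically zero at the
# projection limit R, as required for the inversion.
R = 15.0                                    # projection limit (e.g. mm)
r = np.linspace(0.0, R, 301)
f_true = np.exp(-(r / 4.0)**2)

h = forward_abel(f_true, r)                 # synthetic line-of-sight projection
# In the actual evaluation, the measured projection would be inverted (e.g. with
# IDEA), the result forward-projected with a routine like the one above, and the
# residual between re-projected and measured data used as the error estimate.
```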
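Both the chemiluminescence and the LIV processing described above reduce 16 phase-averaged samples of one excitation cycle to an amplitude and phase at 212 Hz via an FFT. The sketch below, with synthetic sample values, illustrates that reduction step; the numbers and the use of NumPy are assumptions and do not represent the original MATLAB/DaVis implementation.

```python
import numpy as np

f_exc = 212.0                       # siren excitation frequency in Hz
n_phases = 16                       # equidistant phase bins over one cycle
t = np.arange(n_phases) / (n_phases * f_exc)

# Hypothetical phase-averaged samples: mean value plus the fundamental at
# 212 Hz and a weak second harmonic, as also observed in the measurements.
signal = 3400.0 + 120.0 * np.cos(2 * np.pi * f_exc * t + 0.6) \
                +  15.0 * np.cos(2 * np.pi * 2 * f_exc * t)

# FFT over exactly one period: bin 1 is the component at the excitation frequency.
spectrum = np.fft.rfft(signal)
amp_212 = 2.0 * np.abs(spectrum[1]) / n_phases   # amplitude of the fundamental
phase_212 = np.angle(spectrum[1])                # phase relative to the trigger

print(f"amplitude at {f_exc:.0f} Hz: {amp_212:.1f}, "
      f"phase: {np.degrees(phase_212):.1f} deg")
```

Summing such complex components over all scanning positions, with their local phases retained, yields the global amplitude and phase of the 212 Hz fluctuation as described in the text.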
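The quoted minimum detectable pressure fluctuation of roughly 0.03 Pa (about 63 dB) for a 0.1 m path can be retraced from the Gladstone-Dale relation together with an adiabatic sound-wave approximation. The property values and the adiabatic relation in the sketch below are standard ambient-air assumptions made for illustration, not numbers taken from the paper.

```python
import numpy as np

# Assumed ambient-air properties (illustrative, not quoted from the paper).
G = 2.23e-4        # Gladstone-Dale constant of humid air, m^3/kg
rho0 = 1.2         # ambient density, kg/m^3
gamma = 1.4        # ratio of heat capacities
p0 = 101325.0      # ambient pressure, Pa
L_geo = 0.1        # geometrical path length through the sound field, m
p_ref = 2e-5       # reference pressure for sound pressure level, Pa

dp = 0.03          # pressure fluctuation amplitude to be resolved, Pa

# Adiabatic sound wave: dp = c^2 * drho with c^2 = gamma * p0 / rho0.
drho = dp * rho0 / (gamma * p0)

# Optical path length change seen by the vibrometer (n - 1 = G * rho).
dL = G * drho * L_geo

spl = 20.0 * np.log10(dp / p_ref)
print(f"density fluctuation: {drho:.2e} kg/m^3")
print(f"optical path fluctuation: {dL * 1e12:.1f} pm  (SPL: {spl:.1f} dB)")
```

With these assumed values, a 0.03 Pa wave over a 0.1 m path changes the optical path by only a few picometres, which is consistent with the picometre-level resolution quoted for the instrument.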
573 | Designing context-aware systems: A method for understanding and analysing context in practice | Technological developments, such as big data and the Internet of Things, result in more and more software applications taking the context into account.There are plenty of examples of developments that require such systems.A typical and traditional example in the literature is that of a context-aware tour guide.Such tour guides provide information to their users that is relevant considering, for instance, their location or personal preferences .Another example are devices that are used to maintain balance on a smart energy grid .This requires the monitoring of energy points in houses .This contextual information might then be used to coordinate the charging of electronic vehicles .To achieve its purpose, such a system should interact with a variety of systems that monitor energy consumption and that switch on and off the charging of the vehicles.Furthermore, the system should be designed such that it meets the requirements of a variety of businesses, families and other parties that use and provide energy and information.In addition, it needs to be able to maintain a balance under a variety of circumstances, such as a heat wave, which leads to high energy consumption.The system thus requires taking into account the context.At the same time, the environment of the system is highly complex.All in all, the types of opportunities enabled by developments such as the IoT and big data are growing and they often involve the sensing of contextual information.For example, IoT offers the opportunity to monitor almost every link in a supply chain and to adapt to changes in the market fast .Such systems that sense and adapt to context are context-aware .The environments in which these systems operate are often highly complex.In 1994, Schilit and Theimer were among the first to introduce the term ‘context-aware’ in relationship to computing.An overview of context-aware systems by Hong, Suh and Kim shows that context awareness involves acquiring, sensing or being aware of context, as well as adapting to it or using it.Correspondingly, according to Dey and Abowd context awareness falls into two categories, namely using context and adapting to context.The environment in which a context-aware system exists can be viewed as open and infinite.However, when designing a system, only part of this environment is relevant to take into account, i.e. 
the context of the system.The identification of what parts of the environment are parts of the relevant context is non-trivial, as the environment can be highly complex and involve a high number of elements that are possibly relevant.Therefore, a method is needed to investigate the context of context-aware systems in complex environments and to determine what is relevant to take into account in their design.The need for such a method originated from practice.In the international container-shipping domain, new systems supporting business-to-government information sharing are required .The main need for such a method originates from the high number of elements that possibly could be taken into account.There are several factors that make this environment highly complex.First of all, the idea that is currently dominant in the domain, is to support business-to-business information sharing such that customs can reuse this data and piggy back on it.This means that the system should support B2G information sharing, as well as B2B information sharing in a supply chain.A supply chain is a massive set of tangled branches .Supply chains include a high variety of businesses having different interest and a variety of legacy systems.These businesses vary on the type of services they provide, their economic activity, level of technical capability and interdependencies with others, amongst others .The information needs differ per organization and for high quality information the companies are dependent on each other.Those who make profit from selling data might not benefit from freely sharing it and they thus have no incentive to do so.The same business might perform different functions in different supply chains or within the same supply chain.In addition, often businesses authorise others to act on their behalf.Not only the businesses involved in the information sharing vary considerable.Different types of goods might be shipped as well, including high value, perishable, or dangerous goods.Furthermore, information sharing will often be international, which means that it is governed by legislation from a variety of sources.The information that is shared also can be of different types.There is IoT data from sensors, such as temperature or GPS location.In addition, businesses might share various contracts, invoices and other documentation with each other.Furthermore, they share various declarations with customs, such as Entry Summary Declarations and import declarations.In addition, a number of systems might be involved in information sharing and these might vary as well, for example, email, phone, and electronic data interchange .This complexity requires a context-aware system to support information sharing.It is likely that in different situations the information sharing needs to be supported in a different way by the system.Many of the things above could have an impact on whether and with whom information should and can be shared and in what way.At the same time, this complexity makes it difficult to establish exactly what belongs to the context.It is difficult for many of these elements to see at first sight whether they should be taken into account as context in the design of the B2G information sharing system to sense and adapt to.A method is needed to either avoid the making assumptions, or spending a lot of resources on investigating each element.The former would make the design process less effective, while the latter would make the design process less efficient.Instead, we developed a new method 
for investigating context.A detailed understanding of the context is needed to understand what a context-aware system should sense and adapt to.The involvement of, for example, many actors with different and even opposing requirements and various legislations based on the context, means that it is not easy to determine which elements in the context should be sensed and adapted to by the system.Identifying what belongs to the context requires an identification of the situations in which the system needs to operate and the balancing of the requirements of the stakeholders in those situations.In complex large-scale multi-stakeholder environments, the variety of these situations and stakeholders is larger.At the same time, what belongs to the relevant context influences what the design of a context-aware system should look like.That is, it determines what sensors are needed to gather context information and what adaptors are needed to adapt to the context.It also determines what rules the system should incorporate for adapting to different situations based on the context information obtained from the sensors.This means that a thorough investigation of the context should be part of the integral design process for such systems.Having a method to easily and systematically decide what belongs to context can help to improve the efficiency of the design process.Furthermore, having a method to gather and structure knowledge of the relevant context and systematically derive the design of the system from that, might improve effectivity.The current literature on designing context-aware systems does not provide a method to systematically investigate context and base the design of a context-aware system on this.In this research, we address this gap in knowledge by proposing a method that can be used to investigate context in a structured manner, to clearly distinguish irrelevant elements in the environment from relevant context elements and to base the design of a context-aware system directly on this insight.In this paper, we use two systems as running examples, viz. 
a context-aware tour guide and a context-aware B2G information sharing system.The example of the tour guide will conform with the idea that most readers have of what a traditional context-aware system is, as there is quite some work on it .It provides an easy to understand and familiar example.However, the method will be most useful in more complex cases, such as that of the B2G information sharing system.The development of such a system was what initially motivated the development of the method.The complexity is mainly in the high number of things that could belong to context.For the examples, we focus on what elements we found to be part of the context and how we identified them.It is impossible to show all elements that would likely have been taken into consideration fruitlessly without the use of the method.Therefore, we cannot use this example to show the full complexity of the domain.Instead, we use it to illustrate how the method can be used in practice.This paper is structured as follows.In the following section, we discuss related work on context-aware systems.We then discuss our methods in section 3.Context can only be investigated effectively and efficiently when it is possible to easily decide what does and what does not belong to context.This requires a definition of context that makes clear what distinguishes relevant elements in the environment from irrelevant ones.In section 4, we provide a more specified definition of context.This definition is the basis for the gathering and analysis of data that is necessary to get insight into context in step 1 of our method.The method itself is presented in section 5.We provide a detailed discussion of what a designer should do to perform each step.Furthermore, we discuss why we believe that this is the appropriate course of action for the designer in that step.In section 6, we draw conclusions and discuss the implications of the use of the proposed method.We also describe how future work could help to further evaluate the method.There is much related research available.However, this research is not focused on designing context-aware systems in the highly complex environments we discussed in the introduction.First, we discuss the work on designing context-aware systems.Then, in section 2.2, we discuss the work on requirement engineering in designing context-aware systems.Insight into context does not only need to be obtained during the design process and translated into design.The context-aware system also needs to sense context and adapt to it at runtime.Therefore, context needs to be modelled in the system as well.In section 2.3, we discuss some of the work on representing context information and rules for adapting to context.Determining what belongs to the relevant context of context-aware systems requires an unambiguous and easy to understand criterion for deciding this, as most actors are non-experts and have limited technological knowledge.Such a criterion should be derived from a clear definition of context.We discuss the related work on definitions of context in subsection 2.4.We provide our own definition of context in section 4.In 2009, Hong, Suh and Kim found that about 5.5% of the papers on context-aware systems provide guidelines for development.A portion of this work, and more recent similar work, focuses on solving issues in a specific application domain or with specific types of sensors.The majority of the work is on providing technical tools, frameworks and infrastructures that a designer can use to build context-aware 
applications.In this ‘infrastructure-centred approach’ there is an assumption that the complexity of developing the systems can be reduced by using an infrastructure that can gather, manage and distribute context information .These tools and frameworks for designing context-aware systems can be quite useful to designers, helping them to elaborate the technical details and create the system.However, this assumes that the context is investigated.In a more recent survey, Alegre, Augusto and Clark provide an overview of methods for engineering context-aware systems.An analysis of the described methods shows that they do not include the investigation of context as an explicit and fundamental stage in the design process.In concordance, Alegre, Augusto and Clark conclude that the work does not include techniques and tools for understanding the context.The research by Alegre, Augusto and Clark does show that insight into context is important.On the basis of the results of a questionnaire completed by 750 researchers, they determine that among the most important features of a method for developing context-aware systems is the ability to represent situations in which the system should adapt in order to better understand them .On the basis of an analysis of the literature, they state the following: “All context information modelling and reasoning techniques need to enable the situation representation, but there is no support for understanding the situations and the contexts that they are going to be represented, stemming from the requirements.” .Our method, in which getting insight into context is a fundamental stage in the design process, thus addresses a problem that is important to researchers involved in the design of context-aware systems.In their work on context-aware services, Finkelstein and Savigni make a distinction between fixed goals and requirements.They state that goals are fixed objectives and requirements are volatile and can be influenced by context .Due to their dependence on context, the functional requirements in the case of a context-aware system can be viewed as conditional.For example, ‘filtering pricing information from the data’ could be a requirement that needs to be met under the condition that a certain situation happens at runtime, e.g., when a competitor would get access otherwise.This conditionality requires situations to be found in which the functional requirements for the system are different.This insight into context is necessary to establish what adaptors are needed to fulfil the functional requirements in those situations.Furthermore, relating these situations to the functional requirements is necessary to ascertain according to what rules the system should adapt.In addition, it needs to be determined what elements in those situations need to be sensed in order to identify the situation the system is in at runtime.In general, the requirements engineering process does not seem to provide a way to get the insight into context that is needed to develop context-aware systems.Nuseibeh and Easterbrook describe the requirements engineering process as consisting of context and groundwork, eliciting requirements, modelling and analysing requirements, and communicating requirements.The stage of context and groundwork is viewed as preparation and is used to determine the feasibility of the project and to select methods for further development .It thus does not involve the systematic investigation of context that we need.The stage of requirements elicitation involves 
identifying stakeholders and goals .This also does not involve linking situations to functional requirements.The work specifically on requirements engineering in light of context awareness is limited.The notion of context-aware systems is often not mentioned explicitly in this work.Instead, the work discusses, for example, context-aware services, dynamic adaptive systems or self-adaptive systems .This literature does, however, acknowledge the importance and different type of relationship between context and requirements in the case of context awareness .In some cases, it even goes as far as viewing adaptation as requirements engineering by the system itself or as requiring requirement awareness by the system .Even though the existing work appreciates the complexity of requirements engineering in the case of context awareness and the need to have insight into context, it does not provide a way to systematically get this insight.Instead, it proposes the use of existing usability methods or interviews, for instance .Of course, such techniques might be useful in gathering the appropriate data necessary for getting insight into context.Nevertheless, these methods do not provide the structure necessary to investigate context in the case of large-scale multi-stakeholder environments.More specifically, they do not provide a way to easily decide what belongs to the context, and this is necessary to ensure efficiency.Requirements engineering is part of a larger process of designing systems.According to Hevner , design science consists of a relevance cycle, rigour cycle and a design cycle.The connection to the environment made in the relevance cycle is to ensure that the design problem is relevant, that the artefact is usable in practice and that it actually offers a solution to the problem .This cycle involves determining the requirements for the system and field testing of the system .The distinctive property of context-aware systems is that they sense and adapt to context to deliver a solution to the design problem.Designing such systems thus involves determining what the system needs to sense, what adaptations it needs to make, and in response to what sensor information.Context awareness mainly impacts the design process in the relevance cycle, as this is where the connection to the environment is made.However, for context-aware systems, an additional connection needs to be made to determine what conditions should be included for their conditional requirements.In other words, it needs to be determined what design could deliver the solution to the problem in different situations.When such a connection is not made, then the context taken into account is based on assumptions of the designer.These assumptions could be checked to some extent in the relevance cycle by determining whether a solution to the problem is found.However, this checking is indirect and thus leads to an inefficient design process.Furthermore, this way of testing makes it very hard, if not impossible, to rule out that parts of the context are included that do not help in solving the problem.This can lead to a design of a system that is needlessly complex.Peffers et al. describe several activities that are common in design science research, viz. 
1) problem identification and motivation, 2) define the objectives for a solution, 3) design and development, 4) demonstration, 5) evaluation, and 6) communication. To ensure that the problem, objectives and solution are relevant, a connection to the environment needs to be made in the design process during activities 1 and 5. In activity 3, the artefact's desired functionality and its architecture are determined. Based on this, the actual artefact is created. The design of context-aware systems should follow the same steps. However, the additional connection to the environment that is necessary for developing context-aware systems should be made in activities 2 and 3, because for context-aware systems the functionality and architecture partially depend on what elements of the environment are part of the relevant context. For functionality, insight into context is needed to determine what the system needs to be able to sense, what adaptations it needs to be able to make and what situations should lead to what adaptations. The architecture should have the necessary sensors and adaptors to be able to offer this functionality. An in-depth overview of existing work on representing context and reasoning with context information is not within the scope of this research. This area is very extensive and the literature already contains several overviews. The main difference between our work and the existing work lies in the way the representations and rules for reasoning are established. In our case, these follow directly from an investigation into the context of a context-aware system. To explain the relationship with the existing work on representation and reasoning, we start with Winograd, who states the following: "The hard part of this design will be the conceptual structure, not the encoding. Once we understand what needs to be encoded, it is relatively straightforward to put it into data structures, data bases, etc." Winograd stresses the importance of having a conceptual model of context. This requires insight into the nature of context. However, there is currently no shared understanding of context. This lack of shared understanding is associated with imprecise definitions of context in the literature and a lack of consensus on context definitions. In this work, we provide a definition of context that is intended to be applicable in a variety of application domains. The proposed definition is based on definitions of several other concepts, such as 'focus', 'situations', 'context elements' and 'context relationships'. The method helps designers to build the conceptual model for the context of their system based on these definitions. In fact, this is the first step of the method. The conceptual model is then used as a basis for expressing rules in the system and determining its required sensors and adaptors. The conceptual model thus bridges the gap between investigating context and representing context in a context-aware system. This allows for a direct translation of insight into context into the rules with which the system should reason. Winograd notes that encoding or representing context itself is not the hard part. Yet, Pertunnen, Riekki and Lassila note that conceptual models have received little attention in the literature, compared to context representation. There is even less work on determining what context and rules should be represented in a specific context-aware system. The research that does focus on this issue covers only a specific domain, such as m-commerce or web engineering. This work is thus of
limited application in other domains.The work of Crowley focuses on a specific domain as well, viz. observing human behaviour.However, he mentions some ideas that we will adapt and extend.Crowley states that designers should only include entities and relationships that are relevant to a system task to prevent the system from becoming very complex.The relevant entities and relationships are selected by “first specifying the system output state, then for each state specifying the situations, and for each situation specifying the entities and relations” .The relations that he refers to are the properties of the entities.They are closer to what we define as context elements than to the context relationships that describe the impact of context in our method.Determining the context relationships would be akin to determining what situation should be specified for an output state in the work of Crowley .In contrast to our work, Crowley does not provide explicit guidance on how to investigate this and on how to express context relationships as rules with which the system can reason.The large volume of literature on context-aware systems contains many different definitions.There is currently no consensus on the definition of context in the literature .The earlier work on context awareness contains definitions that use synonyms for context or use examples .This leads to generality in the former case and to incompleteness in the latter case .For designers of context-aware systems, such definitions thus respectively provide too little guidance for investigating context, or could exclude parts of context that should be included in the design of the system.In the literature, several attempts have been made to define context for operational use without relying on synonyms or examples.Especially the work of Dey and Abowd is often used as a basis for application-specific or domain-specific definitions.Dey and Abowd define context as follows: “Context is any information that can be used to characterize the situation of an entity.An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves.,According to the definition given by Dey and Abowd , the most important characteristics for belonging to context are 1) characterising a situation and 2) being considered relevant.However, the definition cannot be used as a basis for, for instance, quickly deciding whether something belongs to context.Their definition still leaves implicit what it means to be considered relevant to an interaction and to characterise a situation.We need to know what the notions ‘relevance’ and ‘characterising’ mean to be able to decide whether something belongs to context.Winograd argues that the definition given by Dey and Abowd is too broad.He states that: “Something is context because of the way it is used in interpretation, not due to its inherent properties” .Zimmerman et al. also mention this issue with the definition given by Dey and Abowd .Their solution is to categorise context into the fundamental categories of individuality, activity, location, time and relations.According to them, the activity predominantly adds the relevance of context elements.According to Winograd and Zimmerman et al. 
, something is context because of its relationship to something else.This conforms with the interactional view on context described by Dourish .According to Dourish , when viewed as an interactional problem, “contextuality is a relational property that holds between objects or activities”.The interactional view implies that something belongs to context when it has a relational property with something else.However, we still have no certainty about when exactly this is the case.Brézillon states that context cannot be spoken about out of its context.Context thus is always a context of something.According to Brézillon , this ‘something’ is a focus of an actor.Brézillon explains focus as follows: “Context surrounds a focus and gives meaning to items related to the focus.The context guides the focus of attention, i.e. the subset of common ground that is pertinent to the current task.,He views context as knowledge and the focus helps to discriminate irrelevant external knowledge from relevant contextual knowledge.However, as Brézillon himself states, “the frontier between external and contextual knowledge is porous”.When in his model for task accomplishment, a discrepancy is found between the model and what a user does, the user is simply asked for an explanation .The new knowledge is then added to the model.This means that Brézillon does not make explicit what belongs to context either.This decision is ultimately left to the user.The notion of a contextual element is central to the definition of context given by Vieira et al. , namely that a contextual element is “any piece of data or information that can be used to characterize an entity in an application domain” .In a similar vein as in the work of Brézillon , contextual elements are relevant to a focus, which is determined by a task and an agent .A contextual element is an attribute of a contextual entity , which is an entity that should be considered for the purpose of context manipulation .Contextual elements can be identified from the attributes and relationships the entity has .Vieira et al. 
already note that the criterion for identifying a property as a contextual element in their case is subjective and depends on the context requirements and a conceptual model.Therefore, the question of what belongs to context becomes a question of what should be in the conceptual model.The problem of determining what belongs to context has thus been moved rather than solved.In the work above, there is a focus on the relevance of something as arising from an activity or actor.Similarly, other work discussing relevance focuses on determining the relevance of something at runtime and dynamically defining context for the specific task or activity at hand.It is important to make clear the distinction between such work and our work.The work presented in this paper is concerned with supporting the determination of what is relevant and what should be included in the context when designing a system.We thus focus on what a context-aware system should be able to sense and what adaptations it should be able to make, and in what possible situations.To summarise, our work adds to the existing work a definition of context that can be used to clearly decide what does and does not belong to context.Furthermore, it provides a method for getting insight into context and deciding what belongs to the relevant context, based on this definition.It also provides a way to identify the sensors and adaptors that the context-aware system needs to fulfil its functional requirements.In addition, it supports the identification of the rules according to which the system should adapt, as well as expressing these rules in such a way that the system can use it to reason with.The new method was developed using a design science approach.Design science research starts with a problem that is important as well as relevant .We established the need for a new method in a triangular fashion.First, we identified the need for the method in practice when developing a B2G information sharing system in the domain of international container shipping.Next, we searched the relevant literature for a suitable method that we could use.However, the relevant literature did not provide such a method.Furthermore, we found that a lack of a method for investigating context is mentioned as an issue in the literature as well .Design science research involves two processes, namely build and evaluate .The method we propose requires a specified definition of context.Without such specification, it is not possible to clearly distinguish elements in the environment that are relevant and thus part of the context, from those that are not.We therefore started the ‘build’ process by searching the literature for a suitable definition of context.However, the literature did not contain such a definition.Therefore, we developed a new definition of context.Furthermore we formalised this definition to ensure that it is precise enough for our purposes.We then developed the method based on this new definition.The evaluation of a method like this is highly complex.We evaluated the method by applying it in practice when developing a context-aware system for B2G information sharing in international container-shipping.We established in the first step that a new method is needed, because the design process of context-aware systems in complex environments otherwise is likely to become either ineffective or inefficient.According to Pries-Heje et al. 
the evaluation of a process, such as a method, can rely on the idea that a good process will lead to a good product. We can establish whether using the method leads to an effective design process by determining whether the model of the relevant context established using it is correct. We did this by letting experts in the domain check the model. We kept track of the efficiency and discussed this with the project partners. In addition, we explained the concepts of context elements and context relationships to the parties we interacted with in the project, such as project partners and interviewees, and used these concepts to discuss context. We discuss the results of the evaluation in section 6. In this section, we provide a definition of context that can be used to develop a criterion for deciding what belongs to context in step 1 of our method. It can also be used to structure knowledge on context in such a way that it can easily be determined what sensors and adaptors the system needs and according to what rules it should adapt. In section 4.1, we discuss our choice for using a logic programming paradigm to formalise the definitions. We make a distinction between semantics and syntax. We provide our basic syntax in section 4.2. Next, in section 4.3, we define some basic notions that our definition of context is built upon. In section 4.4, we provide our definition of context. There are several important benefits of representing the definition of context and related concepts using a formal notation. We set out to define context as precisely as possible. As we discuss in the previous sections, this can help with clearly identifying what belongs to context, and this can improve the efficiency and effectiveness of the design process. The process of formalising enforces a certain level of precision. The other reasons for formalising the definitions have to do with the way in which we intend to use the definitions in the method itself. In step 1.1 of the method, the designer builds a model of the context. However, the environment that they are investigating can be highly complex. Consistently and systematically expressing knowledge about context might help designers to deal with this high complexity. More specifically, it could help them with detecting inconsistencies and gaps in knowledge. The model of the designers is based on the definition of context. The second reason for formalising the definitions is thus that it provides designers with the language and semantics to consistently and systematically model the context of their system. In step 2 of the method, the sensors that the system needs to collect context information and the adaptors that the system needs to adapt are directly derived from the model of context. Furthermore, in step 3, the context rules for determining what adaptations to make based on the context information are also directly derived from the model of context. The formalisation supports this by providing a way to model context that allows for such a direct derivation. In addition, the same formalisation is used to provide a way to express context information gathered by the sensors of the system at runtime. Furthermore, it allows for expressing a command to an adaptor to perform a certain action. In addition, it allows for expressing context rules. The context rules express how the system should adapt, based on the context information obtained by the sensors. This is where logic programming more specifically plays a role. By using the logic programming paradigm for our formalisation, the model of the context made by the
designer can almost directly be translated to context rules. Furthermore, these rules are then suitable for use in a logic program that can be used by a reasoning component of a context-aware system to decide what the appropriate adaptation is in different situations. This makes the step of going from a model of context to a logic program that can be used by the reasoning component of the context-aware system very small. This makes it easier for the designer to make such a translation. Furthermore, it helps to ensure that the logic program is very close to what it should be according to the model of context. In early work on logic programming, Kowalski noted the usefulness of predicate logic in programming. In 1977, Warren, Pereira and Pereira implemented a more efficient version of the logic programming language Prolog, based on this work and the work of Colmerauer and van Emden. Subsequently, many others built on this foundation to further extend and refine logic programming. Lifschitz provides an extensive survey of the foundations of logic programming with classical negation and negation as failure. Unless otherwise stated, we follow the notation and terminology of Lifschitz to describe our syntax. We start with a nonempty set of atoms A. The choice of A depends on the language used. In our case, the atoms are simple predicates as defined below. We directly introduce variables in the language and introduce the notion of schematic atoms. Atoms are called positive literals. Atoms preceded by a classical negation symbol '¬' are called negative literals. Following Lifschitz again, we refer to a positive literal or a negative literal as a literal. A schematic atom is called a schematic literal. A ground atom is called a ground literal. We follow the convention that terms with a capital as their first letter denote variables. Context rules are the same as basic rules defined by Lifschitz. Again, we also introduce the notion of schematic context rule in the same definition. In addition to context rules, we need to express context relationships. The definition of a context relationship rule is similar to that of a context rule and it is also based on the definition of a basic rule by Lifschitz. However, it uses a different operator to connect the head and the body of the rule. The use of a different operator signifies that context rules and context relationship rules are different types of rules. Context rules are used to express what adaptations should be made in different situations. The body of the rule expresses the situation in which the adaptation needs to be made. The head expresses what adaptation should be made by the system, namely an adaptation that makes the head true. Context relationship rules express a dependency of the truth of the head of the rule on the situation expressed in its body. In other words, context relationship rules express that their head is true when their body is true. This is a different type of relationship. Context rules are further discussed in section 5.3. Context relationships are further discussed in section 4.4. A program is a set of rules. In this case, we will use the context rules to reason with in a logic program to derive what adaptations need to be made based on the context information sensed. As we will further explain in section 4.4, context relationship rules have a different function and we do not need to use them in a logic program. A logic program is a set of context rules.
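To make this more concrete, the following sketch shows one possible way to represent ground literals and context rules in code and to let a simple reasoning step derive adaptation goals from sensed context information. It is only an illustration under our own assumptions: the predicate names are invented for the tour guide example, the rule is ground rather than schematic for simplicity, and a full implementation would use a logic program with a proper reasoning component rather than this ad hoc data structure.

```python
# A minimal sketch, under our own assumptions, of how ground literals and
# context rules could be represented and used by a reasoning component.
# Predicate names such as "near" and "provide_info" are invented for the
# tour guide example; they are not part of the formalisation itself.

from dataclasses import dataclass
from typing import FrozenSet, Tuple


@dataclass(frozen=True)
class Literal:
    predicate: str
    terms: Tuple[str, ...]
    positive: bool = True          # False models a literal preceded by '¬'


@dataclass(frozen=True)
class ContextRule:
    head: Literal                  # the adaptation goal: make this literal true
    body: FrozenSet[Literal]       # the situation in which the adaptation is needed


def adaptation_goals(rules, situation):
    """Return the heads of all context rules whose body holds in the sensed situation."""
    return {rule.head for rule in rules if rule.body <= situation}


# Context information sensed at runtime: a set of ground literals.
sensed = frozenset({
    Literal("user", ("mary",)),
    Literal("near", ("mary", "pyramid_of_cheops")),
})

# A (ground) context rule: provide information about a sight when the user is near it.
provide_when_near = ContextRule(
    head=Literal("provide_info", ("pyramid_of_cheops", "mary")),
    body=frozenset({
        Literal("user", ("mary",)),
        Literal("near", ("mary", "pyramid_of_cheops")),
    }),
)

print(adaptation_goals({provide_when_near}, sensed))
```

The subset test on the body mirrors the idea that a context rule fires exactly when the situation it describes holds in the sensed context information.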
Context-aware systems should sense context and adapt to context. To sense, there needs to be something in the real world that can be observed. For instance, the GPS coordinates of a user can be observed. Furthermore, to adapt, there needs to be something in the real world that can be manipulated; for instance, the information provided to a user can be manipulated. These things should be part of the context. First, we thus have to define what elements of the environment can be observed and manipulated by a system. These elements are candidates for being part of the relevant context of the context-aware system. In our previous work on defining context, we focused on which attributes of objects do belong to context and which do not. The attributes were viewed as possibly having different values. For instance, according to this work, the attribute 'colour' of an object 'apple' could have values such as 'green' or 'red'. Relationships were also reduced to attributes, as the added value of including them was unclear at the time and attributes seemed easier to deal with. Furthermore, we built on other work that also describes context as attributes. As the research progressed, we realised that the definition of context can be used not only for deciding what belongs to the context of a system, but also as a basis for guiding the information gathering process for getting insight into context. While investigating the context of the B2G information sharing system, we came across complex relationships that should be taken into account in the context-aware system. Using a definition that only includes attributes requires each of these relationships to be reduced to an attribute. When relationships are complicated, this is counterintuitive and makes the investigation of context more complex. We illustrate the counterintuitivity of reducing relationships to attributes with an example. For the B2G information sharing system, we want to ensure that it supports flows of information in which businesses are willing to participate. In some cases, business A might not want business B to have certain data elements, because A and B are competitors. When data is shared in such a way that B can access it, A might not be willing to participate. However, business A might want to share other data elements with business B that are not sensitive. Sensitivity in that case can most intuitively be described as a relationship between businesses A and B and a data element. Attributes can be easily described as relationships. For instance, an apple can have a 'has colour' relationship with 'green' or 'red'. For the context-aware tour guide, we want to provide information about sights that is relevant to the user. What information is relevant probably depends on where the user is. In the previous work, the user would have had an attribute 'location' that has a coordinate as a value. In the new definition of context, the user has a 'has location' relationship with a coordinate. In addition, it only makes sense to try to sense or manipulate something that could change or vary. For example, the location of a user can vary as they move around. When multiple people use a system, then who is the user might vary as well. Both of these things might be useful to sense. However, other things are less likely to vary, or are not variable at all. For example, the speed of light is not variable. For the tour guide, it might turn out that all users have a preference for being provided the information in the same language. In that case, it is not useful to sense what language users prefer. The relationships are the elements of the environment for which we want to decide whether
they are part of the relevant context.Therefore, we will refer to such a relationship as an environment element.It is important to note, that by ‘environment’ here we refer to the environment of the system.We made a selection of what in the environment could be manipulated or sensed.We have not yet made a selection of what of those things are part of the relevant context that should be manipulated or sensed.In this case, ‘environment’ thus should be interpreted in the broadest sense possible.Informally, an environment element is a relationship between objects in the environment of a system.The objects in the environment could include physical objects, but also other things, such as qualities, or locations.Syntactically, they are represented by literals.The semantics of environment elements, i.e., whether the literal is true or not, will be determined in the system using sensors, or will be manipulated by an adaptor.In our modelling, we only want to include literals that can have different truth values.A state of the world, or a situation, is different from another state of the world or situation when the truth value of at least one environment element changes.For a system to be context-aware, it should sense or adapt to these differences when they are relevant.Furthermore, it should only consider situations that could exist in the real world, and for instance not situations that are inconsistent.A situation is a state of the world determined by the environment elements that are true.Syntactically, they are represented by a set of ground literals.The origin of the notion of context lies in the domain of linguistics .In this domain, the meaning of a text has to be constructed based on the surrounding text .This surrounding text can be viewed as the context of the text for which the meaning is constructed.Outside the domain of linguistics, context is always a context of something as well.We need to identify what this something is and what to call it.In linguistics, this ‘something’ is called a ‘focal event’ .In the domain of context-aware systems, for Dey it is the interaction between a user and an application, and for Brézillon and Vieira et al. 
it is a focus.Accordingly, we refer to the thing that context is a context of as a focus.Context is thus the context of a focus.Now we have a name for it, we need to determine what a focus is in the case of context-aware systems.We want our definition of context to be generic enough to make our method useful for a variety of application domains in which a context-aware system is designed.Therefore, we believe that limiting the focus to the interaction between a user and an application, like Dey does, is too restrictive.We have to look more broadly at and examine what the nature is of the relationship between contexts and their focus for context-aware systems.The designer of a context-aware system has a goal and they want to design the system to reach that goal .This design goal is determined by the designer based on the design goals of the different stakeholders in the context-aware system.The design goal is the same in all situations .However, whether the design goal is reached can depend on the situation.Everything that could affect whether the design goal is reached at runtime should belong to the context; that is, the situations in which the design goal is not reached should be sensed.This should then result in an adaptation by the system that changes the situation in such a way that the design goal is reached.Anything that does not impact whether the design goal is reached should not belong to the context that the designer should take into account.The focus of the context should thus be related to the design goal of the designer.The focus of context is the environment element that a designer of context-aware systems needs to be true to reach their design goal.As it is an environment element, it is syntactically represented by a literal, in the same way as other environment elements.It is important to note the use of schematic literals in the examples here.The design goals of a designer are usually high level."The designer's goal is not to provide Mary and Bob with relevant information, but to provide all users with relevant information.Schematic literals can be used to reflect this.A focus of context can be expressed using a literal in the same way as other environment elements.At first sight, achieving a definition of context seems problematic, since what belongs to context might be different for each context-aware system.However, context is determined by its relationship with its focus.In fact, something is only context if it has some context relationship with the focus.For instance, the weather forecast only belongs to the context of the tour guide if it has a context relationship with the relevance of information provided to the user.The type of relationship is not specific to a certain focus, but the same for all foci.In this way, it can be used to formulate a definition of context from which what belongs to the context of a specific focus can be derived.A context relationship is a relationship between a focus and a set of environment elements, where in each situation where these environment elements have the same truth value, the focus has the same truth value.We say that these situations restrict the focus.Syntactically, context relationships are represented by context relationship rules.Note that the context relationship is not the same type of relationship as the environment elements.It connects different environment elements with each other.The context relationship is thus on a higher level and can more naturally be represented by an operator than by a predicate.For the 
context-aware tour guide, the focus is the relevance of the information provided to the user.Let us assume that information about a sight is relevant when the user is close to the sight and the information is about the sight.This means that in all situations in which the ‘has location’ relationship of the user has a coordinate within a few metres of a sight and the ‘subject of’ relationship of the data provided is this sight, the value of the focus is such that the information provided is relevant to the user.In that case, there is a context relationship.For completeness, it should be noted that, for instance, the ‘is user’ relationship between the user and the context-aware system and the ‘provided to’ relationship restrict the focus as well.It is important to note that businesses might be either willing or unwilling to share when the information is not sensitive, the system is not part of the flow of information or the system does not broadcast the information.This is possible considering the context relationship we have identified.There is only a dependence on the focus when all the environment elements have the values mentioned in the example.Context relationships, however, might in other cases constrain the value of a focus for another truth value of their context elements as well.In addition, there might be multiple sets of context elements that have a context relationship with a single focus.For example, what the authors of this paper have for dinner does not have a context relationship with either the focus of the tour guide or the focus of the B2G information sharing architecture, as this does not restrict them.Using the notion of context relationship, we can determine whether an environment element belongs to context.A context element of a focus is an environment element that is part of a set of environment elements that have a context relationship with the focus.As it is an environment element, it is syntactically represented by a literal in the same way as other environment elements.For example, the location of a user is a context element of the focus in the tour guide example.If it is unlikely that the location of a sight will change, or if it is impossible for it to change, then the ‘has location’ relationship of the sight with a coordinate does not have a context relationship with the focus, because its truth value never changes and therefore it is not an environment element.Furthermore, the sensitivity relationship is a context element of the focus of the B2G information sharing example, as it has a context relationship with that focus.In contrast, what the authors of this paper have for dinner is not a context element, as it does not have a context relationship with the focus.A context element can be expressed by a literal in the same way as other environment elements.When an environment element is a context element of a focus, this means that it is relevant to the designer.A designer achieves their design goal when the focus has a certain value.To achieve the design goal, they thus have to design the context-aware system such that the focus has this value when it is used.A context element of the focus influences the value of that focus.Therefore, the system needs to be designed such that it can sense the context elements and manipulate them if the focus has an undesired value.This makes the context element relevant to the design and therefore to the designer.The definition of context is based on the other notions defined above.The context of a focus is the set of all its 
context elements. For example, the context of the focus of the tour guide example includes the location of a user. In addition, the context of the focus of the B2G information sharing example includes the sensitivity relationship. Syntactically, context is represented by a set of literals. In this section, we present the three steps of the proposed method for designing context-aware systems based on insight into context, viz. 1) getting insight into context, 2) determining the components needed to sense context and adapt to context, and 3) determining the rules for reasoning with context information. The method takes a practical problem that a designer wants to solve as a starting point. In step 1, it is determined what context relationships and context elements the system should take into account to solve the problem and attain the designer's goal. In step 2, the list of context elements is used to determine what sensors and adaptors are needed. In step 3, the list of sensors and adaptors, together with a list of context relationships from step 1, is used to derive and express the rules according to which the system needs to adapt. Steps 1 and 2 consist of several sub-steps. For each sub-step we provide an illustration of how it can be performed for the examples of the tour guide and the B2G information sharing system. Fig. 1 provides an overview of the steps in the method. The objective of the designer in the first step is to get insight into context. The first step of our method should be preceded by the identification of the practical problem that the context-aware system should solve and a determination of the relevance of that problem. How to do this is already described extensively in the scientific literature. This is thus not new, and therefore not part of our method. However, what context should be taken into account in the design of the system is related to the design goal of the designer. We can derive this design goal from the problem that is identified. The specification of the practical problem is thus the input for this step. The sensors and adaptors of a context-aware system and the rules that the system should reason with depend on the context that should be taken into account. This means that in this step insight into context needs to be gained to determine what belongs to context and what the impact of different situations is. Information about this should thus be the output of this step and the input for steps 2 and 3. This information should be structured in such a way that the necessary sensors and adaptors for the architecture, as well as the rules, can easily be derived from it. For the first step of our method, we propose that the designer performs the following sub-steps: 1.1) determine the focus, 1.2) gather data, 1.3) analyse the data. The overall process for determining the focus is shown in Fig.
2. The input of this step is the practical problem and the output is a focus. The focus of the context is related to the design goal of the designer. The design goal of the designer, in its turn, is based on the problem that they want to solve with their system. To perform step 1.1, the designer can use the specification of their problem to specify their design goal. The design goal of the designer can be viewed as solving the practical problem. The design goal can thus be expressed as what the system should be able to do at a very high level in order to solve the problem. A design goal is reached when the world is in a certain state. Therefore, the next step for the designer is to describe the state of the world when their design goal is reached. According to Definition 7, a focus of a designer is the environment element that the designer needs to be true to reach their design goal. This relationship can be identified from the description of the state of the world. Fig. 3 shows the steps for deriving the focus for the tour guide example. Fig. 4 shows the steps for deriving the focus for the B2G information sharing example. It is possible that a more complex design goal can lead to multiple foci, or that there are multiple design goals. This should not be a problem. However, each of the foci will have its own context relationships and context elements and will require its own investigation by the designer. Illustration step 1.1: Illustration step 1.1: The overall process of performing step 1.2 is shown in Fig. 5. The input is the focus from step 1.1 and the output is a table with situations that restrict the focus. After the designer has identified the focus, they should first select the data collection methods and sources that they will use to investigate context. An important requirement is that the data that the designer gathers should provide information on what environment elements restrict the focus. There are many different approaches that could be taken. A designer could, for instance, perform case studies in which the focus has certain values and then determine why the focus has these values. Another possible approach is to do a literature search, using a description of the focus. Furthermore, a designer could conduct interviews and ask the interviewees directly or indirectly what they think impacts the focus. In addition, the usual considerations, such as the accuracy and accessibility of data, will also play a role in making a choice. Illustration Step 1.2: For the tour guide example, the selection should depend on what data says something about the level of relevance of information about sights to users. This could very well be the users themselves, and the designer might ask them to complete questionnaires or interview them about in what cases they find information relevant. Among the other possibilities are interviewing experts or doing a literature study on the matter. Illustration Step 1.2: For the willingness focus, we needed information about in what situations businesses are and are not willing to participate in information flows. We did a secondary analysis of case study data from a project in which different systems for information sharing were implemented and tested in the container shipping domain. The systems had different designs and were used by different parties. The case study data thus offered insight into the reasons for these differences and the concerns of businesses. We studied different deliverables and reports describing the design of the systems, the progress of the project and the obstacles the
researchers came across. Furthermore, we studied some of the case study notes. The data provided a broad perspective on what things affect the willingness of businesses. Furthermore, this data was rich and accessible. It is hard to identify the best data source and the best data collection method without knowing the focus that data should be collected for. However, as we are investigating what belongs to context, we can say that data gathering should be of an exploratory nature. This is most important in the early stages of the investigation to ensure that no important parts of the context are excluded. In later stages, additional support should be sought for the context elements and relationships found. Unfortunately, the exploratory approach can be in conflict with the feasibility and efficiency of the data gathering. In a large search space, such as in the case of the complex environments discussed in the introduction, it would be easy to end up investigating a lot of things that later turn out to be irrelevant to the design of the system. We propose several strategies to minimise the time that the designer spends on investigating things that turn out not to matter for the design of the context-aware system. First of all, we already restricted the search space by using the focus to select appropriate data collection methods and data sources. Second, we can restrict the kind of information that the designer should gather. The designer only needs to know in what situations the focus is restricted. The designer will need very specific descriptions of the situation and the way in which the focus is restricted. However, they do not need to do a more in-depth analysis of why the focus is restricted in that situation than is necessary to determine that there is a connection and that the information is reliable. For instance, the designer needs to know in what situations a user finds information about sights relevant, but does not need to know the details of theories on human information processing that make that information relevant. After all, this is not something that the context-aware system can directly take into account. An additional way to increase the efficiency of the data gathering process is to provide a way for designers to quickly decide whether something has a context relationship with the focus. We therefore provide a criterion for deciding whether a relationship in the environment has a context relationship with the focus. Furthermore, we provide a simple test to decide whether the criterion is met for a set of environment elements. This criterion can be used to discern pieces of data that are interesting for further analysis from those that are not. The criterion follows directly from the way we defined the notion of environment elements and context relationships in section 4. Criterion: A relationship in the environment has a context relationship with a focus if and only if 1) whether the relationship exists can vary, and 2) it is part of a set of environment elements such that in each situation where these environment elements have the same truth value, the focus has the same truth value. The criterion is met when there is a situation that restricts the truth value of the focus and the relationship is part of an environment element that is part of this situation. Of course, we cannot list all possible situations to check whether the criterion has been met, because in the real world there are far too many other environment elements that vary. Therefore, it is also not possible to be certain that all environment
elements belonging to a set that have a context relationship with a focus have been found. The solution is to reduce the testing of the criterion to testing whether the information collected or analysed by the designer supports the conclusion that the criterion is met. The designer should thus determine that their information supports the conclusion that the focus is restricted to a certain truth value in a specific situation. If information is found that indicates that the truth value of a focus is restricted to either true or false in a certain situation, the criterion is met for all the environment elements in the situation. To test whether the criterion is met, it is thus enough to describe the situation that impacts the focus. This is more intuitive and efficient, and at this stage we do not need to discern the different context elements. Furthermore, depending on what data the designer gathered, they may need to generalise. Consider an example in which Mary is interviewed by the designer and she states that she thinks information about the Pyramid of Cheops is relevant when she is near the pyramid. On the basis of this, we could say that in the situation in which Mary uses the system and her location is near the Pyramid of Cheops and the information provided is about the Pyramid of Cheops, the information provided to Mary is relevant. However, unless the system is designed specifically for Mary and the pyramid, this is not very informative. Generalisation in this case is easy: replace 'Mary' with 'the user' in the description of the situation and 'the Pyramid of Cheops' with 'the sight'. By generalising, the description stands for a whole set of situations with particular properties in which the designer believes that the focus is restricted. Of course, the designer should be confident that they can make generalisations based on the information that they gathered. If a relationship meets the criterion but the designer is not sure whether they can generalise, then gathering further data on the situation that relates it to the focus, and on similar situations and their impact on the focus, might be fruitful. It is not always necessary for designers themselves to generalise. For instance, when scientific research shows that location and relevance are related, designers do not need to generalise. To ensure that the designer has found the appropriate information, they should describe explicitly and precisely the restriction on the value of the focus as well as the situations. Designers should ensure that everything they say in the description of the situation follows from the information that they have. We propose that designers fill in a table shell similar to that in Table 1. If they can fill this in based on the information they have gathered, they have found environment elements that have an impact on the focus and that meet the criterion. If they cannot fill it in, then they cannot conclude that the criterion is met, based on the information they gathered. When filling in the table shell, designers should keep in mind that situations should be possible in the real world.
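As a minimal sketch of how a single record of such a table shell could be captured in code, the snippet below uses field names that we assume based on the description above (the situation, the restriction on the focus, and the supporting data); Table 1 itself remains the authoritative format.

```python
# A hedged sketch of a single table shell record. The field names are
# assumptions based on the description in the text; the table in the paper
# defines the actual layout.

from dataclasses import dataclass, field
from typing import List


@dataclass
class TableShellRecord:
    situation: str            # precise, generalised description of the situation
    focus_restriction: str    # how the truth value of the focus is restricted
    support: List[str] = field(default_factory=list)  # data sources or citations


record = TableShellRecord(
    situation=("The user is near a sight and the information provided "
               "to the user is about that sight."),
    focus_restriction="The information provided to the user is relevant.",
    support=["Interview with Mary about the Pyramid of Cheops (generalised)"],
)

# If a record like this can be filled in from the gathered data, the criterion
# is met for the environment elements in the described situation.
print(record)
```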
Illustration Step 1.2: Illustration Step 1.2: For the investigation of context for this example, we went through the case study data. For everything we thought might restrict willingness, we attempted to fill in the table shell and find additional information. We explicitly made the step to generalise from specific situations by replacing objects in our descriptions with their type. When different situations generalised to a similar overall description, we added the new situation to the older one as support. Table 3 below shows an example of the results of filling in the table shell. We took an example for which the support was already generalised. By filling in the table shell, designers can determine what things are and what things are not interesting to further look into. Then, they can concentrate their data gathering efforts on further specifications of the context elements and relationships that they found, and on finding more support for them. In fact, to make everything computable, the description of the situation and the value for the focus need to be as specific as possible. In the tour guide example, the designer might, for instance, attempt to find out what exactly is near enough to the sight for the information to be relevant. The filled-in table shells are further analysed in step 1.3. This means that it is important for the designer to add further specifications to the table shell. Filling in the table shell not only serves as a test that the criterion is met, but is also used in further steps to determine what the context relationships and context elements of the focus are. As the information in the table is further processed later on, it seems wise to include a reference to the data source or a citation of the text that the information is based on. This can help to ensure reliability and allow for reinterpretation in later steps. The overall process of performing step 1.3 is shown in Fig. 6. The input is a table with descriptions of situations from step 1.2. The output is a list of context elements and context relationships. As the output of step 1, we require the information found about the context to be structured in such a way that it is easy to identify what sensors and adaptors the system needs and such that rules can be abstracted from it. The first thing that needs to be done to achieve this is to abstract the environment elements and their truth values from the situations described in the table shell that is filled in during step 1.2. There are many ways to do this. One is by looking at the descriptions of the situations and for each one identifying all the physical things in the real world that are mentioned. Physical things are physical objects in the world, such as a user. Other things are not physical. For example, qualities are things that are attributed to those objects, such as colour or speed, but are not physical objects themselves. Both physical things and qualities can be related in an environment. To find the environment elements, everything that is true about the physical objects in the situation can be listed and then it can be determined which of these things vary. Illustration Step 1.3: On the basis of the filled-in table shell, we can identify the following objects: the user, the location of the user, the sight, the location of the sight, and the information. Then we can list everything that is true about these objects in the situation: there is a user, there is a sight, there is information, the user has a location, the user is provided with the information, the location of the user is near the location of the sight, the information is about the sight, and the sight has a location.
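As a sketch of the next part of the analysis, the statements listed in this illustration can be written down as schematic literals, following the convention that terms starting with a capital letter denote variables. The predicate names below are illustrative choices of our own, not a fixed vocabulary.

```python
# A sketch of the statements from the tour guide illustration written as
# schematic literals (capitalised terms denote variables, as per our
# convention). The predicate names are illustrative assumptions.

tour_guide_statements = [
    "user(User)",                          # there is a user
    "sight(Sight)",                        # there is a sight
    "information(Info)",                   # there is information
    "has_location(User, UserLocation)",    # the user has a location
    "provided_to(Info, User)",             # the user is provided with the information
    "near(UserLocation, SightLocation)",   # the user's location is near the sight's location
    "about(Info, Sight)",                  # the information is about the sight
    "has_location(Sight, SightLocation)",  # the sight has a location
]

# Only the statements whose truth value can vary count as environment elements;
# for example, has_location(Sight, SightLocation) may drop out if sight
# locations never change, as discussed for environment elements above.
```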
Illustration Step 1.3: For the first situation in the table shell resulting from step 1.2, we can identify the following objects: the data, the flow of information, a business, and another business. What we can say about these objects is the following: there is data, there is a flow of information, there is a business, there is another business, the data is sensitive from one business to the other, and the data is shared with the other business in the flow. In the previous step, we already generalised the information we found on context. For instance, we replaced 'Mary' with 'the user'. This generalised information can be expressed using schematic literals that represent a set of instances. As discussed in section 4.4, this avoids having to provide a very long list with similar context relationships. It also introduces the need to express constraints in some cases, which can be added as literals as well. We can add the schematic literals expressing each of the environment elements to extend the table shell we used for testing the criterion. In each row, we describe a situation and the way in which the focus is restricted in that situation. We are thus describing the relationship between context and the focus, or in other words, a context relationship. Therefore, we can also add a column to name the context relationship. The resulting extended table shell is shown in Table 4. Illustration Step 1.3: Table 5 shows the literals that can be assigned to the statements for the tour guide example. Illustration Step 1.3: Table 6 shows the recorded information on context relationships for the B2G information sharing system. The column with the support was left out to save space. When filling in the table shell, it could turn out that information is still missing. In that case, steps 1.3 and 1.2 should be alternated until all information that needs to be recorded in the table shell has been acquired. In step 2, we only need to look at the environment elements that are classified as context elements, because these are the environment elements that impact the focus and thus should be sensed and, in some cases, manipulated by the system. We can thus discern two types of context elements, namely sensor elements and adaptor elements. In this step, we should determine which is which. The output of this step is a high-level, partial description of the architecture of the context-aware system that includes only the sensors and the adaptors, their input and output, and their connections to the environment. In the literature, there are several proposals for what the overall architecture of a context-aware system should look like. It is not up to us to decide which one of them is best or which one the designer should choose. However, any architecture that a designer considers most apposite for the design of their system can be used in our method. This is possible, as all architectures for context-aware systems have some components and connections in common. The design choices we want to help the designer with only concern these common components and connections. The similarities between the possible architectures are dictated by the nature of context-aware systems. Ultimately, a context-aware system should sense context information and adapt to it. Its architecture should thus always include sensors, adaptors and a component for reasoning with the information from the sensors to derive what adaptors should make what adaptation. This means that the sensors and adaptors should have some direct or indirect connection to the environment and to the reasoning component. It is exactly these sensors, adaptors and connections to the environment, common to the different architectures, that we want to support making design decisions on. We want to support designers in basing their design on insight into context. What sensors and adaptors should be included in the architecture
is directly determined by what context the system should take into account.Furthermore, the same is the case for what things in the environment the sensors and adaptors should directly or indirectly connect to.Therefore, we only want to guide designers in making choices on this.The functionalities that a system offers can be divided into basic functionalities and adaptive functionalities .The sensors and adaptors provide these adaptive functionalities.The architecture of the system should also include components for providing these basic functionalities.For example, for the tour guide system, the basic functionality is to provide information about sights.The architecture needs to include the components that can deliver that functionality, such as a database with information about sights and a screen to present the information.We assume that the system to provide the basic functionalities is already designed at the start of this step.What functionalities belong to basic functionalities or to the adaptive functionalities is in part a design choice that a designer has to make up front.A designer has to make a choice on what goals they want to reach using context awareness.This choice should be based on whether they believe that reaching the goal requires providing different, or adaptive, functionality in different situations.Only after they make this choice, the proposed method plays a role.However, in case they base a focus on such a goal but they have a hard time finding context relationships for that focus, it is a sign that the way in which the goal is reached probably does not depend on context.In that case, they might go back on their choice.Step 2 of the method can be divided into two sub-steps: 2.1) determine what adaptors are needed and 2.2) determine what sensors are needed.The overall process of performing step 2.1 is shown in Fig. 
7.The input for this step is a list of context elements.The output is a list with descriptions of adaptors that can manipulate context elements.To complete this step, we need to identify for each context element whether it could and should be manipulated by the system.By manipulation, we mean that the system performs an action that changes the truth value of the context element.This allows the system to adapt and change the situation, such that the value of its focus corresponds to its design goal.We call the component of the system that performs this action an adaptor.The input of the adaptor is a decision of the reasoning component on what value the adaptor needs to achieve for the context element.This value is expressed as a literal.It is important that for each context relationship, there is at least one context element that can be manipulated by the system.Otherwise, the system cannot adapt and it is not possible for the system to take the context into account.Of course, there can be more than one context element in a context relationship that can be manipulated.However, whether we need more than one context element to be manipulated depends on the type of context relationship.Context relationships describe in what situations a focus is restricted to a certain value.The design goal of the designer is to ensure that the focus has a certain value.On basis of this, we can distinguish two types of context relationships, namely negative and positive context relationships.A positive context relationship restricts the focus to a value that conforms with the design goal of the designer.A negative context relationship restricts the focus to a value that does not conform with the design goal of the designer.For example, the restriction on the focus such that the information provided to the user is relevant in a situation in which the user is within 150 metres of the sight and the information provided is about the sight, is a positive context relationship for the tour guide example.The restriction on the focus to ‘not willing to participate’ in a situation in which the data in the flow is sensitive for a business to another business and this data is shared with the business that the data is sensitive to, is a negative context relationship for the example of the B2G information sharing system.For a positive context relationship, we need to ensure that all its environment elements have the value specified.This means that the system should contain adaptors for each context element for which this is possible.For a negative context relationship, we only need to ensure that at least one context element has a value different from that specified.This means that it is sufficient for the designer to choose one context element that should be manipulated, and that the system should contain this adaptor.In principle, it could also be possible to manipulate multiple context elements of a negative context relationship.However, for negative context relationships, this causes these manipulations to have a disjunctive relationship with each other; either one can be performed to ensure that the focus is not negative.This offers the advantage of having multiple options for manipulation.However, it also leads to complications, as it introduces a form of nondeterminism.It thus requires a more complex reasoning mechanism to deal with this, which might not weigh against the advantages of having multiple options.Therefore, we choose one context element to manipulate for negative context relationships.What context 
element to manipulate will often be an obvious choice.It is often clear what context elements cannot be manipulated, because it is not possible or it is undesirable or too costly.These options can be eliminated.The remaining context elements are then the adaptor elements.Illustration Step 2.1: It would not be possible for the tour guide system to change the location of the Pyramid of Cheops.It might be undesirable to tell users to go to another location to ensure that the information they receive is relevant.Manipulating what information is provided to the user clearly is feasible and desirable for the tour guide.Illustration Step 2.1: It is not possible to manipulate the sensitivity of information in the system.However, it is possible to manipulate with whom the data is shared in the information flow.To determine what adaptors the context-aware system requires, the designer needs to first determine how the value of an adaptor element could be manipulated to achieve the value that is appropriate for the situation.For this, the designer needs to look at the terms of the environment element and see what needs to be changed, and then determine what components the system requires to perform the appropriate manipulations.As there might be several possibilities, for which it is easier or harder to find a component that can perform them, the designer might need to alternate between the selection of possible manipulations and finding accompanying components, before an appropriate component is found.Each component should be described and incorporated in the overall architecture of the context-aware system.The adaptor components need to connect directly or indirectly to all objects connected in the context element to be able to manipulate it, and to the reasoning component to get the input necessary for it to know what actions to perform.Thus, in the tour guide example, it needs to connect in some way to the information and to the user.For the information, this is done by searching the database with the information about sights.For the user, this connection is less direct and is done via another component, namely the screen of the system.In the example of the B2G information sharing system, the adaptor components need to connect to the data, which is done by encrypting it, and to the businesses, which is done by the access control component.The overall process of performing step 2.2 is shown in Fig. 
8.The input for this step is a list of context elements.The output is a list with descriptions of sensors that can sense the context elements.To complete this step, we need to identify for each of the context elements that are not adaptor elements how it can be determined whether they are true in a situation.First, a decision should be made on what object in the world will need to be monitored.Then a measurement for establishing whether the object has a certain relationship should be found.Subsequently, it needs to be determined what component could carry out the measurement.Just as in the case of the adaptors, this might require some alternations between identifying the object to monitor, identifying possible measurements and finding an appropriate sensor.The last step is to determine what connections the sensor should have to the environment.Each environment element, according to Definition 7, connects things in the environment.For a sensor to be able to monitor, it should monitor a physical object in the environment.The object that the sensor should monitor should be one of the objects in the context element.If there are multiple objects that could be monitored to obtain the same information, the designer should choose which ones to monitor.The object that is the most appropriate to monitor will be different for different context elements and might be influenced by the available techniques and by practical limitations.A designer who wants to improve the accuracy of the context information might choose to monitor more than one type of object for a context element; for instance, they might monitor both the user and the location.However, it is important to note that this will come at additional costs, as it requires the addition of some kind of conflict resolution for dealing with contradictory sensor information.Once an object to monitor has been chosen, the designer needs to decide how to measure what it is connected to according to the relationship in the context element.It is important to be aware that it is often useful to use the same type of measurement to measure things that are similar.In the tour guide example, it would make sense to use the same measurements for the location of the user and for the location of the sight.The next step is identifying what kind of sensor components could provide the measurement and how this could be done.The designer should provide a description of these sensor components and include them in the overall architecture.Like the adaptor components, the sensor components belonging to a sensor element should connect to all objects in that element.Illustration Step 2.2: For the location context element it was decided to monitor the user.We now need to find a measurement for the location of the user.We could choose coordinates for this, as this is quite a common measurement for location.A very common way to determine coordinates is to use a GPS sensor.This would thus be an appropriate sensor to measure the location of the user.To connect to the user, the GPS sensor should be placed on something that the user carries with them; this could be the tour guide itself.To connect to the location, the sensor performs its measurement.Illustration Step 2.2: For the sensitivity context element, it was decided to monitor the business that thinks that the data is sensitive.We thus need to find a measurement for what data they consider sensitive, and from what businesses.For the businesses, we can assign IDs to the data that is shared and we can name businesses.The most 
obvious way to sense what data is sensitive and to what businesses, is to ask the business that thinks the data is sensitive.A sensor should thus request this data from businesses.In the case of the B2G information sharing system, the component simply requests the information from businesses when new data is shared.In section 4.4, we discussed that in some cases relationships can be included that are not context elements, but express constraints.These relationships do not change over time.This means that to ‘sense’ them, no external connections need to be made.Instead, this information needs to be stored somewhere in the system itself or calculated.For instance, for the distance between locations, the ‘sensor’ could be a component that calculates the distance between two coordinates.The overall process of performing step 3 is shown in Fig. 9.The inputs of this step are the outputs of steps 1 and 2, namely a list of context relationships and a list of adaptors and sensors.The output of this step is a list of context rules that the context-aware system can use to derive what adaptations to make in different situations.The input of the reasoning component of a context-aware system comprises information gathered by the sensors.They are expressed as ground literals in a logic program.This might require the raw data of the sensor elements to be translated into these literals by middleware.There are ample descriptions in the literature of how middleware can be used to process raw context information.The outputs of the reasoning component also are ground literals.They express the value that the adaptor elements should have to ensure the appropriate value of the focus.They are input for the adaptors and can be viewed as commands to perform an action that results in the adaptor element being true.For this, the middleware between the reasoning component and the adaptors themselves should include a mapping between literals and the actions of adaptors.A context rule is a rule that expresses that the system needs to perform a manipulation to make the environment element in its header true in the situation that the environment elements in its body are true.The syntax of a context rule can be found in Definition 2.Examples of context-rules can be found in Example 2.The context relationships provide information on what manipulations we want to perform in what situations.As discussed, for the positive context relationships, we want the situation described for them to exist, and for the negative context relationships, we want the situation described for them not to exist.Based on this principle, we already established what context elements should be manipulated in step 2.1.We can translate each positive context relationship into a context rule where the head is a schematic literal representing the required value of one of its adaptor elements, and the body is the set of all schematic literals representing the values of its relationships that are not adaptor elements.The number of different rules there are for a positive context relationship is the number of adaptor elements it has.A designer should derive all possible rules for each positive context relationship in this way.We cannot translate this context relationship into any other rules, as it has only one adaptor element.We can translate every negative context relationship into a single rule, where the body is again the set of all literals representing values for context elements in its situation that are not adaptor elements.As negative context 
relationships have only one adaptor element, and we do not want the situation they describe to exist, the head of the rule is the negation of the schematic literal representing the value of the adaptor element in that situation. The negation is that of classical logic, in the sense that the double negation of a literal is equivalent to the literal. To complete step 3, the designer should generate all context rules in this way. These context rules can then be reasoned with in a logic program such as described by Lifschitz, together with the context elements that are input for the reasoning component. From the logic program it can be derived what manipulations the system should perform in a certain situation. Because we used the logic programming paradigm for our formalisation, the context rules and context elements are already in the appropriate format to be part of the logic program. Of course, the ground literals in the program can be derived as well. However, they are already true, so no action needs to be performed to make them true. They can easily be filtered out by the middleware of the system by comparing the input and the output of the reasoning component. In general, the logic programs will be quite simple, because only a single rule, rather than a sequence of rules, is needed to derive a manipulation. However, the specific variety of logic program and the corresponding reasoning mechanism are up to the designer, as a variety of practical factors might play a role. For instance, a context-aware system that should respond to changes in the environment very quickly and that contains only a couple of rules could benefit from a reasoning mechanism that derives all manipulations in a bottom-up fashion each time new sensor information becomes available. On the other hand, the designer of a system that contains many rules and facts might prefer a top-down approach in which the adaptors periodically query the system for the next manipulation that they need to perform. Furthermore, whether negation as failure suffices in the body of rules could depend on the evidence required before a manipulation may be derived. The reason for these issues is that, in the more complex cases, the insight into context from step 1 is typically incomplete. In section 5.1.2 we described that in practice it is not feasible to establish the complete set of environment elements that have a certain context relationship with the focus. This is not an issue specific to the method or definitions we propose; rather, it is a fundamental issue when investigating complex environments in the real world. There are simply too many variables to take into account. In fact, defeasible logics were developed to deal with this same issue. Defeasible logic programs, for example as described by García and Simari, could also be used to reason with the rules. Alternatively, this issue can be dealt with in the middleware between the adaptors and the reasoning mechanism. There are many ways to solve this, and again, the best way depends on practical considerations. In some cases it might, for instance, be useful to ask the user to choose between alternative manipulations. In other cases, it might be better not to bother the user and to implement an algorithm that makes the choice between alternative manipulations. If necessary, such an algorithm could also draw on the rules and the facts that the manipulations were derived from.
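To make the translation described in step 3 concrete, the sketch below shows one possible way to represent context rules and a single-step, bottom-up derivation. This is a minimal illustration only, not the formalisation used in the paper: the predicate names for the tour guide and B2G examples are our own illustrative assumptions, and a real system would use a logic program of the kind referred to above.

```python
# Minimal sketch (not from the paper): context rules derived in step 3,
# represented in Python rather than in an actual logic-programming engine.
# Predicate names and the example rules are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Literal:
    predicate: str          # e.g. "provide_info_about"
    args: tuple             # e.g. ("user", "sight")
    positive: bool = True   # False encodes classical negation

@dataclass(frozen=True)
class ContextRule:
    head: Literal           # required value of an adaptor element
    body: frozenset         # sensed (non-adaptor) context-element literals

# Positive context relationship (tour guide): one rule per adaptor element.
tour_guide_rule = ContextRule(
    head=Literal("provide_info_about", ("user", "sight")),
    body=frozenset({Literal("within_150_metres", ("user", "sight"))}),
)

# Negative context relationship (B2G sharing): a single rule with a negated head.
b2g_rule = ContextRule(
    head=Literal("share_with", ("data", "business"), positive=False),
    body=frozenset({Literal("sensitive_for", ("data", "business"))}),
)

def derive_manipulations(rules, sensed_facts):
    """Bottom-up, single-step derivation: a rule fires when every literal
    in its body is among the facts reported by the sensors."""
    return {rule.head for rule in rules if rule.body <= sensed_facts}

facts = {Literal("within_150_metres", ("user", "sight")),
         Literal("sensitive_for", ("data", "business"))}
print(derive_manipulations([tour_guide_rule, b2g_rule], facts))
```

Grounding of the schematic literals, filtering out literals that are already true, and mapping the derived literals to adaptor actions would, as described above, be handled by the middleware.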
The development of the method was driven by a practical need we experienced in a project in which we developed a context-aware B2G information sharing system in the highly complex domain of international container shipping. While the literature on context-aware systems is extensive, there is no systematic approach for gaining insight into context and using this insight to design context-aware systems in complex environments with, for example, many actors and in which legislation might have an influence. Such an approach is necessary to ensure the efficiency and effectiveness of the design process. Current work too often has to rely on assumptions about what belongs to context and how the system should adapt to it. This carries the risk of ambiguity and of not taking the appropriate context into account in the design of the system. To address this problem, in this paper we proposed a method that provides a way to structure the investigation of context and to clearly distinguish irrelevant elements in the environment from relevant context elements. It also provides steps to derive the design of a context-aware system from the insight into context. The method describes how designers can investigate context and collect the information they need to design a context-aware system. Furthermore, designers can use the method to directly establish what sensors and adaptors should be included. In addition, we provide a way for designers to translate the knowledge on context into rules that the system can use to determine what adaptations need to be made. The method thus consists of three steps: 1) getting insight into context, 2) determining the components needed to sense context and adapt to it, and 3) determining the rules according to which the system should adapt. Step 1 is critical, as steps 2 and 3 directly depend on it. When a designer has followed the steps, they can further work out the technical details of the system using existing tools and frameworks covered in the literature. Concerning the effectiveness of the design process, we checked the context elements and relationships we found with researchers with domain expertise. Based on this, we only had to make minor additions and changes to the context elements and relationships found. The method is based on a criterion for determining whether something is a context element and a way to quickly check whether the criterion is met. The structured manner in which data on context is gathered and analysed when the method is used also helps to make it explicit when this is ambiguous. For instance, it is easy to establish that there is a problem if the same situation has different impacts on the focus according to different data sources. When such a conflict is found, a designer can concentrate their efforts on solving it. They could do so by, for example, finding out whether the conflict results from not including enough context elements in the situation description, or by additionally checking whether their data sources are reliable. In complex environments, a variety of parties are involved in information sharing. The efficiency of the design process will also be influenced by the interactions with these parties. We explained the notions of 'context element' and 'context relationship' to the domain experts and legal experts we interviewed to obtain insight into context. The interviewees all indicated that they understood what was meant by these notions. Only in a few cases, at the beginning of the interviews, did they start to use these notions
spontaneously themselves.However, each of the interviewees seemed to concentrate on finding things that impact the focus.This might be due to the use of these concepts in the introduction of the interviews.In addition, one of the lawyers we collaborated with indicated that using these concepts helped her to understand what we were looking for and to structure the information obtained on context.Furthermore a researcher with expertise in the B2G information sharing domain stated that she believed that the structuring that the notions of ‘context element’ and ‘context relationship’ offer, help to provide scope to discussions.In addition, she states that it provides a shared vocabulary facilitating discussions with the various organisations.An underlying assumption for the method is that the appropriate elements to take into account in the design of a context-aware system are those that should be sensed or manipulated in order to reach the design goal.Interpreting ‘appropriateness’ in this way, the use of the method reduces the risk of not taking into account the appropriate context in the design of the context-aware system.That is, the method supports systematic data collection on context elements and context relationships.Furthermore, once data is gathered, the appropriate sensors and adaptors can be identified quite straightforwardly based on the data.In addition, the data gathered on the context relationships can be translated into rules for the system.Thus, the method supports not only obtaining the necessary insight to take the appropriate context into account, but also basing a design on this insight.One issue that might make it difficult to take the appropriate parts of the context into account is that the search space for things that belong to context can be very large.As we are dealing with complex and open environments, it is impossible to guarantee that all context elements will be found by a designer using the method.However, the method might make the search process more efficient by supporting the designer in specifying what exactly they are looking for, by letting them determine and specify their focus.The method then provides guidance on how to use the focus to select appropriate data sources and how to use the focus to determine which parts of the gathered data to focus on.The focus, in a sense, thus determines the scope of the context-aware system and what is taken into account as context.This means that it is of paramount importance for a designer to consider and choose their focus carefully.The usual procedure for designing a system is to start with a specification of a goal, or a problem to solve.The focus is directly derived from this and determining the focus therefore is unlikely to require a lot of additional effort by the designer.What the new method adds to this procedure is that it makes it possible to link parts of the environment with the focus, thereby providing a designer with an easier way to determine what belongs to their scope, or in other words, the context that they should take into account for their design.The use of a focus to select context elements and thereby sensors and adaptors means that several things that are usually not considered context, context-aware systems or sensors can be included.For example, the ‘traditional’ notion of sensor would refer to devices measuring things like GPS location or temperature.Using the method, it could also refer to organisations, for example.We believe that this deviation from the traditional interpretation of 
context and associated notions is due to the practice being leading instead of the literature.Furthermore, we believe that from a pragmatic point of view, it is useful to extent these notions in this way.Including businesses as sensors, for example, is clearly necessary in the case of the B2G information sharing system as there is no traditional sensor that can measure things like the relationships between businesses.Information on this is needed for the system to reason with and adapt to, which are things that are very typical for context-aware systems.In addition, future research could focus on the conflict resolution mechanism for dealing with incompatible manipulations.First of all, it needs to be determined whether conflict resolution should happen when generating the rules, when reasoning with the rules or in the middleware between the reasoning component and the adaptors.Second, it seems that this conflict resolution should rely, at least partially, on whether a rule was derived from a positive context relationship or a negative one.When it is not possible to perform one of the manipulations that stem from a positive context relationship, the desired situation cannot be produced.It might then not be useful to try and fulfil the others.Future research should determine how to deal with this issue.In addition, further research should focus on determining whether the use of literals and context rules for the system, as generated in our method, are expressive enough to also be used in other domains. | Context-aware systems are systems that have the ability to sense and adapt to the environment. To operate in large-scale multi-stakeholder environments, systems often require context awareness. The context elements that systems in such environments should take into account are becoming ever more complex and go beyond elements like geographic location. In addition, these environments are themselves so complex that it is hard to determine what parts of them belong to the relevant context of a context-aware system. However, insight into what belongs to this context is needed to establish what the design of a context-aware system should be to meet its goal. The ambiguity of what belongs to context in these complex organizational environments causes the design process to become either inefficient or less effective. In this paper, we provide a method to identify what elements of the environment are relevant context and to then base the design on this insight. The proposed method consists of three steps: 1) getting insight into context, 2) determining what components are needed to sense and adapt to context, and 3) determining the rules for how the system should adapt in different situations. To reduce ambiguity by organizations, the method requires a more specified definition of context than the ones in current literature, which we also provide in this paper. In addition to reducing ambiguity, the highly structured way in which the components and rules are derived from insight into context provides a way to further deal with the high complexity of the context. The method was applied for the development of a context-aware system for business-to-government (B2G) information sharing in the container shipping domain. Information sharing in this domain is highly complex, as legislation, many stakeholders, and a mix of cooperation and competition result in a highly complex environment. 
The development of this B2G information sharing system thus provides an example of how the method can be used to develop a context-aware system in a highly complex environment. |
574 | Synthesis and surface characterization of new triplex polymer of Ag(I) and mixture nucleosides: cytidine and 8-bromoguanosine | Through intermolecular hydrogen-bonding interactions, guanine and cytosine nucleobases can assemble into long chains and networks, according to the well-known Watson-Crick pairing in both DNA and RNA. Hydrogen bonding plays an important role in supramolecular chemistry by enabling the formation of different structures such as ribbons or fibres. Nucleobases and nucleosides are good examples of natural materials that depend on hydrogen bonding for their self-assembly. Another factor that helps these compounds form larger structures is the presence of multiple binding sites in their structures, which increases their propensity to self-assemble and produce one-, two-, and three-dimensional structures. Nucleobases and nucleosides have attracted considerable attention due to the multifunctional properties of these structures, such as conductivity, magnetism, luminescence, drug delivery, bioactivity, porosity, gas sensing, and nanotechnology applications. The guanosine nucleoside has shown the ability to form G-quartets by self-assembly via hydrogen bonding. Some guanosine G-quartets form in the presence of metal ions such as K+, Na+, and Ag+, while others form in the absence of metal ions. The triplex moiety is common in reactions that involve guanosine and cytidine. A triplex structure consists of base triples in which a third strand binds to a Watson-Crick duplex; when the third strand binds in an antiparallel orientation via reverse Hoogsteen hydrogen bonds, the triplex is called the purine (R) motif, with triples such as GGC, AAT, and TAT. In contrast, when the binding takes place in a parallel orientation via Hoogsteen hydrogen bonds, the triplex is called the pyrimidine (Y) motif, e.g., CGC. Fig. 1 displays the structures of the R and Y motifs.
Both 8-bromoguanosine and cytidine have shown the ability to form hydrogels with Ag(I) ions, whereas no hydrogel has been reported for the triplex CGC formed by these nucleosides with the Ag ion; this is probably owing to the reduction in available binding sites that accompanies formation of the CGC triplex with Ag(I). Studying the surface roughness properties of compounds, e.g., the surfaces of semiconductor materials, is very important for understanding the changes that occur at the surface during a chemical reaction in which a layer is formed on, or removed from, the surface. Atoms at the surface possess higher chemical reactivity than those in the bulk, as they can alter their electronic structure by reacting with different chemical environments; these properties have found applications in various industrial technologies. Semiconductor materials such as silicon and GaN are characterised by high electron velocity and high electron mobility, and for this reason numerous studies of the surfaces of these compounds, which are used in applications such as electronics, optoelectronics, and sensors, have been reported. In addition, in microfluidic devices, studying how the micro/nano scale influences surface roughness is very important for developing small-scale devices. Here, a one-dimensional polymer of Ag(I) with a mixture of the nucleosides cytidine and 8-bromoguanosine was prepared. The morphology of the polymer was characterised using AFM and TEM techniques. The surface texture of the polymer was analysed with the Gwyddion software program to obtain the statistical parameters of the surface roughness and to find the probability density, in addition to the diameter of the fibres. Preparing nanomaterial compounds on a large scale for use in nanotechnology applications still represents a difficult task, owing to the difficulty of controlling the synthesis process and the self-assembly of these compounds, as their chemical and stability properties, unlike those of bulk materials, can change with the surrounding environment. Surface chemistry offers a potential approach to this problem, by relating the physical-chemical properties of materials to their surface topography on the basis of AFM characterisation, statistical analysis, and surface roughness analysis, thereby providing better insight into the useful applications of these materials. All chemicals were purchased from Sigma Aldrich and were used as received without further purification. 1H and 13C NMR spectra were recorded on a Bruker Advance 300 spectrometer at 300 MHz in DMSO-d6. P-type silicon wafers were used for the AFM measurements; the wafers were cut into 1 cm2 chips and cleaned with 1:4 H2O2:H2SO4 for 1 h, followed by washing with deionised water and drying with nitrogen gas. The sample for AFM measurements was prepared by drop-casting 2 μL of the sample onto a clean 1 cm2 silicon chip; the sample was air-dried prior to scanning and analysed with NanoScope Analysis 1.5 software. TEM measurements were carried out using a Philips CM100 electron microscope at an accelerating voltage of 100 kV. 2 μL of the sample was dropped onto a carbon-coated copper grid, and the sample was left to dry in air overnight before imaging. The Gwyddion software program was used to analyse the AFM images. 8-Bromoguanosine was prepared according to the procedure of Srivastava. N-Bromosuccinimide was added to a suspension of guanosine, and the suspension was constantly stirred for 24 h at room temperature. The resulting clear yellow solution was concentrated under reduced pressure at 50 °C to remove the solvent.
The residue was collected by adding some water, and the solid product was filtered off, recrystallised from hot water, and air-dried. The 1H NMR spectrum of guanosine (Fig. 2) showed signals at δ 10.63, 7.93, and 6.46, while the 1H NMR spectrum of 8-bromoguanosine (Fig. 3) displayed signals at δ 10.82 and 6.55. The disappearance of the signal assigned to the H-8 proton in Fig. 3 confirms the successful preparation of 8-bromoguanosine. The one-dimensional polymer of Ag(I), cytidine, and 8-bromoguanosine was prepared with a 1:1:1 stoichiometry of Ag:cytidine:8-bromoguanosine as follows: a solution of 8-bromoguanosine was added to a cytidine solution, the mixture was shaken for several minutes, a solution of AgNO3 was then added, and the mixture was shaken quickly and left in a dark place at room temperature. After 30 min a colourless viscous solution had formed, and the viscosity of the sample increased with time. Four days after preparation the sample was more viscous, but it did not pass the inversion test for stability, which confirms that the sample was not a gel. 1.5 μL of the sample was drop-cast onto a silicon chip and left to dry in air prior to scanning by AFM. The reaction of Ag ions with an equimolar mixture of the nucleosides cytidine and 8-bromoguanosine in aqueous solution leads to the formation of the triplex pyrimidine motif CGCAg+, as shown in Fig. 4. Coordination of the Ag ion in the triplex CGC structure occurs via the N3 atom of the cytidine molecule; in fact, the presence of Ag ions plays a fundamental role in the stability of this structure. Protonation of the N3 atom of the cytidine molecule in the natural medium is necessary for complementary binding with 8-bromoguanosine via Hoogsteen hydrogen bonding in the CGC structure; this process provides an appropriate site for coordinating the Ag ion to the cytidine molecule and increases the stability of the structure. Such protonation does not occur in the triplex GCG, which confirms that the motif of the triplex structure of this fibrous polymer is CGC rather than GCG. The triplex Y motif in this polymer consists of two pyrimidine molecules and one purine molecule, forming CGCAg+; the purine and pyrimidine molecules are clustered in the same strand with the assistance of a strand-switch mechanism. Building the triplex block of CGCAg+ requires that the third strand be positioned in the major groove of the double helix that is formed by Hoogsteen hydrogen bonding. Triplex structures of the Y motif can be found in both RNA and DNA through the formation of intramolecular triplexes. Fig. 5 shows the triplet structure of the parallel-motif CGC bases with the Ag ion, while Fig. 5 (modified from Sugimoto) displays the building of the triplex block; the green strand represents the third strand lying in the major groove of the double strands. Substitution at C8 of the guanosine molecule in the Y motif can increase the stability of the triplex structure, in addition to the stability arising from the presence of Ag ions and the influence of Hoogsteen hydrogen bonding in this structure. Most triplex structures concerning RNA are synthesised with regard to functional RNAs such as ribosomal RNAs, telomerase RNAs, and long noncoding RNAs. In addition, these materials have played a great role in molecular biology and nanotechnology applications. The AFM technique was used to address the morphology of the polymer. Examination of the dried polymer revealed the formation of nanofibres extending many microns in length with heights in the range of 2–3 nm; a few fibres had heights up to 4 nm.
Fig. 6 displays a tapping-mode height AFM image obtained by scanning an area of 5 × 5 μm2. Many loops can be seen in the image (Fig. 6), formed as a consequence of the binding of these complementary nucleosides. Fig. 7 displays further AFM images with a 3D view of the loops formed in the triplex polymer. Statistical analysis was carried out to investigate the height of the loops that can be seen in a single polymer in the AFM image in Fig. 8. The data showed that the height values were in the range of 10–14 nm, as shown in Fig. 8, which presents the profiles of the three sloping lines in the image. The image in Fig. 8 is a small area of the image with a scale bar of 150 nm. These findings indicate that the complementary binding of cytidine and 8-bromoguanosine, which forms triplex structures of the parallel pyrimidine motif CGCAg+, can self-assemble into nanofibres. Self-assembly of nucleosides has been observed for guanosine and its derivatives. Different architectures, such as cyclic structures, lamellae, fibres, micelles, and films, have been reported for the self-assembly of some complementary nucleobases and their derivatives. However, this is the first report that presents a simple and direct way to prepare nanofibres by self-assembling the complementary nucleosides cytidine and 8-bromoguanosine. RMS, also known as Rq, is the root-mean-square average deviation of the roughness profile, Rq = √((1/L)∫Z²(x)dx), with the integral taken over the profile, where L and Z(x) are the length of the profile and the height profile function, respectively. The data demonstrated that the root mean square waviness was 147.3 pm, while the waviness average was 128.9 pm. The height of the waviness is normally about three times the average roughness height. Tables 1 and 2 summarise the data of Figs. 9 and 10, respectively. The kurtosis parameter describes the height distribution of the surface; the data in Table 1 show that the value of the kurtosis was 2.89, which indicates that the distribution curve has low peaks and that the morphology of the loop in the AFM image is a valley rather than a platykurtic valley. This observation is important, as it gives a good estimate of the nature of the surface roughness of the polymer. Low peaks are expected in the distribution curve when the value of this parameter is <3, and vice versa. The height distribution of the AFM image in Fig. 12 was fitted to a sum of Gaussian functions to obtain the probability density of the fibres, as shown in Fig. 13, and the straight line in Fig. 14 was fitted through the peak values to obtain an estimate of ∼0.4 nm for the fibre diameter.
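The details of the sum-of-Gaussians fit are not given in the text; a minimal sketch of how such a fit could be performed on a height histogram exported from the AFM data is shown below. The file name, number of components, and starting values are illustrative assumptions.

```python
# Illustrative sketch only: fitting a sum of Gaussians to an AFM height
# histogram to estimate a probability density, as described above.
# The file name, component count and initial guesses are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_sum(z, *params):
    """Sum of Gaussians; params = (amp, mean, sigma) repeated per component."""
    total = np.zeros_like(z)
    for amp, mean, sigma in zip(params[0::3], params[1::3], params[2::3]):
        total += amp * np.exp(-((z - mean) ** 2) / (2.0 * sigma ** 2))
    return total

# Height values (nm) exported from the AFM image, e.g. via Gwyddion.
heights = np.loadtxt("height_profile_nm.txt")
density, edges = np.histogram(heights, bins=100, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])

# Two-component initial guess: (amplitude, mean, sigma) per peak.
p0 = [1.0, 0.4, 0.2, 0.5, 2.5, 0.5]
popt, _ = curve_fit(gaussian_sum, centres, density, p0=p0)

peak_means = popt[1::3]
print("Fitted peak positions (nm):", np.sort(peak_means))
```

The fitted peak positions correspond to the peak values through which the straight line mentioned above is drawn to estimate the fibre diameter.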
The TEM technique was used to investigate the morphology of the polymer. A carbon-coated copper grid was used as the substrate to prepare the sample: 1.5 μL of the sample was drop-cast onto the substrate and left to dry in air prior to imaging. Inspection revealed the formation of very long, entangled fibres, as shown in Fig. 15. The surface of the single polymer in the TEM images was also investigated using the Gwyddion software program to obtain the waviness and roughness parameters; Figs. 16 and 17 display the data for the loops and for the flat part of the single polymer, respectively. Table 4 presents the data of Fig. 16. The data show that the root mean square roughness was 1.0 pm and the roughness average was 0.8 pm. In addition, the value of the kurtosis was 2.33. The data from the TEM images were in good agreement with those from the AFM images, confirming the accuracy of the measurements used to analyse the surface texture of the triplex polymer CGCAg+. In summary, a one-dimensional triplex parallel pyrimidine polymer of the Y motif, based on the self-assembly of Ag with a mixture of complementary nucleosides, was prepared. To the best of our knowledge, this is the first report to show that the complementary nucleosides cytidine and 8-bromoguanosine are capable of self-assembling directly to produce a nanostructured material of such length and height, as shown by AFM measurements in which the height of the polymer was in the range of 2–3 nm and the length was many microns. This feature makes this polymer analogous to duplex DNA. Surface roughness analysis was carried out to determine the probability density of the fibre. The data showed that the diameter of the fibre was ∼0.4 nm. Waviness, roughness, and kurtosis parameter values for the fibrous structure were also investigated by analysing the AFM and TEM images, and the data showed good agreement. Lamia al-Mahamad: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The authors declare no conflict of interest. No additional information is available for this paper. | In this work a one-dimensional (1D) triplex polymer of silver(I) with a mixture of the nucleosides cytidine and 8-bromoguanosine was synthesised. The polymer showed high stability due to the presence of Ag(I) ions in its structure, in addition to the stability arising from Hoogsteen hydrogen bonding in the triplex CGC. Atomic force microscopy (AFM) and transmission electron microscopy (TEM) were used to investigate the morphology of the polymer. The AFM images revealed the formation of nanofibres extending many microns in length with heights in the range of 2–3 nm. Statistical analyses were carried out on the AFM images to determine the height of the loops formed in the polymer. The data showed that the height values were in the range of 10 nm to 15 nm. The TEM data were consistent with the AFM data, displaying a very long fibre. The Gwyddion software program was used to investigate the surface parameters (roughness and waviness), diameter (size distribution), and probability density of the fibre. The data showed that the diameter of the fibre was ∼0.4 nm.
575 | High-Throughput Assay and Discovery of Small Molecules that Interrupt Malaria Transmission | Malaria is a vector-borne disease caused by apicomplexan eukaryotic protozoa of the genus Plasmodium.The parasites have a complex life cycle involving vertebrates and anopheline mosquitoes.Their asexual replication and destruction of erythrocytes give rise to the symptoms of malaria, including fever and chills.In response to cues that are not well understood, a subset of asexual parasites differentiate into male and female gametocytes, a process that takes ∼8–12 days in P. falciparum.During this period, the parasites metabolize the host red cell hemoglobin, while progressing through five morphologically distinct stages that can be identified by light microscopy.Commitment to sexual development occurs well before parasites show morphological changes, and male and female gametocytes are produced at a ratio of 1 to 3–5 with females maturing slightly later.In the human body, immature gametocytes sequester in different host tissues and emerge only when fully mature.An infected individual may carry gametocytes for up to 55 days, and mature gametocytes are the only form that can survive in the mosquito midgut, mate, undergo meiosis, and give rise to the next generation of parasites to be transmitted to a new human host.Current first-line treatment of falciparum malaria is artemisinin combination therapies, which do not block transmission.Follow-up treatment with 8-aminoquinolines like primaquine or tafenoquine is needed to block transmission.However, 8-aminoquinolines can be toxic to individuals with glucose-6-phosphate dehydrogenase deficiency, a genetic condition with a high prevalence in malaria-endemic regions.Even though assays are available to detect compounds with transmission-blocking potential, most of them are not adapted for very large chemical libraries due to multiple purification steps or lower throughput formats.In addition, some assays rely on the use of gametocyte reporters that may restrict their use to genetically modified parasites.Here we describe high-throughput assays that overcome these issues.We apply the assays to characterized and uncharacterized chemical libraries.Our analysis reveals features of chemical compounds that are likely to block malaria transmission and may serve as starting points for unique transmission-blocking drugs.To create a homogeneous, stage-specific gametocyte population, we optimized a previously described protocol and induced gametocytogenesis in asexual, triple synchronized P. 
falciparum NF54 parasites by high parasitemia and partly spent media.Microscopic staging of gametocytes collected over the 12 days of development according to description by Carter and Miller showed purities upward of 75% per stage with a reproducible parasitemia of 1.2%–1.6% over the screening period.To detect viability, we used the dye MitoTracker Red CMXRos, which fluoresces at ∼600 nm in parasites with intact mitochondrial membrane potential.Parasites were detected using automated microscopy and showed a good correlation between the number of viable parasites added per well and the number of MitoTracker Red CMXRos positive objects.To reduce the number of liquid transfer steps and make the assay more robust and less costly for use with large, unbiased libraries, we experimented with the use of saponin, an amphipathic glycoside that creates pores in red cell membrane bilayers, leading to red cell lysis.We found that treating gametocyte cultures with 0.13% saponin caused red blood cells in serum-free media to lyse, simplifying the identification of parasites with automated microscopy.Gametocytes at a parasitemia of 0.5% to 0.75% and a hematocrit of 1.25% created a monolayer on the bottom of the well.After MTR Red staining ∼1,000 objects could be counted per DMSO control well.This allowed compound exposure and imaging in the same plate without an additional transfer step.We refer to this serum-free one-step protocol as Saponin-Lysis Sexual Stage Assay.We found SaLSSA to be more sensitive to few compounds like the amino alcohols.Thus, an older, serum-containing assay was used in some cases.The quality of the assay was found to be robust at all gametocyte stages: Z prime scores calculated with infected, DMSO-treated red blood cells and uninfected red blood cells ranged from 0.71 to 0.80 for the one-step protocols.We evaluated the fluorescence intensity over time and did not observe significant differences between 30 and 360 min.To further benchmark SaLSSA, we evaluated 50 compounds currently used as antimalarials or antimalarial tool compounds in dose response against individual gametocyte stages.EC50 values of different chemical classes showed distinct patterns of activity for the different gametocyte stages as summarized below.Most compounds yielded higher EC50 against stage V gametocytes.Endoperoxides were characterized by low nM EC50 values for stages I–IV with most failing to generate a dose-response curve for stage Vs, in agreement with standard membrane-feeding data: Although DHA, artesunate, and OZ439 have been reported to have some activity in standard membrane-feeding assays, none completely eliminated oocysts at 100 nM, and only OZ439 eliminated oocysts at 1 μM, a concentration well above the concentrations in blood when being used against blood-stage infections.Interestingly, these results correlate well with previous publications reporting that hemoglobin digestion ends at stages III to IV, supporting the endoperoxides’ activity against this process.The 4-aminoquinolines demonstrated low nM EC50 values for stages I and II but showed a drop-off in activity beginning in stage III and were ineffective against stage Vs. 
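The EC50 values discussed here come from dose-response analysis, although the curve-fitting step itself is not described; a typical approach is a four-parameter logistic (Hill) fit of normalized viability against concentration, sketched below with illustrative data and starting values that are not taken from the study.

```python
# Illustrative sketch (not the authors' pipeline): estimating an EC50 by
# fitting a four-parameter logistic curve to normalized viability data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc_nM, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc_nM / ec50) ** hill)

# Hypothetical 12-point, 2-fold dilution series and viability indices
# (fraction of the mean DMSO-control particle count).
conc = 10000 / 2 ** np.arange(12)          # nM, from 10 uM downwards
viability = np.array([0.03, 0.04, 0.05, 0.08, 0.15, 0.30,
                      0.55, 0.78, 0.90, 0.96, 0.99, 1.00])

p0 = [0.0, 1.0, 100.0, 1.0]                # bottom, top, EC50 (nM), Hill slope
popt, _ = curve_fit(four_pl, conc, viability, p0=p0, maxfev=10000)
print(f"Estimated EC50: {popt[2]:.1f} nM (Hill slope {popt[3]:.2f})")
```

Compounds that fail to generate a dose-response curve at a given stage, as reported above for several endoperoxides against stage V gametocytes, would either not converge in such a fit or return EC50 estimates outside the tested concentration range.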
It is generally thought that 4-aminoquinolines interfere with the formation of hemozoin, resulting in the death of the parasites, and previous reports have shown that chloroquine is only active against early-stage gametocytes in line with low activity in SMFAs.The 8-aminoquinolines showed consistent but high EC50 values across all gametocyte stages.Primaquine, the only drug approved for blocking transmission, exhibited a 6.5 μM EC50 against stage V gametocytes in vitro.Tafenoquine, a primaquine derivative, had an EC50 of 2.4 μM against stage V gametocytes.The substantially higher EC50 than the ED50 values were expected because 8-aminoquinolines need to be metabolized for activity.The mechanism of action of primaquine as well as the identity of the active metabolites remains unknown.Amino alcohols are not used to prevent transmission, but some amino alcohols showed activity across all stages using SaLSSA.We note, however, that this sensitivity could be completely reversed by the addition of human serum to cultures during compound incubation using the TSSA.Mefloquine only shows SMFA activity at high concentrations, ∼20× above the asexual growth inhibition values.Testing some compounds in the presence of human serum would be expected to give more accuracy and eliminate false positives, but at the expense of efficiency.The antifolates and sulfonamides, which interfere with nucleic acid synthesis and include dihydropteroate synthase and dihydrofolate reductase inhibitors, were inactive across all gametocyte stages, except for chlorproguanil, which had μM EC50 values against all stages.Antibiotics used to treat malaria, such as doxycycline and azithromycin, which act against the apicoplast, were also inactive.As expected, atovaquone was inactive.Evidence that this compound has some transmission-blocking activity in animals after repeated exposure may be because it inhibits ookinete formation.Some antimalarial compounds that are not used in humans did show activity against stage V gametocytes in our assay.Thiostrepton is a macrocyclic thiopeptide antibiotic that inhibits prokaryotic translation and has been reported to dually target the proteasome and apicoplast.Thiostrepton had μM EC50 values at all stages.The mode of action of methylene blue remains controversial, but it may inhibit glutathione reductase or hemozoin formation.It showed low nM EC50 values for stages I to IV, with some minor loss of activity at stage V, consistent with reports that methylene blue reduced transmission by 99% in SMFAs at 38 nM.Pentamidine, which is clinically used for treatment and prophylaxis of Pneumocystis carinii pneumonia and sleeping sickness but not malaria, inhibited gametocytes of all stages with an EC50 between 0.39 and 2.14 μM.Its mechanism of action is unknown, although it has been reported to inhibit hemozoin formation in Plasmodium by interaction with ferriprotoporphyrin IX.Newer classes of compounds in development, including the spiroindolones, imidazolopiperazines, imidazopyrazines, and quinoline-4-carboxamides, all have reported transmission-blocking activity, and members of these compound classes were tested using SaLSSA.KAF246, a spiroindolone closely related to the clinical candidate KAE609 that acts against the plasma membrane ATPase PfATP4, showed the expected activity.GNF179, an imidazopiperazine closely related to the clinical candidate KAF156, showed the expected low nanomolar activity in the SaLSSA and complete transmission-blocking activity in SMFAs at physiologically relevant 
concentrations of 15 nM.The PIK-inhibitor KDU691, which inhibits transmission at 1 μM in SMFAs, had submicromolar EC50 values across all five gametocyte stages, similar to the values seen against blood stages.The Plasmodium falciparum translation elongation factor 2-inhibitor, DDD107498, likewise showed potent activity, in line with reported activity in other cellular and standard membrane feeding assays.To determine whether the loss of activity against stage V parasites was typical or reflects the historical focus on compounds derived from quinine and artemisinin, 400 compounds from the MMV malaria box were examined.These compounds, which were all identified in asexual blood stage screens, were first examined at a single dose of 12.5 μM against each gametocyte stage with TSSA.As expected, the highest number of compounds were active against early-stage gametocytes: 216 compounds inhibited the viability of stage I gametocytes by more than 70%, 78 compounds inhibited stage III gametocytes, and 79 compounds inhibited stage V gametocytes.Dose-response analysis against stage I, III, and V gametocytes for the 50 most active compounds confirmed activity <5 μM for 28 of the 50 compounds with the TSSA.EC50 values for stage V were significantly higher than for stage I gametocytes for 42 of the 50 compounds.A few compounds showed EC50 values of ≤1.5 μM against stage V gametocytes in both TSSA and SaLSSA, including MMV665941 followed by MMV019918.SMFA studies with MMV665941 using GNF179 as a control at a concentration of five and ten times the EC50 calculated from the above described stage V gametocyte assay showed that mosquitoes fed on the compound-exposed gametocytes had no oocysts in their midguts, whereas the DMSO control group did, most likely because the compound-treated gametocytes did not exflagellate.We further investigated a subset of 18 compounds with reported activity against gametocytes and available luciferase-SMFA data using 1,536-well SaLSSA against gametocytes stages I, III, and V. 
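Before turning to that control set, the single-point hit-calling used in these screens (viability normalized to the DMSO controls, with hits called above a fixed inhibition threshold such as 70%) can be sketched as follows; the data layout and column names are assumptions, not the authors' code.

```python
# Illustrative sketch: calling single-point hits from MitoTracker-positive
# object counts, normalizing each well to its plate's DMSO controls.
import pandas as pd

def call_hits(wells: pd.DataFrame, threshold_pct: float = 70.0) -> pd.DataFrame:
    """wells needs columns: plate, well, compound, count; DMSO wells are controls."""
    out = []
    for plate, grp in wells.groupby("plate"):
        dmso_mean = grp.loc[grp["compound"] == "DMSO", "count"].mean()
        grp = grp.assign(
            viability_index=grp["count"] / dmso_mean,
            pct_inhibition=100.0 * (1.0 - grp["count"] / dmso_mean),
        )
        out.append(grp)
    result = pd.concat(out, ignore_index=True)
    result["hit"] = (result["compound"] != "DMSO") & \
                    (result["pct_inhibition"] > threshold_pct)
    return result

# Example: one plate with two DMSO control wells and two compounds at 12.5 uM.
df = pd.DataFrame({
    "plate": ["P1"] * 4,
    "well": ["A1", "A2", "B1", "B2"],
    "compound": ["DMSO", "DMSO", "MMV665941", "inactive_cmpd"],
    "count": [1010, 990, 120, 950],
})
print(call_hits(df)[["compound", "viability_index", "pct_inhibition", "hit"]])
```

Wells flagged in this way would then be carried forward into dose-response confirmation, as described above.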
Of this control set, 14 of the 18 were active in at least one stage in SaLSSA with an EC50 of less than 10 μM, and 13 of the 14 were active in SMFAs.A possible false-negative compound was MMV665882, which showed an incomplete curve in SaLSSA but little activity in the SMFA.This compound showed some activity in viability readouts by others.The four potential false positives included MMV020492, which was previously reported to have low activity against male gametes but gave no inhibition in SMFAs and was inactive in SaLSSA.Two other false positives were MMV665827 and MMV007116, which had been shown to reversibly inhibit male gamete formation.The fourth compound, MMV666021, which had been reported active in luciferase assays with late-stage gametocytes, showed some inhibition in the single-point studies but was not reconfirmed in dose response.This compound was weakly active in luciferase-based SMFAs.The reasons for the discrepancies for this compound are unclear, but it is possible that compound source, solubility, or parasite genetic background could play a role, especially as literature-reported blood-stage values vary from 90 nM to 2 μM.To further investigate the rate at which transmission-blocking compounds would be identified in sets of compounds with known blood-stage activity, we investigated the GNF malaria box.This set of 3,558 compounds was created after screening proliferating asexual parasites at a final compound concentration of 1.25 μM.The set was screened at a single concentration of 1.25 μM against stage V gametocytes with the 384-well SaLSSA.Of these, 145 compounds inhibited stage V gametocytes at greater than 72.3%.Dose-response analysis showed 108 of the 145 compounds reconfirmed as having activity of less than 1 μM with 22 compounds giving EC50 values below 100 nm against stage V gametocytes.Unlike the clinical antimalarials, most of which showed a steep drop-off in activity with mature gametocytes, these scaffolds were almost all equipotent against asexual blood stages and stage V gametocytes.Some of the most active scaffolds were carbamazide thioureas as well as naphthoquinones, a compound class known to be active against gametocytes.Several of these scaffolds were also active against a P. 
In order to determine the fraction of active compounds that would be found in a larger library that was not preselected for activity against asexual parasites, we tested compounds from the diversity-oriented synthesis (DOS) library. This library was designed to populate chemical space broadly with small molecules having both skeletal and stereochemical diversity. Two sets of compounds from the DOS compound library were screened against stage V gametocytes using the 1,536-well SaLSSA at 2.5 μM in duplicate. The first was an "informer set," which includes 9,886 compounds selected to represent a sampling of the structural diversity of all of the DOS scaffolds while also capturing preliminary structure-activity relationships (SARs) and stereochemical structure-activity relationships (SSARs). 25 compounds inhibited stage V gametocytes in both replicates by >30%, and 17 were inconclusive. To reconfirm the hits and investigate the SARs as well as SSARs, 41 compounds were retested in dose-response along with 37 stereoisomers and seven analogs of select compounds. 13 of the hits and one inconclusive compound exhibited EC50s < 5 μM upon retest, corresponding to a retest confirmation rate of 23%. A second compound set included 89 compounds that had previously been shown to have activity in a blood-stage assay against P. falciparum Dd2. These compounds had not been further assessed for mechanism of action or additional stage-specific activity against Plasmodium prior to this study. An identical screening pipeline was used for the blood-stage active compounds; in this case, 15 compounds were identified as hits. These hits and two additional stereoisomers were retested in dose-response, whereupon ten of the compounds exhibited EC50s < 5 μM. Taken together, the 35 hits encompassed 12 different scaffolds, with five singletons and seven scaffolds with two or more representatives. Representatives from six of these scaffolds are shown in Figure S2C; one scaffold was eliminated due to a lack of SSARs and SARs among the hits. While the activity needs to be validated with resynthesized compounds, some of these do show interesting patterns of activity, including one compound with greater activity against gametocytes, four compounds with activity across all three parasite stages, and one compound with activity against just gametocytes and the asexual blood stages. Additional studies will be of interest to validate these data and investigate the mechanisms of action of these compounds. To further validate the screens and to identify compounds that can serve as starting points for the development of transmission-blocking drugs, all compounds that had been screened were hierarchically clustered based on their scaffold similarity. We then identified clusters of structurally related compounds that showed enrichment in the sexual-stage active set at rates higher than expected by chance. For example, the cluster with the highest enrichment score consists of 13 compounds related to GNF-Pf-3202 and GNF-Pf-3600, with 10 of the 13 compounds being active in the gametocyte assay. Another enriched cluster contains 21 compounds structurally similar to GNF-Pf-5511 and GNF-Pf-5386 (dihydroisoquinolones, the class that includes (+)-SJ733), with seven of the 21 compounds being active. This is not unexpected, given that other PfATP4 inhibitors are active against late-stage gametocytes and (+)-SJ733 potently blocks transmission. A final scaffold family that is highly overrepresented contains four of the five 2-furancarboxamides in the library, all of which showed moderate activity against stage V gametocytes but weaker activity against asexual blood-stage parasites. To our knowledge, this compound class has not previously been associated with blocking transmission. These data suggest that screens of very large libraries will likely yield starting points for the discovery of transmission-blocking drugs. The present data differ from those that have been reported by a number of other laboratories. Some assays have shown that compounds such as artemether and OZ439 have late-stage gametocytocidal activity of less than 1 μM. The consensus, however, is that mature gametocytes are resistant to endoperoxides. Most previous reports combined gametocyte stages for late-stage gametocyte testing, which might account for conflicting compound activity. Another consideration is that the readout of different gametocytocidal assays might vary with the mode of action of certain drugs, depending on which biological pathway the specific assay is interfering with. Overall, our data suggest that the low-cost SaLSSA gives few, if any, false positives compared to available SMFA data. On the other hand, the SaLSSA may give a few false negatives and will likely miss reversible inhibitors of gamete formation. In the majority of cases, our data showed that stage V gametocytes had a lower susceptibility to compounds than stage I gametocytes, suggesting decreased metabolic activity during their maturation in preparation for subsequent development in the mosquito midgut. The acquired standard membrane-feeding data, which still represent the gold standard for transmission-blocking activity, suggest that compounds that inhibit stage V gametocytes can block transmission as well. On the other hand, the SMFAs may also find compounds that our cellular assay would miss, including compounds that have a contraceptive effect. Given that fertilization occurs in the mosquito midgut over minutes while late-stage gametocytes can persist in the human body for days, stage V gametocytes are arguably more attractive therapeutic targets. The data from this study indicate which proteins and pathways might be targeted by transmission-blocking drugs. Compounds that interfere with hemoglobin digestion would be poor candidates for transmission-blocking drugs, although they could yield a reduction in gametocyte numbers, as early-stage gametocytes might be killed. In addition, the process of DNA replication should probably not be targeted, nor apicoplast function. Targets for compounds that act against mature gametocytes include proteins that play a role in protein translation and processing, including protein secretion, as well as protein degradation. Interestingly, functional genomic studies had previously shown that gametocytes acquire and store RNA transcripts that rapidly convert to proteins during gamete formation, creating a particular vulnerability. Targets involved in maintaining ion homeostasis, such as PfATP4, as well as lipid kinases, are transmission-blocking targets as well as asexual-stage targets. It should be noted that all these targets are also essential for asexual parasites. Targeting gametocytes exclusively could be achieved through inhibiting translational repression, autophagy, sperm function, or meiosis. It is expected that compounds inhibiting these processes would be found at lower rates in large libraries, emphasizing the need for ultra-high-throughput screens. One example is BRD0608, whose EC50 against asexual blood stages was 15× higher than against stage V gametocytes and whose selectivity for sexual stages might be improved through medicinal chemistry. The advantage of compounds like BRD0608 is a reduced potential for the emergence of drug resistance. There are billions of asexually replicating parasites in an infected human, each of which has the capacity to develop a drug resistance mutation and pass it on to its progeny. This has been a major reason why malaria control is so difficult. An open ethical question is whether drugs that do not relieve malaria symptoms but benefit the community as a whole should be licensed. Vaccines may be given that provide little benefit to an individual, for example the rubella vaccine in boys, which is mainly recommended to cohort-protect pregnant women in order to prevent congenital rubella syndrome in newborns. Vaccines are also not without risk, and one could argue that the benefit to humanity that would be achieved with malaria eradication would outweigh the risk. Asexual P. falciparum parasites were grown at 5% hematocrit in O+ human erythrocytes in serum-containing complete media at 37°C under low-oxygen conditions and a parasitemia between 0.5% and 3%. Ring-stage parasites were triple synchronized with 5% D-sorbitol, and cultures were expanded from T25 to T225 culture flasks. The hematocrit was adjusted to 5% until day −4. On day −2, only 50% fresh media was substituted at a high parasitemia of 7%–10%. Media was exchanged daily from day −1 onward. For stages I–IV, magnetically activated cell sorting was performed on day 0 and cultures were sorbitol synchronized on day 1. All gametocytes were treated with 50 mM NAG on days 0–9. See Supplemental Experimental Procedures for more details. Gametocyte stages I–V were diluted to 0.50% gametocytemia and 1.25% hematocrit into complete media for the two-step protocol, or to 0.5%–0.75% gametocytemia and 1.25% hematocrit into serum-free SaLSSA screening media. Cultures were dispensed into 384- or 1,536-well plates containing 50 nl or 2.5 nl of compound using a MultiFlo dispenser. Plates were incubated at 37°C for 72 hr under low-oxygen conditions. For SaLSSA, 3 μl or 10 μl of 2.5 μM MitoTracker Red CMXRos and 0.13% saponin solution in screening media was added to each well, and plates were incubated for 60–120 min at 37°C. For the 384-well TSSA, 5 μl MitoTracker Red CMXRos in screening media was added to each well. After 20 min at 37°C, 5 μl was transferred from the assay plate to a new 384-well imaging plate that already contained 40 μl MitoTracker Red CMXRos in serum-free screening media. For both TSSA and SaLSSA, plates were imaged after 30 min incubation. Imaging of 384- or 1,536-well plates was performed using a high-content imaging system and Harmony software for image analysis. Viability indices were calculated by dividing the particle count of each compound-treated well by the average particle count of the DMSO wells per plate and range from 0 to >1. Z values were calculated using DMSO-treated gametocytes as positive and uninfected red blood cells as negative wells.
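A minimal sketch of the plate-level statistics described above is given below: per-well viability indices are derived from particle counts, and an assay quality factor is computed from the control wells. The standard Z′-factor definition is assumed here, and all counts are invented for illustration.

```python
import numpy as np

def viability_indices(compound_counts, dmso_counts):
    """Particle count of each compound-treated well divided by the plate's mean DMSO count."""
    return np.asarray(compound_counts, float) / np.mean(dmso_counts)

def z_prime(positive, negative):
    """Z'-factor (Zhang et al., 1999) from positive- and negative-control wells."""
    return 1.0 - 3.0 * (np.std(positive) + np.std(negative)) / abs(np.mean(positive) - np.mean(negative))

# Illustrative counts: DMSO-treated gametocytes (positive) and uninfected red blood cells (negative)
dmso_wells = np.array([410, 395, 402, 420, 388])
rbc_wells = np.array([12, 9, 15, 11, 14])
compound_wells = np.array([37, 160, 365, 401])

print("viability indices:", np.round(viability_indices(compound_wells, dmso_wells), 2))
print("Z':", round(z_prime(dmso_wells, rbc_wells), 2))
```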
P. falciparum NF54 parasites were grown at 0.5% parasitemia and 5% hematocrit and continuously cultured with daily media changes until they reached stage V. After incubation for 24 hr with compound in DMSO, the SMFA was performed. Briefly, 4- to 6-day-old female A. stephensi STE 2 mosquitoes were fed with the treated gametocytes for 15 min using a membrane feeding apparatus. Midguts were dissected after 8 days, and the number of oocysts was counted. P. falciparum NF54-L1 stage V gametocytes were pre-incubated for 24 hr with compound at six dilutions in duplicate. DMSO was used as negative and DHA as positive control. The compound was washed out, and the gametocytes were fed to Anopheles stephensi mosquitoes. At day 8 post-infection, luminescence signals were determined for 24 individual mosquitoes per cage. EC50s were determined by applying a four-parameter logistic regression model. The SMFAs, which were organized by a consortium of laboratories including this one, have been deposited at ChEMBL-NTD and were performed by TropIQ in Nijmegen, The Netherlands. Liver stage assays were performed as previously described. Briefly, 1 × 10³ P. berghei luciferase-expressing sporozoites were used to infect HepG2-A16-CD81EGFP cells in a 1,536-well plate. After incubation for 48 hr, 2 μl BrightGlo was added, and EEF growth was quantified by bioluminescence on an Envision Multilabel Reader. The 13,844 tested compounds were clustered using the Scaffold Tree algorithm. Each scaffold node was then assigned an enrichment score reflecting the degree of overrepresentation of active compounds. We calculated the cumulative hypergeometric p value as the probability of observing at least as many hits as we observed within each scaffold. The tree was then pruned so that only scaffolds with p values < 0.001 were retained. The final resultant tree in Figure 3 was rendered with Cytoscape. To focus on those nodes where the scaffold of a node could reasonably resemble the full structures of all associated compound members, the average Tanimoto similarity score between each scaffold node and its associated compounds was calculated based on ChemAxon topological fingerprints, and those tree nodes and leaves with Tanimoto scores of at least 0.85 and with at least three hits are highlighted in colors in Figure 3.
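The enrichment calculation described above can be reproduced with standard tools. The short sketch below computes the cumulative hypergeometric p value for a single scaffold node (the probability of observing at least as many hits as were seen in that scaffold) and a Tanimoto similarity for binary fingerprints; the counts and fingerprints are illustrative placeholders rather than values from this study.

```python
import numpy as np
from scipy.stats import hypergeom

def scaffold_enrichment_p(total_compounds, total_hits, scaffold_size, scaffold_hits):
    """P(at least `scaffold_hits` actives in a scaffold of `scaffold_size` compounds,
    given `total_hits` actives distributed over `total_compounds` screened)."""
    return hypergeom.sf(scaffold_hits - 1, total_compounds, total_hits, scaffold_size)

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity of two binary fingerprint vectors."""
    a, b = np.asarray(fp_a, bool), np.asarray(fp_b, bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# Illustrative numbers: a 13-member scaffold containing 10 actives, assuming ~300 actives overall
p_value = scaffold_enrichment_p(total_compounds=13844, total_hits=300,
                                scaffold_size=13, scaffold_hits=10)
print(f"enrichment p value: {p_value:.2e}")   # scaffolds with p < 0.001 would be retained
print("Tanimoto:", tanimoto([1, 1, 0, 1, 0], [1, 1, 0, 0, 1]))
```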
D.M.P. and M.W. designed experiments, performed screens and dose-response assays, analyzed data, and wrote the manuscript; A.Y.D. prepared parasite materials; S.M. performed the P. berghei liver stage assay; F.L., K.P., and A.L. performed standard membrane feeding assays; S.L.O. designed experiments; E.L.F. analyzed data; O.T. and Y.Z. performed compound clustering; E.C. performed data analysis; N.K. performed Dd2 blood stage assays; C.A.S. assisted with experimental design and analyzed data; S.L.S. and D.L. performed data analysis; and E.A.W. designed experiments, analyzed data, and wrote the manuscript. All authors edited the manuscript and contributed to writing. | Preventing transmission is an important element of malaria control. However, most of the currently available methods to assay for malaria transmission blocking are relatively low throughput and cannot be applied to large chemical libraries. We have developed a high-throughput and cost-effective assay, the Saponin-lysis Sexual Stage Assay (SaLSSA), for identifying small molecules with transmission-blocking capacity. SaLSSA analysis of 13,983 unique compounds uncovered that >90% of well-characterized antimalarials, including endoperoxides and 4-aminoquinolines, as well as compounds active against asexual blood stages, lost most of their killing activity when parasites developed into metabolically quiescent stage V gametocytes. On the other hand, we identified compounds with consistent low nanomolar transmission-blocking activity, some of which showed cross-reactivity against asexual blood and liver stages. The data clearly emphasize substantial physiological differences between sexual and asexual parasites and provide a tool and starting points for the discovery and development of transmission-blocking drugs. |
576 | Carbon constrained design of energy infrastructure for new build schemes | The reduction of energy-related emissions from buildings is expected to provide a significant contribution to the emissions targets set by UK energy policy. Part of this effort includes the elimination of emissions from the operation of new build schemes by appropriate planning and design. One approach is to reduce energy consumption through improved building construction and material standards. Recent initiatives such as the Code for Sustainable Homes provide developers with a framework for the construction of domestic dwellings, and similar schemes have been mooted for the non-domestic sector. Achieving significant emissions savings solely by improving the building fabric is, however, expensive and often impractical. An additional approach is to move away from the established practice of using natural gas boilers for the provision of heat and grid supplied electricity for appliances, lighting and cooling. Several low carbon or renewable technology options are available to developers, and the challenge is to identify the appropriate choice for each scheme. This combined approach is mirrored by the emergence of whole-scheme planning initiatives such as Zero Carbon Homes. A cost-effective design of low carbon energy supply systems requires an understanding of the technical implications of using each available technology, an appraisal of the cost and financial viability of the development, and an assessment of the energy-related emissions for the scheme. Building-level technologies such as heat pumps and solar PV can have a significant effect upon electricity demand and the design of the electricity distribution network, as examined in previous studies. Other technologies such as solar thermal panels can be used to supply heat. These technologies may also be combined as hybrid systems that consist of two or more renewable heating or electricity supply technologies to meet the overall energy demand. Life cycle assessment (LCA) is used to compare alternative designs that have higher initial costs but lower operating-related costs over the project life span than the lower-initial-cost design. LCA has also been applied in building-integrated design to investigate and evaluate the total cost of ownership and the environmental impacts of maintaining the infrastructure and service over its lifetime. At the community level, district heating is a potential means of using local heat sources. This technology is used widely in regions with cold winters, such as Northern and Eastern Europe, North East Asia and Canada, and the design of such schemes is supported by a wealth of research. A number of researchers have considered a whole-scheme approach to energy supply infrastructure design over the years, for example through the integrated design of combined district heat and electricity networks, and a more generalised multi-carrier approach has been proposed in which hydrogen, natural gas, electricity, district heating and district cooling are all considered as options. In most of these cases, the appraisal of cost is considered using a relatively simple single-actor financial model. However, future community schemes are likely to consist of complex multi-actor and multi-objective organisational structures. The carbon constrained design of new schemes through initiatives such as Zero Carbon Homes requires a detailed analysis of energy-related GHG emissions at the planning stage. Different approaches to planning energy systems subject to carbon constraints have been reported; these studies investigate energy resource planning for low carbon energy system design. Several researchers consider the emissions performance of energy supply technologies implemented at distribution level. Of particular importance is the dependence of the performance of technologies such as heat pumps, PV and combined heat and power upon the emissions intensity of electricity supplied by the grid. This paper presents an integrated design tool that determines the optimal-cost mix of energy supply technologies for a scheme subject to local emissions reduction targets. The tool models the interactions between the energy supply technologies installed at each building, the technical design of the local energy infrastructure, the greenhouse gas emissions resulting from energy use and the financial performance of the scheme. An example case study was used to illustrate the application of the tool for the carbon constrained energy infrastructure design of a UK community. The cost of investment, the viability of a public sector energy services company and the cost of energy supply to each consumer were considered within the financial model. The interaction between the design of the scheme and the projection of emissions associated with grid supplied electricity was also examined. The problem of the optimal design of energy supply schemes for new build communities, subject to carbon emission constraints, was considered. The optimisation objective was to minimise the cost of build to the developer, CInf, whilst delivering the on-site infrastructure design constraints, including carbon emissions targets. The optimisation variables were chosen to define the type and capacity of the energy supply technologies installed at the new build scheme. These included the type of heating appliance used within each building; the installed area of photovoltaic and solar heating panels; and the maximum heat output of the generation plant used to supply a district heat network, if required. The structure of the integrated optimisation design tool used to solve this problem is shown by Fig. 1.
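The optimisation set-up described above can be expressed compactly in code. The following sketch is only a schematic of how an evolutionary solver might be coupled to the design and analysis modules through a penalised objective; the design-module stub, its cost coefficients and the crude random search are placeholders standing in for the paper's infrastructure model and its Social Cognitive Optimisation solver.

```python
import random
from dataclasses import dataclass

@dataclass
class DesignResult:
    """Placeholder for the outputs of the infrastructure and analysis modules."""
    capital_cost_to_developer: float   # C_Inf, the optimisation objective
    cumulative_emissions: float        # tCO2e over the analysis period
    emissions_target: float            # tCO2e allowed by the reduction target
    esco_net_revenue: float            # viability of the energy services company

def run_design_modules(x):
    """Stub: a real implementation would size the networks and energy centre and
    evaluate cost, emissions and finances; the coefficients here are invented."""
    pv_area_m2, chp_kw = x
    return DesignResult(
        capital_cost_to_developer=1.5e3 * pv_area_m2 + 800.0 * chp_kw,
        cumulative_emissions=90_000.0 - 2.0 * pv_area_m2 - 10.0 * chp_kw,
        emissions_target=81_256.0,
        esco_net_revenue=50.0 * chp_kw - 1.0e4,
    )

def objective(x, penalty_weight=1e6):
    """Developer build cost plus penalties for violated design constraints."""
    d = run_design_modules(x)
    violation = max(0.0, d.cumulative_emissions - d.emissions_target)
    violation += max(0.0, -d.esco_net_revenue)
    return d.capital_cost_to_developer + penalty_weight * violation

# Crude random search standing in for the evolutionary solver
best = min(((random.uniform(0, 12_000), random.uniform(0, 3_000)) for _ in range(10_000)),
           key=objective)
print("best candidate (PV area m2, CHP kW):", tuple(round(v) for v in best))
```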
The infrastructure model was used to represent the physical layout of the scheme and simulate the performance of the scheme over time. This model included: the location, size and type of each building; the community energy centre; the gas, electricity and heat distribution networks; and the connection of each network to any existing infrastructure. A set of analysis modules was used to evaluate the scheme design at each iteration of the optimisation. The optimisation solver was used to evaluate adherence to any design constraints, to determine whether convergence to an optimal solution has occurred, and to select new values for the optimisation variables until convergence is achieved. The spatial aspect of a new build scheme was modelled by grouping buildings into Nc sets referred to herein as building clusters. Each building cluster was defined by a geographical area Ac containing NB buildings of identical occupancy type, occupied floor area AB and available roof space ARoof. The fraction of building floor space supplied by each available type of heating technology, the installed area of solar thermal panels and the installed area of photovoltaic panels were defined for each building cluster. To model the temporal aspect of the scheme, each year was divided into a set of Nd representative days, each further subdivided into Np discrete time periods of length 24/Np hours. The energy demand was assumed to be constant within each time period. The community energy centre was modelled as a set of NG heat generation units connected to the heat network via a single stratification-type heat accumulation tank. Each generation unit was defined by its plant type, fuel type, rated heat output, rated power generation, and rated fuel consumption. The heat accumulation tank was defined by its total volume, hot water storage temperature, cold water temperature, heat flow into/out of the tank, and total heat stored at each time step. The scope of this paper was limited to energy technologies that may be supplied by the existing natural gas and electricity networks or by renewable resources available on site. This includes natural gas boilers, natural gas combined heat and power, ground source and air source heat pumps, solar PV and solar thermal hot water as possible generation technologies, and district heating as a possible distribution network option. Technologies requiring the development of new fuel supply chains, such as biomass or energy from waste, were considered beyond the scope of this work. Solar photovoltaic and solar thermal hot water panels were both modelled as passive generation technologies by defining a peak generation output and an annual generation profile per unit panel area. The relative daily generation profile shown by Fig. 3 and the seasonality factors shown by Table 1 were used to derive a normalised annual generation profile with a total annual generation output of 1 kW h/year. The generation profile for each solar technology was then obtained by multiplying each time step of the normalised profile by the total annual generation per unit installed area. Daily energy consumption profiles were defined per unit floor area for each building occupancy type. Five types of end use consumption were considered: space heating, domestic hot water, space cooling, appliances and lighting, and cooking. Published data were used for the energy consumption profiles, and seasonality factors were used to scale the profile for each representative day in the year.
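A compact implementation of the profile handling described above is sketched below: a normalised daily shape is scaled by per-representative-day seasonality factors so that the year integrates to a stated annual yield per unit panel area, and cluster demand is obtained by scaling a per-square-metre profile by floor area and building count. The shapes, factors and yields shown are illustrative, not the values of Fig. 3 or Table 1.

```python
import numpy as np

def annual_generation_profile(daily_shape, seasonality, annual_yield_per_m2):
    """Scale a normalised daily generation shape by seasonality factors so that the
    resulting (Nd, Np) profile sums to the stated annual yield per m2 of panel."""
    shape = np.asarray(daily_shape, float)          # Np values for one day
    season = np.asarray(seasonality, float)         # one factor per representative day
    profile = np.outer(season, shape)               # unscaled (Nd, Np) profile
    return profile * annual_yield_per_m2 / profile.sum()

def cluster_demand_profile(per_m2_profile, floor_area_m2, n_buildings):
    """Demand of a building cluster from a per-unit-floor-area consumption profile."""
    return np.asarray(per_m2_profile, float) * floor_area_m2 * n_buildings

# Illustrative use: 24 hourly periods, four representative days, 900 kWh/m2/year of PV
pv_profile = annual_generation_profile(np.full(24, 1 / 24), [0.6, 1.0, 1.4, 1.0], 900.0)
print(pv_profile.shape, round(pv_profile.sum(), 1))      # (4, 24) 900.0
```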
The energy consumption profiles of each building cluster were thus obtained by multiplying the appropriate profile by the building floor space and the number of buildings per cluster over all time steps. The peak, minimum and annual profiles of the demand upon the electricity, gas and district heat networks were also defined. These were evaluated by the cluster analysis module as described in Section 3.6. The electricity distribution network was modelled as shown in Fig. 4. The network consists of NEN,n busbars interconnected by NEN,e network elements representing either an 11/0.4 kV transformer or a 3-phase underground cable. The parameters used to define the network are detailed in Table 2. The extension of the network into each building cluster was modelled using a generic layout consisting of a set of 11/0.4 kV substations interconnected by 11 kV cable, and a set of low voltage feeders. The actual building cluster network was determined by the number of 11/0.4 kV substations, NSS, and the number of feeders per substation, NFeed, required at each building cluster. These also determined the length of each section of cable and the load at each busbar within each cluster. The design of the electricity network was performed by the electricity design module described in Section 3.8. The natural gas network was modelled as a graph consisting of NGN,n nodes interconnected by NGN,e network elements. Each network element represented either a polyethylene gas pipe or a medium-pressure to low-pressure reduction installation. The district heat network was modelled as a dual pipe system hydraulically isolated from the generation plant and consumers by heat exchangers. This was modelled as a radial system of NDHN,n nodes interconnected by NDHN,e elements. Each element represented both the supply and return line of the dual pipe system. Each node represented either a joint between network pipes or a heat exchanger interconnecting the supply and return lines. The gas and district heat networks within each cluster were modelled using the generic topology shown by Fig. 5. The cluster analysis module was used to determine the peak demand, minimum demand and demand profiles for the electricity, gas and district heating networks at each building cluster. The maximum heat output of each generation plant within the energy centre was used as an optimisation variable. The corresponding electrical power output, fuel consumption, energy conversion efficiency and capital cost were determined by the energy centre design module. A set of modules was used to design the electricity, gas and district heat networks required within the scheme. A two-stage algorithm was used to design the electricity distribution network. The first stage determined the minimum number of 11/0.4 kV transformers per cluster, the number of 0.4 kV feeders per transformer and the size of each section of 0.4 kV cable so that: (i) the maximum power across each transformer does not exceed the rated power of the largest available transformer size; (ii) the maximum feeder current does not exceed the rated current of the largest available cable size; and (iii) the voltage tolerance for the LV network is not breached at any busbar. The second stage selected the 11 kV cable sizes required to ensure adherence to voltage tolerances and rated cable currents. To consider the effect of reverse power flows, the combination of maximum energy centre electricity generation and minimum cluster electricity demand was considered. A radial steady-state load flow algorithm was used at each stage to evaluate network voltages and cable currents. The gas network design module was used to determine the diameter of each section of pipe and the rated capacity of each pressure reduction installation. A radial gas load flow algorithm was used to determine the minimum diameter required at each pipe by ensuring: (i) a minimum pressure of 0.5 bar within medium pressure networks; (ii) a minimum pressure of 22.5 mbar within low pressure networks; and (iii) a maximum flow velocity of 20 m/s. The diameter of each section of district heating pipe was determined by the district heating design module. A district heat load flow algorithm was used to evaluate the pressure drop and heat losses at each pipe. Pipe diameters were determined by first specifying the smallest available diameter at each pipe and then increasing the diameter of the pipe with the highest pressure drop until a maximum head constraint of 14 bar was met at the point of supply.
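The district heating sizing rule described above lends itself to a simple greedy loop, sketched below. The hydraulic calculation is a crude Darcy-Weisbach-style placeholder standing in for the paper's district heat load flow, and the catalogue of available diameters and the example flows are illustrative only.

```python
import math

AVAILABLE_DIAMETERS_M = [0.025, 0.032, 0.040, 0.050, 0.065, 0.080, 0.100, 0.125, 0.150]
MAX_HEAD_PA = 14e5   # 14 bar head constraint at the point of supply

def pressure_drop_pa(flow_m3s, diameter_m, length_m, friction=0.03, rho=950.0):
    """Toy pressure drop for one element (factor 2 covers the supply and return lines)."""
    velocity = flow_m3s / (math.pi * diameter_m ** 2 / 4)
    return 2 * friction * (length_m / diameter_m) * rho * velocity ** 2 / 2

def size_district_heating(pipes):
    """pipes: list of dicts with 'flow' (m3/s) and 'length' (m). Start every pipe at the
    smallest diameter, then enlarge the pipe with the highest pressure drop until the
    head constraint is met (drops summed here as for a single radial branch)."""
    idx = [0] * len(pipes)
    while True:
        drops = [pressure_drop_pa(p["flow"], AVAILABLE_DIAMETERS_M[i], p["length"])
                 for p, i in zip(pipes, idx)]
        if sum(drops) <= MAX_HEAD_PA:
            return [AVAILABLE_DIAMETERS_M[i] for i in idx]
        worst = max(range(len(pipes)), key=lambda k: drops[k])
        if idx[worst] == len(AVAILABLE_DIAMETERS_M) - 1:
            raise RuntimeError("head constraint cannot be met with the available sizes")
        idx[worst] += 1

print(size_district_heating([{"flow": 0.004, "length": 120}, {"flow": 0.010, "length": 300}]))
```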
The financial analysis module determined the capital cost of the energy infrastructure, the revenues obtained from on-site generation and the costs associated with on-site consumption. These were used to assess the financial performance of each actor within the scheme. The detail of the ownership and organisational structure will vary from scheme to scheme, and a flexible modelling approach was applied. A detailed financial model is described within Section 4. The structure for the emissions target calculation is shown by Fig. 6. The emissions due to electricity supplied by the grid were determined using the marginal carbon emissions factor approach, as recommended by the Clean Development Mechanism Executive Board of the UN Framework Convention on Climate Change Committee. This method uses a reference case to define a benchmark annual consumption of grid supplied electricity. The annual emissions of this case are determined using the average carbon emissions factor of the generators that supply the grid, CEFAve. Any change of electricity consumption relative to the benchmark results in a corresponding change of emissions that is determined by the marginal dispatch carbon emissions factor, CEFMarg. This corresponds to the generation plants that are expected to change their output in response to a change in demand. Several projections for CEFMarg are found in the literature, as illustrated by Fig. 7. The organisational and ownership structure may vary for different community energy schemes. The structure shown in Fig. 8 was adopted for the formulation of the optimisation. The capital expenditure for the energy infrastructure was incurred by the developer. It was assumed that the gas and electricity networks were built and operated by the gas and electricity distribution network operators respectively, with the total construction cost passed to the developer. No consideration was given to network operator revenue streams such as use-of-system charges. A publicly owned energy services company (ESCo) provided the operation and maintenance of the community heating scheme, the energy centre and the energy supply technologies installed within buildings owned by the local authority. The installed capacity of solar technology and the rated heat generation capacity of each plant within the energy centre were considered as continuous variables. The network design constraints are incorporated within the design modules described in Section 4, and were therefore not required to be explicitly defined for the optimisation solver. The design tool uses a Social Cognitive Optimisation solver developed by Xie. This algorithm applies a type of evolutionary optimisation strategy whereby autonomous agents use and update a library of best points from within the solution space. By defining a set of optimisation variables from the infrastructure model design parameters, the solver was used to determine the set of values that returns the minimum value of the objective function. The Social Cognitive Optimisation solver was chosen due to its availability as a ready-to-use, open-source Java program. Other evolutionary optimisation strategies such as differential evolution, particle swarm optimisation and ant colony optimisation are equally applicable to the design tool. A new build community redevelopment scheme in South Wales was considered. "The works" is a joint venture between the Welsh Assembly Government and Blaenau Gwent Council, which shall be referred to herein as "the developer". The scheme is a new build development consisting of a mix of business units, schools, leisure facilities and a local general hospital. 720 domestic properties are also scheduled for construction, 20% of which are classed as "affordable homes", which are local-council-owned social housing for low income families. A detailed breakdown of the building clusters within the scheme is provided by Table 3, with the site layout illustrated by Fig. 9. The works is considered a flagship project for sustainable development in Wales. The energy strategy for the scheme sets a 60% target reduction of regulated emissions relative to a benchmark defined as all buildings built to 2006 Part L standards and supplied using natural gas boilers and grid supplied electricity. The strategy requires a minimum build standard for domestic dwellings equivalent to the Code for Sustainable Homes (CSH) level 5, for which the annual space heating demand is reduced to 15 kWhth/m2/year. For the purpose of this paper, the 2006 Part L standard was assumed to apply throughout the analysis for all non-domestic premises. The emissions and financial performance were evaluated assuming a 20 year analysis period. For the purpose of this study, it was assumed that all buildings and infrastructure were fully constructed and commissioned at the start of the first year, with 100% occupancy at all buildings. The organisational and ownership structure assumed for the scheme has been discussed in Section 4. The design cases examined for the Ebbw Vale case study are described in Table 4. More detail can be found in Section 6. The design tool was used to evaluate a reference case for the scheme with natural gas boilers at all premises and with no installed capacity of photovoltaic or solar thermal panels. The resulting infrastructure is shown by Fig. 10. This provided the benchmarks for the development build cost and the total annual energy bill for local authority owned buildings. Fig. 11 shows the annual total emissions, regulated emissions and emissions target of the reference scheme, each of which drops over time due to the decarbonisation of electricity supplied by the grid. The cumulative emissions target of the reference case over a 20 year period starting 2012 was 81,256 tCO2e. The design tool was applied to determine the optimal-cost, carbon-constrained infrastructure design assuming a 20 year analysis period starting from 2012 (case A). The DECC June 2010 projection for CEFMarg was used to evaluate the emissions corresponding to any change of electricity supply from the grid. The result is shown by Fig. 12 and consists of a district heat network supplying all public buildings and business units. This was supplied by a 2.4 MWel natural gas CHP plant with a 170 m3 heat accumulator. The heat demand at all residential dwellings was met using a mix of ground source and air source heat pumps. A total of 10,016 m2 of PV and 5490 m2 of solar heating was required for the scheme. The structure of the electricity network was unchanged from the reference case and is therefore omitted for clarity. Table 5 provides a summary of the key results. The total cost to the developer was £23.235 m, so that the cost of achieving the emissions reduction target was £18.025 m. This consisted of the cost of technologies installed at each building, including £7.2 m for PV and £6.3 m spent to meet the CSH level 5 standard for domestic dwellings, as discussed in Section 5. The discounted cost of the infrastructure owned by the ESCo was £2.15 m, assuming a UK social discount rate of 3.5%. The technology mix and the corresponding cost are expected to be sensitive to the discount rate. A discount rate of 3.5% was chosen for the Ebbw Vale development as it is an area of social deprivation. A breakdown of the reduction of the annual emissions for case A relative to the reference case is shown by Fig. 13. The contribution of each technology to the total emissions reduction is best understood by considering the change of energy use relative to the reference case. An improvement of the building fabric and the installation of solar thermal heaters both reduce the natural gas consumed for the provision of space heating and hot water. The corresponding emissions reduction is therefore constant over time if the fuel emissions intensity is assumed to be constant. The installation of PV results in a change of the amount of electricity annually imported from the grid. The resultant change of emissions is in this case determined using CEFMarg, which, when applying the DECC projection of June 2010, has a constant value until 2025, from which point it decreases linearly. This characteristic is therefore also displayed in the plot of emissions reduction over time. The emissions reduction achieved by using district heating with natural gas CHP, and by using heat pumps, is the result of a combination of these two mechanisms. The use of natural gas CHP to supply heat instead of gas boilers results in an increase of the annual natural gas consumption, but also a decrease of the amount of electricity supplied to the scheme from the grid. The net result is a constant reduction of annual emissions until 2025, from which point it shows the linear decrease characteristic resulting from the CEFMarg projection. Beyond 2031, the emissions from the additional natural gas consumption exceed the savings obtained from reducing grid supplied electricity, and thus natural gas CHP becomes a net contributor to total annual emissions relative to the reference case. The use of heat pumps, on the other hand, results in a reduction of annual natural gas consumption compared to natural gas boilers, but increases the amount of electricity supplied from the grid. Thus, the reduction of annual emissions achieved by using heat pumps increases as CEFMarg decreases from 2025. An additional consequence of the opposing mechanisms of emissions reduction for different supply technologies is the sensitivity of the optimal solution to the starting year of the analysis period. This was examined using the design tool by changing the start of the 20 year analysis period to 2020 (case B). The results are summarised by Table 6. The optimal solution now consists of gas boilers at cluster 4, residential clusters 6 and 13, and the business park cluster 14. All remaining clusters were supplied using heat pumps. A total of 372 kWel of PV and 4.03 MWth of solar hot water panels was required for the scheme. None of the clusters was supplied using a district heat network. The total cost of the optimal energy infrastructure was £20.151 m, which comprised £1.797 m for PV, £1.792 m for solar hot water and £7.336 m for heat pumps. The total cost of introducing the carbon emissions constraint was therefore £14.941 m.
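The time dependence described above follows directly from the shape of the marginal emissions factor projection. The sketch below makes this concrete with an illustrative CEFMarg that is constant until 2025 and then declines linearly; the numerical values, including the gas emissions factor and the energy deltas, are placeholders chosen to mimic the qualitative behaviour, not the published DECC figures.

```python
def cef_marg(year, flat_value=0.6, flat_until=2025, zero_year=2050):
    """Illustrative marginal emissions factor (tCO2e/MWh): constant, then linear decline."""
    if year <= flat_until:
        return flat_value
    return max(0.0, flat_value * (zero_year - year) / (zero_year - flat_until))

GAS_CEF = 0.184  # tCO2e per MWh of natural gas (typical figure, for illustration only)

def annual_emissions_change(year, delta_grid_mwh, delta_gas_mwh):
    """Emissions change relative to the reference case (positive = extra emissions)."""
    return delta_grid_mwh * cef_marg(year) + delta_gas_mwh * GAS_CEF

# Gas CHP burns more gas but displaces grid imports: its saving shrinks as CEFMarg falls.
# Heat pumps burn less gas but import more electricity: their saving grows over time.
for year in (2015, 2025, 2035):
    chp = annual_emissions_change(year, delta_grid_mwh=-1500, delta_gas_mwh=+3000)
    hp = annual_emissions_change(year, delta_grid_mwh=+1000, delta_gas_mwh=-4000)
    print(year, f"NG-CHP: {chp:+.0f} tCO2e/yr", f"heat pumps: {hp:+.0f} tCO2e/yr")
```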
Fig. 14 provides a comparison of the variation of annual on-site emissions over time for the optimal solutions of case A and case B. It is observed that this emissions performance results from the emissions constraint being applied to the accumulated emissions of the 20 year period as a whole rather than to each year individually. The case A optimal solution initially achieves a significant reduction of annual emissions until the assumed linear reduction of CEFMarg begins in 2025. From this point, the total on-site emissions reduction due to the avoided import of grid electricity decreases by the mechanism previously described. This is observed as the sudden change from a constant reduction of emissions in 2025. Case B, which primarily consists of heat pumps and PV, initially delivers a lower on-site emissions reduction than case A until 2028, from which point case B delivers the highest reduction. Both heat pumps and PV are affected by a decrease of CEFMarg, but with the annual emissions reduction increasing for heat pumps and decreasing for PV. These opposing mechanisms cancel out, so that the sudden change observed in case A is not as prevalent for case B. The optimal solutions so far described were obtained on the basis that the DECC June 2010 projection of CEFMarg applied to the design of the scheme. However, as described within Section 3.11, several alternative projections have been proposed. The impact of the choice of CEFMarg upon the optimal infrastructure design was examined using design case C. The results are summarised by Table 7. The scheme now consists of a much smaller district heating scheme supplying the highest load density buildings, including the leisure centre and hospital, with natural gas boilers at all remaining premises. The minimum additional cost of meeting the emissions target is £7.658 m, which is a 57.5% decrease relative to the use of the DECC June 2010 projection. A comparison of the optimal results is provided by Fig. 15. This highlights the significant change to the cost and composition of the energy supply infrastructure when applying a carbon constrained approach to design. A significant proportion of the costs results from the installation of PV, solar thermal panels or heat pumps. A significant contribution to overall costs is also incurred from meeting the domestic CSH level 5 building standard. Each optimal design specifies a mix of PV, solar thermal and a primary heating technology at each premise. In case A, the annual revenues raised by the ESCo are insufficient to recover the total upfront capital costs. The shortfall of the revenue relative to the capital cost is accounted for as an unrecoverable cost in the model, and the imposed financial constraint is thereby satisfied. In cases B and C, however, the capital costs are recovered within the 20 year project period, resulting in a net overall profit. Since the scheme is publicly funded and owned, the total profit is accounted for as a reduction of the required upfront capital cost. The formulation of the model applies several assumptions that can be adjusted where the required detail differs from the presented case study. The building fabric was modelled using the end use consumption values expected at the standards specified for each case. Variables representing the building fabric can be introduced in future work in order to allow an examination of the interaction between building design and energy infrastructure. It was assumed in the model that, for each cluster, all buildings were supplied using the same type of heating technology. A cluster can be defined based on smaller segments, e.g. each building, for the model to consider using various types of heating technology for different buildings. A constraint was applied by assuming that the ESCo scheme was required to break even on an annual basis as a minimum condition of financial viability. This can be changed to require a specified annual rate of return as a minimum. The carbon constrained design of energy infrastructure for new building schemes requires consideration of the interactions between the technical performance, carbon emissions and financial performance of the available supply technologies. An integrated model was developed to combine the various aspects of system performance within a single optimised design tool. This was applied to investigate the carbon constrained design of a new build energy supply infrastructure in South Wales, UK. The requirement to deliver a reduction of greenhouse gas emissions using on-site technologies has a significant impact upon the infrastructure design and cost. For the investigated case study, a 60% reduction of regulated emissions was achieved by using a mix of PV, solar thermal, heat pumps and a district heating network supplied using a natural gas combined heat and power unit. Each of these technologies is capital intensive and results in an increased investment of £18.023 m above the reference case with no on-site carbon constraint. This is a significant increase of costs for a relatively small community scheme consisting of 750 dwellings and public amenities, and may present a significant obstacle to accessing investment capital. The optimal carbon constrained design of the scheme was shown to change significantly with the year of build completion. This results from the interdependency between the reduction of on-site emissions achieved using technologies such as PV, CHP and heat pumps and the emissions intensity of electricity supplied from the grid. Technologies such as PV and CHP-DH deliver emissions savings via the displacement of grid supplied electricity and are deployed extensively within the scheme built in 2012, when the marginal emissions factor is relatively high. Heat pumps, on the other hand, deliver emissions savings by displacing the consumption of natural gas for heating with grid supplied electricity and are the predominant technology deployed within the scheme when built in 2020. This would suggest that NG-CHP is unsuitable as a long term option for new community schemes, with heat pumps providing a more cost-effective option to developers in the long term. As a major component of the optimal solution in the near term, however, NG-CHP may provide a means of developing heat networks to allow the future use of emerging technologies such as large scale heat pumps, biomass gasification, anaerobic digestion and energy from waste. The optimal design and cost were shown to be sensitive to the projection used to estimate the marginal carbon emissions factor of grid supplied electricity. A cost decrease of £11.367 m was obtained by a change from the DECC June 2010 projection of marginal grid supplied electricity emissions to that proposed by the Zero Carbon Hub in 2011. This resulted in a significantly reduced capacity of PV installed within the scheme and a reduced extent of the district heat network required on site. Several potential issues may arise from the current lack of consensus upon the choice of projection, including an overestimate of the real emissions savings obtained for a scheme, a significant capital overspend if the use of high-capital, low-carbon technologies such as PV is over-prescribed, and the possibility of developers cherry-picking a projection that favours the use of a particular technology. This highlights the necessity of establishing a consistent approach for estimating a projection of the marginal emissions factor for grid supplied electricity. | The carbon constrained design of energy supply infrastructure for new build schemes was investigated. This was considered as an optimization problem with the objective of finding the mix of on-site energy supply technologies that meets greenhouse gas emissions targets at a minimum build cost to the developer. An integrated design tool was developed by combining a social cognitive optimisation solver, an infrastructure model and a set of analysis modules to provide the technical design, the evaluation of greenhouse gas emissions and the financial appraisal for the scheme. The integrated design tool was applied to a new build scheme in the UK with a 60% target reduction of regulated emissions. It was shown that the optimal design and corresponding cost were sensitive to the year of build completion and to the assumptions applied when determining the emissions intensity of the marginal central generators. © 2013 The Authors. |
577 | Chemical utilization of hydrogen from fluctuating energy sources - Catalytic transfer hydrogenation from charged Liquid Organic Hydrogen Carrier systems | As the production of electricity from wind and sun is highly intermittent in character, storage technologies are required to adapt production to demand. For energy systems with high shares of fluctuating renewable electricity, it is necessary to develop high-value applications for energy equivalents that are produced at times of very little demand. Apart from electric and mechanical storage options, the conversion of excess electricity into hydrogen by water electrolysis is considered most attractive. Besides energetic use of the produced hydrogen, the latter can be used as a feedstock in catalytic hydrogenation reactions. However, this economically very interesting way of hydrogen utilization often requires transport of hydrogen from the place of renewable electricity production to a chemical production site. In this context, chemical hydrogen storage and transport systems are highly interesting. These should allow storing large amounts of hydrogen and releasing pure hydrogen on demand. For both requirements the application of Liquid Organic Hydrogen Carrier (LOHC) systems is very attractive. LOHC systems consist of a pair of high-boiling, liquid organic molecules – a hydrogen-lean compound and a hydrogen-rich compound – that can be reversibly transformed into each other by catalytic hydrogenation and dehydrogenation reactions. Historically, the pair toluene/cyclohexane has been proposed as an LOHC system; however, the low boiling points of this system and the toxicological profile of toluene are not ideal. In contrast, high-boiling aromatic and heteroaromatic compounds allow dehydrogenation in the liquid phase with easy condensation of evaporated parts of the hydrogen carrier. A system that gained greater attention in the scientific community is N-ethyl-carbazole/perhydro-N-ethyl-carbazole (NEC), introduced in 2004 by the company Air Products and Chemicals. However, despite its unquestionable attractiveness for low temperature hydrogen release, this system has a number of important drawbacks, namely the limited technical availability of NEC from coal tar distillation, the solid nature of the fully dehydrogenated NEC at room temperature, and the limited thermal stability of NEC. Recently, the use of well-established, industrially widely used heat transfer oils as LOHC systems has been proposed. In particular, mixtures of isomeric benzyltoluenes and dibenzyltoluenes that are industrially applied on a large scale show excellent performance in reversible hydrogenation/dehydrogenation cycles. These systems are characterized by very good technical availability, high hydrogen capacities without solidification, very high thermal stability, and full toxicological and ecotoxicological assessment of their hydrogen-lean forms. The hydrogenation of the commercial isomeric mixture of dibenzyltoluenes has been found to proceed readily using commercial Ru on alumina catalysts. Here, we describe a novel application of the liquid, hydrogen-charged carrier perhydro-dibenzyltoluene, namely its direct application as sole source of hydrogen and solvent in industrially relevant hydrogenation reactions. The here proposed technology does not target the laboratories of synthetic organic chemists, where hydrogen is best provided from cylinders of compressed hydrogen. Our aim is to replace hydrogen from fossil sources in industrial, larger-scale continuous hydrogenation processes by "green" hydrogen from water electrolysis based on renewable energy equivalents. Thus the storable, hydrogen-rich LOHC compound is used as a hydrogen buffer system to link intermittent hydrogen production from wind and sun energy with steady-state industrial hydrogenation. We anticipate that the here proposed technology is most interesting for medium-sized industrial sites that do not operate their own methane reformer due to unfavourable economy of scale but still need significant amounts of hydrogen, e.g. for specialty or fine chemicals production, activation of catalysts or the treatment of materials. The here proposed technology offers a very attractive short-cut compared to the sequence of catalytic LOHC dehydrogenation, hydrogen compression and high pressure hydrogenation, as shown in Scheme 2. As a very favourable feature of this approach, direct compensation of the heats of reaction of LOHC dehydrogenation and target molecule hydrogenation takes place. Thus, the here proposed concept avoids complex heat transfer installations in both the classical LOHC dehydrogenation and feedstock hydrogenation reactors. In future applications of the technology we anticipate the LOHC medium to be used as solvent for the desired transfer hydrogenation reaction, thus shifting hydrogenation equilibria due to a large excess of the hydrogen carrier. We are fully aware that the selection of potential substrates to be hydrogenated by the here proposed transfer hydrogenation is not fully unrestricted. As separation of the hydrogenated substrate and the un-charged LOHC material is an important step of the overall process, we expect the technology to be particularly useful for hydrogenation reactions in which substrates and products show a significant boiling point difference with respect to the applied perhydro-dibenzyltoluene/dibenzyltoluene transfer hydrogenation system. In overview, the potential advantages of transfer hydrogenation based on LOHC systems include the following aspects: (i) a potential direct link of unsteady green hydrogen production to steady-state continuous industrial hydrogenation processes due to the hydrogen storage function of the diesel-like hydrogen carrier; (ii) the potential for almost thermoneutral hydrogenation processes, as the exothermicity of the substrate hydrogenation and the endothermicity of the LOHC dehydrogenation are in balance; (iii) the potential for low pressure hydrogenation processes in which only very little "free" hydrogen is present; and (iv) the potential for new hydrogenation mechanisms and – linked to that – novel hydrogenation selectivities for well-established reactions. Catalytic transfer hydrogenation (CTH) per se is not a new concept and has been intensively studied in past years. Heterogeneous palladium catalysts have long been considered the most active catalysts and have been widely used for CTH reactions. Other metals, e.g. ruthenium or Raney nickel, have also proved applicable for direct hydrogen transfer. Various organic substances, in particular secondary alcohols and cyclohexene, have been applied as sources of hydrogen for functional group reduction or the hydrogenation of unsaturated compounds. However, to the best knowledge of the authors of this paper, transfer hydrogenation from a hydrogen storage material, i.e. a material designed for storing hydrogen under a technical scenario, has not been reported yet. In this contribution, we focus on the transfer hydrogenation from perhydro-dibenzyltoluene to toluene. Toluene has been selected as a model compound for the industrially highly relevant hydrogenation of monoaromatic compounds, e.g. in the context of the technical production of adipic acid or caprolactam. Of course, many other substrates would also be applicable and interesting to study using this technology, such as the hydrogenation of alkynes, alkenes, functionalized aromatic compounds, ketones, acids etc. However, we decided to investigate toluene hydrogenation first, as it offers – in addition to a pure feasibility study – conclusions on the driving force of dibenzyltoluene dehydrogenation in comparison to toluene hydrogenation. Such a study is of general importance to strengthen the thermodynamic foundations of the LOHC system dibenzyltoluene/perhydro-dibenzyltoluene. The first set of experiments aimed at determining the reaction conditions under which effective transfer hydrogenation of toluene with perhydro-dibenzyltoluene takes place. For this purpose we first carried out a temperature variation. These experiments were carried out using a defined amount of the applied commercial Pd on carbon catalyst and a ratio of perhydro-dibenzyltoluene to the toluene substrate corresponding to a 3:1 excess in transferable hydrogen. The results are shown in Fig. 1 and Table 1. Remarkably, already at a temperature as low as 210 °C clear transfer hydrogenation activity is observed over 4 h reaction time. At such a low temperature the thermal release of hydrogen from perhydro-dibenzyltoluene with the applied Pd catalyst would be very low. As expected, the degree of toluene conversion increases strongly with temperature up to 270 °C. At this temperature, toluene hydrogenation to methylcyclohexane proceeds within 2 h to a toluene conversion of above 95%. There is no further increase in toluene hydrogenation, indicating that the obtained value of 96.5% toluene conversion represents the equilibrium state of the transfer hydrogenation under the applied conditions. Interestingly, at higher temperatures of 290 °C and 300 °C toluene conversion does not accelerate in comparison to 270 °C. We interpret this finding as a very strong indication of mass transfer limitation of the transfer hydrogenation process above 270 °C. Fig. 2 shows the results of a variation of the LOHC to substrate ratio at 270 °C using the same Pd catalyst as in Fig. 1.
Experiments with a molar ratio of LOHC to toluene between 3:1 and 1:1 were carried out. It is obvious and not surprising that a greater excess of hydrogen offered in the form of the LOHC hydrogen transfer liquid accelerates the reaction significantly. It was surprising, however, to see that even under conditions with a relatively small excess of hydrogen the reaction proceeds smoothly and reaches a toluene conversion of 60% within 4 h reaction time. All results are summarized in Table 2. From a practical point of view, it would be desirable to perform the transfer hydrogenation reaction at a minimal excess of perhydro-dibenzyltoluene that still allows a certain, desired substrate conversion and product yield. Due to the high structural similarity of toluene and dibenzyltoluene, the driving force for the transfer hydrogenation reaction is presumably rather small. One would therefore expect an equilibrium constant close to unity for stoichiometric hydrogen conditions and consequently a maximum conversion due to the reaction equilibrium of about 50%. To explore this experimentally, we applied in our next set of experiments a molar ratio of perhydro-dibenzyltoluene to toluene of 1:3. With this stoichiometry the amount of available hydrogen equals the amount of consumable hydrogen. The reactions were allowed to proceed for much longer, until equilibrium conditions were reached. Interestingly, as shown in Fig. 3, the well reproducible results of these experiments yielded toluene conversions of about 62% at 270 °C. In all experiments, methylcyclohexane was found as the sole product and no partially hydrogenated cyclic compounds were identified. Note in this context that reaction equilibria leading to maximum conversions in the order of about 50% are very sensitive to small differences in the thermodynamic driving force. A conversion of 62% for a stoichiometric mixture would be the result of an equilibrium constant, expressed in concentrations, of 7.1. Calculating backwards from the equilibrium constant to the Gibbs free energy of the transfer hydrogenation reaction results in a value of only −1.0 kJ/mol-H2 at the applied reaction conditions. Taking effects of non-ideal behaviour into account by using the UNIFAC model, a similar value of −1.2 kJ/mol-H2 is obtained. Results from combustion calorimetry indicate a difference of the enthalpies of hydrogenation for dibenzyltoluene and toluene in the liquid phase of 2.0 ± 1.1 kJ/mol-H2. Assuming a small entropic contribution to the Gibbs free energy of reaction, which seems reasonable for a transfer hydrogenation reaction, these values are consistent with the Gibbs free energy derived from the reaction equilibrium. At first glance the uncertainty of these combustion calorimetry measurements of ±1.1 kJ/mol-H2 seems very high. However, it should not be compared to the enthalpy of transfer hydrogenation, but rather to that of hydrogenation; from this point of view, the differences between the hydrogenation reactions of toluene and dibenzyltoluene are indeed rather small. Nevertheless, all results point in the direction that hydrogen release from perhydro-dibenzyltoluene is thermodynamically slightly favoured over the dehydrogenation of methylcyclohexane. The slight exothermicity of the transfer hydrogenation reaction is also confirmed by the calculated temperature dependence of the equilibrium conversion. In the experiments at temperatures of 270 °C and higher, equilibrium is nearly reached. For these experiments the conversion after 4 h decreases slightly with increasing temperature, as predicted by Le Chatelier's principle for exothermic reactions.
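The back-calculation from the measured equilibrium conversion to the Gibbs free energy quoted above can be reproduced in a few lines, assuming complete dehydrogenation of each perhydro-dibenzyltoluene molecule (nine H2 equivalents, hydrogenating three toluene molecules) and ideal mixing:

```python
import math

R = 8.314            # J/(mol K)
T = 273.15 + 270     # reaction temperature in K
X = 0.62             # equilibrium toluene conversion for the stoichiometric 1:3 mixture

# H18-DBT + 3 toluene <-> H0-DBT + 3 methylcyclohexane (total moles stay constant),
# so for 1 mol carrier and 3 mol toluene the equilibrium constant reduces to:
K = (X / (1 - X)) ** 4
print(f"K = {K:.1f}")                          # ~7.1, as quoted in the text

dG_reaction = -R * T * math.log(K)             # J per mole of reaction (9 mol H2 transferred)
dG_per_H2 = dG_reaction / 9 / 1000             # kJ per mole of H2
print(f"dG = {dG_per_H2:.2f} kJ/mol-H2")       # ~ -1.0 kJ/mol-H2
```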
as predicted by Le Chatelier's principle for exothermic reactions.To reach a toluene conversion of 50% at 270 °C about 0.19 mol perhydro-dibenzyltoluene per mol toluene would be required thermodynamically.To allow for higher toluene conversions either lower temperatures or a higher excess of perhydro-dibenzyltoluene would be required.Due to the weak exothermicity the effect of temperature on equilibrium is rather small.Thus, reduction of temperature for thermodynamic reasons does not justify the negative effect on reaction kinetics.Hence, the ratio of hydrogen donating to accepting compound has to be higher than stoichiometric for full conversion.For the structurally very similar reactants toluene and dibenzyltoluene the number of donating sites has to exceed the number of accepting sites significantly to reach high conversions.A ratio of n to n of 3 is required to reach toluene conversions above 95%.However, if hydrogen from perhydro-dibenzyltoluene would be transferred to another substance class, e.g. an aliphatic olefin, the excess required could be drastically reduced.For this transfer hydrogenation the thermodynamic driving force would be so high that even with a stoichiometric mixture in hydrogen the reaction equilibrium would be close to total conversion.For example, transfer hydrogenation from perhydro-dibenzyltoluene to 1-octene at 270 °C allows from a thermodynamic point of view for 1-octene conversion above 98% for a mixture of n to n of 9.This numbers confirm impressively the potential efficiency of the LOHC-material perhydro-dibenzyltoluene in relevant transfer hydrogenation reactions.Finally, we were interested to find out whether the observed transfer hydrogenation is only possible with the so far applied Pd on carbon catalyst or whether other catalyst systems promote the same kind of reaction.The following set of experiments represents a catalyst screening under strictly comparable reaction conditions of a toluene to metal ratio of 400 at 270 °C in a stoichiometry of n to n of 3:2, that is a ratio of 4.5:1 in hydrogen.The here reported results show very interesting technical potential for the use of hydrogen-charged LOHC systems as source of hydrogen in the chemical industry via a direct transfer hydrogen reaction.This utilization of hydrogen-charged LOHC systems would create a direct link between hydrogen production from unsteady renewable sources via electrolysis and its high-added value use in the chemical industry in steady-state hydrogenation processes.It is obvious that direct LOHC transfer hydrogenation has multiple advantages compared to the two-step catalytic release of hydrogen from the LOHC followed by hydrogenation of the substrate of interest with the released hydrogen.Most striking is the fact that the substrate hydrogenation delivers at least a very substantial part of the heat of dehydrogenation otherwise required for hydrogen release from the LOHC system.Moreover, the heat of substrate hydrogenation is compensated to a large extent by the endothermic dehydrogenation reaction making heat management in the transfer hydrogenation reactor much easier.Note that also hydrogen compression that would otherwise be necessary in the two step process is not necessary in direct LOHC transfer hydrogenation.Based on the fact that the transfer hydrogenation from perhydro-dibenzyltoluene is in most cases a thermo-neutral or even a exothermic reaction, the relatively high temperature level of the transfer hydrogenation reaction is not a drawback but an advantage as it allows 
using the reaction heat for steam production or heating. The transfer hydrogenation from perhydro-dibenzyltoluene to toluene investigated here represents only a model reaction for industrially relevant hydrogenations. Nevertheless, very interesting conclusions could be drawn from the work presented here regarding the parameter space of the reaction, suitable catalysts and thermodynamic aspects. It is anticipated that economically much more interesting examples can be found in the fine and specialty chemicals industry, where hydrogen supply is typically not available on-site from a cheap methane reformer, so that the additional advantage of safe and efficient hydrogen logistics via the diesel-like LOHC system applies. It is furthermore anticipated that some of these industries can “market” the benefit of using green hydrogen from renewables much better than big refineries or petrochemical sites. Materials: Perhydro-dibenzyltoluene was purchased in suitable quality from Hydrogenious Technologies GmbH, Erlangen. Alternatively, it can be obtained by hydrogenation of dibenzyltoluene in a stirred batch autoclave at 150 °C and 30 bar hydrogen pressure over 12 h using a Ru on AlOx catalyst. The commercial catalysts applied in this study are listed in Table 4. All experiments were performed using a 500 mL stainless-steel Parr batch autoclave equipped with a four-blade gas-entrainment stirrer. To assure an inert atmosphere in the pressure vessel, the reactor was purged with argon three times. The reactor was heated to the desired reaction temperature with an external electrical heating jacket. To determine the progress of the reaction, liquid samples were taken and analyzed by gas chromatography (see the GC-conversion sketch below). | Liquid Organic Hydrogen Carrier (LOHC) systems offer a very attractive way for storing and distributing hydrogen from electrolysis using excess energies from solar or wind power plants. In this contribution, an alternative, high-value utilization of such hydrogen is proposed, namely its use in steady-state chemical hydrogenation processes. We here demonstrate that the hydrogen-rich form of the LOHC system dibenzyltoluene/perhydro-dibenzyltoluene can be directly applied as the sole source of hydrogen in the hydrogenation of toluene, a model reaction for large-scale technical hydrogenations. Equilibrium experiments using perhydro-dibenzyltoluene and toluene in a ratio of 1:3 (thus in a stoichiometric ratio with respect to H2) yield conversions above 60%, corresponding to an equilibrium constant significantly higher than 1 under the applied conditions (270 °C). |
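The ratios quoted above (4.5:1 in hydrogen for a 3:2 carrier-to-toluene mixture, a stoichiometric hydrogen supply when one carrier molecule serves nine 1-octene molecules) follow directly from the hydrogen capacities of the partners: perhydro-dibenzyltoluene releases 9 mol H2 per mol on full dehydrogenation, while toluene consumes 3 mol H2 and 1-octene 1 mol H2. A minimal bookkeeping sketch in plain Python, using only values stated in the text:

```python
# Hydrogen bookkeeping for transfer hydrogenation from perhydro-dibenzyltoluene (H18-DBT).
H2_RELEASED = {"perhydro-dibenzyltoluene": 9}   # mol H2 donated on full dehydrogenation
H2_CONSUMED = {"toluene": 3, "1-octene": 1}     # mol H2 accepted on full hydrogenation

def hydrogen_ratio(n_donor, n_acceptor, acceptor):
    """Ratio of hydrogen offered by the carrier to hydrogen needed by the acceptor."""
    offered = n_donor * H2_RELEASED["perhydro-dibenzyltoluene"]
    needed = n_acceptor * H2_CONSUMED[acceptor]
    return offered / needed

# Catalyst-screening mixture: n(H18-DBT):n(toluene) = 3:2  ->  4.5:1 in hydrogen
print(hydrogen_ratio(3, 2, "toluene"))     # 4.5

# Stoichiometric H2 supply for toluene corresponds to n(H18-DBT):n(toluene) = 1:3
print(hydrogen_ratio(1, 3, "toluene"))     # 1.0

# One carrier molecule serves nine 1-octene molecules at exact H2 stoichiometry
print(hydrogen_ratio(1, 9, "1-octene"))    # 1.0
```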
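For the GC analysis mentioned in the materials section, conversion is typically derived from the peak areas of educt and product. The sketch below is only a hypothetical illustration of that step: the response factors are placeholders that would have to be calibrated, and they are not values from the study; the only chemical fact assumed is that methylcyclohexane is the hydrogenation product of toluene.

```python
# Hypothetical sketch: toluene conversion from GC peak areas of a liquid sample.
# Response factors (area per mole) are assumed placeholders, not calibrated values.
response_factor = {"toluene": 1.00, "methylcyclohexane": 0.95}

def toluene_conversion(area_toluene, area_methylcyclohexane):
    n_tol = area_toluene / response_factor["toluene"]
    n_mch = area_methylcyclohexane / response_factor["methylcyclohexane"]
    # Methylcyclohexane is the hydrogenation product of toluene,
    # so conversion = product / (product + remaining educt).
    return n_mch / (n_tol + n_mch)

print(f"X(toluene) = {toluene_conversion(42.0, 58.0):.1%}")   # example numbers only
```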
578 | Systems-based approaches to unravel multi-species microbial community functioning | Microorganisms make up the main portion of biomass on Earth and are ubiquitous within the environment.In situ, they coexist in mixed microbial communities whose concerted actions greatly contribute to sustaining life on our planet.Microorganisms are indeed the main drivers of biogeochemical cycles and as such ensure the recycling of essential organic elements like carbon and nitrogen.In addition, microbial communities interact with plant and animal hosts, and in the context of human biology, our microbiome is now considered to be our last organ .Understanding how microbes interact in situ and how microbial communities respond to environmental changes has been identified as one of the major challenges for the coming years with relevance to evolution, human health, environmental health, synthetic biology, renewable energy and biotechnology .To tackle the exciting task of deciphering microbial interactions, systems biology approaches constitute an ideal experimental strategy.By considering microbial communities as metaorganisms and investigating all the levels of biological information together with the metadata characteristic of the environmental conditions in situ, systems biology can study the interactions between the different parts of complex ecosystems responsible for their emergent properties.The success of systems biology is strongly dependent on the true integration of experimental observations and the development of mathematical models, which require iterative validation and refinement.Systems biology offers a holistic approach for the characterisation of microbial communities.In such experimental designs metagenomics, metatranscriptomics, metaproteomics and metabolomics are typically employed.Each level of biological information provides a different level of characterisation of the metaorganisms.The metagenome informs on the potential of microbial communities by providing insights into the genes that could possibly be expressed by the metaorganism.The metatranscriptome, including messenger and non-coding RNAs, provides some information about the regulatory networks and gene expression at the time of sampling.Therefore, together with the metaproteome, the metatranscriptome informs on the functionality of microbial communities.Furthermore, the metaproteome also gives access to regulatory networks and, together with the metabolome provides some strong insights into microbial activities.Importantly, the co-extraction of DNA, RNA, proteins and metabolites enables the generation of rigorous interrelated datasets.Each of the omics techniques has inherent bottlenecks, such as metagenome annotation, metatranscriptome assembly, or protein and metabolite identification.These bottlenecks however can be largely overcome by generating integrated datasets whereby the detection of RNA transcripts and amino acids can guide the process of metagenome annotation .This, in turn, can radically facilitate metatranscriptome assembly, while increasing significantly protein identification rates .Meanwhile, however, metabolomics remains a complex technology.Untargeted experimental strategies are typically limited by the low number of metabolites identified.Indeed, while DNA and RNA are composed of nucleotides and proteins composed of amino acids, metabolites do not share any common characteristics making their systematic identification challenging.In addition, metabolite databases, containing mass spectra or NMR 
spectra, are still relatively poorly populated compared to gene or protein databases.Nevertheless, metabolite databases are constantly growing and targeted metabolite identification can be guided by protein detection.Even though metabolite detection ultimately correlates with microbial activity, metabolite production in mixed populations cannot be easily linked to any specific microbial identity.Besides, metabolomics offers a limited level of information regarding the connectivity of metabolic pathways .However the combination of isotope labelling, such as 13C and 15N, with omics can provide insights into the carbon and nitrogen fluxes in microbial communities and inform on microbial interrelationships.Overall omics datasets encompassing metagenomics, metatranscriptomics, metaproteomics, metabolomics and SIP-omics have the potential to provide unprecedented access to the functioning of ecosystems.For the purpose of this review, the advancement of each omics technology will be discussed.Metagenomics is employed to determine the sequences from DNA directly extracted from environmental samples.This high-throughput technology, which overcomes the well-known culture-based-method biases, has transformed our understanding of microbial ecosystems in terms of diversity, population dynamics and potential.Commonly, metagenomic studies initially conduct 16S and 18S rRNA surveys to examine microbial diversity and community composition while informing on the sequencing depth required to access high levels of metagenome coverage .The resulting amplicon sequences, typically generated using Illumina or pyrosequencing platforms, are subjected to quality filtering before taxonomic assignment is performed commonly using computational tools such as QIIME and mothur .These data can then be used to calculate sample diversity and microbial community distance metrics in the context of comparative investigations.In addition, correlations between species and metadata can be uncovered when the microbial communities are analysed under different environmental conditions .While small subunit rRNA profiling, at the DNA level, can provide insights into community structure, the potential, flexibility and robustness of an ecosystem can only be investigated with the elucidation of deep metagenomes.A recent interesting development, however, in the exploitation of SSU rRNA data has been brought about with the introduction of PICRUSt, a computational tool to predict the functional profile of microbial communities based on gene marker surveys and the availability of reference genome databases .Different sequencing platforms can be employed for metagenomics , and commonly metagenome sequences are composed of short-length reads, which render the process of assembly and annotation particularly challenging.In order to assemble and recover single genomes from metagenomic data, sequences are classified into discrete clusters commonly referred to as bins.Binning algorithms have been specifically developed for metagenomic sequence read assembly; examples of these include Meta-IDBA , AbundanceBin , MetaVelvet and Metacluster .Further binning strategies can then be employed to retrieve single genomes from the fragmented assembled contigs.One of the most widely used binning approaches to do this relies on emergent self-organising maps.ESOMs can be based for example on tetranucleotide frequency distribution or time series abundance profile .In both contexts, individual bins are commonly selected manually from graphical outputs.To circumvent 
this, novel automated binning algorithms have been recently developed to recover genomes from fragmented assembled metagenomic contigs.Computational tools for metagenomic annotation are also widely available such as MG-RAST and RAMMCAP .Obtaining meaningful functional information from metagenomic datasets can be very difficult and particularly costly in term of computational process time.This can partly be attributed to the large proportion of uncharacterised taxa prevailing in many environments.In order to address this issue, a novel manually curated database was built, FOAM, which has been demonstrated to screen metagenomic datasets for functional assignments with higher sensitivity and 80 times faster than BLAST .Depending on the research question and the motivation for conducting metagenomics, assembly might not always be required.Indeed, in order to explore the metabolic potential of a microbial community, Abubucker et al. developed a computational pipeline to determine the relative abundance of gene families and metabolic pathways from short-read sequences characteristic of metagenomic datasets .Similarly, Rooijers et al. designed an iterative computational workflow using raw metagenomic sequences to mine metaproteomes .These two pipelines , however, have been developed for the human microbiome and rely heavily on the availability of numerous robustly annotated genomes from relevant single microorganism.Predictive modelling approaches, such as PRMT have been recently designed to explore multi-species community functioning in the context of metagenomics.PRMT uses metagenomic information to predict metabolite environmental matrices and generate PRMT scores.Correlations between these scores and relative phylogenetic abundance can then be investigated to infer potential metabolic role of specific taxa within an ecosystem, therefore providing a useful strategy to access community functioning from metagenomic data .Metagenomics is a powerful tool to identify and in some instances isolate novel microorganisms and help uncover the distribution of metabolic capacities across the tree of life.For example, the analysis of acid mine drainage metagenomes revealed the presence of a unique nif operon, which led to the isolation of the only nitrogen fixer from the bacterial community by cultivating the acid mine drainage biofilm in the absence of nitrogen .Recently, 12 bacterial near complete genomes were reconstructed from activated sludge metagenomic datasets .These included rare, uncultured species with relative abundance as low as 0.06%, highlighting the power of metagenomics to uncover novel microorganisms .Similarly, metagenomics from a premature infant gut microbiota led to the recovery of 11 near complete genomes .Amongst these, the first genome of a medically relevant species, Varibaculum cambriense, could be reconstructed.Genomic-based metabolic prediction of V. cambriense unveiled the metabolic versatility of this bacterium in terms of carbon sources and electron acceptors during anaerobic respiration .In addition, the dataset indicated a possible metabolic exchange between V. cambriense and the rest of the microbial community.While V. 
cambriense has the ability to produce nitrite, which could be further metabolised by other species, the microorganism could be dependent on the community for its source of trehalose .Metagenomics of sediment samples from a site adjacent to the Colorado River revealed a surprising phylogenetic diversity and novelty coupled with metabolic flexibility .The microbial communities displayed a high level of evenness with no single organism accounting for over 1% of the communities.The most abundant species in deeper sediments, RBG-1, was found to represent a new phylum.The genome of RBG-1 was recovered from the metagenomic dataset and counted over 1900 protein-encoding genes .Genomic-based metabolic profile reconstruction of RBG-1 highlighted its potential role in metal biogeochemistry with the capacity of iron cycling both under aerobic and anaerobic environmental conditions .Metagenomic datasets from the same site were further mined to investigate the metabolic diversity of the Choloroflexi phylum in sediments .Choloroflexi were found to be metabolically flexible with the ability to adapt to varying redox conditions.They were predicted to play a role in carbon cycling being able to degrade plant material such as cellulose .In addition, known pathways previously not associated with this phylum were found to be encoded in the newly reconstructed genomes recovered from metagenomic datasets .After discovering that thawing permafrost was commonly dominated by a single archaeal phylotype with no cultured representative, as indicated by SSU rRNA profiling from DNA samples, Mondav et al. recovered its genome from a metagenomic dataset in order to assess its metabolic capacity .This novel archaea was found to be present in 33 locations widely geographically distributed and dominant in some cases accounting for up to 75% of detected archaeal sequences .Metabolic reconstruction of this archaea indicated its ability to perform hydrogenotrophic methanogenesis.This was confirmed in situ by metaproteomics, and conferred a significant role to the novel methanogen in global methane production .This illustrates how metagenomics can help develop biological hypotheses that can be further tested employing other omics.An important pitfall of metagenomics and its interpretation, when used in isolation, is the inherent assumption that microorganisms have the same potential and therefore perform the same function regardless of their environment.Freilich et al., together with similar work , could demonstrate that microbial interactions can be manipulated through changes in environmental conditions , which cannot be easily accounted for when analysing metagenomic datasets.Therefore to embrace the full potential of metagenomics, and particularly to test the derived biological hypotheses, the combination with other omics is required.While metagenomics informs on the genes present in an ecosystem, metatranscriptomics investigates gene expression and therefore provides access to messenger and non-coding RNAs.As the majority of RNA in a cell is composed of ribosomal and transfer RNAs, metatranscriptomics typically comprises rRNAs depletion steps to enrich for mRNAs .Metatranscriptomics commonly involves reverse transcription to generate cDNA, which can then be sequenced using the same platforms as for metagenomics .Direct RNA sequencing, bypassing cDNA generation and its associated biases, is also available but has not yet been employed in the context of mixed microbial communities.Although not usually performed in 
metatranscriptomic studies, 16S and 18S rRNA surveys from RNA samples are recommended prior to metatranscriptome investigations.The SSU rRNA data can then be analysed as indicated above in the context of metagenomics .This can provide some insight into which operational taxonomic units are likely to be active at the time of sampling, information that cannot be deduced from similar data generated at the DNA level.In order to access in situ microbial gene expression metatranscriptomes have to be investigated.Metatranscriptomics offers the unique opportunity to identify novel non-coding RNAs, including small RNAs reported to play key roles in central biological processes such as quorum sensing, stress response and virulence .Shi et al. detected a large fraction of small RNAs in marine water reportedly involved in the regulation of energy metabolism and nutrient uptake .One of the main challenges of metatranscriptomics is the assembly of non-continuous short-read sequences with uneven sequencing depth due to variation in mRNA abundance within and between microorganisms.In addition, different mRNAs commonly contain repeat patterns, reflecting functional redundancies in proteins, which render the process of assembly even more difficult.Binning and functional annotation strategies similar to those used for metagenomic sequences are employed .Metatranscriptomic data analysis can be considerably facilitated when performed in tandem with metagenomics.Xiong et al. developed an experimental and analytical pipeline for the analysis of metatranscriptomes in the absence of extended sets of reference genomes .Their workflow employs a peptide-centric search strategy by performing in silico translation of detected transcripts.While Leung et al. specifically designed a new algorithm for metatranscriptome assembly , HUMAnN, which processes unassembled short-read sequences can be used for the analysis of transcribed gene families and pathways and the determination of their corresponding abundance within a microbiome .Interestingly, Desai et al. developed a computational pipeline to compare metabolic reconstructions from metagenomic and metatranscriptomic datasets .Such comparisons can highlight the discrepancies between metabolic potential and actual transcription, as observed in the case of marine microbial communities .Metatranscriptomics has been successfully employed to investigate the effect of xenobiotics on the human gut microbiota .Indigenous microbial communities were found to respond to xenobiotics by activating drug metabolism, antibiotic resistance and stress response pathways across multiple phyla.This study therefore captured the collateral consequences of xenobiotic treatment.Metatranscriptomics in combination with isotope labelling was also used to decipher the fate of methane and nitrate in anaerobic environments .Impressively, using internal standards for quantitative metagenomics and metatranscriptomics, Satinsky et al. 
could suggest different contributions to geochemically relevant processes of free-living and particle-associated microbiota in the Amazon River Plume during a phytoplankton bloom .Particularly, free-living microorganisms were found to express genes involved in carbon, nitrogen and phosphate cycles, while particle-associated microbial communities transcribed genes with relevance to sulphur cycling .The authors, however, recognise the limitations of metatranscriptomics, as mRNA abundance cannot be used as a proxy for microbial activities.In term of ecosystem functioning mRNAs only reflect potential functions since it cannot account for post-transcriptional regulation.Indeed not all mRNAs are translated into proteins and a lack of correlation between mRNA and protein levels has been previously reported .Even though the detection of proteins cannot be strictly correlated with microbial activities and process rates, metaproteomics provides useful insights into microbial functions.Metaproteomics investigates the proteins collectively expressed within a microbiome and together with metabolomics provides access to ecosystem functioning.The identification of proteins and metabolites can be directly used to construct metabolic models reflecting active pathways and in this context, metaproteomics and metabolomics complement each other very well.Metaproteomics, however, presents some valuable advantages over metabolomics as proteins can be assigned to specific taxa and therefore their detection informs not only on what pathways are active within an ecosystem but also on the identity of species involved in specific functions.In this respect, metaproteomics offers a powerful approach to link community composition to function.The success of metaproteomics is strongly dependent on the availability of relevant genomes to enable high protein identification rates .It is therefore recommended to use metaproteomics in combination with metagenomics, an experimental approach which will result in a synergistic effect since the detection of peptides can assist and validate metagenome annotation .Compared to metagenomics and metatranscriptomics, metaproteomic computational workflows are somewhat less developed .Software tools like MEGAN can be used for metaproteomics, in which case the initial BLAST files are generated directly from protein files, and HUMAnN is also suggested to be amenable for metaproteomic datasets .One of the limitations of MEGAN is that it employs a naïve pathway mapping strategy.Proteins can be involved in more than one biochemical reaction and, consequently, can participate in several metabolic pathways.Also significant in the context of metabolic reconstruction from metagenomic datasets, a naïve pathway mapping strategy can lead to an overestimation of the functional diversity of microbial communities.Parsimony approaches, as employed in the HUMAnN pipeline, are then applied to offer a more accurate representation of the functionality of a microbial community by specifically identifying the minimum set of biological pathways that can account for all the protein families detected .While for metagenomics and metatranscriptomics relative quantification and even absolute quantification with the use of internal standards are accessible, protein abundance is harder to determine.In the context of pure-culture proteomics, labelling methods, such as iTRAQ have been developed , while in multi-species communities, normalised abundance factors are commonly calculated .The comparison of summer and winter 
metaproteomes from West Antarctic Peninsula seawaters, using spectral counts for the determination of protein levels, revealed seasonal shifts in abundance of specific taxa through protein assignments, which could be correlated with differences in metabolic activities .Of particular note was the observation that ammonia oxidation was exclusively carried out by archaea during the winter, while bacteria were predominantly involved in this process in the summer.Interestingly, metaproteomics has been used as a tool to compare the physiological states of microbial communities under different environmental conditions .Specifically, the characterisation of the metaproteome from acid mine drainage biofilms grown under laboratory conditions enabled the fine-tuning of the media composition to mimic the natural environment of these microbial communities .Recently, metaproteomics combined with isotope labelling has uncovered a novel family of enzymes involved in hydrocarbon bioremediation .Metaproteomics has also revealed an increasingly important role for a clade of Gammaproteobacterial sulphur oxidizers in marine nutrient cycling in response to climate change .Even though metaproteomics is a powerful tool to link microbial community composition to function, one of the main challenges of metaproteomics is to relate protein abundances to microbial activities, which are ultimately reflected by metabolic fluxes.Metabolomics is employed to characterise the intermediates and end-products of metabolism.Metabolites are typically of low molecular weights and are mostly in a state of flux, which implies that their compositions and concentrations vary significantly as a function of time within an ecosystem.Metabolomics offers a powerful approach for the characterisation of ecosystem phenotypic traits resulting from the network of interactions occurring between the members of the microbial communities.This methodology therefore plays a significant role in determining ecosystem emergent properties and thus is widely used for biomarker discovery and diagnostics .Two experimental workflows can be employed in metabolomics; a targeted approach where known metabolites are quantified and a non-targeted strategy aiming at characterising entire metabolomes .Due to the great variation in metabolite chemical structures, non-targeted metabolomics is commonly characterised by the detection of large fractions of unknown metabolites .In addition, metabolite databases can contain incomplete information and are unsuitable for the identification of isomers .Faecal metabolite profiling of cirrhotic patients revealed the differential detection of 1771 features when compared to control groups.Amongst these, only 16 metabolites could be identified .Despite the low identification rate, liver cirrhosis was shown to correlate with nutrient malabsorption and disruptions in fatty acid metabolism .Over 3500 metabolic features were detected in acid mine drainage biofilms, from which only 56 were identified with more than 90% classified as unknown .Some of these likely represent novel metabolites but this observation was largely attributed to the incompleteness of MS/MS databases.Indeed they are limited to commercially available compounds, which are estimated to represent as little as 50% of all biological metabolites .In this study, metabolomics was combined with isotope labelling, which led to significant improvements in chemical formula prediction particularly for large metabolites .In order to gain some insights into unknown metabolites 
typically detected in untargeted investigations, modification-specific metabolomics was developed .This novel approach involves the detection of metabolite modification encompassing acetylation, sulfation, glucuronidation, glucosidation and ribose conjugation.The inclusion of modification information to the mass feature during database searches drastically reduces the number of matches for metabolite identification and therefore significantly decreases the time required for this process .Similarly, in order to improve metabolite identification rates in untargeted metabolite profiling, Mitchell et al. developed an algorithm for the detection of functional groups within metabolite databases .Targeted metabolomics, whereby a pre-determined selection of metabolites are detected and quantified, also constitutes a very valuable experimental approach and has been widely employed in the context of human biology.The monitoring of 158 target metabolites belonging to 25 pathways in serum samples allowed the discrimination between three patient groups .Specifically, 13 and 14 metabolites were identified for the differentiation between colorectal cancer patients from healthy individuals and from polyp patients respectively, thus demonstrating the potential of such an experimental strategy for diagnostics.Targeted metabolite profiling of 212 compounds in blood samples over a period of seven years revealed that over 95% of individuals showed at least 70% of metabotype conservation .In addition over 40% of individuals were uniquely identified by their metabolite profile after seven years.In order to appropriately select relevant metabolites to target, PRMT can be employed when metagenomic sequences are available .The application of PRMT to a time-series bacterial metagenomic dataset from the Western English Channel supported a correlation between bacterial diversity and metabolic capacity of the community .Specific bacterial groups could be linked, for example, to carbohydrate utilisation or total organic nitrogen availability.Importantly, PRMT uncovered some novel biological hypotheses by linking specific taxa to organic phosphate utilization or chitin degradation .Overall the success of metabolomics in the context of mixed microbial communities is limited compared to other omics technologies and importantly the identification of metabolites is not particularly informative in terms of microbial interactions.In order to overcome this limitation and to gain some insights into microbial taxa involved in metabolite production, the combination of metabolomics and metaproteomics can be very useful.Metabolic exchange in an acid mine drainage ecosystem between a dominant protist and the indigenous bacterial community was examined by employing a proteo-metabolomic strategy .The protist was found to selectively secrete organic matter in the environment, which amongst other effects led to a nitrogen bacterial dependence on the protist activities.Even though metabolomics and metaproteomics can be successfully combined to investigate microbial interactions, microbial interrelationships and more specifically microbial cross-feeding can be investigated using stable isotope probing techniques.Although omics approaches, particularly when used in combination, can provide unparalleled insights into the functioning of mixed microbial communities, specific elemental fluxes and microbial interrelationships cannot be easily uncovered from such datasets.SIP, using for example 13C, 15N or 18O isotope labelling, can be employed 
to elucidate the fate of specific compounds in complex microbial networks.A drawback of these experimental designs however is the inherent necessity of microcosms or multi-species microbial communities culturing set-ups in laboratory environments, which only approximate in situ conditions.Ideally, isotope labelling should be combined with omics and help tackle specific research questions.In order to verify the activity of a novel pathway, suggested by omics analyses, Ettwig et al. employed a complex experimental strategy involving the incubation of enrichments cultures with 13C labelled methane, 15N labelled nitrite and 18O labelled nitrite .Haroon et al. could not only demonstrate, using 13C and 15N labelling, the anaerobic methane oxidation coupled with nitrate reduction in a novel archaeal lineage but also that the nitrite generated by this pathway was subsequently used by an annamox population.This microbial interrelationship was then further confirmed by the co-localisation of the two microbial taxa .At the DNA and RNA level, isotope labelling has been widely used to capture the identity of the active members of microbial communities involved in the degradation of specific compounds.In this context, labelled and unlabelled microbial fractions are separated by density-gradient centrifugation and SSU rRNA genes are typically amplified .More recently, metagenomic analysis of the separated fractions has been carried out but is mostly limited to targeted approaches as opposed to deep metagenomes.For example, SIP enabled the identification of glycoside hydrolases in metagenomic sequences from labelled fractions of soil microbiota .Targeting the same enzyme families directly from bulk soil resulted in a 3-fold decrease in relative abundance, highlighting the enrichment benefit of combining SIP with targeted metagenomics .SIP was also recently combined with metatranscriptomics.Dumont et al. analysed metatranscriptomic sequences from both heavy and light fractions after incubating lake sediments with 13CH4 .While the unlabelled metatranscriptome displayed a wide phylogenetic diversity, the labelled sequences were predominantly assigned to methanotrophs.A high abundance of methane monooxygenase transcripts were detected in labelled datasets, which also provided insights into carbon and nitrogen metabolism .SIP metaproteomics is quite widely used and presents some advantages over RNA-SIP and DNA-SIP.Indeed, labelled and unlabelled protein fractions are not separated and the level of isotope incorporation into amino acids can be measured, which informs on protein turnover and acts as a direct proxy for activity .Furthermore, the limits of detection of heavy labelled isotopes are very low, which allows for i) the use of lower labelled substrate concentrations and ii) access to rare taxa .Pan et al. 
developed an algorithm to accurately determine 15N percentage incorporation into proteins .In this study, isotope labelling was employed to investigate the microbial processes involved in biofilm development and recolonisation.A low protein turnover was observed in the mature biofilm, while the opposite was found in the early stage growth biofilm, reflecting the requirement for de novo protein synthesis in the latter conditions .Protein-SIP was recently employed to investigate the degradation of naphtalene and fluorene in groundwater .Proteins involved in naphtalene metabolism were mostly assigned to Burkholderiales, which were strikingly estimated to obtain over 80% of their carbon from the labelled environmental contaminant.Proteins involved in fluorene degradation could not be identified in situ, while Rhodococcus was found to play a major role in this process under laboratory conditions .The authors emphasise the significance of this observation, which indicates a biassed enrichment under artificial conditions and a crucial need for in situ investigations to properly examine microbial processes.Some form of metabolomics is always involved in SIP experiments since the detection and concentration of specific labelled metabolites are necessarily investigated.However SIP can also be employed in the context of untargeted metabolomics.Using an elegant experimental strategy comparing unlabelled to labelled substrate metabolic measurements, Hiller et al. developed a computational method to quantitatively detect metabolites derived from a specific labelled compound .Combined with other omics, the quantitative NTFD should facilitate the discovery of novel pathways while highlighting metabolic pathway connectivity and microbial interrelationships.SIP metabolomics and metagenomics were recently employed to investigate the microbial anaerobic degradation of cellulose .In this study, labelled and unlabelled fractions were not separated before downstream analyses and only 16S rRNA, 18S rRNA and carbohydrate-binding domain information was extracted from the metagenomic dataset.13C labelled cellulose was found to be mainly degraded by clostridial species and resulted in the production of 13C acetic acid and 13C propionic acid .Overall SIP represents a very attractive experimental strategy to track down the fate of specific compounds and uncover metabolic pathway connectivity within microbial ecosystems but must be combined with other omics in order to fully exploit its potential.Overall, progress in omics technologies is advancing at a fast pace but in order to fully adopt systems biology approaches, omics datasets need to be integrated and to constitute the basis for ecosystem predictive modelling.Furthermore, since the emergent properties of microbial systems are a direct consequence of the network of interactions between the members of the microbial communities and their environment, both physical and microbiological processes need to be considered.Microbial interactions are inherently dependent on temporal and spatial scales and are subject to stochastic processes.To illustrate the importance of spatial organisation, Frey discusses two scenarios involving the Escherichia Col E2 system, in which the outcome of microbial interactions is in direct opposition .The production of the Col E2 toxin by Escherichia coli allows the producing strain to kill sensitive competitors but confers a competitive advantage to resistant strains.Indeed, even though resistance has an inherent fitness cost, the 
toxin-producing strain is also bearing the toxin production cost.When grown on agar plates, the three types of strains can coexist, while in agitated liquid medium, only the resistant strains survive .This example highlights the necessity to elucidate the spatial organisation of microbial species within an ecosystem in order to resolve microbial interrelationships.Modelling microbial interactions based on single-species metabolic network reconstruction has led to the prediction of environmental conditions promoting either cooperation or competition between microbial pairs .This kind of strategy typically involves stoichiometric constraint-based modelling using Flux Based Analysis.In this framework, metabolite fluxes are constrained by mass conservation, thermodynamics, assumption of steady-state intracellular metabolite concentrations and nutrient availability .These constraints are then used in silico as boundary conditions to find a set of metabolic fluxes that satisfies stoichiometry and maximises a pre-defined biological objective function commonly chosen as biomass production.To refine the prediction of metabolic flux distribution, quantitative proteomics and metabolomics were integrated together with genome-scale metabolic reconstruction .This novel modelling approach was found to predict more accurately the metabolic state of human erythrocytes as well as of E. coli deletion mutants , notably illustrating the versatility of computational methods, applicable to diverse biological contexts.Using dynamic flux balance analysis and stoichiometric models, a novel computational framework, COMETS, could predict the equilibrium species ratio of a three-bacterium community .Interestingly, COMETS can integrate both manually curated and genome-based automated reconstructed stoichiometric models.COMETS is proposed to be scalable to more complex microbial communities and as demonstrated by Yizhak et al., the integration of other omics could positively impact on COMETS by refining the stoichiometric models employed.Metagenome-based metabolic reconstructions have recently started to emerge, as illustrated by the development of HUMAnN to determine the relative abundance of gene families and pathways from metagenomic datasets .In parallel, comparative metagenomic tools, such as LEfSe, have been designed specifically for metagenomic biomarker discovery .A very interesting concept in systems-based microbial ecology is the newly developed reverse ecology framework, which aims to translate genomic data into ecological data by predicting the natural environment of a species, including its interactions with other species from genomics .Using this framework, Levy and Borenstein addressed the forces driving microbial community composition within the human microbiome .They developed a computation framework that could predict co-occurrence patterns from metagenomic datasets, which were verified using experimental observations.Excitingly, they could demonstrate that microbial species composition was predominantly governed by habitat filtering, whereby competitors co-occurred, and not by species assortment.The two patterns, however, are not mutually exclusive.While community composition was found to be mainly dictated by resources for which microorganisms compete, species with complementary requirements were also found to co-exist within microbial communities .Furthermore, Levy and Borenstein also observed an increase in habitat-filtering signatures within phyla, which indicated that even though phylogenetic 
closeness can be linked to co-occurrence patterns it cannot solely explain the habitat-filtering dominant structure observed within the human microbiome .Strikingly, mathematical models developed to date in the context of mixed-species microbial communities have only focused on metagenomic datasets while bypassing metatranscriptomics, metaproteomics and metabolomics.These omics methodologies, however, provide valuable insights into ecosystem functioning and, therefore, are imperative for the accurate prediction of ecosystem emergent properties.The field of omics, along with corresponding computational workflows, is expanding very rapidly and overall a clear move from proof-of-concept studies to real investigations has taken place.A recent breakthrough in metagenomics and metatranscriptomics has been realised with the introduction of internal standards, allowing the corresponding technologies to enter the realm of absolute quantification .Over 1013 genes and 1011 transcripts were detected per litre of seawater in the Amazon River Plume representing the first quantitative in situ investigation .Carbon and nutrient flux through this natural ecosystem could be resolved and the level of expression of relevant genes was compared in different microenvironments .Tools to accurately quantify protein levels are starting to emerge and this should be followed by the development of adequate internal standard procedures to access absolute quantification, similarly to metagenomics and metatranscriptomics .As discussed above, targeted metabolomics can be powerful in the context of diagnostics .Also, methodologies are being developed to gain some insights into the large fraction of unknown metabolites typically identified in untargeted experimental strategies .Despite the wealth of information that can be derived from omics datasets, pathway connectivity and microbial interrelationships are not easily accessed.This can be partly overcome by combining omics with SIP, which requires a precise experimental setup.Indeed, the use of labelled substrates cannot be performed in natural environments and necessitates laboratory settings, which impose inevitably some artificial constraints resulting in data biases .Therefore, a thorough investigation of the physiological state of microbial communities under laboratory conditions should be carried out and compared to that of their natural habitat prior to SIP, as elegantly demonstrated by Belnap et al.Datasets obtained from integrated omics approaches can provide unprecedented insights into ecosystem functioning.However, to enable their full exploitation they need to form the basis for mathematical modelling.The concept of reverse ecology and its integration into the computational framework developed by Levy and Borenstein is a very promising tool to tackle the challenging task of microbial community modelling and constitutes an excellent starting point for the integration of multi-omics datasets.Finally the development of such models will necessitate a true integration of experimental observations and model development with systematic iterative validation and refinement. | Some of the most transformative discoveries promising to enable the resolution of this century's grand societal challenges will most likely arise from environmental science and particularly environmental microbiology and biotechnology. Understanding how microbes interact in situ, and how microbial communities respond to environmental changes remains an enormous challenge for science. 
Systems biology offers a powerful experimental strategy to tackle the exciting task of deciphering microbial interactions. In this framework, entire microbial communities are considered as metaorganisms and each level of biological information (DNA, RNA, proteins and metabolites) is investigated along with in situ environmental characteristics. In this way, systems biology can help unravel the interactions between the different parts of an ecosystem ultimately responsible for its emergent properties. Indeed each level of biological information provides a different level of characterisation of the microbial communities. Metagenomics, metatranscriptomics, metaproteomics, metabolomics and SIP-omics can be employed to investigate collectively microbial community structure, potential, function, activity and interactions. Omics approaches are enabled by high-throughput 21st century technologies and this review will discuss how their implementation has revolutionised our understanding of microbial communities. |
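The SSU rRNA surveys described in the review above feed into sample diversity and community distance calculations, typically within pipelines such as QIIME or mothur. As a minimal illustration of two commonly used quantities (Shannon diversity and Bray-Curtis dissimilarity; the review does not name specific metrics), the sketch below computes both directly from a toy OTU count table with numpy. The counts are invented.

```python
import numpy as np

# Toy OTU count table: rows = samples, columns = OTUs (invented numbers).
counts = np.array([[120,  30,  0, 50],
                   [ 10, 200, 40,  0],
                   [ 60,  60, 60, 60]], dtype=float)

def shannon(sample_counts):
    """Shannon diversity H' = -sum(p * ln p) over observed OTUs."""
    p = sample_counts / sample_counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    return np.abs(a - b).sum() / (a + b).sum()

for i, row in enumerate(counts):
    print(f"sample {i}: Shannon H' = {shannon(row):.3f}")

for i in range(len(counts)):
    for j in range(i + 1, len(counts)):
        print(f"BC({i},{j}) = {bray_curtis(counts[i], counts[j]):.3f}")
```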
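Binning by tetranucleotide composition, as used in the ESOM-based approaches discussed in the review, rests on computing a 4-mer frequency vector per contig and grouping contigs with similar signatures. The sketch below is a deliberately simplified stand-in: it builds the 4-mer profiles and clusters them with k-means (scikit-learn) rather than a self-organising map, and the contig sequences are placeholders rather than real assembled contigs.

```python
from itertools import product
import numpy as np
from sklearn.cluster import KMeans

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]   # 256 tetranucleotides
INDEX = {k: i for i, k in enumerate(KMERS)}

def tetra_freq(seq):
    """Normalised tetranucleotide frequency vector of a contig."""
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - 3):
        kmer = seq[i:i + 4]
        if kmer in INDEX:            # skip windows containing ambiguous bases
            v[INDEX[kmer]] += 1
    return v / max(v.sum(), 1)

# Placeholder contigs; real input would be assembled metagenomic contigs.
contigs = ["ACGTACGTGGCCAACGT" * 50, "TTTTAAAACCCCGGGG" * 50, "ACGTACGTGGCCAACGA" * 50]
profiles = np.vstack([tetra_freq(c) for c in contigs])

bins = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
print(bins)   # cluster label per contig, a crude analogue of genome bins
```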
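For metaproteomic quantification, the normalised abundance factors referred to in the review are commonly computed as normalised spectral abundance factors (NSAF): a protein's spectral counts are divided by its length and then rescaled so that values sum to one per sample. A small sketch, assuming spectral counts and protein lengths are already in hand (the numbers are invented):

```python
# NSAF_i = (SpC_i / L_i) / sum_j(SpC_j / L_j), computed per sample.
spectral_counts = {"proteinA": 120, "proteinB": 45, "proteinC": 300}
protein_length  = {"proteinA": 410, "proteinB": 150, "proteinC": 980}   # residues

saf = {p: spectral_counts[p] / protein_length[p] for p in spectral_counts}
total = sum(saf.values())
nsaf = {p: v / total for p, v in saf.items()}

for p, v in sorted(nsaf.items(), key=lambda kv: -kv[1]):
    print(f"{p}: NSAF = {v:.3f}")
```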
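Untargeted metabolite annotation, whose low identification rates and isomer ambiguity are discussed above, boils down to matching measured m/z features against database masses within a tight mass tolerance. The sketch below shows only that ppm-window matching step, with a tiny made-up candidate list of [M+H]+ masses; real searches additionally handle adducts, retention times and MS/MS spectra.

```python
# Match measured m/z features to candidate metabolites within a ppm tolerance.
# The [M+H]+ monoisotopic masses below are illustrative, not a curated database.
database = {
    "glucose [M+H]+":   181.0707,
    "citrate [M+H]+":   193.0343,
    "trehalose [M+H]+": 343.1235,
}

def match(mz, tol_ppm=5.0):
    hits = []
    for name, ref in database.items():
        ppm_error = (mz - ref) / ref * 1e6
        if abs(ppm_error) <= tol_ppm:
            hits.append((name, round(ppm_error, 1)))
    return hits

for feature in (181.0712, 343.1221, 250.1000):
    print(feature, match(feature))   # the last feature stays unidentified
```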
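In protein-SIP, the degree of heavy-isotope incorporation is inferred from the mass offset of a peptide's isotope pattern relative to its unlabelled form. The sketch below is a first-order illustration for 13C labelling, assuming the peptide's carbon count is known; it is not the full isotope-pattern fitting implemented in dedicated algorithms such as the one by Pan et al. cited above.

```python
# Expected average mass shift of a peptide when a fraction of its carbon is 13C.
MASS_SHIFT_13C = 1.003355   # Da difference between 13C and 12C
NATURAL_13C = 0.0107        # natural 13C abundance

def mean_mass_shift(n_carbons, labelled_fraction):
    """Average extra mass (Da) relative to the peptide at natural abundance."""
    return n_carbons * (labelled_fraction - NATURAL_13C) * MASS_SHIFT_13C

# e.g. a tryptic peptide with 62 carbon atoms whose carbon is 50% 13C
print(f"{mean_mass_shift(62, 0.50):.2f} Da heavier than the unlabelled peptide")

def labelled_fraction_from_shift(n_carbons, observed_shift_da):
    """Invert the relation to estimate atom% 13C from an observed mass shift."""
    return observed_shift_da / (n_carbons * MASS_SHIFT_13C) + NATURAL_13C

print(f"{labelled_fraction_from_shift(62, 30.4):.1%} 13C")
```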
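Absolute quantification with internal standards, as in the Amazon River Plume study discussed above, works by spiking a known number of standard molecules into each sample before extraction and scaling observed read counts by the standard's recovery. A minimal sketch of that scaling step follows; the numbers are invented and the real protocol also tracks sample volumes and multiple standards.

```python
# Spike-in based absolute quantification (illustrative numbers only).
spiked_standard_copies = 5.0e9        # standard molecules added per litre of water
reads_assigned_to_standard = 12_500   # reads mapping to the internal standard
reads_assigned_to_gene_x = 84_000     # reads mapping to a gene of interest

copies_per_read = spiked_standard_copies / reads_assigned_to_standard
gene_x_copies_per_litre = reads_assigned_to_gene_x * copies_per_read
print(f"gene X: {gene_x_copies_per_litre:.2e} copies per litre")
```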
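Flux balance analysis, as invoked above for single-species and community metabolic models, is a linear programme: maximise a biomass objective subject to steady state (S·v = 0) and flux bounds. The toy network below (three reactions, two metabolites) is invented purely to show the mechanics with scipy's linprog; genome-scale models would use dedicated constraint-based modelling toolboxes.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (rows = metabolites A, B; columns = reactions).
#   R1: -> A (uptake)   R2: A -> B   R3: B -> biomass (drain, the objective)
S = np.array([[1, -1,  0],
              [0,  1, -1]])
b = np.zeros(S.shape[0])                  # steady state: S @ v = 0

bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake capped at 10 flux units
c = np.array([0, 0, -1])                  # maximise v_R3 == minimise -v_R3

res = linprog(c, A_eq=S, b_eq=b, bounds=bounds, method="highs")
print("optimal fluxes:", res.x)           # expected: [10, 10, 10]
print("max biomass flux:", -res.fun)
```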
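Co-occurrence analyses of the kind used above to test habitat filtering versus species assortment start from presence/absence profiles of taxa across samples. The sketch below computes a simple Jaccard co-occurrence index between taxon pairs as an illustration only; the published framework additionally compares such scores against null models and metabolic-network-based interaction predictions, which is beyond this snippet.

```python
import numpy as np
from itertools import combinations

# Presence/absence of taxa (rows) across five samples (columns); invented data.
taxa = ["taxon1", "taxon2", "taxon3"]
presence = np.array([[1, 1, 0, 1, 1],
                     [1, 1, 0, 1, 0],
                     [0, 0, 1, 0, 1]], dtype=bool)

def jaccard(a, b):
    both = np.logical_and(a, b).sum()
    either = np.logical_or(a, b).sum()
    return both / either if either else 0.0

for (i, ti), (j, tj) in combinations(enumerate(taxa), 2):
    print(f"{ti} ~ {tj}: Jaccard = {jaccard(presence[i], presence[j]):.2f}")
```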
579 | Impact of insulin signaling and proteasomal activity on physiological output of a neuronal circuit in aging Drosophila melanogaster | Aging neural circuits undergo morphological and functional changes that underlie different types of behavioral impairment."In humans, circuit-level changes during normal, nonpathological aging affect gustatory function, spatial learning, working memory, and emotional states.In aging Drosophila, impairments of neural transmission in the olfactory system accompany decline in attraction behavior, and in Caenorhabditis elegans, reduced neurotransmission drives aging-associated sensory neural activity and behavioral declines."Insulin/insulin-like growth factor signaling plays important physiological roles throughout the central nervous system to regulate neuronal function, metabolism, learning, and memory. "Furthermore, insulin resistance is associated with several age-related diseases, including type II diabetes and Alzheimer's disease.However, protective effects of impaired IIS have been reported in a number of species during normal aging and in models of neurodegenerative diseases.These seemingly contradictory findings define the so called “insulin paradox”.We have recently demonstrated a beneficial effect of reduced IIS on transmission through the escape response pathway of aging Drosophila melanogaster.Systemic or circuit-specific suppression of IIS prevents the decrease in transmission speed with age by increasing membrane targeting of gap junctional proteins via small GTPases Rab4 and Rab11.Lowered IIS preserves gap junctions in the neural circuit, resulting in a youthful functional output even in old flies.Here, we have expanded these findings by further dissecting the mechanism of IIS action on the escape system function, and we have identified the proteasome as an important regulator of circuit functionality.In addition, cell culture experiments showed direct and specific impact of reduced IIS on the levels of recycling-mediating proteins Rab4 and Rab11.The neuroendocrine axis regulates longevity and antitumorigenic response in a number of species by governing nutrient homeostasis and immune response.We have tested the impact on longevity of IIS manipulations in adult neurons and demonstrated the importance of this signaling axis in neurons in organismal aging.Giant fiber-specific and ubiquitous expression was achieved with the GAL4-UAS system.The daughterless-GAL4 line was obtained from the Bloomington Drosophila Stock Center.The dominant-negative UAS-InRdn transgene encodes an amino acid substitution in the kinase domain of the Drosophila insulin receptor.UAS-InR was also obtained from BDSC.The A307-GAL4 line was a kind gift from Dr. P. Phelan; the UAS-Rpn11 line was a gift from the lab of Dr. 
Masayuki Miura.To standardize genetic background, parental GAL4 and UAS strains used to generate experimental and control genotypes were backcrossed to laboratory control strain white Dahomey for at least six generations, beginning with an initial cross between wDah females and transgenic males, followed by five subsequent back-crosses between transgenic females and wDah males.The wDah stock was derived by incorporation of the w1118 mutation into the outbred Dahomey background by back-crossing.All stocks were maintained, and all experiments were conducted at 25 °C on a 12 hour:12 hour light:dark cycle at constant humidity using standard sugar/yeast/agar medium.Adult-onset neuronal expression was induced by adding mifepristone to the standard SYA medium at 200 μM.For pharmacological experiments, 10 μM of peripherally synapsing interneuron or 50 μM of MG132, dissolved in DMSO, was added to the standard medium.Corresponding concentrations of DMSO were added to the flies maintained on the medium without the proteasome inhibitors.For the rapamycin experiment, 5 μm of rapamycin was added to the chemically defined medium using the previously published protocol and recipe; this concentration has been shown to significantly reduce egg-laying capacity.For all experiments, including life span experiments, flies were reared at standard larval density, and eclosing adults were collected over a 12 hours period.Flies were mated for 48 hours before separating females from males.Preparation of flies and recordings from the giant fiber system of adult flies were performed as described by Allen et al.; a method based on those described previously.Briefly, flies were anaesthetized by cooling on ice and secured in wax placed inside a small Petri dish, ventral side down, with the wings held outward in the wax to expose lateral and dorsal surfaces of the thorax, and the proboscis pulled outward and pushed into the wax so that the head lied slightly forward and down on the surface.A tungsten earth wire placed in the posterior end of the abdominal cavity served as a ground electrode.Extracellular stimulation of the GFs was achieved by placing two electrolytically sharpened tungsten electrodes through the eyes and into the brain to deliver a 40 V pulse for 0.03 ms using a Grass S48 stimulator.The stimulating and ground electrodes do not need to be replaced during a recording session.Threshold for the short-latency, direct excitation for GF stimulation was previously demonstrated to be a pulse of ∼10–20 V for 0.03 ms.Intracellular recordings were made following GF stimulation from the tergo-trochanter muscle and contralateral dorsal longitudinal muscle using glass micropipettes.The possibility that descending neurons other than the GFs might be simultaneously activated, leading to a possible TTM or DLM response, was previously excluded.The glass electrodes were filled with 3M KCl and placed into the muscle fibers through the cuticle.Responses were amplified using Getting 5A amplifiers, and the data were digitized using an analogue-digital Digidata 1320 and Axoscope 9.0 software.For response latency recordings, at least 5 single stimuli were given with a 5 seconds rest period between each stimulus; measurements were taken from the beginning of the stimulation artefact to the beginning of the excitatory postsynaptic potentials.The signals were amplified and stored on a PC with pCLAMP software.Analysis was performed on the PC using pCLAMP and Microsoft Excel 2010 software.Flies in life span experiments were reared at 
standard larval density, and eclosing adults were collected over 12 hours periods.Newly eclosed flies were transferred to new bottles without anesthesia and allowed to mate for 48 hours.Sexes were separated by brief CO2 exposure, and the female flies were transferred into experimental vials.Flies were maintained in vials on standard SYA medium at a density of ten flies per vial and transferred to new vials every 2–3 days by CO2 anesthesia and scored for deaths.Retinal pigment epithelial-1 cells were maintained at 37 °C, 5% CO2, in a complete medium.Cells were regularly tested for mycoplasma contamination.For all assays, retinal pigment epithelial-1 cells were seeded in appropriate culture dishes and grown as monolayers for four days in complete medium.Cells were then treated as follows: IIS was lowered on 1-hour treatment with an insulin receptor and insulin-like growth factor-1 receptor dual inhibitor or on removal of insulin by incubating the cells in serum-free medium for 11 hours.IIS was elevated by addition of 1 μM insulin for 1 hour.Treated cells were collected in Laemmli sample buffer, sonicated and boiled.Samples were run on NuPage 4%–12% Bis-Tris gel, transferred to PVDF membranes, blocked in 5% skimmed milk and incubated successively with primary and secondary-HRP coupled antibodies, and finally visualized with ECL or Luminata Crescendo HRP reagents depending on the strength of the signals.Signals were captured on Amersham Hyperfilm ECL, developed using a Xograph Compact X4 film developer and analyzed using ImageJ software.Signals used for quantifications were captured at a pre-saturation intensity.Results are derived from triplicate biological repeats and represent signals that were normalized to a glyceraldehyde 3-phosphate dehydrogenase loading control and to the signals from resting cells.Antibodies used were mouse anti-Rab4, mouse anti-Rab5, rabbit anti-Rab7, rabbit anti-Rab8, rabbit anti-Rab11, mouse anti-GAPDH, and rabbit anti-GAPDH.Statistical analyses were performed using GraphPad Prism 5 software.A two-way analysis of variance test was used to perform interaction calculations.For other comparisons between two or more groups, a one-way analysis of variance followed by a Tukey-Kramer post hoc test was used.In all instances, p < 0.05 is considered to be statistically significant.All error bars denote the standard error of the mean.The log-rank test was used to calculate p values and compare survival distributions between pairs of cohorts.Microsoft Excel was used for these analyses.Escape responses in many invertebrate and lower vertebrate species are mediated by giant nerve fibers.First described in the squid ganglion, the simple “Giant Fiber” circuits are a convenient system for studying neural development and function.In the fruit fly Drosophila, the GFS comprises a small number of anatomically and functionally well-defined neurons amenable to molecular, genetic, electrophysiological and behavioral studies.The GFS is composed of electrical, chemical and mixed synapses, with transmission via Shaking-B-encoded GJs responsible for the predominantly ‘electrical’ character of the circuit.The circuit mediates flight initiation following either visual or olfactory stimuli, through activation of both flight and jump muscles.We measured the speed of signal propagation through the GFS by directly stimulating the GF cell bodies in the fly brain and recording “response latencies” from the downstream muscles.Increased response latency indicates slower transmission and diminished 
circuit function.A dominant-negative form of the insulin receptor can be used to attenuate IIS in flies, and diminished IIS in the GFS prevents age-associated increase in response latency.Escape response circuit-specific IIS reduction was achieved using the A307-GAL4 line, which drives expression strongly in the GFs and, to a lesser extent, in the TTM and DLM motoneurons and peripherally synapsing interneurons.Pharmacological suppression of proteasomal activity neutralized the beneficial effect of reduced IIS on the circuit function in old flies.Although reduced IIS does not ameliorate the age-associated reduction in chymotrypsin-like peptidase activity of the proteasome in the fly nervous system, these results suggest that proteasomal activity is required for the prevention of functional decline in the GFS.To further investigate the effect of the proteasome on the GF circuit, we overexpressed Rpn11, a component of the proteasomal regulatory subunit, in the neurons of the GFS.Rpn11 is one of the “lid” subunits in the 19S proteasomal regulatory particle and was previously reported to suppress the age-related decline in proteasomal activity and progression of the polyglutamine-induced neurodegenerative phenotype in aging flies.In line with these findings, Rpn11 overexpression prevented the age-related functional decline in the GFS.Together, these results demonstrate the importance of the proteasome on the function of the escape response pathway in aging flies and suggest increased proteasomal activity as a way to improve age-related functional decline of neural circuits.Previously, we demonstrated a correlation between GF transmission and synaptic levels of gap junctional proteins.As inhibition of the lysosome increases the density of GJ aggregates, the results presented here suggest that reduction of proteasomal activity has a negative effect on their synaptic accumulation.Reduced proteasomal activity likely compromises proteostasis of other proteins, thereby indirectly impairing recycling and membrane targeting of gap junctional proteins.To further examine how attenuated IIS affects GFS function, we measured response latencies in flies with ubiquitously reduced IIS in the presence or absence of Foxo, a well-described downstream mediator of IIS action in flies and mammals.Interestingly, deletion of dFoxo had no effect on the ability of reduced IIS to maintain TTM response latencies in old flies at the youthful level but reversed the effect of low IIS on the DLM branch of the circuit.These results indicate a complex role for IIS in regulating the GFS physiology in aging flies.Unlike the more “electrical” nature of the TTM branch, the DLM part of the circuit is dominated by chemical synaptic connections; it is therefore intriguing that Foxo may have role specifically as a regulator of chemical neurotransmission downstream of IIS.We have previously shown that GJs are regulated at the protein level in response to acute and long-term IIS.Elevated IIS induces the targeting of GJ proteins to lysosomes and degradation, thereby decreasing their cell surface assembly.This phenotype could be suppressed by enhancing endosomal recycling by overexpressing wild-type or constitutively active forms of Rab4 or Rab11, mimicking IIS attenuation.We therefore asked if the endosomal recycling machinery itself could similarly be regulated by IIS.Strikingly, Rab4 and Rab11 were present at significantly lower levels in mammalian cells on insulin addition, indicating a direct impact of IIS on the levels of these 
recycling Rab proteins.Interestingly, this effect was specific for the recycling Rabs, as Rab7 and 8 were unaffected.Cumulatively, these results show that IIS has a marked impact on protein levels at multiple junctures in the cell including the protein degradative and recycling machinery.Together with IIS, the mammalian target of rapamycin signaling network plays a key role in regulating metabolism and in life span in Drosophila and other species.Inhibition of the fly TORc1 complex by rapamycin did not affect response latency in aging flies, indicating a specificity of lowered IIS action on GFS function.Reduced IIS in either the GFS or adult nervous system abolishes the prolonged response latencies seen in aged wild-type or control flies, with IR up-regulation exacerbating the phenotype.As improved function and life span are frequently correlated in various species, we assessed the effect of IIS manipulations in the GFS, or adult neurons, on longevity.We overexpressed the dominant-negative variant of the Drosophila insulin receptor using the inducible GS ELAV-GAL4 nervous system driver.While GFS-specific reduction of IIS had no effect on life span, likely due to the relatively small overall size of the circuit, IIS attenuation in all adult neurons using the inducible driver extended median life span, implicating the adult nervous system as playing a key role in overall health during aging in flies.Proteasomal activity in the brain of D. melanogaster declines with age, in line with reports of age-related alterations in proteasome-mediated proteolysis in the aging mammalian brain and in neurodegenerative diseases.Here, we show that proteasomal up-regulation can maintain neurotransmission through the escape response circuit in aging flies, and that basal levels of proteasomal activity are required for the beneficial effect of attenuated IIS on age-related circuit function.Consistent with the effect of proteasomal activation in other species, overexpression of Rpn11 suppresses the accumulation of ubiquitinated proteins in the fly brain, likely by promoting 26S proteasome assembly.Because of the predominantly postmitotic status of neurons, the nervous system is particularly prone to age-associated increase in oxidative stress.Together with a decrease in antioxidant capacity during normal aging, these changes cause the accumulation of damaged and misfolded proteins in aging neuronal cells.In addition to the lysosome and autophagy, the proteasomal system is critical for degradation and disposal of abnormal proteins, and decline in proteasomal function may further increase the buildup of aberrant protein aggregates.These age-dependant changes likely contribute to morphological and physiological defects in neurons, such as their ability to maintain synaptic and cytoskeletal integrity and regulate intracellular signaling and protein trafficking.Considering its wide role in protein homeostasis and quality control, the proteasomal system is likely to affect many components of the neuronal machinery.Reduced proteasomal activity will therefore inevitably lead to compromised cellular health and impaired synaptic function.Indeed, both degradation and synthesis of synaptic proteins are disrupted following pharmacological inhibition of the proteasomal machinery.Rapamycin, a well-described activator of autophagy that extends life span in flies, had no effect on the GFS function, further underscoring the requirement for proteasomal degradation, presumably of proteins other than those involved in GJs and the 
recycling machinery, in the maintenance of circuit functionality.Previously, we showed that beneficial effect of reduced IIS on the neurophysiological output in aging flies requires the presence of the recycling machinery.Our experiments in cultured cells presented here also identified insulin as a negative regulator of the levels of the small GTPases Rab4 and Rab11, suggesting a complex interplay between IIS and the trafficking pathways mediating endosomal recycling.Since insulin/insulin-like growth factor receptors are themselves recycled through the recycling machinery, this finding suggests a novel mechanism of feedback in IIS itself.The functional consequences of these interactions in healthy and pathological organismal aging remain to be explored.In the context of GJs, increased endocytic recycling activity through Rab4 and Rab11 rescues internalized GJ proteins from terminal degradation in the lysosome, promoting their accumulation in the plasma membrane."IIS are evolutionarily conserved growth-promoting pathways that play critical roles in both developing and adult brains.Mutations that reduce IIS, however, can dramatically extend life span in a number of species.The nervous system is one of the most important sites for life span extension by IIS.For example, brain-specific knockout of either the IR substrate-2 or insulin-like growth factor-1 receptor has been reported to extend life span in mice; in the nematode C. elegans, the nervous system is also critical for increased longevity by IR inactivation.Here, we corroborated the importance of neuronal control of life span by identifying that IIS activity only in the adult nervous system reduces longevity in Drosophila.Recently, reported life span extension by means of reduced IIS in the nervous system throughout life could therefore be due to adult-only IIS suppression.Further studies are required to reconcile the seemingly contradictory effects of IIS on organismal function and longevity.The authors have no actual or potential conflicts of interest. | The insulin family of growth factors plays an important role in development and function of the nervous system. Reduced insulin and insulin-growth-factor signaling (IIS), however, can improve symptoms of neurodegenerative diseases in laboratory model organisms and protect against age-associated decline in neuronal function. Recently, we showed that chronic, moderately lowered IIS rescues age-related decline in neurotransmission through the Drosophila giant fiber escape response circuit. Here, we expand our initial findings by demonstrating that reduced functional output in the giant fiber system of aging flies can be prevented by increasing proteasomal activity within the circuit. Manipulations of IIS in neurons can also affect longevity, underscoring the relevance of the nervous system for aging. |
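To make the quantification and statistics described in the methods above concrete, the following minimal Python sketch illustrates the double normalization of the Western blot band intensities (first to the GAPDH loading control, then to the resting-cell signal) followed by a one-way analysis of variance across treatment groups. The densitometry values, group labels and the use of NumPy/SciPy are illustrative assumptions only; the original analysis was performed with ImageJ densitometry, GraphPad Prism and Excel as stated above.

```python
# Hypothetical sketch of the double normalization used for blot quantification:
# each band is divided by its GAPDH loading control and then expressed relative
# to the mean of the resting (untreated) cells.  All numbers are made up.
import numpy as np
from scipy import stats

# Densitometry values (arbitrary units) for three biological repeats.
rab11 = {
    "resting":   np.array([1.00, 1.10, 0.95]),
    "insulin":   np.array([0.55, 0.60, 0.52]),   # 1 uM insulin, 1 h
    "inhibitor": np.array([1.20, 1.15, 1.30]),   # IR/IGF-1R dual inhibitor
}
gapdh = {
    "resting":   np.array([1.00, 1.05, 0.98]),
    "insulin":   np.array([1.02, 0.97, 1.00]),
    "inhibitor": np.array([0.99, 1.01, 1.03]),
}

# Step 1: normalize each lane to its own GAPDH signal.
loading_corrected = {k: rab11[k] / gapdh[k] for k in rab11}

# Step 2: express everything relative to the mean resting-cell signal.
baseline = loading_corrected["resting"].mean()
relative = {k: v / baseline for k, v in loading_corrected.items()}

# One-way ANOVA across the three groups (a Tukey-Kramer post hoc test
# would follow in the full analysis described above).
f_stat, p_value = stats.f_oneway(*relative.values())
for k, v in relative.items():
    print(f"{k:9s} mean = {v.mean():.2f}  SEM = {stats.sem(v):.2f}")
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```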
580 | Modifying the electrical properties of graphene by reversible point-ripple formation | Graphene's excellent electronic, optical and mechanical properties make it an ideal candidate for flexible electronics, sensors and opto-electronics .However, strain, ripples and wrinkles in graphene reduce charge transport, can open up a band gap and increase contact resistance .Ripples could also be the dominant form of scattering in graphene, leading to measured charge mobilities much lower than theoretically predicted .As a strong, thin and flexible material graphene is well suited for integration in to oscillating nano-electromechanical systems.Yet strain and ripples can alter the flexural modes of graphene, or the induced vibrations can themselves lead to strain which then alters the material properties .Understanding how strain and deformation in 2D materials alters the electronic transport is critical to integrating them in to devices .Few-layer graphene is less affected by substrate and impurity effects and can mitigate some of these detrimental strain effects .To understand the effects of strain on graphene transport it is essential to decouple the in-plane and cross-plane contributions to charge transport.However most studies of conductivity changes in few-layer graphene repeatedly deposit single or bi-layer graphene measuring the change in total conductivity of the stack, leading to orientation mis-match, mechanical damage and contamination .The lithographic formation of contacts can also contaminate the sample .Direct probe contact to nano-materials instead provides a local, non-destructive and comparably fast technique for electronic transport measurements .Here we use a multi-probe method with local probe electrostatic manipulation to controllably and reproducibly perturb mechanically exfoliated few-layer graphene creating a localized ripple or wrinkle in the layers while simultaneously measuring resistance.As the probe is retracted, all sheets are initially pulled with the probe via electrostatic attraction until restoring mechanical forces cause each sheet to detach one at a time from the probe.Discrete drops in tunnelling current are observed as each sheet detaches, just like flipping through a deck of playing cards.With precise control over the process we reverse and repeat the nano-scale manipulation, controllably and reversibly inducing and removing strain in few-layer graphene.By fitting the experimentally-observed current response to a network model we measure in-plane sheet resistance increases of 78% and out of plane increases of 699% due solely to the locally-induced ripple in the graphene.Releasing the ripple restores the original conductivities, and by clustering the observed current steps we are also able to count the number of graphene layers.Performed within an ultra-high vacuum chamber, conductivity changes arise solely from the induced strain, confirming that such localized ripples in graphene can alone account for measured conductivity reductions, and offer a way to directly study the transport changes in graphene when used in flexible electronics and NEMS.Graphene samples were prepared by mechanical exfoliation of highly-oriented pyrolytic graphite on to the surface of a 90 nm SiO2 layer on Si grown by thermal oxidation and calibrated by ellipsometry.Flakes containing few-layer graphene were identified initially by scanning electron microscopy and then confirmed by Raman spectroscopy and atomic force microscopy.Samples were annealed at 200 °C for an hour in 
ultra-high vacuum, within an Omicron multi-probe system .In the analysis chamber two tips were approached under SEM guidance to contact the sample for measurement at room temperature as described in the caption of Fig. 1.Tips were electrochemically etched from tungsten and annealed in the UHV chamber to remove surface oxide .While most STM investigations use a feedback-on approach and change the tunnelling conditions , the resulting tip displacements couple the change in tunnelling set point height with the tip movement that accounts for graphene height changes.Here instead we employ the less-used feedback-off method, where the tip height is controlled, voltage is fixed for each set of measurements, and the resulting tunnelling current change is measured when the probe moves up and down in z only without x or y displacement.Using standard notation these are classified as feedback-off I measurements with a fixed voltage V, where I is the tunnelling current and z the out-of-plane displacement of the probe.Pristine highly oriented graphene is mechanically exfoliated on to the technologically-relevant insulating substrate SiO2 and annealed in UHV to remove contaminants.No further processing steps or contact fabrication is required.The graphene deflects out of plane remaining in contact with the tip, such that its deflection away from the initial contact point is expected to be equal to the tip z displacement.The lateral extension of the graphene manipulation is however unknown, making quantitative determinations of the locally-induced strain or loading not possible.The incremental current drop behaviour was observed more than one hundred times on this sample, where the probe voltage was held constant and a script automatically approached and retracted the tip at constant speed to collect I data.The repeated results for each voltage form a single data set which are clustered and fitted as described to extract circuit parameters.The fits presented in the few-layer section to all six layers of graphene are a single fit, with a second example given in supplementary data for comparison.To confirm this behaviour was not a localized effect this sample was tested at other locations on the flake, and the method was applied a similar number of times on a second thicker few-layer graphene sample.The results have been re-created with two other tips which all displayed similar behaviour.The results presented here are all taken sequentially at the same location with no lateral movement of the STM tip between the start and finish of the experiment.Fig. 1a shows two STM probes positioned over a flake of few-layer graphene on a SiO2 substrate.To aid the eye the dashed white lines show the edge of the flake within the image, with the bottom right of the flake folded back over on itself.The left probe is in direct mechanical contact with the graphene and held at ground in order to provide a constant resistance path back out of the graphene, which is included in the model .The right probe is biased to ±0.1 V, ±0.2 V and ±0.5 V, and moved towards and away from the graphene in the z-direction perpendicular to the graphene layer at a constant speed ten times for every voltage.The schematic in Fig. 1b shows the interaction regimes which occur during probe movement, with the same labels used on an example current measurement in Fig. 
1c.Initially the tip is out of contact with the graphene, and starting from z = 0 moves towards the graphene.The measured tunnelling current is zero until at point B the electrostatic field of the probe causes the graphene stack to deflect up towards the probe with a near-discontinuous increase in measured current.STM of graphene is known to create a local ripple as the sheet or sheets ‘jump’ to tunnelling contact with the tip .Our measured current of ∼1.5 nA confirms the probe is not in intimate contact with the graphene stack, with a tunnelling distance still present.In regime C the probe continues to move towards the now deflected graphene stack with an exponential increase in the tunnelling current, indicative of reducing the tunnelling gap between the tip and the graphene.No further discontinuous increases in current are observed indicating that all sheets in the few-layer graphene stack have been perturbed upwards towards the probe and are participating in the conduction network.At 3.3 nA the current pre-amplifier reaches its limit and the automatic routine reverses direction and begins retracting the probe.The probe then retraces a different exponential curve in regime D, discussed later, and via the electrostatic attraction is able to stretch or buckle the graphene from the height at which it jumped to the probe by around 20 nm, before at point E the restoring elastic forces cause the bottom layer of the stack to detach.This gives rise to a smaller discontinuous current drop, and as the probe continues to retract a series of further discontinuous current drops are evident as the remaining layers detach leaving bi-layer graphene in F and single layer graphene in G. By regime H, all layers have either unbuckled or detached from the probe and the tunnelling current returns to zero.Data presented here all use the same approach and retraction speed and displacement to keep these mechanics constants.We ensure the tip and graphene do not go in to intimate contact by checking at the lowest voltage setting that maximum current does not saturate the pre-amplifier.We are able to stop the movement of the tip at any point in the behaviour and hold that current, including stopping at n = 2 and n = 1 and thus select the number of layers of graphene used for measurement.Resistance ladder network models are predominantly used to represent the equivalent circuit for multi-layer graphene transport .Using a single contact point model where parameters vary with tip height z, an equivalent circuit for the system is constructed shown in Fig. 1d where R|| is the in-plane resistance, R⊥ the cross-plane resistance and Rt the tunnelling or access resistance of the probe to the graphene.The model finds that for R||≫R⊥ the last two steps corresponding to bi-layer and single layer graphene are roughly equally spaced and less likely to vary in current magnitude during tip retraction, matching the experimental observation in Fig. 1c. 
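A minimal numerical sketch of this kind of single-contact-point ladder model is given below, assuming that each attached sheet offers an in-plane path R|| back to the grounded probe, that adjacent sheets are coupled by a cross-plane resistance R⊥, and that the biased tip couples to the top sheet through a tunnelling resistance Rt. The recursion and the component values are illustrative assumptions rather than the circuit of Fig. 1d or the fitted parameters; the sketch simply shows how discrete current steps arise as sheets detach one at a time.

```python
# Generic resistance-ladder sketch for an n-layer stack contacted at a
# single point.  Values are illustrative only, not fitted parameters.

def ladder_resistance(n_layers, r_par, r_perp):
    """Resistance looking into the top sheet of an n-layer ladder."""
    r = r_par                       # bottom sheet: in-plane path only
    for _ in range(n_layers - 1):   # add sheets above, one at a time
        r = 1.0 / (1.0 / r_par + 1.0 / (r_perp + r))  # R_par || (R_perp + R)
    return r

def expected_current(v_bias, n_layers, r_par, r_perp, r_t):
    """Predicted tunnelling current when n_layers sheets are attached."""
    return v_bias / (r_t + ladder_resistance(n_layers, r_par, r_perp))

if __name__ == "__main__":
    R_PAR, R_PERP, R_T = 300e6, 2e6, 150e6    # ohms, illustrative only
    for n in range(6, 0, -1):                 # sheets detach one by one
        i_na = expected_current(0.5, n, R_PAR, R_PERP, R_T) * 1e9
        print(f"{n} layer(s) attached: I = {i_na:.2f} nA")
```

Run with these illustrative values, the sketch produces a staircase of nA-scale currents that falls each time a sheet is removed, qualitatively resembling the discrete drops in Fig. 1c.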
Note that the cross-plane resistance is measured over the interlayer spacing of graphene; a path length considerably shorter than the lateral probe spacing over which the in-plane resistance is measured.Therefore although the cross-plane resistance is lower, when adjusted for path length as sheet resistances or resistivity the expected result that the cross-plane component is much higher is observed.The clear presentation of the final two steps allowed a custom step detection clustering algorithm to detect and cluster them in all voltage data sets, and fit the current characteristics of each probe voltage data set to the model.Two example data sets for ±0.5 V are shown in Fig. 2a and b, with the fits for all voltages overlaid in Fig. 2c. To make the results comparable with other work , the fitted tunnelling resistance dependence is removed and the resulting fit is the network model presented earlier with Rt subtracted.For each voltage the set of ten I traces is simultaneously fitted to the model as a non-linear least squares problem to produce the component network values shown for all six voltages.Graphene is known to conform to the surface roughness of SiO2 increasing the adhesion and allowing strain engineering .We have previously shown that annealing increases the conformation to such an extent that few layer graphene can become ‘invisible’ on SiO2 via electron microscopy .It is likely that this inherent adhesion is providing competing substrate forces which act in the opposite direction to the applied tip forces which, when combined with the elasticity of graphene, define the extension at which each buckled graphene sheet will detach from the tip and return to its rest position.The minimal overlap in the z-direction of each cluster indicates that this process is stable and repeatable.As expected the total resistance of the graphene stack decreases with increasing layer number – the lower total resistance of few-layer graphene may be preferable for device integration over single layer graphene .As the tip retracts within each step the tip-graphene distance increases slightly leading to increased tunnelling resistance and lower current, observed as the gradient in the data and fit.Converting to equivalent sheet resistances the mean values of all the in-plane resistances R|| corresponding to the values shown in Fig. 2 are then measured to be 10.8 kΩ/sq for both individual sheets in bi-layer graphene, increasing to 13.2 kΩ/sq for single layer.These are higher than those typically reported, which will be addressed shortly.The weak van der Waals interlayer coupling of graphene sheets always gives rise to higher cross plane resistivity, reported between two and seven orders of magnitude higher than in-plane .Converting to sheet resistances, we find here that the mean out-of-plane resistance R⊥ for all six voltages is 1505 times higher than R|| for bi-layer graphene.These data are addressed together later.This method can also be used to identify the number of graphene layers by counting the incremental current steps as each layer detaches.Multiple sheets can sometimes detach together giving larger but fewer current drops, so to see the general behaviour it is necessary to look at several traces together.Fig. 
3 shows twelve multiple repeats at −0.1 V overlaid, which is representative of observed behaviour at other probe voltages.The final two steps corresponding to bi-layer and single layer graphene are again clearly evident and can be detected by our combined step-finding clustering algorithm.In this example the next two clusters corresponding to n = 3 and n = 4 are evident by eye, but here we develop a semi-autonomous method to count the higher layer numbers where it becomes increasingly difficult to automatically detect the clusters.To avoid a subjective assessment we first detect the final two steps for n = 2 and n = 1 automatically, and then use the model to predict the likely extrema of the step corresponding to n = 3.This identifies a cluster which is manually selected to represent n = 3.The model then updates the fit to the last three layers and estimates the extrema for step n = 4.The points for this cluster are manually selected and this process is repeated until all steps are accounted for.Using this method we are able to count six layers of graphene which matches individual traces where clear separation of all steps are observed.While STM has been used directly to determine the thickness of graphene sheets through the topographic height changes or through the changes in tunnelling spectroscopy the methods are usually limited to very few-layers and require calibration against known samples or another technique.Atomic force microscopy can also be used to estimate the number of layers in graphene, but these measurements suffer from an experimental offset which can be greater than the height of a single sheet, results differ in vacuum and air, and are affected by surface contamination including absorbed water .Raman spectroscopy has been used effectively to identify single-layer graphene but sometimes competing analysis methods report different thicknesses when differentiating between bi-layer and other few-layer graphene .The electrostatic manipulation method used here can clearly and automatically determine whether a sample is bi-layer or single layer, and by fitting to a network model can accurately determine few-layer graphene layer numbers.To compare our measurement of the number of layers, we performed Raman spectroscopy where the ratio of the 2D to G peaks and 2D position indicated the sheet consists of around 5 to 6 layers.We also performed ambient contact mode AFM where a height measurement indicated between 5 and 7 layers.As these are in agreement we believe our measurement of six layers to be correct.There is some ambiguity when the layers numbers go above 4, but for n ≤ 4 our method is complimentary to Raman and AFM for counting the number of layers in graphene.The results of the fit to all six layers also allow a re-examination of the network parameters now for all six steps.Table 1 shows the equivalent in-plane and cross-plane sheet resistances of each individual sheet in the graphene.More information on the conversion is given in the Supplementary Data File - section 4.When the probe initially picks up the graphene stack with all six layers the equivalent sheet resistance of each layer separately is measured to be 6.5 kΩ/sq, but as the stack is stretched the sheet resistance increases up to 78% of the initial measurement.This sheet resistance change is not due to the number of layers changing; the model decouples the equivalent sheet resistance for a single sheet of graphene instead of the lumped total network resistance.Similarly when the probe picks up all six layers 
the cross-plane equivalent sheet resistance for each layer separately is measured to be 2806 kΩ/sq increasing to almost seven times this by the time the graphene has been stretched far enough for all but two sheets to decouple from the probe.Reported in-plane sheet resistances for single layers of graphene are typically in the range 0.1–2.7 kΩ/sq .Here, our results indicate that the initial upwards deformation of the graphene stack to meet the tip induces a ripple which increases the sheet resistance beyond that normally measured on flat graphene.Importantly though, the ripple can be increased by retracting the probe and stretching the graphene, causing a further increase in the measured in-plane sheet resistance.The values reported earlier for the fit to the last two steps for all voltages are in general agreement with those now derived from a fit to all traces for a single voltage.Both are the same order of magnitude and show increasing in-plane resistance as the stack is stretched and the layer number reduces, most likely due to the formation of long-range scattering potentials as predicted .This effect is reversible, with the initial measured resistance matching after each repeat.The height at which the graphene initially deflects up shows a slight drift upwards for the first few measurements at any new position indicating some non-reversible manipulation of the graphene upwards from the substrate.After this there is no further non-reversible movement of the ripple – it recovers to its initial position after every full extension.By fabricating our few-layer samples from HOPG and separating by electrostatic manipulation we mitigate inter-layer contamination and mis-alignment.This would lead to lower out-of-plane resistance measurements for our study, however this is reversed by the electrostatic manipulation which increases the inter-layer separation and further increases the out-of-plane resistance.The advantage of our study is that we know this out-of-plane resistance increase results from electrostatic manipulation of the graphene and cannot be due to inter-layer contamination.This further demonstrates that impurity-induced scattering is unlikely to be the cause of resistivity changes in graphene since we do not have any inter-layer impurities and yet we are able to modify the in-plane transport properties via ripple formation and extension.To reduce contamination of the top layer of graphene we anneal samples in UHV and confirm sample quality with Raman.Our direct contact method removes or reduces contamination associated with lithographically formed contacts .Even if some top-layer contamination was present it would simply add an extra access resistance term to the top sheet that would be lumped with, and accounted for, by the tunnelling resistance Rt.After forming the local ripple in graphene the probe continues to approach tracing an exponential increase in current with distance.However as the probe retracts a different exponential dependence is observed, as shown in Fig. 
1c sections C and D.This is different to an observation where compression and release by a contact STM tip on bi-layer graphene traced the same exponential dependence in and out .Our model agrees that the cause of compression-based change is interlayer conductivity, but here we show it is also possible to increase the layer separation to further reduce the cross-plane conductivity.Since our exponential curve is tracing over a different regime on retraction this can only be explained by graphene mechanically responding differently when being pushed by the field, as opposed to being pulled by it.A similar in-plane resistance increase of around 60% for graphene strained up to 20% has been reported, however this was attributed to “reduced electrical percolation pathways” – a reduction in the number of contact points between graphene layers leading to what is termed here as increased cross plane resistance .Although the method of pre-straining the graphene on a patterned substrate is different to our local electrostatic perturbation method, we have shown that resistance changes of this magnitude can also be achieved by in-plane sheet resistance increases.Similar STM potentiometry measurements on multi-walled carbon nanotubes can be used to estimate the equivalent in-shell and cross-shell conductivity, which are directly comparable to the two directions in graphene measured here .The equivalent cross-plane resistance in MWCNTs is usually higher than graphene as the π-orbital overlap reduces.It has been assumed that beyond three shells there is very little change in the total resistance of MWCNTs.Our method presented here only modifies the sheets directly out-of-plane but it could be possible via multi-probe manipulation to also pull sheets laterally, altering the π-orbital overlap, while still measuring the change in the cross-plane conductivity.We have demonstrated that electrostatic manipulation can be used to repeatedly separate out layers of few-layer graphene, and measure the change in transport as local point ripples are formed and stretched.These increasing resistances resulting from microscopic corrugations are large enough to account for reported low charge carrier mobility in graphene.As they occur here without any change in doping inside an ultra high vacuum chamber they add further weight to the cause of low charge carrier mobility in graphene resulting from disorder-induced scattering potentials rather than dopant-induced changes.The method can be extended to study other rippling effects including mechanical properties, the formation of electron-hole puddles and the formation of a band gap .Controlled rippling in graphene could also lead to spin-based devices , and all-graphene strain based devices .Changes in the in-plane and cross-plane anisotropy could lead to mechanically-gated graphene devices .The method can also separate out and count the number of layers of few-layer graphene.In practical few-layer graphene devices the electrical , mechanical and thermal properties of the graphene are dependent on the number of layers and this method provides an alternative way to determine the number of layers in few-layer graphene.Strain can alter the flexural modes of graphene, or when vibrating the induced strain can alter the material properties.Here we demonstrate a way to controllably strain graphene and study directly the change in transport properties, offering a way to study the same effects that occur when graphene is integrated in to NEMS. 
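The step-detection, clustering and layer-counting analysis described above can be illustrated with the short Python sketch below, which finds abrupt current drops in each retraction trace, pools the plateau levels over repeated traces, clusters them in one dimension and reports the number of clusters as the layer count. The synthetic traces, thresholds and greedy clustering are assumptions for illustration; they are not the custom step-detection algorithm or the non-linear least-squares fitting code used in this work.

```python
import numpy as np

def detect_current_drops(current, min_drop):
    """Indices i at which current[i+1] - current[i] falls by more than
    min_drop, i.e. where a graphene sheet is taken to detach from the tip."""
    return np.where(np.diff(current) < -min_drop)[0]

def cluster_levels(levels, tol):
    """Greedy one-dimensional clustering of plateau currents pooled over
    repeated traces; a new cluster starts when the gap to the previous
    (sorted) level exceeds tol."""
    clusters = []
    for lvl in sorted(levels):
        if clusters and lvl - clusters[-1][-1] <= tol:
            clusters[-1].append(lvl)
        else:
            clusters.append([lvl])
    return clusters

def count_layers(traces, min_drop=0.1, tol=0.12):
    """Estimate the layer number as the number of distinct current plateaus
    (one per attached sheet) seen across all retraction traces (in nA)."""
    pooled = []
    for trace in traces:
        for idx in detect_current_drops(trace, min_drop):
            pooled.append(trace[idx])   # plateau level just before the drop
    return len(cluster_levels(pooled, tol))

if __name__ == "__main__":
    # Synthetic retraction traces: six noisy plateaus of decreasing current
    # (nA), ending at zero once the last sheet detaches from the probe.
    rng = np.random.default_rng(0)
    plateaus = [2.5, 2.2, 1.9, 1.6, 1.2, 0.7]
    traces = [np.concatenate([c + 0.01 * rng.standard_normal(50)
                              for c in plateaus] + [np.zeros(50)])
              for _ in range(10)]
    print("estimated layer number:", count_layers(traces))   # expected: 6
```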
| Strain, ripples and wrinkles in graphene reduce the charge-carrier mobility and alter the electronic behaviour. In few-layer graphene the anisotropy between the in-plane and cross-plane resistivity is altered and a band gap can be opened up. Here we demonstrate a method to reversibly induce point ripples in electrically isolated few-layer graphene with the ability to select the number of layers used for transport measurement down to single layer. During ripple formation the in-plane and cross-plane sheet resistances increase by up to 78% and 699% respectively, confirming that microscopic corrugation changes can solely account for graphene's non-ideal charge-carrier mobility. The method can also count the number of layers in few-layer graphene and is complementary to Raman spectroscopy and atomic force microscopy when n ≤ 4. Understanding these changes is crucial to realising practical oscillators, nano-electromechanical systems and flexible electronics with graphene. |
581 | Carbon-nanotube-interfaced glass fiber scaffold for regeneration of transected sciatic nerve | Peripheral nerve injury is frequently encountered in the clinical setting.An injured peripheral nerve can regenerate spontaneously, but the regenerative capacity is limited in long defects and severe injury .Current medical and surgical management techniques, including autologous nerve grafts and allografts, are in most cases not sufficient for complete regeneration of the damaged peripheral nerve .Artificial nerve conduits, such as single hollow tubes, are commercially available for the connection of transected peripheral nerves, but are not thought to be suitable as a physical guide for the regeneration of a long defect .Many types of scaffold configuration and fabrication, including intraluminal microchannel formation and electrospun nanostructured scaffolds , have been attempted to give physical and biological cues for outgrowing axons and to overcome the limitations of regeneration in the peripheral nervous system.The delivery of growth factors , pharmacological agents , stem cells or Schwann cells within the nerve conduit might be other options for improving neural regeneration .Intraluminal structures for physical guidance of outgrowing axons have been developed using collagen fibers , denatured muscle tissue and aligned phosphate glass fiber bundles , though the results thus far have proved unsatisfactory.Carbon nanotubes have unique chemical, mechanical, structural and electrical properties that make them attractive for the repair and regeneration of tissues, including nerves, and functionalized CNTs have also been applied to stroke and spinal cord injury models .A body of key literature has already demonstrated the significant and profound effects of CNTs, particularly on nerve cells and even stem cells, with regard to their neurite outgrowth and neuronal differentiation , and CNT-based substrates have been suggested as potential agents for the stimulation of neuronal functions and the repair and regeneration of damaged and diseased neural tissues .The nanotopographical and biochemical features and electrical conductivity of CNTs may mediate neural modulation .Therefore, CNTs are expected to have synergistic effects on peripheral nerve regeneration when interfaced with an intraluminar structured scaffold.However, most of the studies mentioned were performed in vitro, and there is little evidence about the in vivo functions of CNT-interfaced biomaterials in nerve damage models.Therefore, we show here for the first time the in vivo effects of CNT-interfaced substrates on nerve regeneration using a transected rat sciatic nerve model.For this, we chemically linked functionalized CNTs onto the surface of aligned PGF bundles, aiming at utilizing CNTs as an interfacing material while the aligned fiber bundles are expected to function for physical guidance.Our previous studies on PGF have shown that aligned PGFs within a collagen scaffold were effective in guiding nerve tissues in a transected rat sciatic nerve model as well as in a transected rat spinal cord injury model .PGFs, a class of optical glasses composed of metaphosphates of various metals, offer biocompatibility and tailored directionality; as such, they are considered to be suitable for the regeneration of tissues requiring directional guidance, including muscle and nerve .We implanted a CNT-interfaced PGF neural scaffold in a 10 mm transected sciatic nerve for 16 weeks and the effects on axonal guidance, reinnervation of 
muscles and the electrophysiological functions were delineated and compared with the findings for a non-interfaced PGF scaffold.It is hoped that this first in vivo study using a CNT-interfaced biomaterial scaffold will provide some informative and pioneering concepts on the possible utility of CNT interfacing as a novel guide and scaffold for the repair and regeneration of nerve tissues.The composition of phosphate glass was P2O5–CaO–Na2O–Fe2O3, with a 50–40–5–5 mol.% ratio.The generation of microfiber bundles of the phosphate glass has been described in detail elsewhere .Produced microfibers were aligned using a microcomb, fixed on one end with heat-melted poly solution and then dried.The aligned microfibers were cut to a length and width of about 18 mm, then fixed on the other end with PCL, which can be directly applied in both in vitro and in vivo experiments.Together with the microfiber form, a disc of the phosphate glass was also prepared for characterization of the surface modification of the phosphate glass, after sintering phosphate glass powder of the same composition.The aligned PGF bundle was interfaced with CNTs, so that it could play the role of a guiding substrate for the neural cells, as depicted in Fig. 1A.The series of chemical reactions for this CNT tethering is shown schematically in Fig. 1B–D. First, the glass surfaces were positively charged with amine residues.The glass microfiber bundles and discs were pretreated with 1 N hydrochloric acid for 5 min, treated with 2.5% 3-aminopropyl-triethoxysilane at pH 5.0 for 10 s, then dried with a heat gun 10 times.CNT solution was prepared after carboxylation of raw CNTs by the acid oxidation method.Briefly, 0.5 g of CNTs was added to H2SO4/HNO3 1:1 aqueous solution and refluxed at 80 °C for 2 days, followed by filtration through a 0.4 μm Millipore membrane.The resultant carboxylated CNTs were washed and dried under a vacuum, then dissolved in ethanol to a concentration of 0.0025 wt.%.The aminated glass bundles and discs were then soaked in the CNT–COOH solution with 0.006 mM 1-ethyl-3- carbodiimide hydrochloride at room temperature for 3 h to enable amide bonds to form.The CNT–PGF surface was further functionalized with amine groups by carbodiimide crosslinking with 0.1 M ethylenediamine and 0.012 mM EDC at pH 5.0 and room temperature for 2 h to leave amine groups at the surface of the CNT–PGF substrate.Samples were rinsed with a series of ethanol solutions and distilled water to remove excess chemical byproducts, before being sterilized first in 70% ethanol and then under UV irradiation for further biological assays.The aminated CNT–PGF substrate was then incorporated into cylindrical nerve scaffolds.The scaffolding of the microfiber bundles was carried out as a two-step process: first wrapping them around a biopolymer nanofiber mat and then placing it within a porous biopolymer cylindrical tube.First, a PLDLA electrospun nanofiber mat was prepared.PLDLA solution in chloroform was electrospun onto a high-speed rotating metal collector to gather up aligned PLDLA nanofibers.The electrospinning conditions were a 1.5 kV cm–1 electric field strength and a 0.1 ml min–1 injection speed.The microfiber bundles were placed onto the nanofiber mat, which was then rolled up to wrap the bundles completely.The number of microfibers wrapped within the nanofiber mat was determined to be 900 ± 36.The nanofiber-wrapped microfiber bundles were then placed within a PLDLA cylindrical tube.The PLDLA tube was produced by the method described 
elsewhere with a slight modification .In brief, 0.2 g of PLDLA and 1 g of ionic liquid were dissolved in 10 ml of dichloromethane, in which a glass tube was immersed to coat it with a thin layer of the PLDLA–ionic liquid.After completely drying, the ionic liquid was selectively dissolved in DW by gentle washing, to leave a porous structured PLDLA cylindrical tube.The identification and quantitative analysis of chemical reaction were accomplished with a zeta potential analyzer, Fourier transform infrared spectrometry, X-ray photoelectron spectroscopy and thermogravimetric analysis.The morphology of the samples was examined by field emission scanning electron microscopy and transmission electron microscopy.The water wetting property of the samples was examined by contact angle analysis.The electrical conductivity was analyzed using a high-resistance measurement.The physical and chemical stability of the CNTs linked to the PGF surface were examined.For the physical stability, microfiber bundles were treated with ultrasound for 10 min, after which the CNTs’ existence and status on the surface were observed by FESEM.The chemical stability was observed by soaking the sample in DW for periods of up to 28 days.At predetermined times, the sample was taken out and the surface status was examined by FESEM.For the in vitro study, aligned microfiber bundles were used by fixing the ends of bundles with PCL to a length and a width of about 18 mm for a 12-well cell culture system.First, the effects of the any extracts from the CNT–PGF bundles on the cell viability were examined using the PC12 cell line.For this, the microfiber bundles were incubated in the culture medium, which consisted of α-modified Eagle’s medium, 10% fetal bovine serum, 100 U ml–1 penicillin and 100 μg ml–1 streptomycin, for either 7 or 14 days at 37 °C.After each period, the extract medium was mixed with the normal culture medium at varying ratios to prepare graded concentrations of the extracts.The PC12 cell line, derived from a pheochromocytoma of the rat adrenal medulla, were grown in normal culture medium at 37 °C in a humidified atmosphere of 5% CO2.Cells were cultured for 3 days in culture media containing 7 or 14 day dissolved solution.The cell viability was analyzed by means of a Cell Counting Kit-8.After reaction for 3 h, the colored formazan product was read at an absorbance 450 nm using a microplate absorbance reader.The test was carried out in triplicate.Next, we tested the effects of CNTs on the neurite outgrowth of primary neurons using dorsal root ganglion cells.Thoracic- and lumbar-spine-level DRG neurons from 6 week old Sprague–Dawley rats were excised, collected in Hanks’ balanced salt solution and prepared for primary culturing as previously described .CNT–PGFs of approximately 20 mm length were arranged longitudinally on coverslips, both ends attached using liquid PCL and plated onto culture dishes.PGFs without CNTs and coverslips without PGFs were used as a dual control.The coverslips were then coated with 20 μg ml–1 poly-d-lysine and 10 μg ml–1 laminin, and placed in the wells of a 12-well plate.DRG neurons were mixed in culture medium with 10% FBS and 1% penicillin/streptomycin, placed in a 37 °C/5% CO2 incubator and harvested after 4 h. 
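Returning briefly to the CCK-8 viability readout described above, the short sketch below shows one plausible way of reducing the triplicate 450 nm absorbance readings to relative viabilities, by subtracting a blank and normalizing to cells grown in plain culture medium. The absorbance values, blank and group labels are hypothetical and are not data from this study.

```python
# Hypothetical reduction of CCK-8 absorbance readings (450 nm, triplicate)
# to relative viability: blank-subtract, then express each extract group
# as a percentage of cells grown in plain culture medium.
import numpy as np

BLANK = 0.08                        # medium + CCK-8 reagent, no cells (assumed)
a450 = {
    "control (0%)":        [1.02, 0.98, 1.05],
    "CNT-PGF extract 1%":  [1.04, 1.01, 1.07],
    "CNT-PGF extract 10%": [1.10, 1.08, 1.12],
    "CNT-PGF extract 30%": [1.18, 1.15, 1.20],
}

corrected = {k: np.array(v) - BLANK for k, v in a450.items()}
control_mean = corrected["control (0%)"].mean()

for group, vals in corrected.items():
    rel = 100.0 * vals / control_mean
    sem = rel.std(ddof=1) / np.sqrt(rel.size)
    print(f"{group:22s} viability = {rel.mean():5.1f} % +/- {sem:.1f} (SEM)")
```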
Thus maintained DRG neurons were directly seeded onto each sample and then cultured for periods of up to 3 days, with refreshment of medium every 24 h.At each culture period, the slides were fixed with 4% paraformaldehyde in 0.12 M phosphate-buffered saline and stained.The primary antibody for axons was mouse SMI312 monoclonal antibody and the secondary antibody was fluorescein isothiocyanate-conjugated goat anti-mouse IgG.The stained slides were treated with PBS containing 4′-6-diamidino-2-phenylindole and coverslipped with Vectashield®.For the purposes of a quantitative analysis, the 15 longest SMI312-positive neurites were selected under confocal microscopy.The maximal neurite length was measured using NIH ImageJ software and NeuronJ plugins , and averaged according to the groups and periods.Fifteen SMI312-positive DRG neurons in each group and period were randomly selected, and the number of branch points which arose from each neuronal cell body was manually counted and averaged.The number of DRG neurons on each slide was also counted, and a total of three slides per group were used for analysis.All of the measurements were performed by one observer blinded to the group and time period.For the in vivo tests, the CNT–PGF 3-D scaffolds wrapped with PLDLA nanofiber and placed into PLDLA cylindrical tube were used.The scaffold dimensions were an inner diameter of 0.8 mm, an outer diameter of 1.0 mm and a length of 12 mm.The CNT-free PGF scaffold, prepared by the same method as the CNT–PGF scaffold, was used as the comparison group.Adult female SD rats were employed, strictly observing all animal care and surgical procedures as approved by the Institutional Animal Care and Use Committee of Dankook University.During the experiment, the animals were housed individually at a constant temperature and humidity without restriction of food and water.Surgery was performed under isoflurane.After the skin and subcutaneous layers around the left hip joint had been incised, the left sciatic nerve was exposed.The sciatic nerve was transected completely from a point 5 mm distal from the left hip joint and removed, leaving a 10 mm gap.Just after injury, both ends of the transected sciatic nerve were inserted about 1 mm into a 12 mm long PGF or CNT–PGF scaffold, which was then tied to the epineural sheath using 10-0 Nylon.For a positive control, an autologous nerve graft was performed using a 10 mm long transected sciatic nerve following a 180° rotation and reattached with 10-0 Nylon.The muscle, subcutaneous layers and overlying skin were closed with silk.The CNT–PGF- and PGF-implanted rats were sacrificed 16 weeks after implantation.A total of40 rats were sacrificed throughout the study.For the purposes of a histological analysis, all of the animals were deeply anesthetized, transcardially perfused with saline, and fixed with 4% paraformaldehyde.The injured sciatic nerve was removed, postfixed with 4% paraformaldehyde and immersed for 3 days in 30% sucrose solution.The tissues were embedded in M1 compound and sectioned sagittally or axially on a cryostat at 16 μm.Sections were treated with 0.2% Triton X-100 in 2% BSA/PBS solution and blocked with 10% normal serum.Primary antibodies were incubated overnight at 4 °C and secondary antibodies were incubated for 2 h at room temperature.Sections were treated with PBS containing DAPI, coverslipped with Vectashield® and observed by confocal microscopy.Whole SMI312-positive axons at the distal stump were counted in the transverse sections; counting was carried out 
using NIH ImageJ software and combined fully and semi-automated methods were used for nerve morphometry, as described previously .After completion of the electrophysiological evaluation, the gastrocnemius muscles of the injured site were dissected, frozen in liquid-nitrogen-cooled isopentane and cryosectioned at 10 μm.Hematoxylin and eosin staining was performed on the gastrocnemius muscles in the autologous-nerve-grafted group and the CNT–PGF and PGF scaffold-implanted groups at 16 weeks.Slides were dehydrated, cleared, mounted in DPX, and observed under a microscope.Sections from the belly of the gastrocnemius muscles of the injured site were ATPase stained to determine the muscle fiber type in the autologous-nerve-grafted group and the CNT–PGF and PGF scaffold-implanted groups at 16 weeks.The sections were prepared for staining by preincubation in barbital acetate buffer, followed by incubation in ATP solution.They were then washed with 1% calcium chloride solution, incubated with 2% cobalt chloride and washed in 0.005 M sodium barbital.For visualization, sections were immersed in 2% ammonium sulfide solution followed by rinsing in DW, dehydrated in an ethanol series, cleared with xylene, mounted in DPX and observed under a microscope.Stained muscle sections representing four different rats within the same group were selected for analysis, the cross-sectional area of the gastrocnemius muscle fibers was measured using NIH ImageJ software, and combined fully and semi-automated methods were used for nerve morphometry .Motor nerve conduction studies were performed for all of the experimental and control groups at 16 weeks post-implantation.The animals were anaesthetized with isoflurane, and placed on a warmed heating pad.The surrounding adipose and muscle tissues were carefully removed to expose the sciatic nerve.Electrical stimulation was applied by means of electrodes proximal to the nerve graft or scaffold.The stimulation mode was set to pulse; the active surface electrode was placed in the gastrocnemius muscle belly of the injured site, the reference surface electrode near the distal tendon and the ground electrode in the tail.Amplification and recording were accomplished using a data acquisition system; specifically, the signals were recorded using Labchart 7 software connected to a Bio-amplifier.A notch filter incorporating a band-pass filter set to 1–5000 Hz was utilized to remove 60 Hz of noise from the signals.The peak-to-peak amplitude and onset latency of the compound muscle action potentials were measured for the autologous-nerve-grafted group and the CNT–PGF and PGF scaffold-implanted groups according to the intensity of stimulation.Statistical analyses were performed using PASW Statistics 18.The Kolmogorov–Smirnov test was conducted to reveal the normal distribution of all quantitative data from the biomaterial properties and the in vitro and in vivo studies.The Kruskal–Wallis test was performed to compare the contact angles of phosphate glass disc and functionalized CNT–PGD, the PC12 cell viability cultured in 1%, 10% and 30% PGF and carboxylated or aminated CNT–PGF, and the number of survived DRG neurons cultured on plain dish, PGF and CNT–PGF.Bonferroni correction was also used to pair groups after the Kruskal–Wallis test.One-way analysis of variance with the Duncan post hoc test was conducted to compare the conductivity measurements of PGD and functionalized CNT–PGD, and the maximal neurite length and branch numbers of DRG neurons cultured on plain dish, PGF and 
CNT–PGF.The Mann–Whitney U-test was performed to compare the quantitative results of axonal and muscle histology and electrophysiology of the PGF and PGF–CNT scaffold-implanted groups.All error bars in figures related to the standard error of the mean, and statistical significance was set at p < 0.05.The CNTs used in this study were carboxylated by acid treatment and their properties are presented as Supplementary data.Unlike raw CNTs, which are not readily soluble in ethanol, the carboxylated-CNTs showed excellent solubility, with the solvent stability lasting for months.Zeta-potential measurements revealed a highly negatively charged surface, which was explained by the presence of a large number of carboxylic groups.Fourier transform infrared spectroscopy confirmed the development of carboxylic groups in the acid-treated CNTs and the XPS results showed a higher oxygen peak related to the carboxylic group.TGA showed a difference in thermal degradation behavior between the two groups, with more weight loss in the carboxylated CNTs, suggesting that thermal weight loss occurred in the carboxylic groups.A TEM image of the CNTs showed that acid treatment decreased the wall thickness of the CNTs slightly.The results clearly show that the multi-walled CNTs used in this study were carboxylated well and highly negatively charged.Using the carboxylated CNTs, the surface of the PGFs was changed through a series of chemical reactions, and the CNT–PGF bundles were then developed into 3-D nerve scaffolds.Fig. 2 shows scanning electron microscopy images of the samples during the process.After the melt-spinning of glass powder, PGFs were easily generated aligned in a single direction and were very uniform in size.The average size of the PGFs, as analyzed by SEM and calculated by the ImageJ image analysis program, was 22.32 ± 3.73 μm.This is within the optimal range for neuronal cell attachment and culturing on our phosphate glass poles, given that the reported diameters of the neuronal cell bodies are 5–20 and 5–50 μm for PC12 cells and DRG neuronal cells, respectively .We optimized the conditions for the tethering of carboxylated CNTs on PGF bundles by varying the concentration and frequencies of APTES treatment and the concentration of the CNT solution.A homogeneous monolayer-coated surface could be achieved on the CNTs on the PGFs by first using a low-concentration APTES solution while enabling the PGF-amination reaction to occur three times, then by using a diluted and better-dispersed CNT solution while enabling the amide reaction to occur five times.A highly non-homogeneous CNT coating is achieved when using a thick CNT solution, and this also happens when the APTES treatment is not properly carried out.The CNT-interfaced PGFs were subsequently aminated via the carbodiimide reaction using a diamine solution.The amination process was confirmed to preserve the morphology of the CNTs interfaced with the PGFs well.Next, the CNT–PGF bundles were constructed into a 3-D scaffold, first by rolling onto a PLDLA aligned nanofiber and then placing it within a PLDLA microporous tubular conduit.The morphology of the 3-D nerve scaffold containing the microfibers depicted in Fig. 2D shows the functional arrangement of each component, i.e. 
the microfibers packed inside, the thin wrapping sheet and the slightly thicker outermost layer. A higher magnification of the inner thin sheet revealed a nanofibrous morphology aligned parallel to the microfibers. The outer shell presented a highly microporous structure, with pore sizes of 50–100 μm. The physicochemical properties of samples that underwent each chemical modification step were then analyzed in depth. The chemical analyses were carried out using a disc of the same phosphate glass composition. First, the XPS signals showed energy peaks of the atoms present on the outermost surface. The chemical shift from 284.63 to 285.07 eV, a difference of 0.17–0.44 eV in the C1s carbon binding energy, is associated with the CNT modification, in contrast to the CNT-free glass substrate. The XPS spectra of the CNT-modified phosphate glass reflected the highest carbon and oxygen atom contents. It was obvious that this was due to the sp2 carbon atoms of the CNT molecules covalently bound to the glass. The amination of the CNT–glass showed an increased percentage of nitrogen, suggesting that the open-end structures of the CNT molecules and the functional groups bonded to the nanotubes' end loops on the discs. Fig. 3B demonstrates that the surface wettability changed according to the surface chemistry. The phosphate glass showed the highest hydrophilicity, owing to the abundant ionic groups on its surface, whereas the APTES-treated glass became hydrophobic owing to the creation of silane groups. The CNT tethering increased the hydrophilicity, and the subsequent amination increased it further. As the electrical conductivity is one of the distinct advantages of CNT interfacing, we calculated this value by measuring the resistance of each sample, as shown in Fig. 3C. The conductivity of the CNT-free phosphate glass and APTES-treated glass samples was approximately 10−13 S cm–1, like that of insulators. However, the CNT interfacing substantially increased the conductivity to approximately 10−5–10−6 S cm–1, and the post-amination samples showed a similar level. Next, the stability of the CNTs tethered onto the PGFs was examined by means of either ultrasound sonication for 10 min or soaking in water for up to 28 days, as shown in Fig. 4.
The SEM morphology of the microfibers after 10 min of ultrasonic treatment showed little change in the layered CNT morphology from that before sonication. Moreover, SEM images of the microfibers during water immersion for varying periods showed that the CNTs remained soundly present on the glass surface, with a morphology similar to that before immersion. Interestingly, the PGFs did not show any significant surface erosion or resultant CNT detachment. PC12 cells were cultured for 3 days in culture media containing the 7-day or 14-day PGF or CNT–PGF dissolved solution at different concentrations. According to the results, the PGF and CNT–PGF dissolved solutions showed no cytotoxicity, and PC12 cells even showed better viability in the carboxylated or aminated CNT-interfaced PGF soaking solution than in any of the PGF dilutions. PC12 cell viability was significantly improved as the dilution percentage increased from 1% to 30% in the 7-day dissolved solution, and also tended to increase with the concentration in the 14-day dissolved solution. Based on this cellular toxicity study, we next assessed the neurite outgrowth behaviors of primary neurons on the CNT–PGFs. Primary cultured DRGs extracted from 6-week-old SD rats were placed either on CNT–PGFs, on PGFs without CNTs, or in a plain dish, and cultured for 3 days. Whilst neurites outgrew randomly in the control dish, neurites extended directionally on the microfiber substrates, and the extension was much greater on the CNT–PGFs than on the PGFs. Analyses of the neurite outgrowth revealed significant differences between the groups. The maximal neurite length was significantly higher on the CNT–PGFs than on the PGFs or in the plain dish; further, the branch numbers per DRG did not differ between the CNT–PGFs and the PGFs, and the number of attached DRGs at 3 days was greater on the CNT–PGFs than on the PGFs. The produced 3-D scaffolds were implanted into the transected stumps to fill a 10 mm gap after complete transection of the sciatic nerve of 12-week-old SD rats. We found that SMI312-positive axons crossing the implanted scaffold and S100-positive Schwann cells along the axons were more numerous in the CNT–PGF group than in the PGF group, and the number of SMI312-positive axons at the distal stump of the CNT–PGF group was significantly higher than that in the PGF group. The cross-sectional area of the gastrocnemius muscle was significantly larger in the CNT–PGF group than in the PGF group. Following CNT–PGF scaffold implantation, the mean proportion of type I fibers in the gastrocnemius muscle was decreased and that of type IIa fibers was increased, more so than with the PGF scaffold, but without statistical significance. The onset-to-peak amplitude of the CMAPs in the gastrocnemius muscle was also larger in the CNT–PGF group than in the PGF group. In this study, we demonstrated for the first time the in vivo functions of CNT-interfaced implants for nerve regeneration in a rat sciatic nerve model. For this, we designed a novel CNT-tethered nerve conduit based on phosphate glass microfibers combined with polymeric scaffolds. In particular, the CNTs linked to the phosphate glass fibers were functionalized by a series of reactions involving carboxylation and subsequent amination; the amination was aimed at providing the outermost CNT surface with amino groups, which are considered a more favorable surface, at least compared with a carboxylated surface, for neuronal cell behaviors including cell adhesion, neuronal differentiation of neural stem cells, and in vivo recovery
after ischemic stroke .Among other surface properties that may be affected by the CNT modification, including increased roughness, altered chemistry, and hardness, the conductivity is believed to be the most fascinating aspect of the conduits for neural applications.In fact, whilst free phosphate glass samples showed a conductivity of ∼10^−13 S cm^−1, like insulators, the CNT-interfaced samples substantially increased the conductivity to a range of ∼10^−5–10^−6 S cm^−1.This result suggests that the monolayer coverage of CNTs provides the phosphate glass substrate with a more electrically conductive surface that possibly alters and even stimulates neuronal cell responses.We subsequently structured the CNT-interfaced phosphate glass fibers into a 3-D implantable nerve conduit by bundling the CNT–phosphate glass fibers, followed by wrapping onto a PLDLA nanofiber and then embedding within a porous PLDLA tube.Consequently, the CNT–glass fibers were stably positioned within a tubular structure whose porous walls can interact freely with the outer environment, which is beneficial for mass transport and blood circulation and enables the CNT–glass fibers to function effectively as neural guidance.In fact, when free CNTs are applied directly to neural cells, many studies have reported cytotoxicity and genotoxicity .Therefore, the surface-tethered CNTs are considered to be much safer as they avoid rapid and direct cellular internalization while providing electrical stimuli to cells in the intercellular and/or cell–matrix interfacing reactions.As to the stability of the CNTs on the phosphate glass fiber, we confirmed that the currently implemented CNTs, covalently linked to the phosphate glass substrate, were very stable physically and chemically, as they did not dissolve from the surface over the in vitro test period.Furthermore, the in vivo findings did not reveal any toxic responses related to the CNTs.Phosphate-based glass is usually soluble, but in this study we used a P2O5–CaO–Na2O–Fe2O3 composition with the lowest sodium and the highest iron content, which has the least solubility.This alleviates possible concerns about the premature release of CNTs and resultant cytotoxicity and, rather, allows the CNT–PGF system to be anticipated as a biocompatible nerve-guiding matrix.The CNT-interfaced phosphate glass fiber scaffolds showed good PC12 cell viability in the indirect dilution study.In particular, the improved PC12 cell viability with the diluents demonstrated the possible role that ionic extracts from the glass fibers played in stimulating cell metabolism.In fact, the phosphate glass fiber composition used herein has previously been shown to release sufficient amounts of ions such as calcium and phosphate, which is favorable for cell viability and blood vessel formation .Schwann cells are important in supporting axonal outgrowth and remyelination, and CNTs may affect the survival and proliferation of Schwann cells following peripheral nerve injury .In previous in vitro studies, single-walled CNTs in a three-dimensional hydrogel showed no toxicity to Schwann cells , and multi-walled CNT-containing collagen/PCL fibers could support Schwann cell adhesion .Under in vivo conditions, single-walled CNT-based silk/fibronectin nerve conduits enhanced S-100 expression of Schwann cells .We found in the in vivo study that Schwann cells along CNT-interfaced PGFs were more numerous than those on CNT-free PGFs, but the exact mechanisms of the effect of CNT interfacing on the survival and proliferation of Schwann cells need to be delineated in a further
study.With regard to this ionic role on nerve cells, more in-depth studies will be needed in the future; this is considered an interesting area of study for developing novel scaffolds for neural regeneration.As discussed, ionic release from the phosphate glass fiber would be possible over a long period, which, however, is not considered sufficient to result in the dissolution of CNTs from the surface.Thus the CNT-interfaced outermost surface of the phosphate glass fiber implant would be stable at least over the test period, facilitating beneficial cellular interactions.In fact, in the direct culture of DRG cells, the glass fibers demonstrated a nerve guidance role, with a significant decrease in neurite branching.Previously, we also found that DRG neurites grew actively along PGFs, which provided physical guidance and offered excellent cellular compatibility .More than this guidance role, the CNT interfacing on the glass fiber significantly enhanced the level of cell adhesion and the neurite outgrowth length.The exact mechanism of the effect of CNTs on neuronal growth is yet to be elucidated .It is first thought that the CNTs provided a nanotopological cue to improve neuronal cell adhesion.CNT substrates have been shown to stimulate cell adhesion-related gene expression in vitro and the subsequent cell proliferation .Some researchers have suggested that CNTs activate extracellular signal-regulated kinase signaling and phospholipase C signaling pathways .The high electrical conductivity of CNTs might also affect neuronal regeneration through the modification of ionic transport across the plasma membrane, by which ECM protein conformation and synthesis are changed , and neurotrophic factor release from neuronal cells is stimulated .Therefore, the integration of CNTs with the phosphate glass fiber is thought to have a synergistic effect on DRG functions in terms of providing physical guidance as well as stimulating cell adhesion and neurite outgrowth.The physical, chemical, topological and electrical properties provided by the CNT–phosphate glass are thus considered promising cues for neuronal functions and possible nerve regeneration.We demonstrated for the first time the in vivo performance of the CNT-interfaced scaffolds using a completely transected peripheral nerve injury model in rats.While most studies on CNT-based substrates have focused on in vitro cell behaviors, little is known about the in vivo functions of CNT scaffolds.In fact, only a few recent studies have reported striking findings on the effective roles of CNTs in the in vivo central nervous system, including brain stroke and spinal cord injury models .An aminated CNT solution directly injected into a rat brain in a stroke model significantly enhanced neural protection and functional restoration .CNTs functionalized with polyethylene glycol, directly injected into the injured spinal cord of rats, effectively reduced lesion volume, increased the number of neurofilaments and improved functional restoration .These pioneering in vivo studies on CNTs, however, showed the function of CNTs added directly to the injured sites in solution form, instead of reporting the role of CNTs as scaffolds or substrates.Therefore, this study is, to the best of our knowledge, the first in vivo report of the performance of CNT-based scaffolds.Here we tested the function of the CNT-interfaced glass fibers in a peripheral nerve injury model, which is a commonly encountered clinical injury with significant clinical needs, and the
outcome can also be applied in parallel to the central nervous system in future studies.In previous studies, scaffolds containing aligned or structured intraluminal guidance enhanced peripheral and central nerve regeneration ; we also observed the role of the phosphate glass fibers in physically guiding nerve regeneration.More than this, we found clear evidence that the CNT interfacing improved the function of the intraluminal structured nerve conduit.The number of lesion-crossing axons was significantly increased by the CNT interfacing.In fact, phosphate glass fiber conduits inside a collagen scaffold have also shown a very limited effect as an intraluminal structure during the early stage of up to 8 weeks, with no further functional restoration at 12 weeks .The CNT-interfaced phosphate glass fiber scaffold, however, prolonged the effects on axonal regeneration up to 16 weeks.CNTs can also play roles in drug delivery systems and stem cell differentiation.A CNT-mediated drug delivery system was shown to effectively transport siRNA or other proteins to the target tissue and to achieve functional restoration following brain lesion , and, in combination with stem cell transplantation, also improved functional recovery and enhanced stem cell differentiation .Furthermore, we found that the CNT-interfaced PGF scaffold was effective in restoring motor functions electrophysiologically.The motor nerve conduction study showed that the CMAP was significantly higher with the CNT interfacing.This indicates that scaffold-crossing axons successfully reinnervated the gastrocnemius muscles and that the muscle was functionally improved as a result of the CNT interfacing.The proportion of slow to fast muscle fiber types usually changes following denervation and reinnervation, with more fast fibers , and we found that this tendency was enhanced in rats receiving a CNT-interfaced scaffold.However, there was no clear evidence of any change in the muscle fiber types of reinnervated gastrocnemius muscles following complete transection of the sciatic nerve, and this result was not statistically different from that in rats receiving the PGF scaffold or those receiving autologous nerve.Although we clearly observed the effectiveness of CNT interfacing in peripheral nerve regeneration, the phosphate glass fiber conduit used herein is not considered to provide conditions any better than those of an autologous nerve graft, as deduced from the series of in vivo results.This is due primarily to the limitations of the morphological and physicochemical properties of the phosphate glass fiber bundles.Firstly, although the phosphate glass fibers were developed to have an average diameter of 20–30 μm, the interspacing between the fibers appeared to be somewhat smaller than the optimal spacing for neuronal growth.Secondly, the elasticity of the glass fibers was intrinsically higher than that of the much softer nerve tissues, and this may not provide the best conditions for neuronal development.To this end, further study will be needed to develop nerve conduits with better morphological and elastic properties, with which the effects of CNT interfacing are envisaged to be synergistic.Furthermore, as the CNTs interfaced at the edges of the nerve conduit have the potential to carry therapeutic molecules , including neurotrophic factors and neuroprotective/anti-inflammatory drugs, combining this drug delivery strategy with the CNT-based nerve conduits should improve the capacity to regenerate nerve tissues, possibly to the status of an
autologous nerve graft.Carbon nanotubes were successfully interfaced on phosphate glass fibers for nerve guidance and then incorporated into a 3-D scaffold which possessed physicochemical integrity with good cell viability and neuronal interactions.These first in vivo findings on carbon nanotube-interfaced nerve implants, assessed in a rat sciatic injury model, demonstrate the effective roles of the carbon nanotubes in the nerve regeneration process.This study is believed to open up a new class of neural scaffolds based on an electrically conductive nanomaterial – carbon nanotubes.No potential conflict of interest relevant to this article was reported. | Carbon nanotubes (CNTs), with their unique and unprecedented properties, have become very popular for the repair of tissues, particularly for those requiring electrical stimuli. Whilst most reports have demonstrated in vitro neural cell responses of the CNTs, few studies have been performed on the in vivo efficacy of CNT-interfaced biomaterials in the repair and regeneration of neural tissues. Thus, we report here for the first time the in vivo functions of CNT-interfaced nerve conduits in the regeneration of transected rat sciatic nerve. Aminated CNTs were chemically tethered onto the surface of aligned phosphate glass microfibers (PGFs) and CNT-interfaced PGFs (CNT-PGFs) were successfully placed into three-dimensional poly(l/d-lactic acid) (PLDLA) tubes. An in vitro study confirmed that neurites of dorsal root ganglion outgrew actively along the aligned CNT-PGFs and that the CNT interfacing significantly increased the maximal neurite length. Sixteen weeks after implantation of a CNT-PGF nerve conduit into the 10 mm gap of a transected rat sciatic nerve, the number of regenerating axons crossing the scaffold, the cross-sectional area of the re-innervated muscles and the electrophysiological findings were all significantly improved by the interfacing with CNTs. This first in vivo effect of using a CNT-interfaced scaffold in the regeneration process of a transected rat sciatic nerve strongly supports the potential use of CNT-interfaced PGFs at the interface between the nerve conduit and peripheral neural tissues. |
582 | Changing times: Migrants’ social network analysis and the challenges of longitudinal research | Migrants are constantly building new ties in new places as well as negotiating existing long distance ties.In this respect, SNA can be useful in challenging assumptions of deterritoriality – showing not only that place still matters, but also how relationships are developed and sustained in specific places as well as between geographically dispersed places.Thus, an SNA approach can help to interrogate the ‘death of distance’ discourse by analysing the impact of distance and physical separation on how social ties are weakened or maintained over time and how their content and meaning – as well as their practical use – can change.SNA can also be helpful in bridging the personal and structural dimensions in migration research, by providing a meso level of analysis.However, it is also important to connect the investigation of local and transnational networks with an analysis of the broader social, economic and political contexts in which these take shape; in other words, connecting the micro and the meso with the macro level.Although SNA has been growing exponentially across a range of disciplines, surprisingly this approach has not been widely used by migration scholars.Only in recent years have migration researchers begun to use SNA systematically.As discussed elsewhere in this special issue the specificities of migration pose not only challenges, but also opportunities for network researchers.Migration scholars tend often to use network concepts in a vague, descriptive way, often drawing on Putnam’s framework of bonding and bridging capital but without any detailed engagement with wider SNA tools and concepts.In recent years, however, there have been several emerging examples of how SNA could be usefully applied to researching migrant networks.Highlighting limitations with some current approaches to researching social networks, we examine the advantages, but also difficulties, of using mixed methods to explore the structure and meaning of migrant networks and how these evolve over time.Focusing on the sociological discussions of time, we consider how temporality can be adequately addressed when researching changing social relationships.Adopting a mixed methods approach to SNA may help to capture complex interactions and explore not just patterns of change, but also reasons and meanings behind temporal and spatial dynamics.Applying a reflexive framework, we draw upon examples from our longitudinal, mixed-methods research projects to consider the opportunities and challenges of researching change in migrants’ social networks through time and place.We have been researching migrant networks for over a decade and have amassed a significant body of work on processes of building new relationships in new places, sustaining long distance ties, networking in a business context, the whole networks of migrant organisations, the challenges of researching migrant networks through time.Case studies discussed in this paper illustrate the usefulness of mixed methods: firstly, in understanding the meanings behind certain network structures; secondly in going beyond ‘snapshots’ to analyse the complexity of temporality; and thirdly to appreciate the role of wider contextual factors in shaping both ego and whole networks.Hence, we argue that different combinations of quantitative, qualitative and visual methods do not just provide richer sets of data and insights, but can allow us to better connect 
conceptualisations – and ontologies – of social networks with specific methodological frameworks.The paper proceeds through five sections.The first three are largely conceptual and discuss methodological challenges of researching temporal dynamism of social relationships.The following two sections draw on our research to explore, and reflect upon methodological opportunities in researching migration using mixed SNA.The conclusion discusses the possible contribution of the sociology of time to the study of migrants’ dynamic networks.Amid on-going discussions about ‘crises’ in empirical sociology, it is necessary for social researchers to critically reflect upon how they collect and analyse different kinds of data.An abiding challenge is the notion of ‘time’ and how sociologists can adequately include temporality and dynamism within research projects.Over recent decades much has been written about time with significant contributions from scholars such as Giddens, Saldana, to name a few.In his influential structuration theory, Giddens argues: ‘An adequate account of human agency must situate action in time and space as a continuous flow of conduct’ and ‘grasp the time–space relations inherent in the constitution of all social interaction’ Giddens.Despite the influence of authors like Giddens, ‘many theorists find it difficult to maintain the temporal gaze’.Adam argues that when time is included in social studies it tends to be regarded simply as ‘the neutral medium in which events take place’.Thus, time becomes ‘commodified’, ‘decontextualized’ and reduced to the ‘empty time of calendars and clocks’.However, ‘our experience of time rarely if ever coincides with what the clock tells us’.Adam points to the challenge of ‘taking time seriously’.Time needs to be seen as ‘social’, assuming its ‘neutrality’ is problematic and limits our understanding of its complexity.Thus, Saldaña highlights the importance of looking at the social construction of time in specific socio-structural contexts.While many social scientists acknowledge the importance of time-space relations, ‘the major difficulty is to retain the complexity of time-space at both the levels of the theoretical and the empirical’.To address this challenge, Adam suggests the use of ‘timescapes’ to encompass quantitative time, the connections between space and time, and the multidimensionality of time experienced at different levels.The theoretical innovation of timescapes involves the combination of micro, meso and macro dimensions of time in order to understand ‘dynamic relationships between individual and collective lives and broader patterns of social change’.This concept provides a new way of thinking about and researching social processes and change, offering new tools to undertake longitudinal research.Adam’s approach strikes us as particularly appropriate for analysing migration processes and migrants’ social lives, allowing researchers to bring together the global social, political and economic drivers of migration, the lived experiences and actions of individual migrants, and the societal and community contexts and dynamics where migration takes place.The interconnections between these levels allow us to understand change and the factors driving change; as well as how the passing of time is experienced, internalised and presented by migrants and migrant groups.The challenge, however, is not just to collect data over time, but to triangulate rich and multidimensional data which can reflect interrelated aspects of timescapes, for 
example blending qualitative insights into micro experiences with wider macro dimensions analysed through quantitative data.To date, longitudinal research remains dominated – for historical and practical reasons – by large statistical surveys.With regard to migration studies, time series from population statistics are still widely used to look at migration trends, and panel surveys are used to examine the changing socio-demographic profiles or even the changing psychological attitudes of migrants over the years.These quantitative approaches can be effective in describing changes, but are not equally effective in exploring causes of change and the interconnections between different levels of analysis.In some cases quantitative longitudinal data is used – e.g. through cluster-analysis – to identify ‘typologies’ of migrants, thus better capturing the sheer diversity of migrants’ life trajectories; but this is still insufficient to delve into the individualised experiences of migrants, including their experience of change through time in personal and community networks.Recent years, however, have seen a growing interest in qualitative longitudinal research, as part of a broader ‘temporal turn in the social sciences’.QLR is distinguished by the deliberate way in which ‘temporality is designed into the research process making change a central focus of analytical attention’.However, in so doing it is necessary not only to acknowledge the complex and constructed nature of time but also the challenges of capturing change.This raises methodological as well as epistemological issues.For example, the ways in which participants talk about the past and future is shaped by ‘present context’.Thus, in researching social actors’ plans, hopes and aspirations one needs to be mindful of the fluidity and uncertainty of time horizons.As pointed out by Brannen and Nilsen, ‘time horizons should be discussed with reference to present activity and context and not merely as an isolated variable’ which can be measured and quantified through survey methods.Expected duration of stay, as expressed at the outset of a migration journey may be an unreliable indication of how long migrants will actually stay in the destination society.The time horizons of migration plans can change enormously as migrants’ experiences, expectations, relationships and responsibilities evolve through the life course.Hence, as discussed in the case study sections later on, we need research methods that are adequate to capture this kind of dynamism.While QLR implies a longitudinal research design from the onset, O’Reilly – with specific regard to migration research – argues that more flexible and pragmatic approaches can also allow researchers to successfully capture the dimension of time in a highly reflexive way.The specific ethnographic practice of ‘re-study’ – ‘ongoing relationships with the field, characterised by return visits’ has a long history in migration studies, though usually it is not explicitly labelled longitudinal.O’Reilly, for example, in her study of British migrants to Spain, started from an initial piece of qualitative research, entailing interviews and observations, subsequently adding a quantitative survey, which was analysed reflexively, informed by regular communication with the participants.The case studies presented in this paper also acknowledge the ‘recursive nature’ of social science and the importance of being open and reflexive about it.The researcher’s journey over a number of years, returning to the field and 
re-approaching the participants sometimes with new, complementary research tools, requires specific attention to capture change at different levels.As will be discussed through our case studies, the multi-dimensionality of temporality has implications for mixed research methods as the qualitative and quantitative elements of data collection may be formulating and capturing different notions of time.Specifically, the next section discusses the challenges and – at the same time – the importance of incorporating temporality when analysing social networks, and particularly migrants’ networks.There has long been awareness among some scholars that social relationships change over time.As observed by Snijders: ‘The idea of regarding the dynamics of social phenomena as being the result of a continuous time-process, even though observations are made at discrete time points, was already proposed by Coleman’.Moreover, Coleman’s concept of appropriability suggests that the nature of a tie and the resources flowing through that tie are not fixed but rather can be transformed.As Bidart and Lavenu note, ‘personal networks have a history.The form and structure they show today result from a construction elaborated over time’.Nonetheless, as mentioned earlier on, capturing time remains a challenge in network research.It has been argued that there is too much focus on the consequences of network properties, i.e. outcomes, and insufficient attention on their antecedents.Thus, SNA can involve static assumptions, such as that centrality values are fixed at a moment in time, or ignore that actors may seek out new ties over time.Attempts to capture networks and represent them through visualisation, such as sociograms, have resulted in mapping a single ‘snap shot in time’.However, as we discuss in later sections, visualisation can be adapted to address temporal dynamism.A related limitation of much empirical research is that it engages with longitudinal analysis through ‘interpolation between snapshots’, i.e. assuming a linear change between two points, without acknowledging that change is neither one-dimensional nor linear.Indeed, up until the 1990s, social network studies incorporating a longitudinal dimension were quite sporadic – despite some notable exceptions, e.g. Coleman, Hallinan, Bauman and Chenoweth.Longitudinal approaches became more common with the wider availability of panel data and, crucially, with the recent, rapid developments in software tools and computer models, e.g. RSIENA and TERGMs.This has led to a new field of ‘longitudinal social network analysis techniques’, largely software driven.For authors like Borgatti et al., there is a clear assumption that the way to understand dynamic relationships is through technological advancements.These approaches, however, may raise some scepticism.Scott sounds a cautious note: ‘it would be a disaster’ if new technological capabilities caused a return to descriptive SNA lacking in theoretical rigour Scott.Instead, he argues for a continued development in an analytical focus where data are used to test social theories and for further explanatory aims.Furthermore, discussing their study of Argentinean migrants in Spain, Lubbers et al. 
point out that while descriptions of structural change over time may give an insight into dynamic processes, these do not reveal the micro, dyadic processes, underlying wider, aggregate results.Rather than focusing on the ‘existence of ties’, more attention is needed to the content of ties.As Crossley notes, social networks involve a world of feelings, relationships, attractions, dependencies, which cannot be simply reduced to mathematical equations.As Ryan argued, analysing the ‘social’ aspects of networks requires consideration not only of the relative social location of alters, and the flow of resources, but also the meaning and impact of these personal relations.Migrants’ networks shape and are shaped by cultural identities, and affect and are affected by broader social and economic dynamics in the countries of origin, destination and transit.Some have argued that qualitative methods are under-utilised and indeed under-valued by social network analysts.The cultural turn in network research has created more opportunities for qualitative approaches to understanding the construction and meaning of inter-personal ties.Innovations in qualitative social network analysis attempt to bring insights from qualitative data analysis such as grounded theory to inform structural network relationships.However, some qualitative network research is criticised for relying on descriptions and narratives, overlooking the wider structural dimensions of social relationships.Indeed, Ryan et al. noted that although migration studies have widely adopted the concept of social networks, this has often been employed by qualitative researchers in a loose way, without engaging with concepts and tools of SNA.Rather than setting up a quantitative versus qualitative dichotomy, it is useful to explore how mixing methods may provide tools not only to measure the extent, but also to explore the reasons for change through time and space.Furthermore, in our own research, we found that a mixed methods approach to SNA can be helpful in addressing temporality.In the following section we explore these aspects and consider how mixed techniques, supported by visualisations, may provide important insights into dynamic social relationships.We contend not only that mixed methods provide useful insights into social networks but also that visualisation can play a key role in facilitating mixed approaches to data collection and analysis.In her comprehensive review of ‘Mixed Method Approaches to Social Network Analysis’, Edwards argues that SNA represents a specific opportunity to mix methods because of its history as an interdisciplinary field developed from sociometry and graph theory and from early ethnographic studies of the structures of personal relations.Of course, on the one hand, mixing any kinds of methods and techniques raises epistemological challenges and may cause paradigmatic tensions.On the other hand, we share the view of Crossley and Edwards that a strong argument for mixing methods arises from an ontological premise, i.e. 
that “social worlds outstrip the sociological gaze”; they are more complex than any single epistemological perspective can capture.Thus, sociologists “can achieve both a more comprehensive and a more robust perspective by combining the vantage points that different methods afford”.Specifically, the use of mixed-methods in SNA provides a more comprehensive tool-box for understanding ‘the relational condition of human life’ and can allow the researcher to reconcile the structure of a network with its content and meaning.In some cases – as argued by D’Angelo – this can produce a better reflection of the inherent nature of social networks and, in this sense, addresses a specific ontological position regarding social relations.Although ‘mixed-methods’ often are presented as being just about mixing quantitative and qualitative methods; from our experience they can also entail mixing different quantitative methods or different qualitative methods, particularly mixing ‘verbal’ research methods with visual ones.Indeed visualisation can be integrated with a range of other methods, both quantitative and qualitative, to facilitate data collection, inform analysis and illustrate results.In so doing, visualisation can be used to bring methods into dialogue with each other, not necessarily to produce agreement but also to show tensions and discrepancies between data derived through different methods and thus provide new insights for further analysis.Visualisation has been described as a means of making invisible social relationships visible, making abstract concepts more tangible.For example, sociograms offer a structured, integrated view of relationships that would not be immediately perceivable just from narratives or tables.As we demonstrate in sections below, sociograms allow both the participant and the researcher to think in terms of social structures, explore network narratives as they emerge and investigate the relations between networks and mobility.Visuals can also be used as a key tool in longitudinal research, to reflect on memories and perceptions of changing relationships through time.Thus, as argued by Tubaro et al., 2016: ‘visualisation has a decisive role to play in mixed-methods social networks studies, over and above its contribution, already acknowledged, to quantitative research’.Nonetheless, as we discuss in following sections, sociograms should not be seen as a ‘map’, a visual representation of an objective reality, but as a ‘visual narrative’.As Hogan et al. 
observed, when dealing with personal networks the respondent is the only informant on this network.This may raise concerns on the reliability of the responses, in other words ‘we are left to the mercy of a respondent’s cognitive biases’.However, this should not inhibit this kind of empirical research, but rather lead to recognition that recalled networks are cognitive networks, establishing a clear theoretical link between the questions asked and the meaning of data thus collected.Hence, we argue that visual images are never self-sufficient and work best when mixed with other sources of data, such as interview narratives or tables, to create incessant dialogue and reflection.Thus, it is important to include reflexivity as part of the mixed methods approach to ensure that researchers, as well as participants, have an opportunity to think about how different methods give rise to different kinds of data and are open to various interpretations.We maintain that reflexivity is crucial to enhancing research rigour and acknowledging limitations and challenges at each stage of the process from data collection through to analysis and presentation of findings.Adopting a reflexive approach, in the following sections, we consider these issues in our own use of visualisation in researching network dynamism.Before presenting our data we first, briefly, describe the separate research studies carried out by Ryan and D’Angelo.Louise Ryan has been researching Polish migrants in London for over a decade.During that time several different but related studies have been undertaken.Although these studies were not originally designed as longitudinal, following O’Reilly, these could be described as ‘return visits’ to the field.While networks formed a primary focus of this on-going research, sociograms were only introduced in the most recent study.Hence, while it is possible to compare interview narratives of those participants who were interviewed on repeated occasions over several years, it is not possible to compare network visualisation data.Nonetheless, the participants’ descriptions and explanations about how their networks changed since the first interview are particularly illuminating.A key aim of the most recent study was to examine evolving decision making processes about duration of migration and gradual extension of the stay over time.In particular, the study aimed to understand how inter-personal relationships both informed and reflected the unfolding migration trajectory.Thus, interview questions focused on network composition, structure and meaning and how these evolved over time as the stay abroad extended.In an effort to collect richer data on changing social relationships over time, the study used a combination of interviews and paper sociograms.Although sociograms have usually been used to present data findings, an alternative use of sociograms in data collection uses real-time, rather than ex-post, visualization by asking respondents to directly draw a network, freely or in some pre-defined framework.As discussed at length in the literature, there are many different ways of using sociograms, ranging from highly structured approaches that collect quantitative data from large numbers of participants, to qualitative approaches that elicit detailed explorations of meanings of specific social relationships.Coming from an epistemological position rooted in interpretative sociology and social constructionism, Ryan wanted to understand how networks are co-constructed in interview encounters and 
particularly in the process of populating the sociogram − not simply in the finished image as a piece of data.Using visual, as well as narrative, techniques allows different stories to be told, suggesting the complexity and multi-dimensionality, as well as the fluidity, of social relationships.The paper-based sociogram Ryan used consisted of 3 concentric circles divided into 4 quadrants and was adapted from Mary Northway’s original 1940s target sociogram (a rough mock-up of this kind of template is sketched below).Given advances in computer aided visualisation, it may be tempting to suggest that traditional pencil drawings of ego networks are obsolete.However, some researchers continue to use these simple visual tools partly because they have the distinct advantage of being completed by the participants during the interview – rather than post-hoc in a computer lab.Clearly, the sociogram is not a neutral tool for capturing a pre-existing network; rather, the design of the instrument and the questions asked by the researcher shape how social ties are visualised and explained.Interviews lasted approximately one hour.The sociogram, introduced after about 15 min, took participants approximately 15–20 min to complete – interspersed with discussion.The participants were told that the concentric circles represented degrees of closeness, with the closest or most important people nearer the centre and the less important/less close in outer circles.Participants were invited to write down the geographical location of alters so that links between emotional and geographical closeness could be explored.This was not an attempt to measure their ties but rather to understand how they represented their relationships both visually and orally.The sociogram was not intended as a standalone instrument.Meanings of ties, how they changed over time and their relative importance only made sense through discussions taking place in the interview.Ryan conducted an integrated narrative analysis of each complete interview transcript and sociogram, focusing on how a participant tells their story in words and images.Just as visual and narrative data are collected together, there is a strong rationale for analysing them together through an integrated method.This analysis thus captures the dynamic interplay between how people talk about and visualise their social ties.Analysing sociograms and interview transcripts together reveals the ‘dynamic interplay between the visual and narrative data’.The material act of visual representation provoked discussion about the nature of particular relationships.The visual tool prompted memories and stories, countering forgetfulness, and adding more alters to the network than achieved by interviewing alone.In addition, as discussed in the next section, visualisation raised questions about the ranking of alters as participants promoted and demoted friends and family relative to each other.
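To make the paper-based instrument described above more concrete, the following minimal sketch shows how a blank target-sociogram template of this kind – concentric circles of decreasing closeness crossed by four quadrants, with ego at the centre – could be generated programmatically. This is only an illustrative mock-up, not a tool used in either study; the radii, labels and file name are assumptions chosen for the example.

```python
# A minimal mock-up of a blank "target" sociogram template (three concentric
# circles of decreasing closeness, divided into four quadrants), loosely
# modelled on the paper instrument described above. Radii, labels and the
# output file name are illustrative assumptions, not values from the study.
import matplotlib.pyplot as plt

def draw_target_sociogram(radii=(1.0, 2.0, 3.0), outfile="sociogram_template.png"):
    fig, ax = plt.subplots(figsize=(6, 6))
    # Concentric circles: the innermost ring holds the closest/most important alters.
    for r in radii:
        ax.add_patch(plt.Circle((0, 0), r, fill=False, linewidth=1.2))
    # Two perpendicular lines create the four quadrants of the template.
    outer = max(radii)
    ax.plot([-outer, outer], [0, 0], linewidth=0.8)
    ax.plot([0, 0], [-outer, outer], linewidth=0.8)
    # Ego sits at the centre; participants write alters (and their geographical
    # location) into the rings by hand during the interview.
    ax.text(0, 0, "EGO", ha="center", va="center", fontsize=10)
    ax.set_xlim(-outer - 0.5, outer + 0.5)
    ax.set_ylim(-outer - 0.5, outer + 0.5)
    ax.set_aspect("equal")
    ax.axis("off")
    fig.savefig(outfile, dpi=150, bbox_inches="tight")
    plt.close(fig)

if __name__ == "__main__":
    draw_target_sociogram()
```

A printed sheet of this kind would simply standardise the rings and quadrants across interviews; in the studies discussed here the template was completed by hand, with the surrounding talk treated as data in its own right.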
The research work conducted by Alessio D’Angelo started in 2008, with the aim of mapping and analysing the social networks and networking practices among Kurdish community organisations in London.These organisations, defined as not-for-profit, migrant-led organisations providing support to the very diverse local Kurdish population, included advice centres, service providers and cultural associations.One of the research aims was to investigate the extent to which organisational networks allowed individual users to benefit from enhanced social capital and whether, at the same time, these ended up constituting a sort of ‘organisational social capital’ enhancing the capability of individual organisations.D’Angelo’s long term research started with the adoption of quantitative data collection tools, typical of more ‘formal’ approaches to SNA, such as the identification of ties on the basis of official records and the use of structured questionnaires, including a list of all Kurdish organisations in London from which respondents had to select their alters with regard to different types of relationships.However, this approach was gradually integrated with interpretivist methods such as semi-structured and unstructured interviews and participatory observations.Thus, in later stages of the work, the research framework became increasingly ‘mixed-methods’ and produced a wealth of data sometimes in apparent contradiction with each other.The fact that the research took place over a period of more than five years, with repeated interviews, observations and participation in community events, allowed D’Angelo to become embedded in the network, whilst maintaining his role of external observer.Within this context, the process of visualising the whole network assumed a very particular role.Sociograms did not aim to give an objective representation of a social reality.Rather, D’Angelo sought to produce descriptive outputs, informed by his understanding of networking processes and structures, and attempting to summarise a set of relations as experienced ‘from within’ − thus an intrinsically interpretivist exercise.The development of these charts encompassed an iterative process, where the structural patterns presented in preliminary sociograms informed questions about their content and meaning, and with the results of qualitative methods being used to interpret, but also to enhance and amend, the visualisations.After each phase of data collection and preliminary analysis, D’Angelo drew a chart summarising the main ties between organisations as informed by that particular set of data.By looking at different sociograms side by side, he aimed to identify similarities and contradictions and make sense of them.In other words, the sociogram was not used simply as a research output, but as an analytical tool to triangulate and reconcile different types of data.These themes are further developed in the following section.In this part of the paper we discuss not only how we use mixed methods to collect data on social relationships but also how we analyse these data to gain deeper understandings of dynamics through time and space.In the case of Ryan’s repeat interviews, it is possible to explore relational dynamics over time.While family ties in Poland remained strong as key sources of emotional support, Polish-based friendship networks seemed to weaken during the interval between interviews.During the first interview in 2006 Agnieszka said: ‘friendships from secondary school are the strongest ones and I am in frequent contact with people from Poland, my friendships are there’.At that time she had already been in London for several years but she remained firmly connected to networks of friends in Poland through frequent contact via e-mails and visits.During times of loneliness or uncertainty in London, her main sources of emotional support were friends and family in Poland.When re-interviewed in 2014, Agnieszka continued to maintain strong ties with family in Poland, especially her parents, who featured at the centre of her sociogram.But the friendships from secondary school no longer featured as close ties in her narrative and were not included in her sociogram.When asked how her relationships
with friends in Poland had changed she remarked: ‘I’m not in good contact with them anymore’.She added that although she missed ‘old good friends’ in Poland, it proved difficult to sustain relationships and over time these have ‘gradually weakened’.A similar pattern was observed in the other repeat interviews.Ewa’s description of her friendship networks changed significantly through the repeat interviews, while her family ties in Poland remained strong.In her sociogram, her family appeared in the closest circle.When I went to Poland I used to spend time with friends but it became less and less regular and my time is quite short in Poland and you have to choose.There’s my nephews, my nieces.We are very close.I definitely need to put them all in here.Well, this is basically the centre of my life.Ewa’s sociogram reflects the centrality of her family in Poland.As a busy working mother, she had limited time to invest in maintaining long distance ties and clearly prioritised family during her visits to Poland.There were no friends in Poland represented on her sociogram.In describing network change over time, participants often discussed how they had changed and were no longer the person they used to be in Poland.Magda, arrived in Britain as a teenager in early 2000s, reflected how she had changed between the two interviews.She described shifting identity: ‘I think I can identify myself more as being British than being Polish’.She explained that she feels like a ‘tourist’ when visiting Poland and no longer feels ‘attached’ to that country.Thus, changing composition of networks was not just about weakening connections due to separation through time and space, but also about shifting identifications.As sociograms were only included in the second round of data collection, the aim was not to ‘measure’ changing network composition, but rather to qualitatively explore how participants narrated and visualised relational changes and continuities.The act of visualising and narrating networks usually prompted justification of their changing self through time.Thus, we are not simply counting the number of ties in different places but rather trying to understand the dynamic meaning and intensity of those relationships through time and across spatial locations.While completing the sociogram, Angelika observed she had less in common with former friends in Poland: ‘some of them, if they’re very religious, they’re just a bit brainwashed, like Catholics, in the way they think about certain things’.After a decade in London, ‘where it’s really liberal’, Angelika was ‘scared’ by how right wing Poland had become.Similarly, Dominik reflected on change over time and how old friendships fractured: ‘I recognised that we don’t have much in common any longer.Because I am a different person than 11 years ago in Poland’.Thus the sociogram enabled participants to depict not just their networked self but also to reflect their changing self.Time is crucial to this story.As noted earlier, temporality is not simply neutral.Time is imbibed with meaning.The concept of timescapes, discussed earlier, is particularly relevant here to make sense of change across various scales ranging from micro, meso and macro.The passage of time is understood through personal changes, but also how friendship ties have changed against the backdrop of migrant transitions from Polish to British society.We return to contextual change in a later section.Oliwia did not include any friends in Poland in her sociogram: ‘we didn’t understand each other 
anymore, and I just thought, I just felt guilty that I have this life here… I was studying and travelling, so.But it was really hard to explain to them…’.As Moreno observed: ‘the sociogram is more than merely a method of representation.It is first of all a method of exploration’.Embedding the sociogram in in-depth interviews enabled participants and researcher to discuss network composition; exploring why and how relations changed over time.For Oliwia, the growing gulf between her and her former friends in Poland reflected her feelings towards Poland more generally.Echoing a point made by Magda above, Oliwia remarked: ‘at the same time I kind of was becoming myself disconnected with Polish reality.’,Combining the visual tool with the in-depth interview made visible the process of network construction.Participants talked as they completed the sociogram; articulating the decision making process about where to place alters: ‘I’ve got one kind of colleague that I would put maybe even here, maybe a bit further, or maybe, no, actually maybe on the border’.‘Talk around the sociogram’ was a crucial aspect of the data and underlines the significance of the researcher’s presence during this part of fieldwork.Participants offered explanations about particular relationships and overall composition of their network.Looking at the sociogram, Ryan observed that most of Izabela’s close friends were male.Izabela explained that as an only child who grew up surrounded by male cousins, she was more comfortable around male friends.Through these explanations participants opened up new lines of discussion, facilitating deeper probing of issues that may otherwise have remained obscure.By requiring participants to rank friends and family members within concentric circles, rather than simply listing names of alters, the sociogram prompted discussions about the relative closeness of particular ties.Participants often promoted or demoted friends.Karina actually used the word ‘relegate’ as she erased and re-located friends on her sociogram: ‘actually I can relegate Anika goes here and the other A will now stand for another Anya who now lives in Australia’.Karina went to explain that Anika was relegated because she was not as reliable a friend as Anya, although the latter lived much further away.Analysis of network data as a narrative presents additional challenges when applied to whole-network research, and particularly to mapping and visualising whole networks through sociograms.Indeed, the idea of ‘whole-networks’ is largely associated with systematic, structured collection of data and, as such, perceived as intrinsically positivistic: ties are either there or not.However, in some instances, establishing the presence of ties as clear-cut and a priori can been particularly challenging and risk overlooking personal dimensions of ties and different ways in which the same relationship can be perceived, used or presented by different actors within a network.This creates a specific epistemological problem since, by definition, ‘whole networks’ – unlike ego-networks – should not be based around the views of one or a few actors, but rather present the network in its entirety.Authors such as Crossley and Edwards show how much more qualitative – and nuanced – approaches to whole-network analysis are possible, for example using data emerging from diaries or participatory observations.In this respect, the use of qualitative approaches – or mixing quantitative and qualitative approaches – can help reconcile the structure of the 
whole-network with the perspectives of individual actors.The issue of the ‘presentation of the networked-self’ when mapping whole-networks emerged quite clearly in D’Angelo’s work on Kurdish community organisations in London.In early stages of fieldwork, the structured questionnaire administered to community officers generated several non-reciprocated ties, i.e. respondents would report collaboration with some other organisation which, in their answers, did not confirm such links (a simple sketch of how such asymmetries can be flagged is given below).Semi-structured interviews with community officers showed how such discrepancies often were reflections of well-rehearsed narratives, typical of a community sector where funding can be secured only when specific organisational networks can be demonstrated.Thus, some community officers would automatically tick all the boxes of the questionnaire, to indicate they had a working relationship with ‘everybody’ – an approach later confirmed in the interviews.The idea was to make clear their organisations were well-connected, and able to work with and on behalf of ‘the whole community’.Moreover, smaller organisations were keener to present themselves as connected to bigger ones than vice-versa.In this research, a deliberate choice was made not to ask respondents to draw their overall view of the network of organisations.Among other things, this was an attempt to limit a mere ‘representation’ of it.However, at a later stage of the study, a selection of participants were shown some of the sociograms produced by D’Angelo, 2015 on the basis of his mixed-methods analysis – an example of these is shown in Fig. 1 – and were encouraged to comment on them.The visual image of the network prompted self-reflection."In many instances, the first reaction was to look for the location of one's organisation, to then check which other groups were linked to it. "It was rare for respondents to reject the sociograms as inaccurate or ‘wrong’; however in many cases there was an urgency to ‘justify’ the network’s structure, for example why certain nodes appeared to be disconnected from most others.Interestingly, some comments referred to the transient nature of ties, distinguishing between long-term, ‘strategic’ connections on the one hand and short-term, shifting and somewhat ‘tactical’ connections on the other.
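As a minimal sketch of how this kind of asymmetry can be detected once roster-questionnaire responses are coded as a directed network, the following Python fragment uses the networkx library to flag nominations that the named organisation did not confirm. The organisation names and reported ties are invented for illustration; they are not data from D’Angelo’s study.

```python
# Illustrative sketch: flagging non-reciprocated "collaboration" nominations in
# a whole-network roster questionnaire. The organisation names and reported
# ties below are invented for illustration only.
import networkx as nx

# Each key is a respondent organisation; the list holds the organisations it
# claimed to collaborate with on the roster questionnaire.
reported_ties = {
    "Org A": ["Org B", "Org C", "Org D"],   # ticks almost every box
    "Org B": ["Org A"],
    "Org C": [],
    "Org D": ["Org B"],
}

G = nx.DiGraph()
G.add_nodes_from(reported_ties)
for respondent, alters in reported_ties.items():
    G.add_edges_from((respondent, alter) for alter in alters)

# A nomination is "non-reciprocated" when the named organisation did not
# report the tie back - exactly the kind of asymmetry that prompted
# qualitative follow-up in the fieldwork.
unreciprocated = [(u, v) for u, v in G.edges() if not G.has_edge(v, u)]
reciprocated = [(u, v) for u, v in G.edges() if G.has_edge(v, u) and u < v]

print("Reciprocated ties:", reciprocated)
print("Non-reciprocated nominations:", unreciprocated)
```

In the approach described here, such asymmetries are not treated as measurement error to be cleaned away but as prompts for interviews and observation about how organisations present their own connectedness.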
Hogan et al. observed how asking a participant to look at a representation of their social network would engage them and elicit personal insights.‘Respondents routinely comment on how interesting their personal network look and how they never considered it in such a fashion’ Hogan et al.It can be argued this is true for both ego-network and whole-network visualisation.Indeed, looking at one’s position in a whole network may challenge and stimulate respondents even more, forcing them not to think about themselves as the centre of their own social space but considering their relative position within a larger social structure.In D’Angelo’s study, the feedback received from respondents was used to further revise the sociogram, in an iterative process of analysis and re-interpretation.This approach takes us back to the very early days of social network analysis.In the pioneering work of Moreno, sociograms were used to visualise social networks as they emerged through data collection.However, Moreno warned that ‘the responses received in the course of sociometric procedure from each individual, however spontaneous and essential they may appear, are materials only and not yet sociometric facts in themselves.We have first to visualize and represent how these responses hang together’.This raises wider questions about how we can interpret and make sense of data pertaining to change over time.As argued by Crossley, choosing ‘networks of social relations and interactions between actors’ as the main unit of analysis allows the researcher to bridge the personal and the collective, going beyond the traditional dichotomy between ‘individualist’ and ‘holistic’ approaches to empirical sociology.Thus, SNA – and more broadly Relational Sociology – can, at least in theory, connect micro, meso and macro dimensions into one analytical framework.However, by focusing on a set of directly linked actors contained by more or less defined network boundaries, many SNA studies risk cutting out the macro level.Even ‘big data’ SNA often appears as analysis of ‘big meso’, rather than really looking at the macro, i.e.
contextual, level.The temporal dimension, in particular, is often explained in terms of changes occurring to actors or between actors; a by-product of their characteristics and internal dynamics.There is, however, a need to pay much more attention to the role of external factors – including the opportunity structures for networking.Hollstein, for example, noted the importance of focusing not only on fluctuations in networks over time but also on the contextualisation of networks in changing physical spaces – with the two elements being strictly connected.But there is more.The network structures and practices of individuals are often affected by changes in the social, economic and political context around them.Legislative frameworks may also have an impact.As far as migrants are concerned, for example, changing legislation on mobility rights can affect the ability to maintain and act upon transnational ties.A typical case is that of Eastern European migrants: the EU enlargements of 2004 and 2007 suddenly made movements much easier, increasing migration flows and, crucially, enabling migrant communities across the EU to develop much more flexible and dynamic transnational practices with regard to professional and family life.Such a multilevel approach to social network analysis chimes with the previously discussed concept of timescapes as an interaction between micro, meso and macro dimensions of time and reinforces Snijders’ argument that looking at networks over time is the most effective – almost natural – way to explain their structures.While the number of migrants moving from Poland to Britain following EU accession in 2004 was enormous and largely unanticipated by policy makers, this was not the first wave of Polish migrants.In the post-war period many thousands of Polish exiles made their home in Britain.This earlier settlement created a whole network of Polish churches, Saturday schools and cultural organisations across the country.Nonetheless, as Ryan noted, the relationship between these older Polish networks and more recently arrived post-accession migrants was complex and sometimes tense.Divided by age, socio-cultural experience and class, these waves of migrants often had little in common.Thus, the existence of an established whole network is not necessarily a good indication of how more recently arrived migrants will engage and participate in these pre-existing structures.Another important temporal and contextual observation from Ryan’s research relates to the EU itself.While the enlargement of the EU formed the crucial context for her initial research in the mid-2000s, recent ‘re-visits’ to the field were framed by political debates about Britain’s future in the EU – ‘Brexit’.Many Polish migrants expressed concerns about their future as Britain voted to leave the EU.Much to the surprise of Ryan, one way in which interviewees sought to assuage such concerns was by applying for British citizenship.One participant, Oliwia, sent a link to her Facebook page which featured a photo of Oliwia proudly holding her newly acquired British passport.These wider contextual aspects are equally, if not more, important when analysing organisational networks, for example networks of migrant community organisations.The development and the activities of these organisations – including their networking practices – have often been interpreted as a reflection of the specific cultural and socio-economic identity of different migrant populations and of their ‘natural’ tendency to collaborate
on the basis of inter-ethnic ties. More recently, however, attention has been drawn to the importance of other, contextual factors, most notably the opportunity structure in the host society. The work conducted by D’Angelo highlighted how, in order to explore the networking dynamics among Kurdish organisations in London, it was not possible to ignore the specific history of the Kurdish people, both internationally and within the local London context, their multi-faceted identities and their changing characteristics. At the same time, the development of Kurdish migrant organisations has been driven by a number of changes in the UK policy context, including the progressive ‘marketisation’ of the third sector and the shift from multiculturalism to social cohesion in the discourses and approaches of local and national policymakers. The strong views of different community activists and the importance of networking in third sector practices also affected the way in which community-level social networks are perceived, acted upon and, crucially, communicated to external observers, with considerable epistemological implications. Changing ties have also been a reflection of staff turnover, individuals moving from one organisation to another as well as key officers leaving the sector – or the country – altogether. In this case too, the data gathered through official data sources and ‘formal’ SNA instruments were often very different from those emerging from participatory observation and from talking more or less informally with active community members. The tensions thus uncovered were a reflection of the complex interplay between the formal and informal levels that characterise these organisations and that underpinned processes of long- and short-term change. Thus, it was important to triangulate different sources of information and different types of data, using the sociograms as an overall synthesis of these complexities. The sociogram presented earlier is an attempt to summarise key links between the major players in the Kurdish organisations network, approximately in the period 2011–2013. However, the face of the ‘Kurdish community’ has changed constantly over the years. When D’Angelo conducted his first exploratory study of Kurdish organisations in London, their number, characteristics and, crucially, the interactions between them were significantly different. Fig.
2 summarises the network as it was around 2006–2007. The picture would have been even more different over the previous two decades, as suggested by interviews with some of the older activists. As discussed earlier, the changes that occurred between these two points in time were anything but linear. As argued by Borgatti et al., actors are constantly seeking out new ties, and changing the nature of existing ones. What emerged clearly from the fieldwork – particularly from long-term observations and repeated interviews – is the fact that ties between organisations can change very rapidly and on an ad hoc, short-term basis. Although some strong affiliations tend to be maintained over the years, many interviewees revealed how ties could be established or truncated at short notice. For instance, new alliances could be made in the face of a major issue emerging in the community. In other cases, new – often short-term – links were established to participate in joint funding applications, or to respond to consultation initiatives from local or national public bodies. Conversely, groups that had traditionally worked together could suddenly find themselves involved in two competing funding applications. Even when looking at a sociogram as a snapshot, many organisational links can only be understood in relation to the history of each group and each individual operating within it. Also with regard to this important point – which will require further discussion elsewhere – the aim of sociograms developed through this mixed-methods approach was not to generate an exhaustive and ‘final’ map of all types of ties existing between individual organisations, but rather to provide an overview of selected, specific connections, addressing particular research questions and informing further reflection and investigation. Nonetheless, it is possible to identify a number of overall trends and dynamics, related to a broad range of external factors, which affected the pattern of this Kurdish network over the years. Firstly, an increasingly competitive funding regime forced many organisations to close down between the mid-2000s and the mid-2010s – and thus disappear from the network. At the same time some new organisations were established, reflecting a changing population, with an increasingly significant role being played by second-generation migrants and by women’s groups in particular. This is a clear example of the generational dimension of change in these social structures. Meanwhile, the ties between ‘Turkish-speaking’ and ‘Arabic-speaking’ organisations have progressively disappeared. This process was, at least in part, due to external, indeed international, factors. Notably, the strengthening of the Kurdish Regional Government in northern Iraq reduced the commonalities in the political struggles between Kurds from Iraq and those from Turkey. Moreover, many important community activists and ‘leaders’, including some of the coordinators of long-established London organisations, decided to move back to Iraq to help with the post-conflict reconstruction and to take up new, highly skilled job opportunities. In several cases their absence led to the collapse of their organisation or the severing of existing organisational ties. Again, to paraphrase Adam, the changes occurring within the network of Kurdish organisations are clearly at the intersection of historical and biographical dimensions. More generally, the high degree of national and international political engagement of individual community activists had a major impact on
the life of individual organisations. Over the years, some key community members would sometimes leave the UK for relatively long periods of time to campaign or to run as candidates in local and national elections in their areas of origin. For the organisation, this could mean operating with no proper coordination or even stopping most activities for several weeks – with the future of the organisation, and of its ability to play a role in the broader Kurdish network, dependent on electoral results. As these examples clearly illustrate, the contextual dimension can help explain changes in the very nature of network ties. By triangulating the structural dimension of networks with, on the one hand, the more personal level and, on the other, the broader societal and political context, mixed-methods research frameworks represent an invaluable instrument to go beyond the simplistic assumptions of some actor-based models (Prell). This paper has explored the relational dynamics of migrants’ networks through time and space. Researching dynamism may be particularly challenging in the case of migrants because they are moving across varied spatial contexts and negotiating relationships in multiple sites. However, this is not to suggest that time and space are neutral media within which things happen. In this paper, we have sought to go beyond a ‘snapshot’ of time as a collection of fixed points to show the complex and dynamic interplay of temporality, contextuality and relationality. In an attempt to ‘take time seriously’, we have used the notion of timescapes to explore micro, meso and macro dimensions of time and to show how individual and wider contextual lives interact dynamically. This raises challenges about the methodological tools necessary to understand and study how migrant networks change over time. Like O’Reilly, we did not plan longitudinal research from the outset. Migrants may be transient and move around a lot, and thus it can be difficult to track them over time. Nonetheless, by continually re-visiting the field over more than a decade, we have managed to maintain some relationships and re-interview participants on several occasions. Our work aims to contribute at the nexus of SNA and migration research. We argue that a longitudinal, multi-dimensional approach to SNA is not just desirable and advantageous, but that indeed it should stem directly from the very concept and nature of social networks. In this paper we have endeavoured to show how integrating a mixed-methods approach to SNA with migration research can provide a useful methodological and analytical framework to understand temporal, spatial and relational dynamics. On this basis, we argue that different combinations of quantitative, qualitative and visual methods do not just offer richer sets of data and insights, but can allow us to better connect conceptualisations – and ontologies – of social networks with specific methodological frameworks. In particular, the integration of visualisations with other research techniques can provide important insights into the dynamic meaning of social relationships. We suggest that sociograms offer a structured, integrated view of relationships that would not be immediately perceivable just from narratives or tables. Nonetheless, a visual image is never self-sufficient and needs incessant reflexive dialogue with narratives, structural and contextual data to derive its meaning as a representation of specific relations and interactions. The use of mixed methods of data collection and an integrated data analysis
are useful not only in demonstrating how ties change over time but also in explaining why this change occurs. Furthermore, each tie is not an element whose existence can be explained ‘per se’, but is always dependent on the presence of other ties and on broader contextual elements. Mixed-methods approaches can enable an understanding of both network content and meaning within dynamic personal, relational and wider structural contexts. In conclusion, we suggest that combinations of methods like those used in the case studies presented in this paper do not just provide richer data, but allow us to sustain an epistemologically sound approach to social network analysis over time. | Focusing on migrant social networks, this paper draws upon the sociology of time to incorporate complex notions of temporality into the research process. In so doing, we consider firstly, the challenge of going ‘beyond the snapshot’ in data collection to capture dynamism through time. Secondly, we apply the concepts of timescapes to explore ways of addressing the wider context and the interplay between spatiality, temporality and relationality in migration research. We argue that integrating a mixed methods approach to SNA, crucially including visualisation, can provide a useful methodological and analytical framework to understand dynamics. SNA can also be helpful in bridging the personal and structural dimensions in migration research, by providing a meso level of analysis. However, it is also important to connect the investigation of local and transnational networks with an analysis of the broader social, economic and political contexts in which these take shape; in other words, connecting the micro and the meso with the wider macro level. Drawing upon reflections from our migration research studies, we argue that different combinations of quantitative, qualitative and visual methods do not just provide richer sets of data and insights, but can allow us to better connect conceptualisations – and ontologies – of social networks with specific methodological frameworks.
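As a purely illustrative aside on the structural side of the mixed-methods approach described in the record above: the sketch below (hypothetical, not drawn from the study; all organisation names and ties are invented) shows how two sociogram snapshots from different fieldwork periods could be compared programmatically with Python's networkx library to flag truncated ties, newly established ties and organisations that have dropped out of the network. In the approach described above, such structural output would only ever be a starting point, to be triangulated with interviews, observation and contextual knowledge.

```python
# Purely illustrative sketch: comparing two sociogram 'snapshots' of an
# inter-organisational network, in the spirit of the 2006-2007 and 2011-2013
# maps discussed above. All names and ties are invented for illustration.
import networkx as nx

def edge_set(graph):
    """Return edges as order-independent pairs, since the graph is undirected."""
    return {frozenset(edge) for edge in graph.edges()}

# Snapshot 1: ties recorded in the earlier fieldwork period (hypothetical)
g_early = nx.Graph()
g_early.add_edges_from([
    ("Org A", "Org B"),  # long-standing affiliation
    ("Org B", "Org C"),  # joint funding application
    ("Org C", "Org D"),  # shared premises
])

# Snapshot 2: ties recorded in the later fieldwork period (hypothetical)
g_late = nx.Graph()
g_late.add_edges_from([
    ("Org A", "Org B"),  # affiliation maintained
    ("Org B", "Org E"),  # new alliance around an emerging community issue
])

# Ties truncated between snapshots, ties newly established, and organisations
# that no longer appear in the network at all
dropped_ties = edge_set(g_early) - edge_set(g_late)
new_ties = edge_set(g_late) - edge_set(g_early)
closed_orgs = set(g_early.nodes()) - set(g_late.nodes())

print("Truncated ties:", dropped_ties)
print("New ties:", new_ties)
print("Organisations no longer present:", closed_orgs)
```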
583 | A case report of ten-month-neglected anterior shoulder dislocation managed by open reduction combined with Latarjet procedure | The shoulder joint is the most commonly dislocated joint. Anterior dislocation of the shoulder occurs more frequently than posterior dislocation and accounts for 95% of all shoulder dislocations. A neglected shoulder dislocation is rare and may be accompanied by pathological changes in bony and soft tissue structures. Therefore, it requires an extensive surgical procedure. Until now, there is no standard treatment for this condition and it is a difficult problem for both patients and clinicians. We present a 27-year-old male who had suffered a neglected anterior dislocation with a Hill-Sachs lesion for ten months. We managed this case by open reduction and the Latarjet procedure. This report is based on the consensus-based surgical case report guidelines, the SCARE criteria. A 27-year-old male presented with a chief complaint of deformity of his left shoulder that had persisted for ten months before hospital admission. The patient slipped in a bathroom and fell in a sitting position with the left arm supporting the body. After the accident, the left shoulder was painful and looked deformed. The patient then went to a bone setter and was massaged, but the shoulder remained painful and deformed. The patient used an arm sling to immobilize his left shoulder for about six months. Gradually, the patient could carry out his normal daily activities with limited movement of the left shoulder. Ten months after the accident, the patient decided to seek medical help for his left shoulder. The patient complained of limited movement of his left shoulder with some pain. On physical examination we found deformity of the left shoulder, resembling a squared shoulder, and muscle atrophy. Neurovascular examination was normal. The range of motion of the left shoulder was extension-flexion 20°–90°, abduction-adduction 20°–70°, internal-external rotation 30°–30°. Antero-posterior X-ray imaging showed anterior dislocation of the left glenohumeral joint, and a Computed Tomography scan showed a Hill-Sachs lesion on the humeral head. We diagnosed the patient with a neglected anterior shoulder dislocation with a Hill-Sachs lesion and performed an open reduction and Latarjet procedure to treat this patient. We performed open reduction surgery using an anterior approach to the shoulder and found massive fibrotic tissue around the joint and the Hill-Sachs lesion. We removed all the fibrotic tissue to create the space for the shoulder joint to be reduced. After successfully reducing the dislocation, we inserted a Kirschner wire to add stability and maintain the reduced shoulder, then continued with the Latarjet procedure. The Latarjet procedure was performed by cutting the coracoid process, transferring it with the conjoint tendon to the antero-posterior part of the glenoid and fixing it with two screws. The final result showed that the glenohumeral joint had been reduced with wire fixation. Post-operative X-ray showed a reduced shoulder joint. Unfortunately there was a claw hand on his left hand due to neuropraxia of the ulnar nerve. The patient was discharged 2 days after surgery. We removed the K-wire after 3 weeks, and the patient then started the rehabilitation program. The patient also underwent Transcutaneous Electrical Nerve Stimulation and range of motion exercises for 12 sessions. We evaluated the patient for 3 months in the outpatient clinic. Three months after surgery, the ulnar neuropraxia had healed, but we found osteolysis of the coracoid graft and also avascular necrosis of the humeral head. The
patient still had limited ROM in his left shoulder. At ten months' follow-up, the patient had no recurrent dislocation. The glenohumeral joint is the most frequently dislocated joint in the body. Anterior dislocation of the shoulder occurs more frequently and accounts for 95% of all shoulder dislocations. The most common mechanism for unilateral injuries is trauma. A traumatic event can lead to anterior shoulder dislocation when it occurs with the arm in an abducted and extended position, so that the greater tuberosity abuts against the acromion, creating leverage forces that drive the humeral head out of the glenoid cavity. The term chronic dislocation of the shoulder is applied to conditions where the injury has gone unrecognised for at least 3 or 4 weeks, although other authors have described chronic dislocation with various durations. A neglected shoulder dislocation may be accompanied by pathological changes in bony and soft tissue structures. Therefore, it requires an extensive surgical procedure. A neglected shoulder dislocation, especially with significant bony defects, is a dilemmatic condition since it cannot be managed by a standard surgical procedure and concomitant lesions are common, including Hill-Sachs and Bankart lesions, massive glenoid bone loss and rotator cuff tear; later, severe glenohumeral osteoarthritis can also occur. Because of severe soft-tissue contracture and imbalance as well as bone deficiency, neglected anterior shoulder dislocation is a difficult problem for both patients and clinicians. The outcomes of some procedures, such as Bankart repair, remplissage, coracoid transfer, bone-grafting and arthroplasty, in restoring the stability of the shoulder were varied and the overall failure rates were quite high. The choice of treatment includes observation, manipulation, open reduction with or without allograft reconstruction, Bankart repair, capsulolabral repair and arthroplasty. Surgical treatment for chronic shoulder dislocation is usually advocated for a better functional outcome, although the results may be poor and unsatisfactory. Open reduction surgery is mostly recommended if the dislocation has been neglected for more than four weeks after injury, in order to reduce the risk of concomitant fracture or cartilage injury. Several surgical reports have described gleno-humeral transfixation using smooth pins through the head into the glenoid to maintain reduction. The acromio-humeral transfixing pins can halt joint motion for 3–4 weeks. Neglected cases generally have significant bony defects due to constant friction of the dislocated humeral head against the anterior border of the glenoid, which was also found in our patient. The bony defect can cause recurrent instability, depending on the size and depth of the defect. In defects of more than 25% but less than 40%, anatomic procedures, such as allograft reconstruction of the head and humeral head dis-impaction/humeroplasty, and non-anatomic procedures, such as osseous or soft tissue transfer of the infraspinatus and the Latarjet procedure, are recommended. The Latarjet procedure provides stability by its ‘triple effect’ and is more familiar to surgeons than the remplissage procedure. The Latarjet procedure has been proven to be effective for the treatment of recurrent anterior shoulder dislocation with a large glenoid osseous defect, which might justify the application of this procedure for the treatment of neglected anterior shoulder dislocation. Transfer of the osteotomized coracoid process to the glenoid rim was described by
Latarjet in 1958. The transfer includes a portion of the coracoacromial ligament, which is sutured to the anterior capsule through a short horizontal incision in the subscapularis. The Latarjet procedure reconstructs the depth and width of the glenoid. A dynamic reinforcement of the inferior part of the capsule is created through the coracobrachialis muscle, which is particularly effective when the arm is abducted and externally rotated. Burkhart et al., as cited by An et al., reported excellent outcomes of the Latarjet procedure in 102 patients, who either had more than 25% glenoid bone loss or an engaging Hill-Sachs lesion, with only a 4.9% recurrence rate after a mean follow-up of 59 months. In defects that comprise more than 40%–50% of the head, rotational proximal humeral osteotomy in young patients and partial or total humeral head arthroplasty are recommended. It has been suggested that, compared with soft-tissue reconstruction, such as Bankart repair, an open Latarjet procedure is more effective for the treatment of recurrent anterior shoulder dislocation with a marked glenoid osseous defect. Nevertheless, a high rate of redislocation or subluxation, loss of external and internal rotation, and deterioration or early onset of glenohumeral osteoarthritis have been reported after the Latarjet procedure. In our case, the patient still had limited ROM, with 100° of abduction after the Latarjet procedure, but no redislocation. Nerve injury after the surgery is a common complication that can happen during transfer of the coracoid process or during surgical exploration to reduce the dislocation. The most common nerve injuries involve the musculocutaneous and axillary nerves, but injury can occur in any branch of the brachial plexus and mostly recovers spontaneously. In this patient, the ulnar nerve neuropraxia happened during the exploration process to reduce the dislocation. Soft-tissue imbalance is another risk factor for postoperative redislocation or subluxation. In patients with neglected anterior shoulder dislocation, the long-term dislocation may cause lengthening and thinning of the musculotendinous unit or change the biomechanical balance of the glenohumeral joint. A high rate of glenohumeral osteoarthritis deterioration was also noted. Postoperative shoulder osteoarthritis is one of the complications that can occur due to avascular necrosis of the humeral head. There are many factors contributing to avascular necrosis leading to shoulder osteoarthritis, for example increased age at the time of first dislocation, increased age at the time of surgery, and the presence of arthritis before surgery; however, there is no specific time when the avascular necrosis starts to occur. Shoulder osteoarthritis can occur as a result of preexisting chondral injury, which leads to degeneration over time, or as a result of the operative procedure. Despite the risk of avascular necrosis of the humeral head in long-term follow-up, we consider that the Latarjet procedure performed in this patient successfully stabilized the shoulder joint of the neglected dislocation. A limitation of this case report is that the follow-up period to analyze the stability of the shoulder was only 3 months after the surgery. Longer-term follow-up should be considered to evaluate shoulder stability and other surgical complications. In conclusion, open reduction combined with the Latarjet procedure performed for treatment of neglected anterior shoulder dislocation was found to have a high rate of success in preventing
further dislocation of the shoulder joint, although a high risk of osteoarthritis of the shoulder joint can still persist. Andri Lubis is a consultant for Conmed Linvatec and Pfizer Indonesia. No sponsorship was received for this case report. This is a case report; therefore it did not require ethical approval from an ethics committee. However, we have obtained permission from the patient to publish his data. Written and signed informed consent was obtained from the patient to publish this case report and accompanying images. Andri Lubis contributed in performing the surgical procedure, data collection, data analysis and writing the paper. Muhammad Rizqi Adhi Primaputra contributed in data collection, data analysis and writing the paper. Ismail H. Dilogo contributed in performing the surgical procedure. This is a case report, not a clinical study. The Guarantor is Andri M.T. Lubis, M.D., Ph.D. Not commissioned, externally peer reviewed. | Introduction: Neglected shoulder dislocation is a rare case and may be accompanied by pathological changes in bony and soft tissue structures. Therefore, it requires an extensive surgical procedure. Until now, there is no standard treatment protocol for this condition and it remains a challenging case. Presentation of case: We present a 27-year-old male patient with deformity of his left shoulder. The patient had suffered a ten-month-neglected anterior shoulder dislocation with a Hill-Sachs lesion. The treatment was open reduction combined with the Latarjet procedure. Evaluation of treatment was performed three months after surgery. Discussion: Management of a neglected shoulder dislocation, especially with significant bony defects, is challenging and cannot be achieved with a standard surgical technique because of severe soft-tissue contracture and imbalance as well as bone deficiency. Chronic locked anterior shoulder dislocation is a difficult condition for both patient and clinician. In addition, the treatment results can be unsatisfactory. The Latarjet procedure has been shown to be effective for the treatment of recurrent anterior shoulder dislocation with a large glenoid osseous defect, which might justify the application of this procedure for the treatment of neglected anterior shoulder dislocation. Conclusion: Open reduction combined with the Latarjet procedure performed for treatment of neglected anterior shoulder dislocation was found to have a high rate of success in preventing further dislocation of the shoulder joint, although a high risk of osteoarthritis of the shoulder joint can still persist.
584 | Shared values and deliberative valuation: Future directions | Shared values are values that convey conceptions of the common good between people and are formed, expressed and assigned through social interactions.The term shared values, and related terms such as social values, shared social values,cultural values and plural values, have been used to indicate a variety of concepts that relate to a sense of importance transcending individual utility, and that express the multidimensionality of values.Valuation that focuses only on individual values evades the substantial collective and intersubjective meanings, significance and value from ecosystems, while deliberation on shared values can help make valuation more robust and enhance its legitimacy.This is important because valuations that overlook these wider meanings may undermine the legitimacy of decisions based upon them.Indeed, in this journal some have argued that ‘truly social valuation’ of public policy alternatives is the ‘next frontier’ in environmental valuation, and that developing effective and credible techniques to achieve this is the greatest challenge facing ecological and environmental economics today.Shared values particularly come into play in determining how we evaluate values across the plural ontological and ethical dimensions of value.This Special Issue illustrates in diverse ways that the ethical, moral and justice dimensions of many environmental issues necessitate approaches that allow for the recognition and elicitation of shared, plural and cultural values.Key ethical concerns include: providing a space and opportunity for people to identify values that they may find difficult to articulate; recognising that some values cannot be traded without discussion and negotiation; and understanding that it is often difficult to isolate valuation from decision-making processes because people feel there are strong ethical or moral issues at stake that need to be debated.This reflects dominant themes in environmental debates, which often revolve around a number of key issues, including: lack of trust in elected representatives, feelings of powerlessness in the face of globalization, the ethical and social impacts of an increase in certain aspects of technology, and a call for justice and equity in environmental decision-making.While our focus is on the environment, many of the questions discussed here are also increasingly pertinent in other areas of public policy and evaluation.For example, in health valuation, contestation of instrumental, efficiency-based methods of health services valuation and allocation have given rise to nascent ‘communitarian’ approaches to health, drawing on deliberation of communal values.Nonetheless, shared values have been under-investigated, leading to a lack of established conceptual and evaluative frameworks to guide their assessment.This Special Issue of Ecosystem Services addresses a breadth of topics associated with shared values and illustrates a wide range of methods for understanding and assessing them.This paper synthesises current understandings and provides future directions for research around shared values, and the role of deliberation in valuation processes, which is highlighted in this issue as a key way in which shared values can be formed and expressed.Deliberation has been proposed both as an answer to methodological problems within monetary valuation, as a means to bring in questions of fairness, justice and participation, and as an answer to theoretical critiques of 
economic appraisal that are based on assumptions of individual, commensurable, and consequentialist values. While deliberative processes take place formally and informally, and individually and socially, we focus here on group-based deliberative processes that involve reflecting on and discussing values and information to form reasoned opinions. Group deliberation has been an important element in all the methodological approaches in the empirical studies in this Special Issue, and can be considered central to shared values approaches to valuing ecosystem services. Although the terms shared, plural, social and cultural values may each emphasise somewhat different aspects of values, for the sake of brevity we summarily refer to shared values or a shared values approach. A shared values approach can be defined as an approach that recognises a plurality of values that are socially formed, both substantively and procedurally. In the introduction to this Special Issue of Ecosystem Services, Kenter highlights six features of such an approach, which are reflected across the diverse papers in the issue: 1) axiological plurality; 2) the need for deliberation on these plural values to establish the common good; 3) the importance of institutional factors, such as the role of power, in such processes of value elicitation-formation; 4) the need to recognise and interpret cultural and institutional histories, place, identity and experience to understand values and contexts; 5) the inevitable subjectivity of valuations that arises from the complexity and contestedness of many environmental issues, because no valuation is ‘complete’ in its ability to encompass every aspect and dimension of value; and 6) the potential of valuations as new democratic spaces, bridging the divide between research and practice. The Special Issue that this paper concludes originated in two work packages of the UK National Ecosystem Assessment Follow-On, a substantial research programme that aimed to address key areas identified by the UK NEA as priorities for further development. After completion of the programme, a two-day workshop with UK NEA Follow-On co-investigators and authors across the papers in this Special Issue was held in March 2015 to sketch out future directions for research around shared values. Each participant initially presented their individual perspective, followed by open group deliberation and facilitated brainstorming and reflection exercises. This resulted in a gross list of research questions that was then distilled and refined to 35 questions across eight topic areas through online discussion. These areas are: 1) the ontology of shared values; 2) the role of catalyst and conflict points; 3) shared values and cultural ecosystem services; 4) transcendental values; 5) the process and outcomes of deliberation; 6) deliberative monetary valuation; 7) value aggregation, meta-values and ‘rules of the game’; 8) integrating valuation methods. The next section synthesises the outcomes of the workshop discussions with key material from papers across the Special Issue. We end with final reflections and conclusions. Reviews by Kenter et al. and Irvine et al. demonstrate the wide variety of ways in which the fuzzy and overlapping terms ‘shared’, ‘social’, ‘plural’ and ‘cultural’ values have been used in the ecosystem services valuation and ecosystems management literature. To provide clarity in identification and assessment, Kenter et al.
discriminated five dimensions of values: the value concept; the value provider; the process used to elicit values; the scale of value; and its intention.The value concept dimension distinguishes transcendental values, from contextual values and value indicators.Value providers include individuals, ad hoc groups, communities, societies and cultures, providing individual, group, communal, societal and cultural values.Values may be deliberated or not, depending on the process of elicitation.The scale dimension discriminates whether values relate to individuals or a societal scale, and the intention dimension differentiates self- from other-regarding values.The authors then identify seven main, non-mutually exclusive types of shared/social values, listed in Table 2: 1) transcendental values; 2) cultural and societal values; 3) communal values; 4) group values; 5) deliberated values; 6) other-regarding values; and 7) value to society.Shared values are then conceived of as ontologically plural in the sense of varying across the above dimensions and in that they may reflect different categories such as utility, rights, virtues and aesthetic values, and are thus potentially incommensurable.This discussion raises the question of how these different dimensions and types of values interact with each other.For example, many papers in this issue, and in the literature where conventional valuation approaches are critiqued, explicitly or implicitly make strong links between other-regarding values, non-individual values and non-consequentialist values.Is this just an artefact of mirroring the neoclassical economic association between individualism, selfishness and utilitarianism, or do we indeed hold a distinct set of other-regarding, moralistic, shared, ‘citizen’ values in parallel with a set of selfish utilitarian ‘consumer’ values?,This leads to more fundamental questions on the nature of values and why different valuation approaches lead to different value expressions.Do we hold a single set of values that can only be approximated through elicitation, as is assumed by neoclassical economics, but also implicitly by many non-monetary valuation approaches; multiple sets of values activated by different roles, contexts, and value-eliciting institutions; partially formed ‘proto-values’ that are adapted to contexts; or not hold a priori values but form them through social interaction and expression?,This question is most salient for contextual values and their indicators, as transcendental values are generally assumed to be culturally engrained during childhood and stable across our lifespan.Nonetheless transcendental values can change when specifically challenged, and several deliberative valuation studies in this issue demonstrated changes not just in WTP following deliberation, but also in the relative importance of different transcendental values, which again beckons the question if this constitutes value change, value formation or a shift to a different value set activated by the context.Catalyst and/or conflict points can play a key role in both the emergence and articulation of values at a societal or community level that have not previously been expressed or articulated.They are often linked to wider contested issues and meanings about who is involved in decision-making, whose voice counts and is viewed as legitimate and who receives the benefits or disbenefits of any environmental change.A key issue of many conflicts are the emotional responses that arise from individuals and communities.In 
psychology emotions are often seen as automatic reactions that can occur when individuals encounter significant issues with others or their environment, while in sociology emotions are explicitly linked to cognition and values, with a focus on the social origin and function of emotions.Buijs and Lawrence argue that tendencies to rationalise nature often leave little room for emotion and can delegitimise it.Decision makers may dismiss emotions and feelings related to conflicts as irrational and not based on evidence and therefore focus on providing greater amounts of factual information.Terms such as NIMBYism can also be used to dismiss community concerns as irrelevant, ill-informed and not legitimate.Emotional attachments to nature should be taken into account in valuations and management of ecosystems with managers playing a greater role in acknowledging and discussing emotions and learning how to deal with them constructively.Underlying positive and negative emotional responses to environmental issues are often transcendental values.In particular, transcendental values related to broad issues of justice, ethics, fairness and responsibility tend to emerge in response to conflict points and there is often a distributive dimension concerning who is affected and in what way, with the poor and powerless potentially not being heard and taken into account.Catalyst points can also bring strongly held contextual values to the fore.For example, in response to the proposed public forest estate privatisation in England, 2011, publics identified particular woodlands that held specific meanings for them and were valued as special places, such as the woods where they had climbed trees, played hide and seek, and built dens as children.By recognising and making explicit transcendental, societal and communal values while simultaneously addressing obstacles associated with power dynamics through well-designed deliberation, we can bring more understanding to what we share and what differentiates and divides us, and it may be possible to arrive at a more widely accepted consensus or compromise.As discussed above, deliberative approaches may also allow shifts from an individual to a societal stance of an issue, which can help identify common ground and reduce the polarisation of views that often characterises conflict situations.Irvine et al. 
discuss the potential of deliberative valuations as new democratic spaces, and Kenter adds that such valuations can function as boundary objects between researchers, stakeholders and decision makers. Ranger et al.; Edwards et al.; Kenter and Orchard-Webb et al., all in this issue, demonstrate examples of this in practice in different marine and terrestrial contexts, where environmental managers or decision makers are directly involved in valuation and evaluation processes, enabling more effective translation of values into policy and practice. From this perspective, the aim of integrating deliberation into valuation is not just more robust value elicitation, but to provide more effective opportunities for diverse voices to be recognised in decisions, and to build bridges between potentially conflicting perspectives and interests in the process of shared value formation. Social media are increasingly being used in relation to conflict and catalyst points, providing opportunities to mobilise and raise the profile of any conflict as well as coordinate the activities of diverse groups of people across wide geographical areas. In the public forest estate privatisation example, the use of social media was critical in raising awareness about the proposed ‘sell off’ and galvanising protest that led to the government cancelling the public consultation. The role of social media in catalyst and conflict situations is likely to increase, and it could potentially be utilised to engage a wider group of publics and stakeholders in debates around shared values, or as a vector for deliberative valuations. While shared values approaches are not limited to cultural ecosystem services, these services raise particular axiological and ontological issues that favour approaches involving deliberative and non-monetary valuation. Many aspects of cultural ecosystem services resist classification as a ‘service’ or ‘benefit’ because they can be intangible, experiential, identity-based or idiosyncratic. While others have raised these points, Cooper et al.
develop these arguments specifically in relation to spiritual and aesthetic values of ecosystems, finding that such values are often intersubjective and non-consequentialist, and reflect a two-way relationship between people and nature.While they benefit human well-being, spiritual and aesthetic values of ecosystems should not primarily be classified as ‘services’ or ‘benefits’.Indeed, the primary value direction may be from humans to the rest of nature as duties owed.These arise from the very different conceptions of nature in aesthetic and spiritual discourses to that of ecosystems delivering services.Cooper and colleagues argue that aesthetic judgements of value have been distinguished from personal tastes and pleasures since the Enlightenment.Aesthetic value is tied to the actual objects and their compositional relationships and not in the happenstance of how much pleasure an observer receives on a particular day.Brady points out that aesthetic judgements of nature are intersubjective, established through the identification of aesthetic qualities and agreements that emerge through social processes or, for example, meeting the test of time.These value judgements can motivate a moral responsibility to maintain the beauty of specific places and the wider world, ‘aesthetic preservationism’, expressed in protective designations such as National Parks.Many spiritual discourses about nature also resist talk of consequentialist benefits and economic analysis.These discourses counter assertions that the world has been successfully disenchanted by the commodification of nature.For example, in a study in this issue on the values associated with marine sites under consideration as potential marine protected areas by Kenter et al., divers and anglers portrayed profound experiences of beauty, fascination, magic, and connectedness that provided a deep layer of meaning to the places they visited that would have been invisible if the study had only focused on monetary outcomes.For example, one diver noted, “I ticked all of these and more, I added religious which is strange really because I am an atheist.I was in one place and visibility opened up and it was like a cathedral, with jewel anemones lighting up everywhere.I felt like I was in the presence of God, if there is such a thing.I was crying when I came out of the water”.Considering the importance of shared values for cultural ecosystem services more broadly, Fish et al. in their novel cultural services framework highlight the important role that shared cultural values play in terms of influencing how spaces are perceived, what practices are undertaken in those spaces, and how spaces and practices interact in shaping identities, forming capabilities and generating experiences.The authors emphasise that these cultural values and interactions are not abstract but are expressed as life in situ.Understanding cultural services thus means understanding peoples’ modalities of living that form and reflect the values and histories that people share, the places they inhabit and their symbolic and material practices.Importantly, shared cultural values are thus not wholly intangible as they are directly conveyed in material culture.While it has previously been argued that monetary valuations are challenged by intangible cultural values, in contrast Kenter and Fish et al. note that monetary valuation techniques such as choice experiments, deliberative or not, on their own are typically too abstract to adequately recognise cultural materialities.Fish et al. 
thus emphasise the need for interpretive and interpretive-deliberative approaches to investigate these modalities; examples in this issue include storytelling, arts-led dialogue, ethnographic video interviews feeding into deliberative workshops, and participatory mapping.However, these different types of non-monetary valuation methods have different ontological, axiological and epistemological assumptions, and thus the method chosen will influence how and which values are conveyed, beckoning the need for comparisons between valuations and whether and how those differences might affect decisions informed by those valuations.Cooper et al. note how some faith communities incorporate shared values into their own decision-making thus providing models that could be adapted for use in environmental decision-making.The role of transcendental values is an important but understudied area of research in relation to monetary and non-monetary valuation of ecosystem services.Raymond and Kenter showed that transcendental values directly influence WTP and behavioural intentions, as well as indirectly via worldviews, beliefs, norms and environmental concerns.Case studies across this Special Issue; demonstrated how different psychometric approaches, and deliberative and qualitative approaches such as storytelling were harnessed and in some cases integrated to help elicit and understand transcendental values in relation to ecosystem services.Beyond this issue, there has been very little research demonstrating and investigating the role of transcendental values in ecosystem service valuation, and more broadly environmental management and decision-making, with few links between the environmental psychology and ecosystem services literature, though there has been more attention to transcendental values in conservation research.More research is needed to better understand the effects of transcendental values on contextual values, value indicators and behaviour, and the role of transcendental values in deliberation.This is likely to involve integrating elements of different psychological theories such as the Value Belief Norm theory, the Theory of Planned Behaviour and the Value Change Model.However, psychological approaches have focused on subsets of transcendental values, leaving out other transcendental values pertinent to ecosystem management, in particular those that are procedurally important, e.g. 
around responsibility, fairness, justice and participation.Such process-related values are likely to impact on how people perceive and frame ecosystem service valuation and are particularly important when considering issues around intergenerational equity and regard for non-human species.As will be discussed in more detail in Section 2.6.1, deliberative democratic valuations can address these process values explicitly, but as of yet their role in deliberation and valuation is poorly understood.Conversely, deliberative valuation processes also provide opportunities for exploring interactions between transcendental and contextual values of individuals and the group, and the psychological processes responsible for changes in values.The work on transcendental values in this issue ultimately highlights that ecosystem managers cannot just focus policy instruments on monetary drivers of change.Any change of behaviour wrought by a scheme will be short term unless policy instruments target the underlying antecedents of that behaviour.Ultimately, broader shifts in environmental attitudes and behaviour have been the result of shifts in transcendental values at the societal and cultural level.Changes in contextual values and behaviour, resulting from activation of particular transcendental values through short term interventions such as one-off deliberative exercises, are not likely to endure with individuals unless these are reflected in their social environment through social learning processes.However, changes in contextual values and behaviour in relation to the environment can also take place through a variety of other ways than through changes in transcendental values, such as through changes in perceived benefits and costs, perceived behavioural control and symbolic and affective motivations, in turn interacting with broader cultural, geographic and contextual factors.This highlights the need for research taking an integrated perspective on environmental motivation, value and behaviour formation and change, accounting for the direct and indirect effects of transcendental values and the role of affective and hedonic motivations and contextual factors, as well as how these play out in interactions between individuals and group.In this way, environmental policies can be targeted at multiple motivations and at different scales to be effective.As noted above, most papers in this issue have illustrated that the ethical, moral and justice dimensions of many environmental issues necessitate approaches that allow for the recognition and elicitation of shared, plural and cultural values.Deliberation thus becomes critical for many environmental questions, to allow for discussion and debate about fairness, equity and justice issues concerning shared, plural and cultural values, to recognise that some values cannot be traded off and that valuations cannot be abstracted from decision-making contexts, and to provide space for articulation of complex, subtle and implicit values and value formation more broadly.Kenter et al. 
describe that a deliberative process can include the following elements: 1) the search for, acquisition of, and social exchange of information, gaining knowledge, and the expression and exchange of transcendental values and beliefs, to form reasoned opinions; 2) the expression of reasoned opinions, as part of dialogic and civil engagement between participants, respecting different views held by participants, being able to openly express disagreement, providing equal opportunity for all participants to engage in deliberation, and providing opportunities for participants to evaluate and re-evaluate their positions; 3) identification and critical evaluation of options or ‘solutions’ that might address a problem, reflecting on potential consequences and trade-offs associated with different options; and 4) integration of insights from the deliberative process to establish contextual values around different options, and determining a preferred option, which is well informed and reasoned. As a democratic ideal, deliberation is a reflexive process in which participants not only discuss information, but also set the terms of the discussion, debate how questions should be framed and what types of values should be considered. They can discuss how values should be weighted and what rights and duties to take into account, including issues surrounding long-term sustainability. Participants can also discuss and reflect upon how the outcome of their deliberations should be used. Kenter et al. argue that the process of value formation in deliberation is intrinsically a social learning process, which they define as a change in understanding that goes beyond the individual to become situated within wider social units or communities of practice through social interactions between actors within social networks. It is this social mediation of learning that explains why some deliberative processes achieve their goals while others fail, for example if the power dynamics of the social context are not effectively facilitated, leading to a biasing of outcomes towards the positions of dominant individuals or groups. The Deliberative Value Formation model identifies key factors that influence potential outcomes of deliberation and conceptualises the social process as feeding into a translation of transcendental values to a specific context. However, indicators need to be identified or developed for different stages of this process, and more comparative research is needed to consider how different types of deliberative interventions affect these processes. For example, in the study by Kenter, deliberation helped participants to better understand the wider role of different environmental components in the social-ecological system, while it also brought out competing social demands for resources such as education and healthcare, which reduced monetary values for ecosystem services overall but increased the portion assigned to conserving biodiversity. Kenter et al.
found that deliberating on narratives brought out the deeper meanings, identities and experiences associated with values, which led to convergence between monetary values for marine conservation and non-monetary well-being indicators.However, there are few other studies that have considered specific effects of these kind of interventions both in terms of deliberative outcomes and value outcomes.Questions can also be raised around the relation between value formation and value changes, both at the individual level and in groups in terms of convergence or divergence: in what form do values exist before they are expressed in a valuation process, and how do different features of the process, such as the key factors identified by Kenter et al. lead to different outcomes?,Fig. 3 depicts possible ways in which values may be changed or formed, and in a social process converge or diverge: they are preformed and may or may not be changed through expression/deliberation; they are unformed or poorly formed as ‘proto-values’, and formed in the process of expression; they are changed or formed and also converged through the process; they are preconverged and changed or exist as shared proto-values and formed through the process; the process changes preformed values leading to value divergence; or proto-values are formed but also diverged through the process.While changes in contextual values are commonly reported after deliberation, Raymond and Kenter, Kenter and Kenter et al. provide some of the first empirical evidence that short-term deliberative processes can lead to more fundamental changes in norms and transcendental values.Whether or not these changes in values are transient, when asked in the Kenter et al. study, participants expressed a clear preference for values they expressed after deliberation, i.e. reflecting shifts in transcendental values, to be used in decision-making.This is consistent with theories of reasoned action and planned behaviour, the model of responsible environmental behaviour and Value-Belief-Norm theory, where changes in personal and social norms inform behavioural intentions.By integrating deliberation into a decision-making process, these behavioural intentions may then be reflected in actual decisions resulting in the creation of preventative measures or incentives to facilitate the intended behaviours formed by those involved in the decision-making process.Following this approach, it may therefore be possible to design interventions that affect changes in communal values, drawing on an understanding of social networks, concepts of homophily and the capacity for knowledge brokers and boundary organisations to create bridges between heterophilous social groups.So far there has been little comparative investigation between deliberative ecosystem service valuations and these other kinds of institutional deliberations.Over longer time horizons, Everard et al. 
suggest that social learning processes can lead to a socialisation of shifts in values at the scale of broader social units, communities of practice or societies. They argue that society evolves by expansion of the ‘ethical envelope’, which is progressively cemented into societal and cultural values, norms and institutions when social learning leads to ‘rippling out’, affecting the development of constraining levers including regulation, modification of markets, a range of statutory and near-statutory protocols and evolving bodies of law. DMV can be seen as a range of approaches distributed on a spectrum between two archetypes: Deliberated Preferences and Deliberative Democratic Monetary Valuation. The former adapts stated preferences methods to include information-focused deliberation to enhance individual preferences, dealing with unfamiliarity with complex goods such as ecosystem services. In contrast, the latter applies a conception of deliberation as a process to enable value pluralism, better integrate transcendental values, and focus on the public rather than individual good. Deliberated Preferences approaches conventionally elicit individual WTP, while DDMV elicits monetary values at the societal scale, or fair prices at the individual scale. This issue presents two Deliberated Preferences case studies, both involving a multi-stage DMV where the valuation moved from non-deliberated to deliberated individual preferences, increasingly moving closer to a DDMV format, where participants ultimately voted on fair prices. A third case study was fully implemented through DDMV, establishing social WTP through negotiation by a group of stakeholders. Debate and empirical research on the motivations behind WTP in stated preference approaches have suggested that WTP is often not reflective of exchange values, but rather should be seen as a charitable contribution. These contributions may lead to higher bids than consequentialist payments. The two Deliberated Preferences studies in this issue suggest that a shift from individual values to shared values, in these cases expressed as group-deliberated fair prices, does not so much reject the ‘purchase model’ in favour of a ‘contribution model’, but rather means a shift to what Dietz et al. call a ‘public policy model’. Within this broader societal framing, participants consider benefits and costs alongside competing social priorities, policy effectiveness, and the process- and justice-related concerns and values highlighted previously in Sections 2.2, 2.4 and 2.5, such as fairness, equity and responsibility. The two DMV studies by Kenter and colleagues demonstrate that this shift can generate significantly different outcomes in terms of monetary values, which in both cases decreased substantially compared to non-deliberated individual WTP. Based on evidence from economic models, psychometric analysis, participant discussion and feedback, it is apparent that these shared values are more informed, considered, confident and reflective of participants’ deeper-held, transcendental values than individual non-deliberated values. In the Kenter et al.
study, which focuses on cultural services around potential marine protected areas, fair prices converged with non-monetary subjective well-being values, whilst individual WTP did not.Participants formed values in relation to specific habitats, where there had previously only been values for marine sites in general.Participants felt more confident in the deliberated values, which they also felt were most suitable for informing policy-making.The study concludes that these findings imply that deliberated shared values were a better impression of welfare impacts than conventional individual WTP, and suggests the possibility of harnessing group deliberation and fair prices to reduce hypothetical bias, which remains an important unresolved issue in stated preferences research.Another debate that is unresolved is how value indicators that move away from neoclassical value assumptions – deliberated WTP and particularly fair prices - should be aggregated and used in appraisal.For example, the legitimacy of Deliberated Preferences might be questioned in term of their representativeness, based on the evidence that deliberation changes values, and valuation workshop participants thus become unrepresentative of the population they are supposed to represent.However, it is important to realise that ex ante valuations are always a limited impression or projection of what ex post welfare impacts will turn out to be.As such, the question should be rephrased as whether participants’ values post-deliberation are more or less reflective of actual welfare impacts of a policy or project after it has come about.This is, of course, impossible to answer ex ante, but improvements in participants’ confidence, the forming of more specific values, better reflection of transcendental values, and convergence of monetary and subjective well-being values suggest that this may well be the case.Legitimacy concerns might also reflect viewing deliberation as a type of manipulation, particularly where it aims to ‘moralise’ preferences.However, it can also be argued that our preferences are manipulated on a daily basis and that deliberation can provide a transparent route to establishing values that is preferable to feigned notions of consumer sovereignty.In this issue, the Deliberative Value Formation model provides a theoretical and methodological framework for the design of transparent, effective and inclusive deliberative valuations, noting that regardless of whether deliberative valuation focuses on better informing preferences or on better recognising plural and transcendental values to consider the public good, there will be similar key issues to consider, e.g. 
relating to participants capacity to deliberate, power dynamics and group composition.Democratic deficits in environmental policy persist despite growing beliefs that democratisation of valuation can secure more sustainable and equitable decision-making.Also, focus on the democratic content of ecosystem service valuation methodologies has increased in the context of broader demands for improved democratic legitimacy in mechanisms for multiple and diverse stakeholder engagement in environmental planning and ecosystem management.DDMV embraces the essentially political nature of valuation by creating an inclusive platform and mechanism for inter-subjective group deliberation of shared communal, cultural and societal values.DDMV seeks to negotiate fair terms for social co-operation through group deliberation on plural values and establish social WTP through negotiation, rather than aggregation of individual values.The democratic content in DDMV is secured by a combination of procedural fairness at each stage of the process; creating inclusive platforms for expression of transcendental and contextual values by ‘free and equal citizens’; and creating the conditions for communicative rationality via social interaction and learning resulting from argument, reason giving, listening and respecting other views.In a rare empirical examination of DDMV, Orchard-Webb et al. illustrate how a variety of deliberative, interpretive and analytical techniques can be combined in a stakeholder-led process of developing and evaluating policy, establishing deliberated group values for different policy options, and securing shared learning between stakeholders, in terms of both the motivation for values attributed to their local environment and the democratic outcome value of the process of deliberation and dialogue.DDMV was shown to help address DMV methodological challenges regarding inclusivity, participation, conditions for reasoned debate, and efforts to secure mutuality and reciprocity.However, the empirical study also recognised its limitations in terms of evidence of inequalities of power within the process design and group discussions, requiring development of further understanding and case studies regarding the identification and mitigation of hidden exclusions within design, recruitment, facilitation and participation.In particular, there is a need to pursue empirical work to develop and test a range of DDMV protocols that are defensible in terms of deliberative democracy theory.Just as Habermas developed a dynamic critical reflexive test for the application of communicative rationality, there is a need to employ protocols that act as a check on imbalances and technologies of power in the operationalizing of DDMV.For example, one such protocol might raise questions around the conditions needed for more inclusive or expansive interpretations of deliberation that better reflect the wide range of approaches citizens feel most comfortable using to communicate and persuade others of their values or goals.Other protocols might relate to enabling community co-design; just representation and group composition; and the balance of techniques needed for expressing different local knowledges.Using deliberative democratic theory to inform these protocols will help address concerns regarding the democratic legitimacy of findings, as well as helping secure more sustainable and just decision-making in environmental policy and planning.Another key question is how DDMV may be able to represent the interests of those who 
are unable to represent themselves at the table, including non-humans and future generations.Stated and Deliberated Preferences valuation approaches can elicit bequest and existence values, but these are ultimately still grounded in assumptions of self-regarding utility.In theory, DDMV is inclusive of plural values without such ethical restrictions, but there is currently no evidence that DDMV can genuinely improve representation of plural values, including intrinsic values, compared to Deliberated Preferences approaches.The formidable challenges of collective decision making have been well-recognised since at least Plato.Whenever proposals affect multiple individuals with heterogeneous knowledge, incentives and preferences, those preferences must somehow be elicited and aggregated to arrive at a collective decision.Arrow formalised the impossibility of aggregating individual rankings while satisfying certain basic desirable criteria.Valuation methods often go beyond rankings and seek to elicit the intensity of preferences or values more broadly, but aggregating them remains challenging.DDMV studies such as Orchard-Webb et al. can ‘aggregate by mutual consent’, while cost-benefit analysis usually applies the Kaldor-Hicks criterion: maximise the net monetary value of willingness to pay/accept across all individuals, regardless of rights or distribution.Neither approach is unproblematic.While deliberation can achieve genuine reductions in disagreement, ‘mutual consent’ can also reflect inequalities of knowledge, capability and power, and deliberation becomes more challenging as the number of affected people increases.CBA can be conducted at large scales, and as an analytical exercise can claim to reduce inequalities of power between stakeholders.However, it is a product of power relations at higher levels, and Kaldor-Hicks appears to violate common intuitions about how aggregation should occur.People's meta-values for how aggregation should occur, what might be called the ‘rules of the game’, are by definition transcendental shared values: they should transcend a specific context.
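As a stylised illustration of the contrast at stake here (a sketch for exposition, not a formula taken from the studies cited), the Kaldor-Hicks test accepts a change whenever the gainers' aggregate willingness to pay exceeds the losers' aggregate willingness to accept compensation, irrespective of who the gainers and losers are:

\Delta W \;=\; \sum_{j \in \text{gainers}} \mathrm{WTP}_j \;-\; \sum_{k \in \text{losers}} \mathrm{WTA}_k \;>\; 0

Aggregation by mutual consent, by contrast, does not sum individual amounts at all: a negotiated social value stands only if every participant is prepared to accept it.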
"However, despite the long history of thought in this area, empirical evidence on people's values and preferences for different aggregation approaches remains rare.We know little about how they are affected by context or culture and how much they vary between individuals."We also need to understand more about how people's transcendental values around and preferences for aggregation rules compare to those used by different decision-making institutions, and how important any differences are in terms of the real-world outcomes that result.We hypothesise differences will be greater the more issues are complex and contested, or involve values that are difficult to monetise.Of course, such transcendental values will be challenging to elicit, and are unlikely to be independent of the methods used.Deliberation with others is also likely to affect what meta-values and preferences around aggreation people express, which leads to the theoretical need for agreement on the terms of deliberation.This can in theory lead to an infinite regress, though in practice could be achieved on the basis of established participatory principles.The question of how we should aggregate individuals’ values has received vastly less attention than procedures for their elicitation.Thus, while the challenges noted here are formidable, we would expect considerable returns to careful empirical work on these meta-values.The empirical studies detailed in this Special Issue illustrate how different types of methods can be integrated to better incorporate complexity into valuation, work with plural values in contested contexts and help make implicit and subtle values explicit, taking advantage of the specific strengths of different methods.Fig. 4 gives an impression of our view of the relative suitability of key methods and methodological approaches in these terms.1,DMV and multi-criteria analysis provide a pragmatic analytical backbone to value formation and elicitation exercises for most studies, establishing value indicators for different environmental benefits and policy options.Visioning and participatory systems modelling provide an effective means to orientate towards joint analysis, consider complex linkages and consider future uncertainties.Participatory mapping allows a spatial consideration of often specific and localised values that elude the more abstract monetary valuation.Discussion of different elements of well-being and sense of place in relation to transcendental values using a values compass, large-scale well-being indicators or ethnographic video interviews following the Community Voice Methodology allows for bringing together values and subjective experience.This can be supported by storytelling and arts-based interventions, which prove a useful method to understand experiences that are otherwise difficult to appreciate, allowing art and stories to express the way a place can make someone feel.Bringing together narratives and deliberation allows people to better understand what is worthwhile and meaningful to both themselves and others, and to gain a sense of empowerment from their voice being heard.Different monetary and non-monetary methods thus have different strengths in terms of eliciting particular kinds of values.However, Kenter warns against a methodological ‘dividing the turf’, where conventional monetary valuation and CBA deal with with provisioning, regulating services and recreation, and non-monetary approaches value cultural ecosystem services; or ‘parallel tracks’ where distinct monetary and 
‘sociocultural valuation’ evidence bases are separately built up.The paper argues that this creates an artificial divide between monetary and non-monetary methods, equates different non-monetary methods that are widely diverse, does not deal with institutional and axiological critiques levelled against monetary valuation or encourage us to be critical of each other’s assumptions more broadly, and fails to lead to genuine inter- and transdisciplinarity.Splitting off non-monetary/sociocultural/cultural service values is in danger of not just leading to separate value domains but also separate knowledge domains.Without clear integration mechanisms, and in combination with a ‘Pontius Pilate’ perspective on knowledge transfer, researchers stay clear of weighing different types of evidence, passing the burden on to decision makers.This undermines the effectiveness of valuation evidence, as addressing the major social-ecological sustainability challenges of our time requires moving beyond a naive technical-rational model of knowledge utilisation to enable transdisciplinary integration of knowledge.While the case studies in this issue have not resolved these issues perfectly, they provide examples of working closely with decision makers in integrating different knowledges and values through deliberation and of using deliberative models for weighing up different dimensions of value based on interdisciplinary conceptual frameworks.However, a better understanding is needed of how different elements of shared values approaches should be integrated to suit different contexts and objectives, and how different combinations of methods affect procedural and substantive outcomes.Such questions can also be linked to those concerning the temporal effects of deliberation as well as the role of such methods in processes of conflict and decision making.Integrating methods will be a key part of elucidating the process of deliberation and further developing and testing the Deliberative Value Formation model.For example, integrated methods are necessary to elucidate how different types of values are expressed and how these are adapted or developed through deliberation compared to instrumental analytical approaches.Such methods integration and comparison may allow important questions to be answered such as how and to what extent different analytical, interpretive and deliberative valuation methods privilege or undermine the values of different social or cultural groups, e.g.
in terms of social class, education, and non-indigenous vs indigenous groups.A key challenge is to define sets of methods that situate local or marginalised values and knowledges in such a way that they can be fully articulated, but which can also be taken forward as evidence for broader decision-making processes.Shared, plural and cultural values of ecosystems constitute a diffuse and interdisciplinary field of research, covering an area that links questions around value ontology, elicitation and aggregation with questions of participation, ethics, and social justice.We have presented future directions for further research around a broad range of areas relating to shared values, with particular attention to deliberation as a means both for formation of shared values, and also to integrate different types of knowledge and values.Notably, contributions in this Special Issue develop a wide range of key themes that have been highlighted by IPBES as crucial in recognition of the plural nature of values, such as the importance of culture and institutions, the relationality of values, and participatory means of integrating values in decisions.Box 1 also highlights a number of ways that the work in this issue can extend on and help operationalise the IPBES values framework and help address some of its current gaps, such as in relation to mechanisms for integration of plural values and in terms of the crucial understanding that values are often poorly formed, requiring a process of value formation, rather than just elicitation.The conception of shared values as the values that we come to express and assign through our interactions with others raises fundamental questions on the nature of the contextual values that we express: whether we hold single or multiple sets of values, partially formed ‘proto-values’, or simply do not hold values and only form them through expression and interaction.This has implications for how we understand valuation and gives rise to the need for a different valuation language.For example, if contextual values are not held separately from processes of elicitation, valuation becomes a process of value formation and expression, rather than of capturing values.Irrespective of whether values are held or formed through expression, the ethical, moral and justice dimensions of many environmental issues necessitate approaches that allow for the elicitation of shared, plural and cultural values, particularly in contexts that are complex or contested.While not limited to cultural ecosystem services, these issues come to the fore more often than not in relation to cultural aspects of ecosystems such as spiritual and aesthetic values.Here values are often expressed in ways that are intersubjective, evolve through social processes and reflect two-way relationships between people and nature, resisting talk of consequentialist benefits.Catalyst and conflict points can play a key role in the emergence and articulation of values at a societal or community level that have not previously been outwardly or explicitly articulated.Catalyst and conflict points can be symbolic and are often linked to wider contested issues and meanings about who is involved in decision-making, whose voice counts and who receives the benefits or disbenefits of environmental change.By recognising transcendental societal and communal values, it becomes possible to make these values explicit and incorporate them in decision-making to better anticipate and manage conflicts.An integrated mixed method approach is 
required to elicit the multiple dimensions of shared values and to translate transcendental values into contextual values and value indicators.Monetary valuation is limited to quantifying values.Other methods are needed to understand their meaning or content, and the communal, societal and transcendental values that underpin them.Psychometric, non-analytical and interpretive methods such as artistic methods or storytelling can reveal those shared values.They can be combined with analytical-deliberative methods to provide a comprehensive valuation that can quantify values, understand their individual and shared meanings and significance, and better include ethical dimensions.More research is needed on how different method integrations generate different procedural and substantive outcomes, whether diverse approaches with sometimes conflicting theoretical assumptions in terms of epistemologies and value ontologies can be bridged, and where there are hidden issues of power and exclusion in terms of which methods are chosen and how they are implemented.Direct involvement of practitioners and decision-makers in a number of studies in this issue demonstrates how mixed method valuations integrated through deliberative processes can become a boundary object between research and decision makers.Investigation of how these new democratic spaces can function in terms of more effective translation of values into policy and practice is crucial for enabling the transformative potential of valuations.Shared values resulting from deliberative, group-based valuation are different from individual values.Empirical evidence presented in this issue suggests that they are more informed, considered, confident and reflective of participants’ deeper-held, transcendental values.Deliberated, group-based monetary values may be a better reflection of real welfare impacts than non-deliberated individual values, if derived through a carefully designed and managed process, and research is needed to further explore how, and the degree to which, deliberation can enhance participants' ability to value the implications of counterfactual futures and reduce hypothetical bias.As a socially-mediated learning process, deliberative value formation is influenced by a set of key factors such as timescale and depth of interactions, the diversity of perspectives brought by different participants to the deliberation, the quality of facilitation and process design, the management of power dynamics within the deliberation and the degree to which transcendental values are made explicit.While it is generally assumed that transcendental values do not change in the short term, empirical evidence from psychometric testing indicates that carefully designed, short-term deliberative processes can lead to changes in both contextual and transcendental value expression, though further research is needed to investigate whether and when these value changes are transient or lasting.Whether or not this is the case, if participants state clear preferences for values they expressed after deliberation to be used in decision-making, this suggests that valuations that integrate deliberation have the capacity to draw on more salient knowledge that is perceived to be more legitimate, and less likely to be contested.It also highlights the importance of attending to transcendental values, which have thus far largely been ignored in both monetary and non-monetary valuation of ecosystem services.However, deliberative valuation methods such as DMV raise important
questions around the legitimacy of deliberation processes.From a conventional economic perspective, in Deliberated Preferences approaches these are likely to focus on issues such as representation and consumer sovereignty.In contrast, DDMV bases its legitimacy on deliberative democratic theory that posits ideals of communicative rationality, which are very difficult to fully achieve in practice.This is in particular because there is an intrinsic tension between on the one hand recognising participants’ freedom to deliberate on their terms without external interference, and on the other hand the need for enabling and equalising mechanisms through process design, capacity building exercises and active facilitation.DDMV, while promising in terms of creating conditions for inclusivity, value plurality, reasoned debate, mutuality and reciprocity, thus has key challenges in terms of identification and mitigation of hidden exclusions within design, recruitment, participation and facilitation.Deliberation also opens up avenues to deliberate on meta-values, transcendental values around how to aggregate values.Within mainstream economics, difficulties associated with aggregating values, such as in CBA, have long been recognised, but have also been neglected.There is also little empirical evidence on what people think the ‘rules of the game’ should be in relation to aggregation.Deliberative avenues for aggregation by mutual consent have their own practical and theoretical challenges, with only few examples in practice especially at larger scales, providing an interesting avenue of exploration for future research.In conclusion, we have presented 35 research questions to help give direction to future ecosystem services valuation research, and more broadly valuation in complex and contested contexts where plural, subtle and conflicting values come into play.Ultimately, the purpose of ecosystem service valuation is to ensure that we recognise the tremendous importance of ecosystems for human economies, societies and cultures.Crucially, valuations cannot be separated from these social, cultural and institutional contexts.In this sense any valuation is ‘social’, whether this is recognised by those conducting it or not.The discourse on shared, plural and cultural values and deliberative valuation presented here provides directions to help embed these social aspects in a more transparent and rigorous way.Shared values approaches are crucial in realising the transformative potential of valuation by enhancing democratic participation, integrating knowledge, generating social learning and providing deliberative platforms that directly engage policy makers and practitioners.Further study is needed to demonstrate a more extensive evidence base to mature these approaches, and develop valuation into a more pluralistic, comprehensive, legitimate and effective way of safeguarding ecosystems and their services for the future. | Valuation that focuses only on individual values evades the substantial collective and intersubjective meanings, significance and value from ecosystems. Shared, plural and cultural values of ecosystems constitute a diffuse and interdisciplinary field of research, covering an area that links questions around value ontology, elicitation and aggregation with questions of participation, ethics, and social justice. 
Synthesising understanding from various contributions to this Special Issue of Ecosystem Services, and with a particular focus on deliberation and deliberative valuation, we discuss key findings and present 35 future research questions in eight topic areas: 1) the ontology of shared values; 2) the role of catalyst and conflict points; 3) shared values and cultural ecosystem services; 4) transcendental values; 5) the process and outcomes of deliberation; 6) deliberative monetary valuation; 7) value aggregation, meta-values and ‘rules of the game’; and 8) integrating valuation methods. The results of this Special Issue and these key questions can help develop a more extensive evidence base to mature the area and develop environmental valuation into a more pluralistic, comprehensive, robust, legitimate and effective way of safeguarding ecosystems and their services for the future. |
585 | Associations between cigarette smoking and cannabis dependence: A longitudinal study of young cannabis users in the United Kingdom | Together, cannabis and tobacco are two of the world's most used drugs, and despite their unique smoking relationship, relatively little is known about their combined effects.The high prevalence of cannabis use amongst young people in the UK is a growing concern.However, many daily cannabis users do not develop dependence.Prospective studies of the likelihood of developing a Cannabis Use Disorder have investigated predictors of dependence amongst cannabis users with baseline severity of dependence acting as a main predictor of dependence at one-year follow-up.However, there are a host of other factors which have been considered predictors of developing a CUD, for example: age of onset, gender, impulsivity, mental health problems and early onset of continued tobacco smoking.More recently, van der Pol et al. investigated a population of high risk young adult cannabis users and found that recent negative life events and social support factors such as living alone were more predictive of CUD than cannabis exposure variables, suggesting the existing literature on the aetiology of cannabis use disorder is limited.In relative terms, tobacco is more harmful than cannabis and the majority of tobacco smokers are indeed nicotine dependent.The gateway hypothesis posits that tobacco acts as a gateway drug to the use of cannabis.However, there is strong evidence for the ‘reverse gateway’ whereby cannabis smoking predicts tobacco onset.Several lines of investigation give weight to the hypothesised association between cannabis use and tobacco smoking.Firstly, there is evidence to suggest both nicotine and cannabis affect similar mesolimbic dopaminergic pathways, suggesting overlapping mechanisms in addiction.Secondly, there are shared genetic, temperamental and psychological factors that have been associated with the use of both drugs.Finally, both substances are smoked, often concurrently, such that cross-sensitisation to each substance might occur, with tobacco directly enhancing the subjective effect of cannabis.As nicotine is more addictive than cannabis, tobacco smoking may be a primary driver of continued use and relapse in co-dependent users.About 90% of cannabis users also identify as cigarette smokers; however, this is a complicated relationship given that increased cigarette smoking may substitute for reduced cannabis consumption and vice versa.Users of both drugs report more severe symptoms of CUD.Half of adults seeking treatment for CUD also smoke cigarettes, and treatment outcomes for those using both cannabis and tobacco, in comparison to cannabis alone, are poor.Moreover, relative to those with a CUD, those with co-occurring nicotine dependence show poorer psychiatric and psychosocial outcomes.In a recent controlled laboratory study, Haney et al.
found that the strongest predictor of relapse in cannabis dependent individuals was their cigarette smoking status.Further, cigarette smoking ad libitum or after a short period of abstinence were both associated with relapse to cannabis use thus ruling out acute nicotine exposure or conditioned motivation effects.This study suggests that cigarette smoking alongside cannabis use may confer a greater dependence syndrome and therefore a greater likelihood to relapse.To understand the factors involved in the maintenance of substance use, such that prevention strategies are better informed, longitudinal designs of the use of both drugs are essential, especially during the critical period of adolescence.The present study aimed to investigate the degree to which cigarette smoking predicts the level of cannabis dependence above and beyond cannabis use itself, both at baseline, and in an exploratory four-year follow-up in a sample of young cannabis and tobacco users.Cigarette smoking at baseline, independently of smoking cannabis, is hypothesised to contribute to CUD concurrently and at follow up.Moreover, following previous research we aimed to investigate if the effects of cannabis use on cannabis dependence are mediated by tobacco smoking using a multiple mediator model.A sample of 298 cannabis users who also used tobacco were selected from a sample comprising of over 400 recreational and daily users aged 16–23 years old, as described elsewhere.Inclusion criteria were to speak English fluently, not to have learning impairments, to have no history of psychotic illnesses and normal or corrected-to-normal vision.All participants provided written, informed consent.Participants could also consent to be contacted for further studies and provided contact details as such.The study was approved by the UCL Ethics Committee and its aims were supported by the UK Home Office.Baseline measures were collected in participants’ homes as part of a larger study investigating acute cannabis effects.Participants were required to abstain from all recreational drugs including alcohol for 24 h before each test day.Demographic information, a drug history and assessment of CUD, via the Severity of Dependence Scale, were completed while participants were abstinent.Participants’ past use of cannabis and tobacco were assessed using a semi-structured, questionnaire-based interview which included the following questions: when did you last use tobacco?, For how many years have you smoked tobacco?, In a typical month, how many days do you use tobacco?, How many cigarettes do you smoke per day?, When did you last use cannabis?, For how many years have you used cannabis?, In a typical month, how many days do you use cannabis?, How long does it take you to smoke an eighth?,Participants were assessed for cannabis dependence using the SDS which is five-item questionnaire focusing on ‘loss of control’ or ‘psychological dependence’ in relation to cannabis use.It has good and well-established psychometric properties and was found to be of equal utility in diagnosing cannabis dependence in comparison to more formal diagnostic assessments.A score of three on the SDS indicates cannabis dependence.The following measures were also administered; the Wechsler Test of Adult Reading which is a measure of premorbid verbal intelligence and consists of 50 irregularly spelt words.Scores range from 0 to 50; the Schizotypal Personality Questionnaire which is a 74-item questionnaire where higher scores indicate a greater schizotypal personality 
disorder severity; the State-Trait Anxiety Inventory, of which only the 20 items from the trait scale were administered, with higher scores reflecting greater trait anxiety; the Barratt Impulsiveness Scale which is a 30-item questionnaire describing common impulsive behaviours, where high scores reflect greater impulsivity; the Beck Depression Inventory which is a 21-item questionnaire indexing depression over the past week and the Childhood Trauma Questionnaire which is a 28-item questionnaire assessing history of abuse.At follow-up, four years later, we attempted to re-contact the 341 participants who gave consent and invited them to participate in a semi-structured telephone interview.The final sample consisted of 65 cannabis and tobacco smokers.Participants were recruited through a preliminary email requesting their participation.All participants gave informed consent by telephone and were entered into a prize draw to win a tablet computer for participating.Telephone interviews were conducted between October and December 2013.Demographics, a drug history and the SDS, to reassess participants for CUD, identical to the baseline assessments, were collected.All analyses were conducted in IBM Statistical Package for Social Sciences, V.21.Assumptions of no perfect multicollinearity, linearity, normally distributed errors and homoscedasticity were not violated.Correlations were conducted between cannabis dependence, predictors and possible confounders.At baseline, linear regression was used to assess the predictive relationship of cannabis variables on cannabis dependence.Tobacco smoking variables were added to the regression model to establish whether they could explain significant additional variance in CUD.Questionnaire measures that correlated strongly with cannabis dependence were then added to the model and finally variables that were not found to be significant as regression coefficients were removed, generating the most parsimonious model.Those predictors were then used to predict cannabis dependence in the follow-up data.Unstandardised B coefficients are presented with 2 decimal places.We used PROCESS for Statistical Package for Social Sciences version 21.Multiple mediation analyses were conducted on a priori hypotheses.We tested the possible indirect effects of DAYS-CANNABIS on CANNABIS DEPENDENCE through tobacco smoking variables in a multiple mediator model whilst controlling for confounding variables in the baseline data.This method parses the relationship between a predictor and an outcome into ‘indirect’ and ‘direct’ effects.Indirect effects occur when the predictor influences the outcome variable through another mediator variable.Multiple mediators have a combined and a specific contribution to the relationship between a predictor and outcome.In contrast, ‘direct effects’ between the predictor and outcome are statistically independent of this mediating relationship.For all analyses we used bias-corrected 95% confidence intervals which resulted from bootstrapping of 10,000 samples.An effect is deemed significant when the CI for B does not cross zero.For 4 participants, single questionnaire items for the SPQ were replaced with the mean of the subscale.SPQ data was missing for 11 participants.For 8 participants, single questionnaire items for the BIS were replaced with the mean.For the STAI, 7 items in total were replaced with the mean.Thus, a total of 0.05% of the baseline data was replaced with mean scores.Participants in this study were on average 20.55 ± 1.67 years old with 14.47 ±
1.94 total years in education.Their mean score on the BDI was 7.27 (± 6.67, with a range of 0–40), 44 participants scored >14, STAI 39.41 (± 9.02), BIS 70.73 (± 9.84), WTAR 41.93 (± 6.80), CTQ 37.09 (± 10.04) and SPQ 17.67 (± 10.70).139 participants met criteria for cannabis dependence at baseline.The follow-up sample had a mean age of 24.66 ± 2.07 years; 26.2% met the criteria for cannabis dependence at follow-up.In comparison to the 233 who were not followed up, the 65 who were did not differ significantly on age, gender, primary study variables or smoking characteristics, suggesting that the baseline demographics of the follow-up group are equivalent to the baseline group who were not followed up.Correlations were conducted between the outcome variable of SDS score, predictors and possible confounders.SDS correlated positively with scores on the BDI.SDS correlated weakly with the WTAR and also weakly with scores on the SPQ but not on the STAI or BIS.BIS scores correlated with cigarettes per day and days per month of cannabis use.The model containing only cannabis use variables predicted 24.6% of the variance in cannabis dependence.Cannabis dependence score was significantly predicted by DAYS-CANNABIS.Cannabis dependence scores increased by 0.12 units for every extra day of cannabis use per month.Time to smoke an eighth, years of cannabis use and days since last cannabis use were not predictive of cannabis dependence.When tobacco variables were added to the regression model, the model predicted 28.5% of the variance in cannabis dependence (3.880, p = 0.004).DAYS-CANNABIS remained a significant predictor of cannabis dependence with dependence scores increasing 0.1 units for every extra day per month.YEARS-TOB was predictive of cannabis dependence.For every additional year of tobacco smoking, cannabis dependence scores increased by 0.197 units.DAYS-TOB was a significant predictor of cannabis dependence; scores increased by 0.031 units for every additional day of tobacco use per month.Time to smoke an eighth, years cannabis smoked and days since last cannabis use were not predictive of cannabis dependence.Variables that correlated strongly with cannabis dependence scores were added to the regression model.BDI score significantly predicted cannabis dependence.For every unit increase on the BDI, cannabis dependence scores increased by 0.046 units.As such the model predicted 30.4% of the variance in cannabis dependence scores (3.955, p = 0.020).When redundant predictors were removed from the analysis, the model predicted 29.5% of the variance in cannabis dependence, which is not significantly different from model 3 which includes cannabis, tobacco and potential confounders (0.008, p = 0.750).DAYS-CANNABIS remained the most important predictor of cannabis dependence, followed by YEARS-TOB, DAYS-TOB and BDI score.For the most efficient model, r = 0.54, which is considered a large effect size.Demographic variables were added to the most efficient model given the associations between these variables and CUD.When gender is added to this model, the model predicts 29.6% of the variance in cannabis dependence (0.180, p = 0.670).Age was then added to the most efficient model.This model accounts for 30.7% of the variance in cannabis dependence (4.740, p = 0.030).Age correlated highly with the variable YEARS-TOB, which was no longer significant when age was added.Finally, scores on the CTQ were added to the regression model.This model accounted for 28.7% of the variance in cannabis dependence (0.440, p = 0.510).The significant predictors in
the baseline regression were used to predict cannabis dependence at follow-up, 4 years later.This was to gauge whether the same factors that predict dependence at baseline can predict dependence at follow-up.Means, standard deviations and correlation coefficients of these variables can be found in Table 3.This model predicted 18.5% of the variance in dependence at follow-up.DAYS-CANNABIS, DAYS-TOB, YEARS-TOB and BDI score were not significant predictors of cannabis dependence at follow-up.Baseline cannabis dependence was added to the model stated above.As a result, baseline cannabis dependence became the only significant predictor of cannabis dependence at follow-up.This model predicted 24.8% of the variance in dependence at follow-up.As a result of DAYS-TOB and YEARS-TOB being significant predictors of baseline cannabis dependence in the linear regression, these variables were used as mediators in a multiple mediator model to discern if the relationship between cannabis use and cannabis dependence was mediated by concurrent tobacco use.A bias-corrected and accelerated bootstrapped multiple mediation model confirmed the presence of a combined indirect effect of DAYS-CANNABIS on cannabis dependence through YEARS-TOB + DAYS-TOB, with significant, specific indirect effects through YEARS-TOB and DAYS-TOB.This model accounted for 28% of the variance in cannabis dependence, whereas the direct effect of DAYS-CANNABIS on CANNABIS DEPENDENCE accounted for 23%.Pairwise comparison between specific indirect effects was not significant, suggesting that YEARS-TOB and DAYS-TOB are not statistically different from each other, i.e. they have equal importance in mediating this relationship.The direct route suggests that when taking into account the mediating role of tobacco smoking, DAYS-CANNABIS is still significant.Given that both BDI and WTAR correlated with dependence at baseline, these were added as covariates into the above analysis.As such this model predicted 30% of the variance in cannabis dependence.The indirect effect of DAYS-CANNABIS on CANNABIS DEPENDENCE through YEARS-TOB and DAYS-TOB whilst controlling for BDI and WTAR was significant, with specific indirect effects through YEARS-TOB and DAYS-TOB, with no significant difference between DAYS-TOB and YEARS-TOB.The direct effect of DAYS-CANNABIS on CANNABIS DEPENDENCE when controlling for these covariates is still significant.The main aim of this study was to investigate the role of cigarette smoking in cannabis dependence, above and beyond the effects of cannabis exposure, in a sample of young cannabis and tobacco co-users.We conducted an exploratory follow-up of these users four years later with a 27% response rate, of which 70% of individuals had smoked cannabis and tobacco at baseline.The 65 participants that were followed up were equivalent in demographics and smoking behaviour to those who were not followed up, at baseline.We hypothesised that cigarette smoking would predict CUD at both time points.We also investigated whether the effects of cannabis use on cannabis dependence were mediated by cigarette smoking.Cigarette smoking at baseline was predictive of CUD at baseline when controlling for cannabis use variables in young people who smoke cannabis and tobacco.The most efficient model accounted for 30% of the variance in cannabis dependence, which is considered to be a large effect size as R > 0.5.However, this was no longer the case four years later, where only baseline CUD predicted follow-up CUD, accounting for almost 25% of the
variance and replicating previous findings.When we investigated how cigarette smoking predicted concurrent CUD; we found that cigarette smoking mediated the relationship between cannabis use and cannabis dependence suggesting a role of tobacco use in the pathogenesis of CUD in cannabis and tobacco users.We also found these effects to be robust when controlling for depression and premorbid IQ.Although causality cannot be assumed in this cross-sectional analysis, these results suggest that cigarette smoking may enhance the dependence-forming effects of cannabis.Alternatively, our results may suggest that CUD may capture some aspects of nicotine dependence in a subset of young people with CUD.As such, this research supplements previous epidemiological research that stresses the predictive ability of tobacco smoking in developing CUDs.Our results, based in a naturalistic setting, parallel results from a recent controlled lab study that found cannabis users who smoke cigarettes are more likely to relapse in comparison to those who do not smoke cigarettes, perhaps as a result of this indirect pathway.As such, reducing cannabis dependence might be facilitated by helping individuals quit cigarette smoking.We were able to account for about 30% of the variance in CUD from four predictor variables.However, CUD is a complex disorder and causality cannot be determined from one factor.There are many other factors that can predict CUD that were beyond the scope of the current study but have interesting implications.For example, a recent study by van der Pol et al. found that current problems were better predictors of cannabis dependence in young adults than cannabis exposure itself.As a result of this study, we included demographics and scores on the CTQ to our most efficient model, however these did not account for a significant proportion of variance to be included in the final model or in the mediation analysis.It is clear that CUD is a complex disorder that has many predictors and vulnerability factors that were not included in the model.In the past, regular cigarette smoking would precede cannabis use.This sequence in drug use seems to be tapering off, for example, around 1 in 5 young cannabis users have never smoked a cigarette.Interestingly, both cannabis and tobacco smoking were initiated 4.9 and 4.7 years previously, respectively, at the baseline visit, suggesting simultaneous age of onset in the current study.Therefore, these results do not speak to sequential use as on average the sample initiated both substances at the same time.Stricter tobacco laws in some countries have altered perceptions such that cigarette smoking is considered a more risky behaviour than previously.In 2013, for the first time, tobacco smoking prevalence was estimated to be below 20% in the UK.In comparison, cannabis use has become normal and perceptions of regular cannabis use as a risky behaviour are at an all-time low with risk perception inversely related to prevalence of cannabis use.This may be due to the shifting landscape and debate over legalisation of both medical and recreational marijuana in states such as Colorado, California and Washington in the United States as well as countries such as Uruguay and the Netherlands.As a result, whilst tobacco smoking decreases generally, it is possible that tobacco use will also increase indirectly over time due to increased cannabis use.Our findings are timely because they suggest tobacco may be involved in the pathogenesis of CUD, a possible risk factor of 
legalisation.Our results may be a product of the common liability to the use of cannabis and tobacco, including risk factors such as shared genetic and temperamental factors.For example, recent research shows that nicotine dependence was more strongly associated with lifetime CUD for females than males.Moreover, Cooper and Haney have recently demonstrated that whilst subjective effects are equal across genders, females report more abuse-related effects.Thus, an interesting analysis would be to investigate whether the mediators suggested in the present study were stronger in females than males; however, given that the sample was 71% male, this was not possible.Demographic variables were instead added to the most efficient model and we found that gender and age did not predict cannabis dependence after accounting for cannabis and tobacco use.Our results may also be a product of the common route of administration where inhalation of one substance may sensitise an individual to the inhalation of another substance.This study has several strengths including a relatively large sample size of 298 young cannabis and tobacco users assessed in their own homes.Moreover, we used continuous variables to index both cannabis and tobacco smoking, making it possible to assess the relationship between drug use variables at varying levels of severity.This study also suffers from several limitations.First, within our exploratory follow-up sample we had a modest response from 65 participants.This may have reduced the power to detect a possible true effect of baseline cannabis use on future dependence and therefore these exploratory follow-up results should be interpreted with caution until they can be replicated with a greater sample size.Moreover, we were unable to control for the simultaneous use of cannabis and tobacco as the route of administration and as a necessity our sample is limited to those who only smoke cannabis and tobacco.These results should be interpreted within their self-reported context.Finally, the multiple mediation analysis was conducted on cross-sectional data and therefore the existence and direction of causality cannot be discerned.In light of the medicalisation and legalisation of marijuana, research on cannabis and tobacco use is essential.In a naturalistic study of cannabis and tobacco co-users, baseline cigarette smoking predicts cannabis dependence concurrently when controlling for frequency of cannabis use; however this was no longer the case four years later.At baseline, cigarette smoking mediated the relationship between cannabis use and cannabis dependence, even when controlling for psychological and demographic correlates that might explain this relationship.This suggests that cigarette smoking enhances vulnerability to the harmful effects of cannabis.Funding for this study was provided by the Medical Research Council.The MRC had no further role in study design; in the collection, analysis and interpretation of data; in the writing of the report; or in the decision to submit the paper for publication.Authors H.V.C. and C.J.M. designed the study.Author C.H. managed literature searches and summaries of previous related work.Authors G.L., T.P.F., N.D.S. and G.G. undertook data collection.C.H., N.D.S., T.P.F., C.J.F., and R.K.D. undertook the statistical analysis, and authors C.H. and N.D.S. wrote the first draft of the manuscript.All authors contributed to and have approved the final manuscript.All authors declare that they have no conflicts of interest.
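The analysis pipeline reported above (a hierarchical regression adding tobacco variables over cannabis frequency, followed by a bootstrapped multiple-mediator model) was run in SPSS with the PROCESS macro; the sketch below illustrates the same logic in Python using simulated data, hypothetical variable names and percentile rather than bias-corrected bootstrap intervals, so it is an illustration of the steps rather than the authors' code.

# Minimal sketch of the two analysis steps described above (simulated data,
# hypothetical variable names; not the authors' SPSS/PROCESS implementation).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 298
df = pd.DataFrame({
    "days_cannabis": rng.integers(1, 31, n),   # DAYS-CANNABIS: days of cannabis use per month
    "years_tob": rng.uniform(0, 8, n),         # YEARS-TOB: years of tobacco smoking
    "days_tob": rng.integers(0, 31, n),        # DAYS-TOB: days of tobacco use per month
})
# Simulated SDS outcome with part of the cannabis effect routed through tobacco (illustration only).
df["sds"] = (0.08 * df["days_cannabis"] + 0.15 * df["years_tob"]
             + 0.03 * df["days_tob"] + rng.normal(0, 2, n))

# Step 1: hierarchical regression -- do tobacco variables explain additional
# variance in dependence beyond frequency of cannabis use?
m1 = smf.ols("sds ~ days_cannabis", data=df).fit()
m2 = smf.ols("sds ~ days_cannabis + years_tob + days_tob", data=df).fit()
print(f"R2 cannabis only: {m1.rsquared:.3f}; with tobacco variables: {m2.rsquared:.3f}")

# Step 2: multiple-mediator model of days_cannabis -> {years_tob, days_tob} -> sds,
# with indirect effects (a*b paths) bootstrapped by resampling cases.
def paths(d):
    a1 = smf.ols("years_tob ~ days_cannabis", data=d).fit().params["days_cannabis"]
    a2 = smf.ols("days_tob ~ days_cannabis", data=d).fit().params["days_cannabis"]
    full = smf.ols("sds ~ days_cannabis + years_tob + days_tob", data=d).fit()
    return (a1 * full.params["years_tob"],      # specific indirect effect via years_tob
            a2 * full.params["days_tob"],       # specific indirect effect via days_tob
            full.params["days_cannabis"])       # direct effect of cannabis frequency

boot = np.array([paths(df.sample(n, replace=True)) for _ in range(2000)])  # 10,000 resamples in the paper
for label, col in zip(["indirect via years_tob", "indirect via days_tob", "direct effect"], boot.T):
    lo, hi = np.percentile(col, [2.5, 97.5])
    print(f"{label}: B = {col.mean():.4f}, 95% CI [{lo:.4f}, {hi:.4f}]")

In this layout each indirect effect is the product of the a-path (cannabis frequency predicting the mediator) and the b-path (the mediator predicting dependence with cannabis frequency held constant), and an effect is treated as significant when its bootstrap interval excludes zero, mirroring the criterion described in the Method.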
| Aims: To determine the degree to which cigarette smoking predicts levels of cannabis dependence above and beyond cannabis use itself, concurrently and in an exploratory four-year follow-up, and to investigate whether cigarette smoking mediates the relationship between cannabis use and cannabis dependence. Methods: The study was cross sectional with an exploratory follow-up in the participants' own homes or via telephone interviews in the United Kingdom. Participants were 298 cannabis and tobacco users aged between 16 and 23; follow-up consisted of 65 cannabis and tobacco users. The primary outcome variable was cannabis dependence as measured by the Severity of Dependence Scale (SDS). Cannabis and tobacco smoking were assessed through a self-reported drug history. Results: Regression analyses at baseline showed cigarette smoking (frequency of cigarette smoking: B = 0.029, 95% CI = 0.01, 0.05; years of cigarette smoking: B = 0.159, 95% CI = 0.05, 0.27) accounted for 29% of the variance in cannabis dependence when controlling for frequency of cannabis use. At follow-up, only baseline cannabis dependence predicted follow-up cannabis dependence (B = 0.274, 95% CI = 0.05, 0.53). At baseline, cigarette smoking mediated the relationship between frequency of cannabis use and dependence (B = 0.0168, 95% CI = 0.008, 0.288) even when controlling for possible confounding variables (B = 0.0153, 95% CI = 0.007, 0.027). Conclusions: Cigarette smoking is related to concurrent cannabis dependence independently of cannabis use frequency. Cigarette smoking also mediates the relationship between cannabis use and cannabis dependence suggesting tobacco is a partial driver of cannabis dependence in young people who use cannabis and tobacco. |
586 | Lexical distributional cues, but not situational cues, are readily used to learn abstract locative verb-structure associations | Language acquisition is a complicated business.With little explicit teaching from adults, children rapidly learn words and grammatical structures.Critically, children must acquire language-specific links between verbs and structures; for example, in English fill can appear in the woman filled the bucket with water, but not ∗the woman filled water into the bucket.At around five years of age, children sometimes make overgeneralisation errors such as ∗I’m going to cover a screen over me where a verb is paired with a structure that is not appropriate in the language that they are learning.Such errors show that children understand the verb’s meaning and can produce the structure, but they have not yet learned the correct verb-structure link.Over time, however, children stop making these errors.This retreat from overgeneralisation occurs as children learn adult-like verb-structure links.The English locative alternation involves events where a theme moves to a location and the location is changed by the action.Locative events can be described with two structures, which differ as to whether the verb is followed by the location or the theme: the location-theme (LT) structure, as in the woman sprayed the wall with paint, and the theme-location (TL) structure, as in the woman sprayed paint onto the wall.Not all locative verbs can appear in both structures, however.Specifically, LT-biased verbs appear predominantly in the LT structure, for example deluge, inundate and flood.TL-biased verbs such as dribble, drip and pour appear mainly in the TL structure.Finally, alternating verbs like spray, load and pack appear in both structures.Linguistic analyses explain these associations between verbs and structures in terms of verb classes: clusters of verbs with common semantic and syntactic properties.For example, verbs in the “cover-type” class have the semantic property “a layer completely covers a surface”, which highlights the surface location over the theme argument.The greater salience of the location in these actions means they tend to be described with utterances that place the location earlier in sentences.If verbs incorporate this relative salience information into their meaning representation, then the structural preferences of the verb can be determined from the meaning.One potential solution to the problem of learning verb-structure mappings would be for children to learn conservatively, memorising the verb-structure mappings in their input.This could be implemented with statistical learning mechanisms such as entrenchment and preemption.Here, the occurrence of a particular verb in grammatical constructions constitutes probabilistic evidence for the mappings in the input and against the grammaticality of unwitnessed combinations.Although these proposals enjoy some support, including in this particular domain, and can help to explain the retreat from overgeneralisation errors, they are not on their own sufficient, as they do not directly explain why children make errors in the first place.An influential account of why children overgeneralise is that of Pinker, who suggested that from the outset children possess innate broad range rules that link alternating structures which can be used to describe the same action.In the locative, the broad range rule connects two construals of a locative action – one in which the focus is the location’s change of state and the other in which the
focus is the manner of motion of the theme.When the location’s change of state is highlighted, the LT structure is preferred since it places the location earlier in the sentence.When the manner of motion is highlighted, the TL structure is preferred, since it places the theme earlier in the sentence.On Pinker’s account, the semantic information in the scene can be used to activate a broad range rule that allows children and adults to take a verb that has been heard only in one structure and use it with the other structure.This is desirable for many low-frequency alternating verbs which may only have been in a single structure in the input, but it can also lead to overgeneralisations if a verb is only acceptable in one structure.Pinker explains the retreat from overgeneralisation through the acquisition of semantic verb classes.In this theory, children assign verbs to semantic verb classes which link to structures via narrow range rules and these rules allow children to retreat from the overgeneralisations licensed by the broad range rules.In particular, the salience and consistency of the components of an action across different instances determines its verb class.For example, LT-biased cover is used to describe an action where the location changes state from being visible to being obscured.While the state change is salient and consistent across different cover actions, the movement of the theme can take place in various ways.Likewise, TL-biased pour describes a liquid moving in a continuous stream to the location, but the change of state of the location can be variable.A range of empirical evidence supports the idea that for both adults and children, verbs’ syntactic behaviour is governed by these semantically constrained classes.Pinker’s account of verb class acquisition focuses on semantic information that can be extracted from the situations that verbs are heard in.Gropen et al. 
provided evidence in support of this situational approach in a series of verb learning experiments in which they taught children and adults novel verbs alongside novel actions.Each action included either a salient location change of state or a salient theme manner.After training with these novel verb/action pairs, participants were prompted at test to describe the same action using a full locative structure.Participants used more LT locatives after training scenes with a salient location component, and more TL locatives after training scenes with a salient manner component.However, although this study appears to show situational effects on verb-structure learning, this is not the only possible account of Gropen et al.’s results, because their test actions were biased in the same way as their training items.For example, if in training participants saw the theme move towards the location in a zigzag motion with no change to the location, they saw the same event again at test.Participants’ choice of structure could therefore have been determined by placing the salient argument earlier in the sentence; importantly, this could take place without reference to verb-specific semantics.More generally, since most studies that show semantic effects on structural choice manipulate the test situation, it is not clear whether learners can recall situational information previously associated with a verb and use that information in later structural choices.In Experiment 1, we examine whether verb-specific situational training information can influence structural choices at a later test.A potential problem for situational learning is that the relevant situational information may only rarely be present: speakers do not generally narrate events as they unfold.Instead learners may acquire a considerable amount of information regarding a verb’s meaning from its linguistic context, as proposed under the syntactic bootstrapping hypothesis.For example, Naigles demonstrated that children correctly associated sentences containing novel verbs with causative visual scenes based on the transitive syntactic frame in which the verbs were presented.Specifically, children mapped the transitive sentence the duck is gorping the bunny to a scene in which a duck made a bunny squat by pushing on the bunny’s head.In contrast, children associated a scene in which a duck and a bunny simultaneously made arm gestures with intransitive sentences such as the duck and the bunny are gorping or the duck is gorping with the bunny.The results of syntactic bootstrapping studies have been explained with a range of distinct mechanisms.One involves the number of arguments in a phrase; for example, two arguments would signal a causative meaning.Another account is that learners use syntactic structures to establish elements of verb meaning; for example, the sequence of syntactic categories NP VERB NP might bias towards the causative.A third account is that the post-verbal noun may signal its thematic role; for example, patient nouns may indicate the causative.In addition to these syntactic mechanisms, it has been suggested that lexical mechanisms could provide cues to verb meaning.For example, Mintz showed that frequent lexical frames could be used to classify words into categories like VERB or NOUN.Importantly, because frames like is_the will only pick out transitive verbs, these frequent frames could be useful cues to verb meaning.Finally another mechanism is offered in Scott and Fisher, who found that the lexical distribution of 
animate/inanimate subject pronouns in training could influence verb class acquisition.Thus, in contrast to the non-linguistic information used by situational theories of verb learning, syntactic bootstrapping approaches suggest that syntactic frames, thematic roles, arguments, lexical frames, and lexical distributions in the linguistic signal could support verb learning.These syntactic bootstrapping studies have suggested several mechanisms which children may exploit when learning verb meanings.These different accounts can be tested by examining verb learning in the locative alternation, as the alternation itself rules out some mechanisms.Since the LT and TL structures have three arguments and a similar surface structure, it would be difficult to use the number of arguments, syntactic structures, or frequent frames to learn locative verb classes.In addition, both post-verbal arguments are inanimate, so unlike in Scott and Fisher, animacy/pronoun distribution is not a clear cue to locative verbs’ structural biases.A further challenge comes from the fact that locative verbs do not always occur in locative structures in the input.Twomey et al. examined all utterances containing any of the 140 locative verbs examined by Ambridge et al. from all UK corpora in the CHILDES database of child-directed speech, and found that 78% of adults’ locative verbs in their sample occurred not in full locative structure, but in transitive or intransitive structures, for example you dump the lady’s toys.Although the preposition is a very good cue for the locative structure, this large corpus analysis showed that many locative verbs frequently did not occur with prepositions.These features of the locative suggest that the acquisition of the structural properties of these verbs may depend heavily on lexical distributional learning.A growing literature in computational linguistics suggests that distributional learning mechanisms may provide a general account of lexical class learning.For example, these algorithms could use the sentences you’ve drenched the carpet with water and he saturated his carpet to classify drench and saturate as being more similar to each other than they are to fill, which does not typically occur with carpet.Such models achieve high levels of syntactic and semantic performance using the full set of words that occur with each verb.For example, Mikolov, Chen, Corrado, and Dean demonstrated that lexical distributional regularities from six billion words from Google News were able to achieve state-of-the-art performance in classifying pairs of words as being syntactically and semantically related.These mechanisms also work with child-directed speech: Scott and Fisher found that it was possible to distinguish causal and contact verbs using the distribution of subjects with these verbs in CHILDES corpora.In sum, there is growing evidence that distributional learning can be used to learn a range of meaning and syntactic distinctions from corpora.Twomey et al. 
applied a distributional learning technique to the acquisition of locative verb classes.Their correspondence analysis used the words that appeared near a verb to classify it along several dimensions that encoded lexical distributional similarity.These dimensions predicted adults’ verb grammaticality ratings.The CA was created from a list of the two post-verbal words for each verb in all of the parental input in the UK CHILDES corpora; the list could include any word, for example determiners, prepositions, nouns, verbs, and adjectives.The CA mapped these verbs into a similarity space based on the overlap in post-verbal words.Fig. 1 provides an example of how this might work for a small set of verbs given a small set of nouns that might appear with them in corpora.In this example, the verbs pour and inject are close to each other in the similarity space, because they both occur with words like water and oil in the post-verbal position.The word fill is on the other side of the space, because it tends to have containers in post-verbal position.Spray is in the middle, because it alternates, and is sometimes followed by liquids, sometimes by containers.Load is an LT-biased verb like fill, but since it has different nouns in post-verbal position, it is in a different part of the top part of the space.Inoculate is a low frequency verb that occurred only with one noun, cow, but that is sufficient to place it close to load, which has also occurred with that noun.Notice that this can take place even if the person interprets cow as being the patient thematic role, rather than as a location thematic role.This illustrates how a CA takes a lexical distribution without thematic role information and places verbs in a similarity space such that regions of this space act like verb classes which can be associated with structures.Twomey et al. tested a range of different CA learners on the locative verbs in their corpus.The best CA, which used two post-verbal words, explained 47% of the variance in the independent verb-structure ratings.In contrast, a CA that used all of the post-verbal words only explained 38% of the variance, because the order of the nouns was lost when all post-verbal words were collapsed together, blurring the distinction between LT and TL structures; for example, the woman poured water into the tub and the woman filled the tub with water would both have water and tub as post-verbal words.The success of the two post-verbal words CA suggests that a distributional learner should be sensitive to a small window of adjacent words in learning verb classes.Twomey et al. tested this prediction in a connectionist model that was biased for learning adjacent regularities.The model captured early overgeneralisation of locative verbs and the gradual retreat from overgeneralisation through the acquisition of locative verb classes.Critically, the model’s input was designed so that it could only learn these locative verb classes from the two post-verbal nouns in transitive utterances.This corpus and modelling work demonstrated that development of locative verb classes can be explained by a distributional learning mechanism combined with the transitive input that children hear.While Twomey et al. 
provided support for lexical distributional learning of verb meaning with corpus analyses and connectionist modelling, there is little experimental evidence that children can learn these classes in the same way as these models.One study, Scott and Fisher, has shown that toddlers can use the lexical distributional information before a verb when it is the only cue for verb meaning, but it is less clear if such cues will drive verb class acquisition when they are post-verbal and when situational information is also present.In addition, toddlers may be limited in their ability to deal with experimental task demands, which might mean that they do not combine situational and distributional information consistently early in development.In the case of the locative, there is evidence that children learn verb classes later in development.Thus, exploring these mechanisms in older children removes some of these limitations by testing learning mechanisms at ages where the ability to learn from situational and distributional cues should be robust.In summary, children need to learn semantic verb constraints on their structural choices.The acquisition of locative verb classes allows us to contrast a situational account of this process with a lexical distributional account, while controlling for other sources of information.In Experiment 1, we pit these two accounts against each other in children and adults and then further explore the properties of a lexical distributional mechanism in Experiment 2.In the first study we taught participants novel locative verbs alongside animations of novel actions in a training session and then examined how children and adults would use these verbs at test.For example, in the training action depicted in Fig. 2, a robot fills up its arms with an oil theme from the cylinder on the right and then goes towards the cone location on the left.It then shoots the oil in large balls towards the cone, filling it with the oil.To examine the role of situational information in locative verb class acquisition, we manipulated the salience of the locations and themes in training.In the location-salient condition, the action in Fig. 
2 involved a large change in the location object and little change to the theme.In contrast, in the theme-salient condition the motion of the theme was highlighted while the location was less changed.If situational information is used to learn locative verb classes, then participants should remember whether the location or theme was salient for each novel verb and then use this at test to bias for the appropriate structure.To examine whether learners can use lexical distributional regularities to acquire verb classes, we described the training scenes with sentences that varied in whether the post-verbal noun was the location or the theme.We used transitive frames because corpus work has suggested these structures are the main context for learning about locative verbs in the input to children.We were interested in whether participants could use lexical distributional regularities in training to assign novel verbs to appropriate verb classes.For example, if pabbing occurred with the post-verbal theme-like nouns oil and water, could participants use this information to select a TL structure with this verb at test?,After hearing all of the training scenes for each of the four actions, participants were shown the same four actions with novel pairings of objects.They saw these new videos and heard them described with an intransitive structure that mentioned the target novel verb and were encouraged to describe the scene.Gropen et al. used test stimuli in which either the location or the theme was salient.Participants could therefore use this situational salience to select a structure and insert the novel verb after the structure had been planned.To force participants to use their memory of verb-specific situational regularities that were experienced earlier in training, our test scenes combined the salient version of motion of the theme with the salient version of the change in the location.For example, Fig. 3 depicts the test item for pabbing, in which the robot shoots balls that bounce on the floor and the cone fills completely.The test action was shown with two themes and two locations, which helped to bias participants to producing full locatives in order to disambiguate which themes and locations were involved in the action.Since the same test event was used regardless of the situational/lexical distributional condition in training, an effect of those variables at test would require participants to have retained some memory of the training situation.Because test scenes had salient theme and location components, another way to show a situational effect would be to use the consistency of location or theme across training and test; for example the training action in Fig. 2 and test action in Fig. 
3 both show the cone being filled completely.Pinker claimed that verb classes were defined by consistent situational components across exemplars.For example, LT-biased verbs like fill tend to describe situations with consistent location changes.The predictions of the training/test situational consistency account are the same as the predictions of the relative saliency account: in both cases, a location-salient action on a particular training trial should yield more LT utterances at test than an action presented in a theme-salient training situation.To examine how these mechanisms change over development, we tested three age groups.Bowerman reported overgeneralisation errors at around age 5, demonstrating that children at this age can insert verbs productively into their learned locative structures.Furthermore, Ambridge et al. found that 5-year-olds were sensitive to semantic constraints on their use of locative structures, which suggests that they already have some semantic knowledge that could constrain verb classes.Thus, we tested this age group to examine how lexical and situational cues are used as they learn to produce locative structures.Although the ability to learn novel verb-structure links is likely to increase over development, whether the ability to use lexical or situational cues also changes over time is unclear.Thus, to examine how cue use changes over development, we also tested 9-year-olds and adults.Adult participants were 48 native British English-speaking undergraduate students aged 18–22 years.Data from a further five participants were excluded due to experimenter error, equipment error or because participants were non-native speakers of English.Adults were recruited through a university participation scheme and received course credit for taking part.Child participants were 51 5-year-old children and 55 9-year-old children.All children were British English-speaking and were recruited from local primary schools.Data from a further 11 participants were excluded due to equipment error.Parents’ prior consent was obtained and children received stickers for participation.The study crossed age, situational training and lexical training in a 3 × 2 × 2 design.Our dependent measure was structure produced.Visual scenes consisted of animations of four scenes depicting a robot performing novel actions on a set of items, computer animated in Processing.In each scene a robot caused a theme item to move towards a location item, resulting in a change of state in the location.Each novel action was a combination of a cause-motion action and state-change action, which created a verb that was felicitous in both LT and TL constructions.In action A, the robot threw a sheet-like theme which opened up in mid-air and covered the location.In action B, the robot filled a large location object by shooting or bouncing large balls of liquid into it.In action C, the robot raised a large object upwards by spraying a stream of small particles into it.Finally, in action D the robot decorated a large object with a smaller one after carrying the smaller object to the larger object either with static arms or in an up/down pumping motion.Training stimuli are depicted in Fig.
4.As in Gropen et al., we manipulated the manner and endstate components of each action.Our location/theme salience manipulations correspond to the situational elements that linguists argue are involved in the locative alternation and which have been manipulated in previous studies.Specifically, each manner component consisted of either more or less motion of the theme.In actions A and C, the robot was positioned so that the theme moved either a short distance or a long distance.In action B, the theme either went directly to the goal or bounced on the floor on the way to the goal.In action D, the robot carried the theme at a consistent height or in an up-down zigzagging motion.Each endstate component consisted of either more or less change to the location.In action A, the location was partially or completely covered.In action B, the box was partially or completely filled.In action C, the location was caused to levitate to either a low or a high level.In action D, the theme was embedded in the surface of the location either deeply or on the surface.Thus, the theme-salient level of the situational manipulation combined the more-motion manner with the less-change endstate, while the location-salient level combined the more-change endstate and the less-motion manner.Since each novel action is not easily described by a single English locative verb that encoded both the state change and the manner of motion, Fig. 4 provides a separate English gloss for the theme-salient and location-salient versions of each action.These glosses used locative verbs from Levin and were accepted as plausible descriptions of these actions by adult participants in the norming study described in Section 2.3.Test stimuli are depicted in Fig. 5, and consisted of the more-change and more-motion components of each action.For example, in the test scene for action A the location was completely covered, and the theme moved a long distance.Critically, our stimuli were computer-generated and therefore the endstate and manner components used in test scenes were identical to the matching endstate and manner components used during training.Previous studies did not control the test stimuli in this way, because endstate and manner components of human actions or hand-animated videos are variable and difficult to equate across trials.To reduce item-specific effects, we used novel pairings of objects at test.For example, oil is shot into the cone in training, but water is shot into the cone at test.Lexical training stimuli consisted of transitive sentences spoken by the experimenter.Each sentence contained one of four novel words selected as plausible action labels for English speakers: cringing, pabbing, veeming, and zopping.To investigate whether word co-occurrences could bias participants towards producing LT or TL locatives, training sentences occurred either with a post-verbal location noun or a post-verbal theme noun.L-transitive stimuli were sentences with a post-verbal noun that labelled the onscreen location-like object.T-transitive stimuli were sentences with a post-verbal noun that labelled the theme-like object.Participants heard each training sentence twice.At test, to ensure participants were not biased to produce a particular structure, they heard the novel verb in an intransitive structure and were encouraged to describe the scene with an utterance that mentioned the objects that were involved in the action.In order to see any effect of situational or lexical training on LT/TL structural choice at test, we needed to
induce participants to produce full locative structures.Thus, participants received two warm-up scenes at the start of the experiment and two locative prime trials before the test trials.These were designed to increase the production of full locatives equally across all situational and lexical training conditions.These trials depicted scenes involving a loading action and were described with both the LT structure and the TL structure.To balance order effects, we created eight counterbalanced lists, each consisting of two warm-up trials, followed by eight training trials, two locative prime trials, and finally eight test trials.Situational/lexical training pairings were counterbalanced across participants and verbs, as was LT/TL order of warm-up and locative prime sentences.Order of presentation of verbs was rotated between lists, and training and test trials in a given list were presented in the same order.Left-right position of objects was randomised across participants.On test trials, front-back position and first theme item used were counterbalanced across participants.Fig. 6 depicts an example counterbalance list.Participants were told that they would be shown a video of a robot on a spaceship who would be carrying out known and novel actions with items on the spaceship, and that their task was to describe the scene to the experimenter.The experiment began with the two warm-up trials.First, the experimenter labelled the objects on the screen.Then, she said “Here is the robot in the spaceship.We’re going to watch him do some loading” and played the animation for the first warm-up trial.After the animation had finished playing, the experimenter presented the verb in one locative structure and asked an elicitation question.The second warm-up trial depicted a second loading action using different objects, which was labelled in the same way using the alternate structure.When participants produced a sentence which included only one of the objects, they were prompted to mention both objects up to three times, using the questions “Can you tell me what the robot did with both of the objects?”,“Can you tell me the robot was the ?,or using onto/with, “The robot was the what?,If the participant did not mention both objects after prompting, the experimenter noted this and started the next trial.Eight training trials immediately followed the warm-up trials and proceeded in an identical manner except that the novel verbs were presented in the appropriate transitive sentence for the lexical condition in each counterbalance list.Two locative prime trials followed the training trials in an identical manner to the warm-up trials.Finally, eight test trials were presented, again in the same way as training trials, with the exception that novel verbs were presented in intransitive form, for example the robot was veeming.When participants mentioned only one of the objects, they were prompted in the same way as during training trials.The child procedure was the same as the adult procedure, with the following adaptations to ensure it was child-appropriate.First, in order to engage children in the task, before the experiment began the experimenter told the child she was looking for space scientists to help her with a special job on the spaceship.She then explained that she was preparing a report for the captain of a spaceship, that she would show the child a video of a robot doing special jobs using objects in the spaceship, and that the child’s job was to watch very carefully and, when asked, tell her what the 
robot was doing so that she could complete her report.Second, warm-up, training and locative prime trials began with the experimenter labelling the objects on the screen and asking the child to repeat the labels, while on test trials the experimenter asked the children to label each object, correcting incorrect responses.In object labelling, the location object was always first in order to counter the strong TL bias found in children.Since this object ordering was the same for all test items, it could only increase the overall use of LT structures, but cannot explain any variation due to situational or distributional manipulations in training.Third, to encourage children to use the novel verb, the experimenter repeated it in the elicitation question.There were no other differences between the adult and child procedures.Participants’ responses on test trials were transcribed and coded offline.Sentences in which both nouns were used unambiguously were coded as LT or TL locatives.245 non-locative sentences were coded as Other and excluded from further analyses = 0.26, p = 0.61).Finally, data from 21 individual responses were excluded due to experimenter error on one or both training or test trials for that verb, equipment error or interruption.959 locative responses were included in the final analysis.A further 25% of responses were coded by a second experimenter, naïve to the experimental hypotheses.Inter-coder reliability was substantial, Cohen’s kappa = 0.92.The how/where and onto/with prompts were used to encourage full locatives.The how/where prompt was used on 2% of the test trials with adults.No adults received the final onto/with prompt at test.The how/where prompt was used on 9% of the test trials for 5-year-old children and 0.3% of the test trials with a 9-year-old child.The onto/with prompt was used on 3% of the test trials for 5-year-old children and 0.3% of the test trials with a 9-year-old child.Although these prompts could bias towards particular structures, they were only used when the participant had already mentioned either the theme or location in the post-verbal position, indicating that the element was salient for the participant.Proportions of LT locatives out of all LT and TL utterances produced by 5- and 9-year-old children and adults after situational training and lexical training are presented in Fig. 
7.Structure produced was submitted to a binomial mixed effects model with age, situational training and lexical training as crossed fixed effects and participant and verb as random effects.The maximal model that converged included random intercepts for participant and verb, with no random slopes.All mixed effects models reported used this random effects structure unless stated otherwise.All analyses were repeated excluding responses produced after prompting, however the results were similar to the full data set and we only report the full results here.Overall, participants were biased towards the TL structure = −4.44, p < 0.0001), consistent with previous studies.However, this early TL bias disappeared over development as LT production increased across age groups = 12.00, p = 0.00053).Participants were sensitive to the lexical distribution, producing more LT locatives after L-transitives than after T-transitives = 45.32, p < 0.0001) and relying more heavily on this information over time = 9.72, p = 0.0018).Situational information affected participants’ LT production overall; however in contrast to Gropen et al., more LT locatives were produced after theme-salient training than after location-salient training = 669, p = 0.0097).Finally, LT production was marginally affected by different combinations of situational and lexical information = 2.81, p = 0.094).No other interactions reached significance = 2.15, p = 0.14; age by situation training interaction: beta = −0.54, SE = 0.36, χ2 = 1.39, p = 0.24).To investigate how development affected particular age groups, we applied a binomial mixed effects model to each age group with situational training and lexical training as crossed fixed effects to address three questions: Which age groups would show a TL bias?, How would the effect of lexical training change over age?, Was the mismatching situational effect, which did not interact with age, carried mainly by adults?,With respect to the TL bias, the omnibus analysis found a main effect for age, where LT production rose as participants got older.Our separate models found a TL bias in both 5-year-olds = −2.04, p = 0.041) and nine-year-olds = −6.60, p < 0.0001), but not in adults = −1.27, p = 0.21).Thus, the test stimuli appear to be unbiased in adults, but children prefer to describe them with TL structures.The second question was what the significant interaction between age and lexical training says about lexical distributional learning at each age group.When we looked at the effect of lexical training in the separate models, we found that 5-year-olds showed no effect = 0.001, p = 0.97), 9-year-olds did show an effect = 9.34, p = 0.0022), and adults showed a large effect = 44.79, p < 0.0001).This suggests that the ability to use the lexical distribution to learn about verbs grows over development, although it was not sufficiently robust for the 5-year-old children to learn from a few trials.Our final question related to the mismatching situational effect, by which participants preferred LT structures for theme-salient events.There was no situation effect for the 5-year olds = 1.30, p = 0.25) or 9-year-olds = 0.75, p = 0.38).The adults, on the other hand, showed a robust mismatch effect = 9.76, p = 0.0018).Although there is no situation training by age interaction, the fact that this effect appears to be strongest in the adults suggests that it could be the result of some experiment-specific strategy that only appeared in adults and some children.As the situational effect in adults was opposite to 
the one predicted, we carried out an additional norming study to insure that our training stimuli were biased in the predicted direction, where location-salient stimuli were best described by LT structures.In this norming study, 12 new participants saw one location-salient scene and one theme-salient scene for each novel action from the training stimuli in the above study.For each scene, participants were asked to choose between a description with an LT structure and another with a TL structure containing the verbs in the English glosses in Fig. 4.Four counterbalancing lists varied the order of action, situation bias, and object pairings.The two scenes for each action included different objects and were separated by three trials.Importantly, the location-salient and theme-salient scenes for each action depicted similar motion and endstates, so the verbs were compatible with both versions, as illustrated in the results in Fig. 8.For example, both veem actions were compatible with covered and threw, since the theme was thrown and the location covered in either variant; participants could not therefore have used the verbs alone to discriminate between the scenes.Further, although individually these verbs could bias participants towards either LT or the TL structure, the fact that one LT and one TL verb was present on every scene means that any potential distributional bias was equated.Thus, to distinguish situational bias in their choices, participants had to be sensitive to situational information, and specifically, gradations in manner of motion or endstate.A binomial mixed effects model was applied to LT production with centred situation bias.Participants and verb were entered as random effects with situation bias as a random slope for both.There was a main effect of situation bias = 2.58, p < 0.01).This shows that adults could indeed distinguish the location- and theme-salient stimuli across the verbs and that they preferred to label the location-salient scene with an LT structure and the theme-salient scene with the TL structure.The difference in LT proportion between location- and theme-salient actions was 25%.Thus, in line with Gropen et al., our norming study demonstrates that adults preferred LT structures for location-salient scenes more than for theme-salient scenes.Thus, the opposite effect of situation bias with the same animations in the main study must be due to the way that memory encodes the link between situational information and verbs.Twomey et al. 
found a general TL bias in structural choices in their corpus analysis, where 66% of adult locatives and 87% of child locatives used the TL structure.They argued that this bias could trigger children to overgeneralise LT-biased locative verbs into the TL structure.Since verbs that appeared with location-salient/L-transitive training in Experiment 1 should be described at test with an LT locative, any TL locatives produced for these verbs can be thought of as a type of TL overgeneralisation.To examine this, we used TL production compared to chance as an index of TL overgeneralisation for these items.As predicted, children at both ages produced significantly more TL locatives after location-salient/L-transitive training than expected by chance.Adults’ TL production did not differ from chance.Thus, for verbs where both situational and lexical information should have cued the LT structure, children nonetheless systematically overgeneralised those verbs to TL locatives, and even adults did not prefer the LT structure.Based on the effect of children’s TL bias on locative production, we predicted that this preference for post-verbal theme-like nouns would extend to their production of other structures.Indeed, as expected, children produced more T-transitives than L-transitives overall = 3.92, N = 50, p = 0.047).The TL bias also appeared to affect the errors that they produced: they produced structures with post-verbal theme nouns followed by with more often than structures with post-verbal location nouns followed by into/onto = 19.59, N = 27, p < 0.0001).Taken together, these data support the claim that the majority of children’s early locative overgeneralisation errors reflect a general TL bias in normal sentence production.In contrast to the view that situational cues are used in learning verb classes, adults and children in this study did not base their choice of LT and TL structure on the salience of the location or theme in training or the shared endstate or manner consistency across training and test items.Instead, adults and 9-year-old children remembered lexical co-occurrence information in transitive training sentences and used full locative structures at test that reflected verbs’ lexical bias in training.This ability grew over development, as indexed by the effect size measure Cohen’s d which factors out the variance and sample size associated with each age group: 5-year-old children did not use lexical cues appropriately, 9-year-old children showed a small effect of lexical training, and adults showed a medium effect of lexical training.This is consistent with Pinker’s claim that verb classes develop slowly, and demonstrates the gradual nature of the retreat from overgeneralisation.Experiment 1 examined whether participants’ choice between the LT and TL locative at test reflected some verb related information in training.To exhibit these effects, participants must produce locative responses at test.Our participants did so for these novel actions, producing 78% locatives overall.Participants also used non-locative conjoined transitives, which conform to our instructions to mention both location and theme and are consistent with the transitive structures used in training.Overall, however, only 60 responses were non-locatives that mentioned both location and theme, which is small compared to the 958 full locatives produced, suggesting that these novel actions were best described using the locative structure.Previous studies of locative use in 5- to 9-year-old children have found that 
children can use situational information about location and theme salience to choose between LT and TL structures.Our norming study showed that adults showed significant matching preferences for our training stimuli.In our verb learning study, however, children and adults did not easily store situational information with particular verbs in a way that could help them to assign them to the appropriate LT- or TL-biased verb class.Participants who saw a salient location with a completely filled cone were not more likely to use an LT structure at test compared to those that saw a partially filled cone.These results support the view that the situational effects reported in Gropen et al. are the result of biases in the test stimuli that directly influence structural choices independently of the particular verb and its class.If children cannot quickly learn from situational input how to assign a verb to its class, how do they learn the many low frequency locative verbs that exist in languages?,Twomey et al. found that post-verbal lexical distributional regularities in transitive sentences could be useful in learning locative verb classes for these low frequency verbs.The present study demonstrated that 9-year-old children and adults, but not 5-year olds, could combine lexical distribution with verbs to bias structural choices.We also found that this ability grew with age, consistent with the idea that the use of lexical distributional cues might depend on previously learned verb classes.While this study provides evidence for the role of post-verbal nouns in verb-structure choices, it is not certain that these results involve abstract verb classes.This is because in order to maximise the chance that children would recall situational and lexical cues, the objects seen on test trials were the same as those seen in training.Both types of cue could therefore be learned in an item-specific manner.For example, participants could have used situational cues to learn that pabbing involves oil specifically, rather than theme-like objects in general.However, the lack of a situational consistency effect in children suggests that even when test stimuli included the same items as in training, children’s memory of the action carried out with those items during training was not sufficient to constrain structural choice at test.Similarly, the lexical training effect seen in 9-year-old children and adults could have benefitted from this item overlap.For example, if during training a participant heard the robot was pabbing the oil with a scene including the oil and the box, at test they would see an action involving the oil and the cone.If the participant remembered the post-verbal nouns paired with pabbing during training, they could use those nouns to begin the locative produced at test, triggering a TL structure.Hence in Experiment 1, it remains possible that our effect of lexical distribution was due to our older participants having learned a trigram like pabbing the oil rather than having learned an abstract verb class which would allow pabbing to be followed by any theme-like object and then any location-like object.Therefore, Experiment 2 examines whether learners can use the lexical distribution to assign abstract verb classes that can be generalised to new nouns at test.The results of Experiment 1 suggest that participants may have been relying on a distributional mechanism to learn verb biases.However, there are many distributional learning mechanisms that could be used to learn abstract verb 
classes.Experiment 2 tests the predictions of Twomey et al.’s connectionist model, which assigned verbs to classes from the distribution of nouns that occurred after the verbs in transitive structures.For example in the model’s input, a verb might occur in transitive structures with container objects 75% of the time and liquid objects 25% of time.Based on this distribution, the verb would be assigned to an LT-biased class, since these structures tend to have containers in the post-verbal position.Critically, the model assigned verbs to LT- or TL-biased classes having only ever encountered them in transitive structures and without any situational semantics.We were interested in whether humans could learn verb classes under input conditions similar to the model.In Experiment 2, we taught adults four novel verbs without visual input.We trained participants with these verbs in transitive sentences and then tested them by presenting the verb with three nouns and asking them to generate a sentence.The main manipulation was the set of nouns that occurred with that verb in training.The L-biased condition included object nouns that were typical locations and the T-biased condition included object nouns that were typical themes.To investigate whether this learning mechanism was statistical in nature, both of these conditions included one item with the opposite type of noun.If the verb learning mechanism was not statistical, but simply recorded whether a location or a theme noun had occurred with that verb, then there should be no difference between conditions, since both types of nouns occurred with each verb.However, if human learners are sensitive to the relative frequency, like the connectionist model, then they should prefer the structure that was consistent with the most frequent type of nouns that were paired with that verb in training.20 monolingual, English-speaking adults from the university community participated for course credit or as volunteers.The study manipulated lexical distribution bias of novel verbs in a within-subject design.Four novel words used in previous child language studies served as novel verbs, all of which were presented in the past tense: dacked, keefed, pilked and tifed.Forty transitive sentence frames were created, half with the L-transitive and half with the T-transitive.All frames used different agent, location and theme nouns.Location nouns were containers or surfaces.Theme nouns were liquids plural nouns and mass nouns.Table 2 provides examples of training and test order and stimuli for the first 22 trials.Biased training sets of five items were created for each novel verb.L-biased sets included one novel verb in four L-transitives and one T-transitive, and T-biased sets included one novel verb in four T-transitives and one L-transitive.To allow participants to generate their own structures, we presented verbs with three nouns.Each triple included unique nouns for agent, location and theme, none of which had appeared earlier.There were 5 test items for each of the four verbs.Transitive training sentences alternated with test trials, with the constraint that all training sentences for a given verb appeared before the corresponding test triples.Participants initially saw two warm-up items consisting of two triples with known locative verbs.The first block of five test triples appeared with known locative verbs to further encourage locative test responses.These known trials were interleaved with the five transitive training sentences for the first novel verb.In the 
second block, the next novel verb appeared in transitive training sentences, while the first novel verb appeared with five more test triples.This procedure continued until all four novel verbs had been trained and tested.The last block interleaved the final novel test triples with five known verbs in transitive frames.Blocks of L-biased training items alternated with T-biased training items.Practice and filler stimuli were selected to have balanced structural biases: four known verbs were from a non-alternating LT-biased verb class, four verbs from a non-alternating TL-biased verb class, and four verbs from alternating locative verb classes.Two counterbalance lists were created that varied which novel verb appeared in each section, such that each verb occurred in both L and T structures.Participants were tested individually in a quiet room.Sentences were presented and participant responses recorded using a program written in Processing v.2.0.Training sentences were presented one word at a time in the centre of the screen.To ensure that participants paid attention to the training sentences, presentation of each word was self-paced by pressing the Enter key.When the sentence was completed, the prompt SPEAK appeared on the screen.Participants then pressed the spacebar while repeating the sentence aloud.Test trials displayed the three nouns from the test triple and a novel verb in a diamond shape on the screen.Subject and verb were randomly placed in the bottom and right locations, and location and theme were randomly placed in the top and left locations.This configuration made it harder for participants to use typical English reading patterns or to develop experiment-specific ordering strategies.Participants were asked to formulate a sentence with those words and to press the Enter key when ready.The prompt screen then appeared, and participants were again asked to press the spacebar as they said the sentence aloud.The test trials were designed to allow participants the freedom to generate their own sentence from the four words.Participants were told that they could add words like articles or prepositions in order to make their sentence grammatical.Participants’ responses on test trials were coded offline.Locative sentences with two unambiguous post-verbal nouns were coded as either LT or TL.For example, the utterance the student dacked the noticeboard with posters was scored as LT, while the student dacked posters onto the noticeboard was scored as TL.Transitive sentences and other non-locative uses of the novel verb were coded as Other and excluded from further analyses.332 locative responses were included in the final analysis.All responses were coded by a second coder naïve to the experimental hypothesis.Inter-coder reliability was substantial, Cohen’s kappa = 0.92.Fig. 
9 depicts the proportion of LT locatives produced for LT- or TL-biased novel verbs in Experiment 2.LT production was submitted to a binomial mixed effects model with lexical training as a fixed effect and participant and verb as random effects.The maximal model that converged included random intercepts for participant and verb, with no random slopes.Overall, participants produced LT and TL locatives approximately equally frequently = −0.52, p = 0.60).LT production was higher for L-biased verbs than for T-biased verbs = 13.06, p = 0.00030).Thus, because nouns encountered on test trials did not appear on training trials, participants in Experiment 2 assigned verbs to abstract classes based on the set of nouns that they appeared with during training and then at test, these classes biased their structural choices.Despite the strong evidence for the importance of lexical distributional information from both studies presented here, it remains possible that the visual scene information provided in Experiment 1 may have affected participants’ ability to use lexical cues: it may be easier for speakers to use lexical cues when they have already extracted thematic role information from the visual scene.To examine this possibility, we compared the results from the adult data in Experiments 1 and 2.The left-hand panel of Fig. 9 depicts adults’ LT production in Experiment 1 based on lexical training, that is, collapsed across situational consistency.We submitted the adults’ proportion LT production from both experiments to a binomial mixed effects model with training bias and experiment as fixed effects.The maximal model that converged included by-participant random intercepts and slopes for training bias.Adults were not TL-biased = −0.45, p = 0.65).Overall, adults’ LT production was higher for L-biased verbs than for T-biased verbs = 19.22, p < 0.0001).However, there was no effect of experiment = 0.12, p = 0.73) or interaction between training bias and experiment = 1.72, p = 0.19).Thus, the visual situational information encountered in Experiment 1 did not substantially increase participants’ use of lexical cues relative to Experiment 2, which included no situational information.Other differences between the studies did not strongly modulate the results.Thus, the simplest explanation for the effect of lexical distribution in both studies is that participants used the distribution of nouns in the training phase to assign verbs to abstract classes.The division of nouns into theme and location nouns was based on intuition, but our participants shared these intuitions and when presented with three arguments like cup, coffee, and salesman, they were more likely to use the novel verb with a similar type of argument.That is, they were able to generalise a novel verb to a structure that it had never been paired with based on a verb class that was shaped by the distribution of nouns in the absence of situational cues.Theories of language acquisition often assume that verb knowledge is acquired by combining situational information from the world with abstract syntactic structures.Computational models do not always use these types of information to learn about verbs, because it can be difficult to identify the relevant aspects of scenes or accurately construct syntactic structures.Instead, some of these models make use of the distribution of words in sentences to identify aspects of word meaning and syntactic preferences.Lexical distributional learning could provide a unified approach to explaining a range of 
different phenomena in language acquisition and adult processing.For example, Chang, Bock, and Goldberg suggest that thematic role-based structural priming in the locative alternation could be due to differences in the lexical distribution of themes and locations.These mechanisms can also explain the behaviours seen in syntactic bootstrapping studies, because these studies manipulate the lexical items around novel verbs.While these mechanisms seem to be useful for learning about verbs, relatively little experimental work, other than Scott and Fisher, isolates the role of lexical distribution from syntactic or situational variation.The present work addresses this gap using the locative alternation.It is learned relatively late in development, which means that it can be examined within a production task at an age where children should be able to learn and generalise verbs outside of the laboratory.The acquisition of locative verb classes is a puzzle, because most locative verbs do not appear in the full locative structure in the input.And since the locative alternation controls factors such as syntactic frames, post-verbal noun animacy, and number of arguments, syntactic bootstrapping mechanisms are not easily applicable here.One mechanism that can address this issue is lexical distributional learning, which Twomey et al. showed could explain how these locative verb classes could be learned from post-verbal words.Their connectionist model attempted to predict the post-verbal words in locative utterances.When these predictions were incorrect, the prediction error was used to modify the internal verb representations which generated the prediction.Over time, the model developed distinct verb classes for predicting different types of post-verbal words.However, these classes by themselves did not create structural biases.Rather, these classes became associated with LT-only and TL-only classes learned from frequent verbs that occurred with situational information.If verbs occurred predominantly with post-verbal words that also occurred with the LT-only class, they were classified as close to LT-only verbs in their structural preferences.Similarly, verbs were closer to the TL-only class if they occurred frequently with the same post-verbal words.Thus, situational information was used to learn verb-structure associations for frequent non-alternating verbs and this information was sufficient to associate structural biases with alternating verb classes using post-verbal words without situational information like thematic roles.This mechanism can explain the lexical training effect in our studies.In Experiment 1, the 9-year-old children and adults encoded the nouns that occurred after the verb in training for four different novel verbs and used this information later at test.In Experiment 2, adults learned lexical distributional regularities and generalised this knowledge to new nouns at test.The 5-year-old children in Experiment 1 did not show an effect of lexical training, suggesting that the ability to learn from the lexical distribution changed with age.This increase in ability mirrors Twomey et al.’s connectionist model, where the separation between verb classes increased slowly as the model learned.Fig. 
10 shows how this developmental process might work for the age groups in our study.Spatial distance in the figure encodes verb class similarity, and each verb is an exemplar of a verb class.Given the early evidence for lexical distributional learning, we assume that children at each age can use lexical distributional regularities to map novel verbs to existing locative verb classes.However, because 5-year-olds have a single cluster of locative verbs, it is difficult for them to show a distinction at test between verbs that have occurred in L-transitive and T-transitive sentences during training.Older children and adults know more verbs, and importantly these verbs are more semantically distinct.This predicts that adults and older children will show stronger systematic structural choice at test than younger children.Support for the differentiation of verb classes over time is provided in Ambridge et al.’s rating study, in which the strength of verb-class related semantic predictors increased with age.Although our results come from older children, Twomey et al. suggested that these same distributional mechanisms could explain a range of early effects in development.For example, syntactic bootstrapping studies typically manipulate post-verbal words.Distributional learning over these words would yield results that are similar to the predictions based on syntactic structure.Another important feature of our data is the TL bias in children.Twomey et al. argued that there was a close relationship between verb classes and the TL bias, where the TL bias resulted from an early inability to distinguish locative verb classes.The results in Experiment 1 support this claim.Adults used LT and TL structures equally often, which would emerge from a broad spread of verb classes which they could attach to both LT- or TL-biased verbs based on lexical distribution.However, 5-year-olds and 9-year-olds showed a TL bias, which would emerge from a clustering of known verbs in the TL-biased part of the space.When verbs are clustered tightly as in the 5-year old panel in Fig. 10, an LT-only verb like cover could be placed into the TL-biased part of the verb space, explaining why children sometimes make overgeneralisation errors where LT-only verbs are placed into TL structures at this age.The predictions of this lexical distributional account are different from approaches in which lexical item-based knowledge forms the basis for early usage.For example, fill appears frequently in LT and pour is frequent in TL in child directed speech, so under an item-based approach, children should be accurate at both of these frequent pairings from the outset.However, Gropen et al. 
found that young children produced errors by placing the verb fill in the TL structure as often as they created the correct pairing, even though they correctly used the verb pour only in the TL structure.A lexical distributional approach can explain these findings: the first cluster is TL-biased and fill is at the border between the TL and LT spaces.Nonetheless, in addition to distributional information, there is substantial evidence that children understand aspects of situational meaning from early in development and that structural knowledge is linked to this semantic information.However there is relatively little experimental evidence that children can store structure-relevant situational features for novel verbs across separate training and test events.Indeed, the effect of situation in the current study was not in the predicted direction: participants preferred to use LT structures with verbs that had been seen in theme-salient situations in training.Although there was no interaction with age, this effect was strongest in adults.In contrast, when adults were queried about the situational manipulation in the norming study, they produced structures which matched the situation, preferring LT structures for location-salient scenes.This suggests that adults can identify the salient elements in the training scenes and have a preference for the matching structure, but by the time they are tested, this preference has changed into a mismatch preference.The source of this mismatch effect is not clear, but one possibility is that the salience of the theme and location in the test situation is influenced by variability, rather than consistency as Gropen et al. argued.That is, endstate/manner in the situational manipulation varied between training and test, which could highlight the item that changed, and increase production of the structure that placed the highlighted item earlier.For example, the difference between a partially filled cone in training and a fully filled cone at test could render the cone more salient relative to the unchanged theme motion and trigger an increase in LT production at test.Regardless of what caused this mismatch effect, however, our results suggest that situation information is not being transparently associated with verbs to create matching structural preferences.An alternative mechanism that participants could have employed in these studies is to activate thematic roles directly from the lexical items and create verb classes based on these word-derived roles.For example, hearing the word floor in the cleaner pilked the floor could activate a location thematic role, and pilked could therefore be assigned to a location-biased verb class.Then at test, this verb class would bias speakers towards LT descriptions.One question is whether these word-derived thematic roles are linked to the situation-derived roles.If they are linked, the lexical effect should be stronger in Experiment 1, where additional visual information about thematic roles was provided, compared to Experiment 2, where no visual information was provided.However, there was no difference in the magnitude of the lexical effect between our experiments: word-derived roles plus situation-derived roles were not better than word-derived roles only.Linking word- and situation-derived thematic roles also predicts a strong interaction between situation and lexical training in Experiment 1.When the visual scene highlights the location role and the post-verbal noun activates the same location role, the location role 
should be very salient and easily bound to the verb. When the role activated by the visual scene and the role activated by the post-verbal noun mismatch, then it should be harder to select the verb class, because the situation-derived and word-derived roles bias in opposite directions. This predicts that mismatching conditions should have a smaller lexical training difference than the matching conditions. In fact, the opposite was true. When situational salience matched the role selected by the lexical nouns, the difference between L- and T-transitives was 8%, but when they mismatched, the lexical training difference was 18%. Thus, our results suggest that word-derived role information is independent of situation-derived role information. This is consistent with Twomey et al.'s correspondence analysis, in which the verb class space encoded thematic role distinctions despite receiving only lexical – but not situational/visual – input. The acquisition of low-frequency verbs is an important challenge for theories of language acquisition. The acquisition of the structural biases of these verbs is made more difficult when the verbs do not occur in the appropriate structures in the input, as is the case for many locative verbs. Lexical distributional learning can use overlap in adjacent words to identify semantically related verbs, which can support the acquisition of these low-frequency verbs by using a small number of exemplars, as in the studies presented here. In contrast, in Experiment 1 participants could not use situational information from a small number of exemplars to constrain verb meaning, even though the theme motion and location change were clearly visible and less variable than those in the real world. Thus, while situational information should clearly be encoded with verbs when frequent or salient enough, the vast majority of linguistic forms are infrequent and a complete theory of language acquisition requires a mechanism that can address this long tail of linguistic knowledge. The current studies point to the importance of lexical distributional learning in providing just such a mechanism. | Children must learn the structural biases of locative verbs in order to avoid making overgeneralisation errors (e.g., *I filled water into the glass). It is thought that they use linguistic and situational information to learn verb classes that encode structural biases. In addition to situational cues, we examined whether children and adults could use the lexical distribution of nouns in the post-verbal noun phrase of transitive utterances to assign novel verbs to locative classes. In Experiment 1, children and adults used lexical distributional cues to assign verb classes, but were unable to use situational cues appropriately. In Experiment 2, adults generalised distributionally-learned classes to novel verb arguments, demonstrating that distributional information can cue abstract verb classes. Taken together, these studies show that human language learners can use a lexical distributional mechanism that is similar to that used by computational linguistic systems that use large unlabelled corpora to learn verb meaning. |
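The lexical distributional mechanism highlighted above can be illustrated with a minimal sketch. All verbs, nouns and counts below are hypothetical (they are not the training items from the experiments); the sketch simply builds verb-by-noun co-occurrence vectors from transitive utterances and assigns a novel verb to the location- or theme-biased class by cosine similarity to the class centroids:

# Minimal sketch of lexical distributional verb-class assignment.
# All verbs, nouns and counts are hypothetical examples, not the studies' stimuli.
from collections import Counter, defaultdict
import math

utterances = [
    ("fill", "cup"), ("fill", "glass"), ("cover", "floor"),    # location-biased exemplars
    ("pour", "water"), ("spill", "juice"), ("drip", "paint"),  # theme-biased exemplars
    ("pilk", "floor"), ("pilk", "wall"),                       # novel verb + post-verbal nouns
]

# 1. Build verb-by-noun co-occurrence vectors.
vectors = defaultdict(Counter)
for verb, noun in utterances:
    vectors[verb][noun] += 1

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 2. Class centroids from verbs whose structural bias is already known.
classes = {"location-biased": ["fill", "cover"], "theme-biased": ["pour", "spill", "drip"]}
centroids = {label: sum((vectors[v] for v in members), Counter())
             for label, members in classes.items()}

# 3. Assign the novel verb to the closest centroid.
novel = "pilk"
best = max(centroids, key=lambda label: cosine(vectors[novel], centroids[label]))
print(novel, "->", best)  # location-biased here, because it shared post-verbal nouns like 'floor'

A small number of exemplars is enough for the assignment to fall out of noun overlap alone, which is the point made in the text about the long tail of infrequent verbs.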
587 | Stem cell decisions: A twist of fate or a niche market? | One of the central questions in cell and developmental biology is how differences in cells are established and maintained.In multicellular organisms this problem is not restricted to development but is also relevant during tissue homeostasis in the adult.One mechanism for establishing different cell fates is asymmetric cell division.In this context, the transmission of cell fate information can occur through cell–cell communication, it can be established via intracellular polarity or it can be inherited from one cell generation to the next .Stem cells are one cell type that can divide asymmetrically to produce a self-renewed stem cell and a daughter cell that will differentiate.Stem cells can also divide symmetrically to expand the stem cell pool.Increasing stem cell numbers or generating differentiating cells is a key process in building and maintaining tissues.In the context of stem cells the orientation of the mitotic spindle can influence the fate of daughter cells .The correct alignment of mitotic spindles is not only important in development but defects in this process are also associated with disease .It is thus not surprising that controlling the orientation of mitosis is an important issue for tissue morphogenesis .The different requirements and contexts in which stem cells are found predict that a plethora of regulatory mechanisms operate to govern spindle orientation and cell fate decisions.Here we discuss intrinsic and extrinsic cues that are involved in asymmetric stem cell division and focus specifically on the contribution of selective centrosome segregation.Invertebrate model systems have proven extremely useful for unraveling the general principles that underpin spindle orientation during asymmetric cell division.The genetic approaches possible in these model systems permit asking detailed questions about this process.They also enable identification and easy access of the cells under investigation.Importantly, most of the molecular principles of asymmetric division identified in Drosophila and Caenorhabditis elegans are highly conserved .How is spindle orientation achieved?,A series of events cooperate to position the spindle.In many instances two key events are required that are tightly coupled.First, cell polarity needs to be established specifying cortical regions that can capture the spindle.Second, the spindle apparatus needs to be able to interact with the cortex.Typically, astral microtubules nucleated by centrosomes at the spindle poles serve this purpose.Common to this process in various contexts, is the contribution of a conserved, sophisticated molecular machinery that includes cortical and microtubule binding proteins in addition to molecular motors that can exert torque on the spindle.Our understanding of the key molecules involved in this machinery is steadily increasing .In Brief, G alphai, LGN and Numa constitute the conserved core set of molecules involved in spindle positioning.G alphai can be myristoylated and binds to the cortex .G alphai also regulates the activity of Pins by increasing its affinity for Mud .Pins/LGN binds Mud/Numa .In turn, Numa/Mud can interact with cytoplasmic Dynein , which can exert forces to orient the spindle.Hence, this protein complex can function in anchoring and positioning the spindle.These molecules also play important roles in directing spindle orientation in progenitor cells in the mouse neocortex, the chicken neural tube, and during symmetric divisions in 
developing epithelia. The proteins involved seem to function similarly in different contexts. Nonetheless, how the orientation of mitotic spindles influences the outcome of progenitor/stem cell division varies and is not understood in many progenitor cells. Another difficulty is that spindle orientation is harder to measure reliably in complex stratified vertebrate tissues than in the simpler tissue structures of Drosophila or C. elegans. In vertebrates, the orientation of mitotic spindles is commonly used to classify symmetric and asymmetric divisions. Although the position of daughter cells does not necessarily predict their fate, the alignment of mitotic spindles perpendicular to the tissue layer in which the mother resides (usually the apical surface) is considered asymmetric because the daughter cells inherit different proportions of apical polarity markers. The problem that arises, especially in morphologically complex tissues, is: what is used as a reference to determine the orientation of the spindle? It is important to note that the methods used to measure mitotic spindle alignment have never been compared directly and the reference points used to report the angle of spindle orientation differ between investigators and systems. This may explain discrepancies between observations in the same system. In tissue that is curved, like the base of the intestinal crypt, it becomes even more difficult to define relevant reference points or axes that relate to cell or tissue organization, and more robust methods for these measurements in three-dimensional tissue are needed. Additional complexity is added by the emerging view that at least some stem cell compartments have a high degree of plasticity. Within some tissues, several cell populations can act as stem cells in a context-dependent manner. Which stem cell pool is the active one under a given set of circumstances? This is important for understanding the role of spindle orientation in cell fate decisions and is particularly relevant in the stem cell compartment of the mouse intestine. In recent years much progress has been made in understanding the biology of the stem cells at the base of the intestine, revealing a high level of plasticity within this compartment. Leucine-rich repeat containing G protein-coupled receptor 5 was identified as a marker of cells that can generate all the lineages normally present in the intestinal epithelium. Within the epithelium, Paneth cells are secretory cells that are usually restricted to the crypt base, where the antimicrobial peptides they secrete are thought to protect neighboring stem cells. Previously, cells that reside at position +4, above the last Paneth cell, were identified as stem cells based on their ability to retain labeled DNA. These so-called +4 cells express low levels of LGR5 in addition to the marker Bmi1. Importantly, +4 cells can restore LGR5Hi cells upon their depletion. Similarly, when +4 cells are specifically depleted, they are restored from the LGR5Hi pool. To complicate the situation further, a subset of Paneth cells can act as a reserve stem cell pool when called upon in response to injury or disease. Together, these and other similar observations illustrate the high degree of plasticity that exists between different pools of progenitor cells in this tissue. The high turnover of cells in the intestine makes it vital to maintain a constant supply of replacement cells. A highly dynamic stem cell compartment that includes back-up
provisions ensures the survival of the organism. The molecular mechanisms that control these decisions remain a mystery, but they are likely to include a complex interplay between different signaling pathways, differential adhesion between cells and the basement membrane, and mechanical forces that act at the level of cells and tissue. Stem cells usually reside in a particular environment called the niche, which hosts and maintains stem cells. One idea that has gained popularity is that the niche is the dominant factor in controlling stem cell fate by providing short-range signals that confer stemness on cells within their range. In the Drosophila germline, niche signals can even promote reversion of cells that are partially differentiated to become stem cells again. However, such powerful effects of the niche are not universal. In the case of the hair follicle, cells do not revert to a stem cell fate when they return to the niche after exiting and differentiating, even when the niche is depleted of endogenous stem cells. On the other hand, hematopoietic stem cells can leave the niche without losing their stemness, and neural stem cells can exist and symmetrically self-renew outside their complex microenvironment. In the case of the crypts in the intestine, Paneth cells secrete important stem cell maintenance factors, including Wnt. If Paneth cells are experimentally ablated, however, stem cells are maintained in vivo. Hence, crypt stem cells have the capacity to compensate for the loss of Paneth cells and maintain stemness by other means. Similarly, murine neuroepithelial progenitor cells removed from their normal location produce neurons at normal frequency, suggesting that their self-renewal capacity does not immediately rely on environmental signals. Thus, mechanisms that are independent of a particular microenvironment can drive differentiation or stem cell self-renewal in some stem cell populations. This in turn suggests that at least some stem cells have the capacity to control self-renewal intrinsically or to self-organize a favorable environment to help them do so. Indeed, neural stem cells in the olfactory epithelium, together with neighboring cells, release factors that can negatively regulate self-renewal and proliferation to maintain homeostasis. Likewise, epidermal stem cells can be the source of their own self-renewing signals as well as of the differentiating signals for their progeny. These data question the universal validity of the classical concept that the niche provides all the cues required for normal stem cell maintenance and emphasize the need to consider additional mechanisms that can confer cell fate. An emerging concept that can explain how cellular states are maintained between different generations proposes that cellular memory can be passed on from one cell to the next during division. Prominent examples of mechanisms that could transmit information from one cell generation to the next include epigenetic modification of the chromosomes, the inheritance of the midbody, which can impact dramatically on cellular physiology and cell-fate determination, and asymmetric segregation of centrosomes and cilia. These elements may provide the molecular basis for transmitting differential cell fate information. In the following sections, we discuss what is known about such mechanisms in asymmetrically dividing cells, specifically stem cells, focusing on recent advances in understanding the mechanism and function of non-random centrosome segregation. Cell fate information could be carried
directly by the spindle.Consistent with this idea, various organelles and mRNAs associate with the spindle to provide potential fate determinants to one or both daughter cells .In this context, the centrosome is particularly important.Centrosomes segregate to the opposing poles of mitotic spindles each time a cell divides making them ideal vehicles for carrying information from one cell to another during division.Centrosomes also provide a means to establish polarity in a spindle because they are intrinsically different, due to their duplication cycle .At the core of a typical centrosome are two centrioles.Before new centrioles are produced, the two centrioles already present separate and each one acts as the site for the assembly of a new centriole.As a result, centrioles within each centrosome can be distinguished by age-reflected in the language used to describe the older centriole as “mother” and the younger centriole as “daughter”.Hence the ‘mother centrosome’ carries the oldest set of centrioles whereas the ‘daughter centrosome’ carries the younger set.Differences in the maturation of mother or daughter centrioles are reflected by structural differences and the unequal distribution of proteins .Consequently, molecular differences exist between centrosomes that cells could use to distinguish between them.Indeed, differential segregation of mother and daughter centrosomes has been observed in cells that divide asymmetrically.However, the direction of centrosome segregation is not always the same.In Drosophila male germ line stem cells and in progenitor cells of the neocortex in mice the mother centrosome stays within the stem cell in asymmetric divisions.In budding yeast, where the phenomenon of differential centrosome segregation was first discovered and in Drosophila larval neuroblasts the mother centrosome in the case of yeast) leaves the old cell and segregates to the new daughter cells.This direction of segregation was also observed in cells from a neuroblastoma cell line where the daughter centrosome is inherited by the cell with progenitor potential .The nature of centriole duplication causes the presence of centrioles with different states of maturity within a cell.Intriguingly, in system that display biased centrosome segregation like budding yeast, the Drosophila male germ line and Drosophila neuroblasts, the centrosomes differ in their ability to nucleate microtubules during interphase .This could suggest that centrosome segregation patterns may be driven by differences in the ability to nucleate astral microtubules caused by structural variations that result from the maturation state of daughter versus mother centrioles.In vertebrate cells mother and daughter centrioles vary in their ability to recruit components for microtubule nucleation in interphase .This might be because centrioles require ∼1.5 cell cycles to fully mature to become a mother centriole.The maturation is accompanied by the formation of different types of appendages that may be involved in anchoring microtubules .Hence, the increased ability of the mother centriole to nucleate and/or anchor microtubules might confer an advantage for engaging with the microtubule binding sites at the cortex, which in turn enhances the probability of the mother centriole to be retained there.Although appendages do not form on mother centrioles in Drosophila , the mother centrosome of male germ line stem cells can nucleate a significant number of microtubules during interphase .To ensure asymmetry of the process, such astral 
microtubules might then be captured by asymmetrically localized microtubule stabilizing proteins like the adenomatous polyposis coli protein, which is restricted to the stem cell/hub cell interface .Differences in the maturation of the SPB might also drive biased SPB segregation in budding yeast.The old SPB is guided into the bud and this requires the Kar9 protein, a protein with some sequence similarity to APC .Importantly, the old SPB has the ability to nucleate microtubules significantly earlier than the new SPB because recruitment of Spc72 – a core component of the SPB and a receptor for γ-Tubulin – to the new SPB occurs with a significant delay.Abolishing this difference by forcing simultaneous nucleation of astral microtubule from both the old and the new SPB causes randomization of SPB segregation .This suggests that SPB segregation can result from structural asymmetries in the SPBs imposed by the SPB replication cycle.However, additional complexities are likely to exist.Using recombinase-dependent exchange of fluorescent tags fused to Spc72 to specifically label old and new SPBs allowed screening for genes involved in directional SPB segregation .This approach revealed that Nud1/centriolin, a core structural component of the SPB, together with components of the mitotic exit network – a conserved signaling cascade controlling key events of exit from mitosis and cytokinesis – are required to specify the fate of the SPB .Without a fully functioning mitotic exit network Kar9 does not preferentially recognize the old SPB and the older SPB is inherited randomly .Another structural difference between centrioles in vertebrate cells is linked to the fact that mother centrioles produce the primary cilium.The primary cilium is generated as mother centrioles mature into a basal body that is anchored at the membrane .In the case of radial glia, the non-random segregation of centrosomes could thus be linked to the fact that these cells are ciliated.Contrary to observations in other cell types, the primary cilium is not completely disassembled when absorbed prior to cell division in these cells.Remnants of it stay attached to the mother centrosome during mitosis and co-segregate to the daughter cell that retains stem cell characteristics .Intriguingly, observations made in mouse fibroblasts already suggested that inheriting the older centrosome results in an asymmetric outcome for the timing of primary cilium production.Both fibroblast daughter cells can build a primary cilium, but the daughter cell inheriting the older centriole produces a primary cilium first.This asynchrony results in a differential response to Sonic hedgehog signaling .Similarly, an asymmetry in the ability to form a cilium between progenitor cell daughters could lead to differences in their ability to respond to proliferative signals .Hence inheriting the ability to rapidly produce a primary cilium by asymmetrically receiving mother centrioles might support maintenance of radial glial fate.Indeed, depletion of the mother centriole marker Ninein by RNAi led to a reduction in the number of progenitor cells, suggesting that losing mother centrosome specific markers from the centriole impacts on cell fate maintenance .However, depletion of Ninein affects formation of the primary cilium in retinal pigment epithelial cells opening the possibility that loss of radial glia cells induced by Ninein knockdown may not solely be attributable to loss of mother centriole traits, but could also be due to loss of cilium-mediated signal 
transduction.Thus, direct evidence for non-random centrosome segregation and progenitor cell fate is still missing.It will be important to dissect the role of the primary cilium in ciliated progenitor cell divisions to resolve this issue.In Drosophila neuroblasts differences between centrosomes exist in interphase.One centriole nucleates an aster and is stably bound to the cell cortex, while the other does not nucleate microtubules and moves freely through the cytoplasm .Progress was made recently shedding light on the molecular details of this process.Centrobin, a protein specific for daughter centrioles that was first identified in mammalian cells is required for centriole duplication and localizes to the daughter centriole in Drosophila , actively nucleating microtubules and cortex bound.In interphase neuroblasts, CNB is required to recruit the machinery that nucleates microtubules.Loss of CNB abolishes the ability of daughter centrioles to nucleate microtubules causing both centrioles to move apparently in a random manner within the cytoplasm.Loss of CNB also randomizes the centriole segregation pattern. .Conversely, forcing recruitment of CNB to both centrioles leads to microtubule nucleation from both centrioles generating two cortex-bound asters close to each other .In both cases total number of centrioles per cell is normal, but at least in the case of CNB loss, the stereotype inheritance of the daughter centriole by the neuroblast is lost, which is likely to happen when CNB is forced to both centrioles in these cells as well.Recently Pericentrin like protein was discovered as an additional player in regulating microtubule nucleation in interphase neuroblasts.PLP localizes to both centrioles, but higher levels accumulate on the mother centriole .Loss of PLP causes activation of microtubule nucleation at both centrioles suggesting that PLP is normally involved in suppressing microtubule nucleation at the mother centriole .Unlike loss of CNB, loss of PLP also compromises centrosome segregation, but leads to abnormal centrosome numbers per cell .CNB and PLP are thus components that regulate microtubule nucleation and affect the stereotype segregation of centrioles.Almost 40 years ago, the immortal strand hypothesis was proposed by John Cairns.It states that in order to protect themselves against mutation due to errors introduced by DNA replication, stem cells retain the original DNA template strand .This hypothesis has been revised that stem cells might still control DNA strand segregation, but do so to differentially segregate epigenetic information.One major caveat is that molecular mechanisms that enable execution of this task are largely unknown .The finding that labeling centrosomes in Drosophila male germ line stem cells within a short time window during embryogenesis was sufficient to generate label-carrying centrosomes many cell generations later in the adult, demonstrated the permanent presence of the same centrosome within male germ line stem cells .Such an ‘immortal centrosome’ could be an element that provides continuity in controlling DNA strand segregation .There is still no evidence of immortal DNA strands in the Drosophila male germ line .Yet the finding that male germ line stem cells retain certain histones during asymmetric division indicates that these cells might differentially transmit epigenetic information.In line with this idea, using chromosome oriented fluorescent in situ hybridization non-random sister chromatid segregation of only the sex chromosomes was 
reported to occur in these cells .The SUN-KASH domain containing proteins connect cytoplasmic elements of the cytoskeleton with the nuclear lamina and chromosomes .This machinery might control non-random sister chromatid segregation since interfering with the centrosome or components of the SUN-KASH machinery randomized chromatid segregation .Nonetheless, how individual DNA strands are recognized remains completely unclear, as does the role played by the mother centrosome in this process.Furthermore, randomizing DNA strand segregation by impaired centrosome function, did not immediately affect germ line stem cell fate or number , leaving the functional relevance of this phenomenon unclear.Neuroblasts are special because they are the only somatic cells in Drosophila with a centrosome actively nucleating microtubules during interphase .It is also notable that in these cells the daughter centriole recruits the machinery to nucleate microtubules in interphase , a feature typically performed by the mother centriole in other systems .In interphase Drosophila neuroblasts, the daughter centriole organizes a microtubule aster that keeps an invariant position at the cortex, which will become the apical pole in the next mitosis and hence remains in the neuroblast.Therefore the interphase microtubule aster is located opposite from the position where daughter cells are born .Why daughter cells cluster remains unclear, but in the Drosophila embryo, mechanisms exist to correct errors in the orientation of neuroblasts division that involve signaling from neighboring glial cells , suggesting that daughter cell clustering is of critical importance during central nervous system development in Drosophila.In larval neuroblasts, the position of the microtubule aster at the apical cell pole opposite to the daughter cell cluster suggested that it might play a role in transmitting cell division orientation information from one division to the next.Consistent with this idea, transiently disrupting microtubules, which leads to loss of asters and the anchoring of centrioles to the cortex, resets the orientation of divisions by establishing an ectopic microtubule aster that serves as a predictor of the new axis of division after restoring microtubule dynamics .Mutants such as mud induce an increase in the number of symmetric divisions of neuroblasts thus interrupting the normal pattern of asymmetric divisions .Subsequent asymmetric divisions of the resulting mud mutant neuroblast siblings respect the orientation of the preceding symmetric cell division and daughter cells are born into the space between the sibling neuroblasts pair .This means that in this case the orientation of the preceding divisions is maintained.These data suggest that neuroblasts can ‘read’ or remember the orientation of their last division.The responsible mechanism is not clear.However, the memory of division orientation also functions robustly when the interphase aster is composed of two centrosomes.On the other hand, it is prone to errors when centrosome function is impaired or when microtubule dynamics are disrupted .This suggests that it is important for neuroblasts to have a functional microtubule network in interphase for the cell polarity memory to work, but why the system requires the daughter centrosome remains unknown.An important question that remains is whether cell extrinsic input contributes to bias in centrosome segregation.Orientation of cell division is known to be regulated by a number of signaling events between cells .The 
Wnt/planar cell polarity pathway can regulate spindle orientation. Remarkably, Wnt signaling seems to be able to bias centrosome segregation. When exposed to a localized source of Wnt3a signal, embryonic stem cells in culture can be triggered to show biased centrosome segregation, taking the older centrosome to the cell closer to the source of Wnt3a. The cell retaining this centrosome was also seen to retain pluripotency markers. However, the molecular details of how exposure to Wnt regulates the orientation of mitotic spindles are not well understood. In Drosophila and zebrafish, the transmembrane receptor Frizzled and its effector Dishevelled are involved. They can interact with Mud/Numa, linking Wnt signaling to the spindle orientation machinery. It is therefore possible that a similar signaling event also provides cues for the attraction of one spindle pole, the one containing the stronger Ninein signal (a marker for the mother centriole), in embryonic stem cells. We do not understand the signaling that governs the selection of one spindle pole over the other, but details about how downstream targets of Wnt signaling could contribute to the orientation of mitosis are emerging. Wnt-dependent spindle orientation, recently identified in zebrafish dorsal epiblast cells, showed involvement of the anthrax toxin receptor 2a. Wnt polarizes the activity of this receptor. In cooperation with RhoA, it activates the formin zDia2 to locally generate actin filaments to help orient the spindle. The precise role of actin cables in spindle positioning remains to be determined. In Drosophila S2 cells, experimentally forcing the localization of Dsh to restricted cortical regions causes recruitment of the actin binding protein Canoe/Afadin, locally activating Rho signaling. Dia then functions as an effector of Rho activation, inducing F-actin enrichment at sites of cortical Dsh. Interestingly, during Drosophila neuroblast asymmetric divisions, Canoe is involved in spindle orientation by playing a role in recruiting Mud. These results from zebrafish and Drosophila indicate that actin-dependent processes might influence spindle orientation, similar to the situation in budding yeast. In yeast, actin cables serve to guide astral microtubules to position the spindle during mitosis. Alternatively, the interaction of Pins/Canoe could be a way to stabilize the cortical position of Galphai/Pins/LGN/Mud/Numa complexes. It will be important to test whether the actin–myosin network is involved in this process in cells where non-random centrosome segregation occurs. Another signaling pathway that was recently implicated in asymmetric centrosome behavior is the Notch signaling pathway. In cells of the peripheral nervous system of Drosophila, asymmetries in centrosome behavior correlate with differences in centriole migration. During cytokinesis of the sensory organ precursor cell, the anterior and posterior centrosomes differed in the time required for their movement to the apical pole. Notably, this differential movement was delayed in mutants of Numb, a regulator of the Notch pathway, and accelerated when Numb was overexpressed, suggesting that Numb regulates differential centrosome behavior in this cell type. Consistent with this idea, Notch may also function in regulating spindle orientation in the mammary epithelium. Treating young mice with a γ-secretase inhibitor to block Notch signaling was reported to result in measurable differences in the orientation of mitosis in cells within the terminal end buds. Hence, in addition to the well-known link
between asymmetric cell division and the control of Notch pathway activity, Notch signaling might play also a more direct role as a regulator of centrosome and spindle behavior.Many potential mechanisms have emerged that contribute to the phenomenon of non-random segregation of centrosomes.These include differences in their structure and molecular composition, and in their ability to respond to specific signals.Observations from yeast show that even if structural differences can suffice to ensure asymmetric SPB segregation , additional layers of regulation that involve signaling cascades can impact on SPB behavior .Similar to the situation in yeast, centrosome segregation seems to be controlled in a sophisticated manner in Drosophila neuroblasts since: pericentriolar material is actively shed from the mother centriole at the end of mitosis and accumulates on the daughter centriole ; stable microtubule nucleation by the daughter centriole requires the action of Pins, a protein that has thus far been shown to only localize to the apical cortex in mitosis .Thus, in Drosophila neuroblasts and yeast signals that control biased centrosome/SPB segregation cannot solely be explained by structural differences in centriole maturation.It is also still unknown whether the loss of a primary cilium from progenitor cells affects their fate.To this end, it will be important to determine if depleting specific genes, such as ODF2, which renders mother and daughter centrioles indistinguishable at the ultra-structural level and prevents primary cilium formation without impinging on the cell cycle , affects progenitor fate.Importantly, a clear-cut connection between directed centrosome segregation and cell fate generation has not been demonstrated in any of the systems that exhibit non-random centrosome segregation.To this end, it will be most informative to investigate now whether asymmetric centrosome segregation is a general feature of stem cell division, occurs only during asymmetric division or can also be observed in symmetric divisions and occurs in cells in which non-random segregation of DNA strands occurs.It should now be possible to measure this in muscle satellite cells, crypt stem cells and intestinal stem cells in Drosophila .The most important point to resolve will be to establish how non-random centrosome segregation and cell fate are related to test the beautiful hypothesis that inheriting one type of centrosomes ensures the continuity of cell fate between different generations. | Establishing and maintaining cell fate in the right place at the right time is a key requirement for normal tissue maintenance. Stem cells are at the core of this process. Understanding how stem cells balance self-renewal and production of differentiating cells is key for understanding the defects that underpin many diseases. Both, external cues from the environment and cell intrinsic mechanisms can control the outcome of stem cell division. The role of the orientation of stem cell division has emerged as an important mechanism for specifying cell fate decisions. Although, the alignment of cell divisions can dependent on spatial cues from the environment, maintaining stemness is not always linked to positioning of stem cells in a particular microenvironment or `niche'. Alternate mechanisms that could contribute to cellular memory include differential segregation of centrosomes in asymmetrically dividing cells. |
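The review above notes that reported spindle angles depend on the reference axis chosen and that these references differ between investigators and systems. A minimal sketch of the underlying geometry, with invented coordinates, shows how the same spindle yields different reported angles depending on whether an apical surface normal or an in-plane tissue axis is taken as the reference:

# Illustrative sketch: spindle angle relative to a chosen reference axis in 3D.
# Pole coordinates and reference vectors are made-up values for demonstration.
import numpy as np

def spindle_angle(pole_a, pole_b, reference):
    """Angle (degrees) between the spindle axis and a reference direction."""
    axis = np.asarray(pole_b, float) - np.asarray(pole_a, float)
    ref = np.asarray(reference, float)
    cos_theta = abs(np.dot(axis, ref)) / (np.linalg.norm(axis) * np.linalg.norm(ref))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

pole_a, pole_b = (1.0, 2.0, 5.0), (4.0, 2.5, 7.0)    # hypothetical spindle pole positions (um)
apical_normal = (0.0, 0.0, 1.0)                      # reference 1: apical surface normal
tissue_plane_axis = (1.0, 0.0, 0.0)                  # reference 2: an in-plane tissue axis

print(spindle_angle(pole_a, pole_b, apical_normal))      # angle to the apical normal
print(spindle_angle(pole_a, pole_b, tissue_plane_axis))  # same spindle, different reference

The absolute value folds the result into 0–90°, reflecting that a spindle axis has no intrinsic direction; in curved tissue such as the crypt base, using the local surface normal as the reference would be one way to make such measurements comparable between cells.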
588 | Relationships between major epitopes of the IA-2 autoantigen in Type 1 diabetes: Implications for determinant spreading | The development of Type 1 diabetes is associated with T- and B-cell autoimmunity to multiple islet autoantigens including proinsulin, glutamate decarboxylase, IA-2 and zinc transporter-8 .Studies on the natural history of Type 1 diabetes indicate that spreading of autoimmune responses within and between these islet autoantigens is crucial for disease progression, and individuals who maintain a restricted response to single islet antigens have a low risk of developing clinical disease .The mechanisms underlying the progressive spreading of autoimmune responses to determinants on islet self proteins are unknown.Studies in animal models of autoimmune disease have implicated B-cells in this process, specifically through their roles as antigen presenting cells .Autoantibody-secreting B-cells are proposed to play a critical role in sustaining T-cell responses to islet antigens by mediating their efficient uptake via the B-cell receptor, facilitating the presentation of peptides derived from antigens to T-cells .Depletion of B-cells impairs T-cell responses to islet antigens, thereby preventing the development of diabetes in animal models and prolonging beta cell function in human Type 1 diabetes .There are close links between T- and B-cell responses to islet antigens when these are studied at the epitope level.Thus, both T- and B-cell epitopes are clustered on the structure of islet autoantigens and T-cell responses of peripheral blood lymphocytes from diabetic patients to specific IA-2 peptides are associated with the presence of antibodies to epitopes overlapping these peptides .Furthermore, the binding of antigen to the B-cell receptor is stable within antigen processing compartments and the formation of such complexes may protect or expose sites at which antigen is cleaved by processing enzymes, leading to the stabilisation of specific peptides for presentation and activation of autoreactive T-cells .Such modification of islet antigen processing and presentation may represent one mechanism by which B-cells facilitate determinant spreading in the autoimmune response in Type 1 diabetes.Studies on autoimmunity to one of the major islet autoantigens in human Type 1 diabetes, IA-2, illustrate the importance of immune diversification in Type 1 diabetes.Antibodies to IA-2 are detected in the majority of patients at the time of diabetes onset and their appearance is strongly predictive of disease progression in non-diabetic subjects .Analysis of binding of autoantibodies to deletion mutants of IA-2 has identified several distinct regions of antibody reactivity within the cytoplasmic domain, including at least two linear epitopes between amino acids 621–630 of the juxtamembrane domain and conformational epitopes within the tyrosine phosphatase domain, which include a major epitope represented by amino acids within the 831–860 region of the molecule and a second that includes residues 876–880 .In the early autoimmune response in pre-diabetes, IA-2 antibodies often recognise epitopes in the JM domain of the protein, reactivity then spreads to epitopes in the PTP domain and to the closely related IA-2β .Recent studies have shown an increase in the prevalence of antibodies to epitopes in the IA-2 PTP domain, concurrent with rising diabetes prevalence .Furthermore, diversification of the autoimmune response to multiple epitopes on IA-2 in pre-diabetes increases Type 1 diabetes 
risk , demonstrating that determinant spreading in IA-2 autoimmunity is closely linked to diabetes progression.We have recently shown that T-cell responses to a peptide representing amino acids 841–860 within the PTP domain of IA-2 are associated not only with PTP domain antibodies, but also more significantly with antibodies to the JM domain .We hypothesised that B-cell receptor binding to the JM domain may facilitate loading of processed peptides in the PTP domain for stimulation of T-cells, potentially as a consequence of these regions being closely aligned on the three dimensional structure of the protein.The aim of this study was to investigate the relationships of antigenic sites within the IA-2 JM and PTP domains by: i.) localising epitopes for monoclonal IA-2 antibodies to the JM and PTP domains by peptide inhibition and site-directed mutagenesis; ii.),investigating possible juxtaposition of the epitopes on IA-2 by cross-competition studies and iii.),determining the influence of JM and PTP domain monoclonal antibodies on peptides generated during proteolytic processing of IA-2:monoclonal antibody complexes."Patients with Type 1 diabetes between the ages of 12 and 30 were recruited within 6 months of clinical onset from diabetic clinics in Yorkshire, Durham and King's College Hospital, London, UK, with informed consent and approval from appropriate Ethics Committees.Serum samples from IA-2 antibody-positive patients were selected for characterisation of IA-2 autoantibody epitopes on the basis of strong reactivity to deletion mutants and chimeric constructs representing different regions of the IA-2 molecule .Four mouse monoclonal antibodies, 76F, 5E3, 8B3 and 9B5, that recognise epitopes in the JM domain of IA-2 overlapping those for autoantibodies in human Type 1 diabetes , and three human B cell clones 96/3, M13 and DS329 obtained after EBV-transformation of B lymphocytes from Type 1 diabetic patients and secreting antibodies to epitopes in the IA-2 PTP domain, were used for epitope characterisation.A polyclonal rabbit antiserum was also used for epitope studies.Monoclonal antibodies were purified by protein A-sepharose chromatography from tissue culture supernatants of these clones.For antibody competition studies, Fab fragments of the antibodies were prepared by papain digestion, as described .Antibody binding to radiolabelled IA-2 constructs was analysed by radioligand binding assay, as previously described .IA-2 constructs used were the cytoplasmic domain of IA-2, a chimeric construct representing the juxtamembrane domain fused to the tyrosine phosphatase domain of PTP1B, the IA-2 PTP domain and the central region of the IA-2 PTP domain.IA-2 cDNAs were transcribed and translated in vitro in the presence of 35S-methionine using the TNT Quick Coupled Transcription and Translation System.Radiolabelled protein was incubated with monoclonal antibody or test sera for 16 hours at 4 °C in wash buffer.Immune complexes were captured on Protein A-Sepharose and, after washing, the quantity of immunoprecipitated radiolabelled antigen was determined by liquid scintillation counting."To evaluate their contribution to antibody binding, single amino acids within the IA-2 sequence were substituted for alanine using the QuikChange site-directed mutagenesis kit according to the manufacturer's instructions.Substitutions were verified by sequencing.Mutated constructs were transcribed and translated in vitro in the presence of 35S methionine and used in radioligand binding assays as described 
above. Binding of antibodies to mutated constructs was compared with that to the wild-type construct. Single amino acid mutations were considered to have inhibitory effects on antibody binding if binding was reduced by 50% or more. Relationships between antibody epitopes were investigated by competition studies using Fab fragments of monoclonal antibodies of defined epitope specificity. Monoclonal antibodies or sera from diabetic patients were incubated with 35S-labelled IA-2 cytoplasmic domain in the presence or absence of 5 μg of Fab fragments of the test antibody for 16 h at 4 °C, and the quantity of radiolabelled protein immunoprecipitated was determined as described above. Inhibitory effects on antibody binding of Fab fragments of individual antibodies were tested by analysis of variance. To generate protein for antibody footprinting, cDNA representing the coding sequence of the cytoplasmic domain of IA-2 was cloned into the pGEX-6P vector to generate a construct encoding an IA-2 fusion protein with an N-terminal glutathione-S-transferase purification tag followed by a PreScission Protease cleavage site. The recombinant protein was expressed in BL21 E. coli cells and extracts prepared by lysozyme treatment of bacterial pellets. Recombinant protein in bacterial extracts was captured on Glutathione Sepharose 4b and treated on-column with PreScission Protease to cleave the purification tag and elute the pure IA-2ic protein. The protein was dialysed against phosphate-buffered saline and was > 90% pure by SDS-PAGE analysis. Monoclonal IA-2 antibodies were immobilised by chemical cross-linking to protein G Sepharose. Antibodies were incubated with beads for 1 h at room temperature and cross-linked with dimethylpimelidate in borate buffer. Unreacted sites were blocked with 20 mM ethanolamine for 10 min. Unbound antibody was removed by sequential washes in 100 mM triethylamine pH 11.7, sodium acetate pH 3.0 and PBS. The influence of monoclonal antibody specificity on proteolytic processing of IA-2 was examined by incubating protein G Sepharose-conjugated antibodies with 20,000 cpm of 35S-methionine-labelled IA-2ic and 10 μg of unlabelled purified recombinant IA-2ic for 2 h at room temperature. Non-bound IA-2 was removed by washing and complexes were incubated with trypsin for the times indicated in the figure legend. Reactions were terminated by addition of phenylmethanesulphonic acid and non-bound proteolytic fragments removed by washing. Bound fragments were eluted in 100 mM triethylamine, eluates neutralised with 0.5 M NaH2PO4 and analysed by SDS-PAGE and autoradiography. For identification of the antibody-protected peptides by mass spectrometry, bead-bound antibody-antigen complexes were formed by incubating the immobilised antibody with 100 μg of purified IA-2 cytoplasmic domain protein for 2 h at room temperature with slow rotation. Unbound antigen was removed by washing with PBS and the complexes equilibrated in chymotrypsin digestion buffer. Activated chymotrypsin was added to the complex at an enzyme:substrate ratio of 1:10 and incubated for 30 min at 30 °C with occasional mixing. Unbound proteolytic fragments were removed by washing with PBS and subsequently with water. Antibody-bound fragments were eluted in 100 mM triethylamine pH 11.7. The eluates were vacuum dried and stored at − 20 °C prior to mass spectrometry analysis. Epitopes for four mouse monoclonal antibodies to IA-2 have been shown by competition studies to overlap with those for autoantibodies in Type 1 diabetic patients' sera. All recognise epitopes within the JM
domain of the protein.To further define the epitopes for each of the four mouse monoclonal antibodies, the influence of synthetic 20-mer peptides on antibody binding to a chimeric protein representing the 605–693 region of IA-2 fused to the PTP domain of PTP1B was investigated.The four monoclonal antibodies to the JM domain were inhibited differentially by synthetic peptides within the 601–640 region of the protein.Binding of antibody 76F was inhibited by the presence of the 621–640 IA-2 peptide, but not by peptides 601–620 or 611–630.Antibody 5E3 was inhibited only by the 611–630 peptide and 8B3 only by 601–620.9B5 showed no inhibition by any of the peptides.To identify amino acids on IA-2 that participate in antibody binding, reactivity to IA-2 JM constructs with single amino acid-substitutions were evaluated.The inhibitory effects of substitutions of residues within the 626–629 region on binding of the 76F antibody were confirmed in this study.However, the epitope for this antibody was found to extend beyond this region, as indicated by inhibition by alanine substitutions of amino acids L631, G632, H635 and M636 and of several amino acid substitutions in the region 609–616.Substitution of amino acids between 626 and 629 did not affect binding of the other three mouse monoclonal antibodies, but mutational mapping did show effects common to those seen for 76F.Hence, substitution of amino acids L615, H635 and M636 inhibited binding of all four monoclonal antibodies and mutation of residues R611 and G616 inhibited at least two antibodies.Effects of other amino acid substitutions were clone-specific.Some amino acid substitutions enhanced binding of some antibodies, most notably of L612, E627, L631 and K639.The results demonstrate that epitopes for the mouse IA-2 antibodies are represented by two discontinuous regions within the 609–639 region of the IA-2 JM domain with common structural elements for all four JM antibodies.We have previously localised the epitopes for three human monoclonal IA-2 autoantibodies isolated from Type 1 diabetic patients to the 831–860 region of the protein .To further define the epitopes for these antibodies, substitutions of those amino acids within the region 826–862 located on the surface of the crystal structure of IA-2 were introduced into a truncated IA-2 PTP domain construct and inhibitory effects of each substitution on binding of the three monoclonal antibodies were investigated.Alanine substitution of amino acids L831, V834, E836, L839, K857, N858 and V859, that are clustered on the surface of IA-2 in the structural model, inhibited binding to all three monoclonal antibodies.Further inhibition of binding was observed in two of the three monoclonal antibodies following mutation of residues H833 and Q862.Binding to M13 was additionally inhibited by the substitution of amino acids E827 and Q860.A polyclonal rabbit anti-serum to IA-2 was unaffected by any of the mutations.The effects of these mutations were also assessed in thirteen patient sera positive for antibodies to the central region."Substituted residues that inhibited binding to monoclonal antibodies were also found to inhibit binding to antibodies in Type 1 diabetic patients' sera, indicating a common area of antibody recognition.Mutation of amino acids L831, V834, E836, L839, K857, N858 and V859 inhibited binding to the IA-2 construct in at least 11/13 samples.To examine relationships between individual defined epitopes in the JM and PTP domains of IA-2, the ability of Fab fragments of PTP and JM 
domain-reactive monoclonal IA-2 antibodies to compete for binding with monoclonal or serum antibodies to IA-2 was investigated.Fab fragments of the PTP domain autoantibody M13 abolished binding to other monoclonal antibodies recognising similar PTP domain epitopes, but had no effect on IA-2 binding of the JM domain-reactive antibody, 76F.The rabbit polyclonal antibody to IA-2 was also unaffected.Fab fragments of the JM domain antibodies abolished or partially inhibited IA-2 binding of the JM-reactive 76F antibody.However, Fab fragments of 5E3 and 8B3 JM antibodies also partially inhibited IA-2 binding of the monoclonal antibodies M13, 96/3 and DS329 that are reactive to the PTP domain epitope, and of the polyclonal rabbit IA-2 antibody.The results indicate that binding of Fab fragments of antibodies to the JM domain are able to impair antibody binding to epitopes within the PTP domain, possibly through steric hindrance or conformational effects.Inhibitory effects of Fab fragments of monoclonal antibodies were also investigated using serum antibodies from IA-2 antibody-positive Type 1 diabetic patients categorised according to antibody reactivity to the IA-2 JM domain only, to both JM and PTP domains or to the PTP domain only.Fab fragments of the JM domain reactive antibodies abolished or partially inhibited binding of antibodies from patients with reactivity restricted to the IA-2 JM domain, whereas M13 Fab fragments had no effect.Fab fragments of the JM domain antibodies inhibited IA-2 binding of autoantibodies from patients positive for both JM and PTP domain antibodies, but also those negative for JM antibodies.The ability of Fab fragments of JM domain-reactive antibodies to inhibit binding of antibodies to PTP domain epitopes points to structural interactions between these two regions of autoantibody reactivity.Antibody footprinting is a technique by which structural interactions between antibody and antigen are investigated by limited digestion of antibody:antigen complexes with proteases or hydroxyl radicals .Antibody binding protects regions close to the antibody epitope from cleavage and identification of the protected regions defines the antibody “footprint”.In this study, antibody footprinting was used to compare and identify antibody-protected IA-2 proteolytic fragments using monoclonal antibodies directed to epitopes localised within the JM or PTP domains of the protein.Initial studies used SDS-PAGE and autoradiography to characterise radiolabelled proteolytic products generated after trypsin digestion of complexes of bead-conjugated monoclonal antibodies with 35S-methionine-labelled IA-2ic.Time course studies demonstrated clear differences in the dominant tryptic digestion products eluted from bead-conjugated 5E3 and M13 antibodies, with predominant bands at Mr 3500 and 7000 for 5E3 and at Mr 11,000 and 23,000 for M13.However, despite the differences in epitope recognition, common bands were also eluted from both antibodies, in particular, a trypsin product of 9000 Mr.To identify the regions protected by the JM and PTP domain monoclonal antibodies, similar experiments were performed using purified recombinant IA-2ic as antigen, digesting antibody:IA-2ic complexes with chymotrypsin which, being a more frequent cutter than trypsin, provides better resolution of antibody-protected regions of the protein.Chymotrypsin digestion products eluted from bead-conjugated antibodies were identified by LC-MS/MS.A total of 39 distinct peptides were identified in the eluates, and the percent 
recovery of each of these peptides relative to the total number of peptides identified is shown in Table 1.Several of the peptides could be clustered according to the presence of a common core sequence with varying length extensions at the C- or N-terminus."Peptides containing the core motif AALGPEGAHGTTF representing amino acids 613–626 of IA-2 were highly represented in eluates from the JM epitope-reactive 5E3 antibody, but almost absent from the M13 eluates.These peptides include residues L615, G616 and H621 that were identified as part of the 5E3 epitope in the mutagenesis studies above.However, the majority of peptides eluted from the 5E3 antibody were derived from the PTP domain, with peptides containing the sequences SHTIADFW, KNVQTQETRTL, TAVAEEVNAIL and NRMAKGVKEIDIAATL being highly represented.These latter peptides were also detected in eluates from the PTP domain-reactive M13 antibody.Peptides with the core sequences INASPIIEHDPRMPAY and SWPAEGTPASTRPL were detected in eluates from the M13 antibody, but found at low abundance in eluates from 5E3.Studies on the appearance of autoantibodies to islet antigens in early life , together with assessment of the risk of development of Type 1 diabetes by detection of single and multiple islet autoantibody specificities , have emphasised the importance of determinant spreading for progression from autoimmunity to disease.A key role for B-cells in promoting determinant spreading has been demonstrated in animal models of autoimmune disease , probably through alterations in uptake, processing and presentation of relevant antigens.We now demonstrate a close structural relationship between determinants in two distinct domains of a major autoantigen in Type 1 diabetes that, together with previous observations, point to an important role for B-cells secreting antibodies to the JM domain of IA-2 in the diversification of the immune response in human Type 1 diabetes.Thus: i.) 
antibodies to the JM domain appear early in the IA-2 autoimmune response and precede spreading to epitopes in the IA-2 PTP domain and to the related autoantigen, IA-2beta ; ii.),the presence of autoantibodies to the IA-2 JM domain in Type 1 diabetic patients is associated with T-cell responses to a peptide in the PTP domain that itself overlaps a major autoantibody epitope ; iii.),as shown in this study, monoclonal antibodies to the JM domain block binding of autoantibodies to the same PTP domain epitope, suggesting juxtaposition of the two epitopes; and iv.),these JM domain antibodies protect and stabilise PTP domain peptides containing major T-cell determinants during proteolysis of antibody:antigen complexes.If similar antibody-mediated stabilisation of PTP domain peptides occurs within processing compartments of JM-specific B-cells, then presentation of those PTP domain peptides to T-cells would be promoted, providing a mechanism underlying the association of JM antibodies with T-cell responses to PTP domain peptides in Type 1 diabetes and for the spreading of autoimmunity from JM to PTP domain determinants as disease develops.The study of determinant spreading at the B-cell level requires a detailed understanding of the structures of dominant autoantibody epitopes, most easily acquired through the study of cloned antibodies.Although human mononoclonal autoantibodies to IA-2 JM domain epitopes from Type 1 diabetic patients have been reported , transformed B-cells secreting these JM autoantibodies were unstable and are no longer available for study.To our knowledge, the only IA-2-specific B-cell clones from diabetic patients that are currently available secrete antibodies to overlapping PTP domain epitopes within the region 827–862 .Analysis of amino acid substitutions affecting binding of three human monoclonal antibodies to the PTP domain suggest a core region of antibody binding represented by amino acids 831, 834, 836, 839, 857, 858 and 859, with individual B-cell clones showing different involvement of residues peripheral to this common core.Analysis of the effects of amino acid substitutions on binding of serum antibodies from individual Type 1 diabetic patients demonstrated that the pattern of reactivity to this region is typical of B-cell responses in Type 1 diabetes generally, consistent with it being a major target of autoantibody reactivity in disease.The protein footprint of the M13 human monoclonal PTP domain autoantibody included peptides with core regions 836–845 and 857–867 which encompass the amino acids implicated in the autoantibody epitope and are included within major T-cell determinants .However, the antibody also stabilised other PTP domain peptides extending beyond the epitope, including those containing regions 765–780, 788–795, 874–887 and 964–974.Peptides from the JM domain were rarely detected.Analysis of the crystal structure of the IA-2 PTP domain shows the 765–780 region to be buried in the molecule beneath the proposed epitope region .The 874–887 region includes peptides immediately adjacent to those harbouring the antibody epitope, but lies on the opposite face of the protein to the epitope region in the 3-dimensional structure .The 874–887 motif includes the 876–880 sequence of amino acids, substitutions of which have been shown previously to inhibit IA-2 autoantibody binding and that may form part of a distinct PTP domain epitope .Although no monoclonal IA-2 JM domain autoantibodies derived from Type 1 diabetic patients are currently available for 
study, there is good evidence that antibodies cloned from IA-2-immunised mice show very similar JM epitope specificities to those appearing in the human disease .Studies to localise the epitopes of mouse monoclonal antibodies to the JM domain show that synthetic peptides known to inhibit serum antibodies from Type 1 diabetic patients also inhibit binding of three of the mouse antibodies.Site-directed mutagenesis indicated that amino acids 615, 635 and 636 represent key residues for antigen binding to all four monoclonal antibodies, with differing contributions of amino acids within the 608–638 region of IA-2 to binding of individual antibodies.For the 76F antibody, substitutions affecting binding included amino acids 626–629 which form part of the “JM2” and “JM3” epitopes described by the Bonifacio group and, for 5E3, residue 621 which contributes to a “JM1” epitope .Consistent with the mutagenesis data, the protein footprint of the 5E3 antibody included JM-localised peptides with a 613–626 core, that were poorly represented in the M13 footprint, strongly supporting this region as part of the 5E3 antibody epitope.However, peptides within the PTP domain containing regions 788–795, 857–867, 927–942 and 964–974 were also highly represented in eluates from the 5E3 antibody, again indicative of antibody-mediated protection from proteolysis of peptides outside of the immediate epitope region.Fab fragments of 5E3 and other JM domain antibodies were more effective than those of PTP domain antibodies at blocking binding of serum antibodies to epitopes in both the JM and PTP domain.These strong inhibitory effects of JM-targetted antibodies on binding of antibodies to the PTP domains is suggestive of close structural relationships between the two epitopes and juxtaposition of the two epitopes may explain the stabilisation of PTP-derived peptides by the JM domain antibody.The results of this study point to close structural relationships between two major regions targetted by autoantibodies in Type 1 diabetes that may have implications for the diversification of IA-2 autoimmunity in Type 1 diabetes.Confirmation that these in vitro observations have pathophysiological relevance requires analyses of the influence of B-cell epitope specificity on peptides generated within cellular processing compartments.Our identification of antibody epitopes, and core regions of IA-2 protected by JM and PTP domain antibodies, will facilitate studies to fully understand the natural history of spreading of B- and T-cell responses to determinants during the early stages of IA-2 autoimmunity.Such studies would identify B- or T-cell responses to determinants most closely linked to disease progression that would represent effective targets for immunotherapy.Samples were analysed by LC-MS/MS on a ProteomeX machine.Dried chymotrypsin digests were resuspended in 0.1% formic acid and chromatography of aliquots of each sample performed on a 100- by 0.18-mm BioBasic C18 column.Peptides were eluted with aqueous acetonitrile containing 0.1% formic acid at a flow rate of 2 μl per min.MS/MS data were acquired in data-dependent mode with dynamic exclusion.Spectra were submitted against the IA-2 sequence database using Bioworks v3.1/TurboSEQUEST software.Proteins were considered to match entries in the database if XCorr values for individual peptides were ≥ 1.5, ≥ 2, and ≥ 2.5 for singly, doubly, and triply charged ions, respectively.The author declare that there are no conflicts of interest. 
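The charge-dependent XCorr acceptance rule described in the methods above can be expressed compactly. The following minimal Python sketch (not the authors' pipeline) applies the ≥ 1.5, ≥ 2 and ≥ 2.5 cut-offs for singly, doubly and triply charged ions to a list of peptide-spectrum matches; the data structure and the example XCorr values are hypothetical and serve only to illustrate the filtering step.

```python
# Minimal sketch (not the authors' pipeline) of the charge-dependent XCorr acceptance
# rule described above: >=1.5, >=2.0 and >=2.5 for singly, doubly and triply charged ions.
# The PSM dictionaries and the XCorr values below are hypothetical illustrations.

XCORR_THRESHOLDS = {1: 1.5, 2: 2.0, 3: 2.5}  # charge state -> minimum acceptable XCorr

def passes_xcorr_filter(psm):
    """Return True if a peptide-spectrum match meets the XCorr cut-off for its charge."""
    threshold = XCORR_THRESHOLDS.get(psm["charge"])
    if threshold is None:          # charge states outside 1-3 are not covered by the rule
        return False
    return psm["xcorr"] >= threshold

# Hypothetical SEQUEST-style identifications (peptide sequences taken from the text)
psms = [
    {"peptide": "AALGPEGAHGTTF", "charge": 2, "xcorr": 3.1},
    {"peptide": "SHTIADFW",      "charge": 1, "xcorr": 1.2},
    {"peptide": "KNVQTQETRTL",   "charge": 3, "xcorr": 2.7},
]

accepted = [p for p in psms if passes_xcorr_filter(p)]
for p in accepted:
    print(p["peptide"], "charge", p["charge"], "XCorr", p["xcorr"])
```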
| Diversification of autoimmunity to islet autoantigens is critical for progression to Type 1 diabetes. B-cells participate in diversification by modifying antigen processing, thereby influencing which peptides are presented to T-cells. In Type 1 diabetes, JM antibodies are associated with T-cell responses to PTP domain peptides. We investigated whether this is the consequence of close structural alignment of JM and PTP domain determinants on IA-2. Fab fragments of IA-2 antibodies with epitopes mapped to the JM domain blocked IA-2 binding of antibodies that recognise epitopes in the IA-2 PTP domain. Peptides from both the JM and PTP domains were protected from degradation during proteolysis of JM antibody:IA-2 complexes and included those representing major T-cell determinants in Type 1 diabetes. The results demonstrate close structural relationships between JM and PTP domain epitopes on IA-2. Stabilisation of PTP domain peptides during proteolysis in JM-specific B-cells may explain determinant spreading in IA-2 autoimmunity. |
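As a companion to the footprinting results described earlier, in which eluted peptides were clustered by a common core sequence with variable N- or C-terminal extensions, the short sketch below illustrates one simple way such grouping can be done, assuming plain substring matching rather than any alignment-based method. The core motifs are taken from the text; the extended peptide variants are hypothetical examples.

```python
# Illustrative sketch of grouping eluted peptides by a shared core sequence with
# variable N-/C-terminal extensions, assuming plain substring matching (no alignment).
# Core motifs are taken from the text; the extended variants are hypothetical.

CORE_MOTIFS = ["AALGPEGAHGTTF", "SHTIADFW", "KNVQTQETRTL"]

peptides = [
    "AALGPEGAHGTTF",
    "SAALGPEGAHGTTFL",   # hypothetical N- and C-terminal extension of the first core
    "SHTIADFWQ",         # hypothetical C-terminal extension
    "KNVQTQETRTLTQF",    # hypothetical C-terminal extension
]

def group_by_core(peptides, cores):
    """Map each core motif to the peptides that contain it as an exact substring."""
    return {core: [p for p in peptides if core in p] for core in cores}

for core, members in group_by_core(peptides, CORE_MOTIFS).items():
    print(core, "->", members)
```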
589 | Gramene database: Navigating plant comparative genomics resources | Gramene supports researchers working with various crops, models, and other economically important plant species by providing online resources for comparative analyses of Genomes and Pathways .The Genomes portal, developed collaboratively with Ensembl Plants, hosts annotated genome assemblies for 44 plant species including lower plants, gymnosperms, and flowering plants.Users can access detailed information regarding individual genes and proteins, genetic and physical maps, phylogenetic trees based on whole-genome alignments, protein-based Compara gene family trees, genetic and structural variants, and expression data.Furthermore, tissue-specific basal expression data and/or differential expression data from several plant species can be viewed for individual genes or in their genomic context via the Genome Browser.The Genome Browser also facilitates upload, analysis, and visualization of user-defined high-throughput omics data.BLAST search can be conducted directly from the gene, transcript and variant summary pages, or using the BLAST tool accessible on the Tools page.Another tool, the Variant Effect Predictor, allows users to predict – either online or offline – the functional effect of genetic variants on gene regulation and encoded gene products .For data mining, our BioMart tool enables complex queries of sequence, annotation, homology and variation data, and provides an additional gateway into the Genome Browsers .The Pathways portal of Gramene, the Plant Reactome , was developed in collaboration with the Human Reactome project and hosts pathways for 67 plant species.In the Plant Reactome database, rice serves as the reference species for curation of plant metabolic, transport, signaling, genetic-regulatory, and developmental pathways.The curated rice pathways are then used to derive orthology-based pathway projections for other species .For a few species, users can also view baseline tissue-specific expression profile of the genes associated with pathways in the Pathway Browser .The Plant Reactome also allows users to upload and analyze omics datasets in the context of plant pathways, thereby fostering discoveries about the roles of genes of interest and their interacting partners, and compare of pathways between the reference species rice and any projected species, as described recently .Users can access various portals, tools, data, and release notes from the navigation panel located on the left-hand side of the Gramene homepage.Recently, we added a new dedicated search page to support quick access to database contents, bulk data downloads, analysis tools, archived data, outreach and training material, a news blog, and external collaborators.This interface offers interactive views of the search results both in aggregate form and in the context of a gene.The search bar, located at the top of this new user interface, facilitates queries for genes, pathways, gene ontology terms, and protein domains.In addition, the search bar provides suggestions for closely related terms and allows scientists to find genes by selecting among auto-suggested filters.Search results include a summary and associated interactive graphical depiction of a gene’s structure, associated genomic location features and links to functional annotations in the Genome Browser and external sites, a phylogenetic gene tree and associated homology elements, external references, and expression data from EMBL-EBI’s Atlas.The Genomes icon on the search 
page is hyperlinked to the Gramene’s Genome Browser page, which lists the available genomes for various plant species.Users can select their species of interest and open the Genome Browser window to display a gene or genomic region.From this page, users can access several other details pertaining to a gene’s structure, function, and evolution, including synteny, gene trees, genetic variation data, and expression data.As an example, Fig. 1C shows a view of synteny between rice chromosome 6 containing Os06g0611900 and the corresponding region on Zea mays chromosomes 6 and 9.Such comparisons of syntenic genomic regions among plant genomes are particularly useful for identifying evolutionarily conserved co-linear regions, functional orthologs, and genetic markers .The gene-based Compara trees provide information about the evolutionary history of genes in context of speciation.Gramene generates gene trees and alignments of orthologous and paralogous genes using the Ensembl Gene Tree method .Users can access Compara trees from the Genome Browser page.For example, the view of a gene-tree for rice Os05g0113900 shows gene duplication, speciation events, and gene alignments in various species.We recently described in detail the development of the Plant Reactome database, the pathway portal of Gramene, and its various functionalities .Users can access the Plant Reactome database from the Gramene homepage or search page.Alternatively, users can directly access the Plant Reactome homepage, which provides links to the quick search, Pathway Browser, data analysis tools, video tutorials, user guide, data download, data model, database release summary, news, APIs, etc.By clicking on ‘Browse pathways’ or by searching for a pathway or any entity associated with that pathway users can access the default Pathway Browser for reference species rice.Fig. 
2 provides an example of the Pathway Browser showing the ‘abscisic acid biosynthesis and ABA-mediated signaling pathway’ from the reference species rice.The left-hand side panel of the Pathway Browser shows the list of the available pathways to facilitate easy navigation.The top right-hand side panel shows a pathway diagram comprising of various types of macromolecular interactions between proteins, protein complexes, and small molecules in the context of their subcellular locations.The bottom panel provides a pathway summary and data associated with various pathway entities, with hyperlinks to external databases that provide further details on their structure, function, and expression.If desired, users can select species of their interest from the available options.At present the Plant Reactome hosts ∼240 metabolic, signaling, regulatory, and genetic rice reference pathways, omics data and pathway comparison analysis tools, and orthology-based projections to over 78,000 gene products in 67 species.The Plant Reactome allows users to select pathways of interest and visualize the baseline expression data using the Pathway Browser, and also provides links to differential expression data and additional detailed information about related experiments within the species.Data related to selected entities can be downloaded as PDF, Word, BioPAX, and SBML files.Users can access the plant Expression Atlas from the Gramene search page.The plant gene Expression Atlas was developed by our collaborators at EMBL-EBI .Currently, it contains transcriptomics data from 17 plant species corresponding to 698 experiments including baseline tissue-specific expression data and differential expression data."Manually curated, baseline expression data from RNA-Seq experiments are available from 12 plant species, showing expression levels of gene products under ‘normal' conditions in various tissues.As of release #52, the baseline expression profile of an individual gene across all tissue samples and growth stages from EMBL-EBI Expression Atlas can be accessed from the gene page in the Gramene database, as well as from the Plant Reactome Pathway Browser page.Differential gene expression data are available from 13 plant species, and include datasets from both microarray and RNA-Seq experiments.At present, differential expression data can be viewed on the Expression Atlas website, and projected on demand onto the Gramene gene page and the Genome Browser.Furthermore, users can select a given experiment and view the baseline expression profile of all available genes included in that dataset online at the plant gene Expression Atlas page, or offline after downloading the data.The plant gene Expression Atlas page has a widget for loading the expression data on the Gramene/Ensembl Genome Browser.User can select a gene and a cultivar, treatment or organismal part, and then open the Gramene/Ensembl Genome Browser.The resultant Genome Browser window shows the expression value of the selected gene, mapped onto the corresponding genomic region.Because data for all genes in a sample are automatically loaded into the Genome Browser, users can view the expression of all other genes in a sample by simply scanning the Genome Browser.The Plant Reactome Pathway Browser automatically pulls out the baseline expression data from the EMBL-EBI Atlas database.For the processing of preloaded data, as well as data uploaded by users, Gramene hosts a number of analysis and visualization tools.The BLAST tool allows users to query orthologs from 
multiple target species using gene or protein sequence.The Genome Browser provides multiple options for displaying various types of preloaded data, such as genomic variation data, and also allows uploading of user-defined genomic data as described recently .Users can also perform variant effect predictor analysis to determine the functional consequence of genomic variations, such as SNPs and indels, on genes, transcripts, protein sequences, and regulatory regions using the VEP tool accessible from the Tools link on the Gramene homepage and Genome Browser pages.The online version of the VEP permits analysis of up to 700 variants in a single run.The analysis of large datasets can be performed offline by downloading the VEP Tool and using command-line protocols and Perl scripts .For some species, including tomato, pre-analyzed data are available from Gramene, and the consequences of genetic variants can be accessed online.Plant Reactome also allows users to upload and visualize omics data in the context of the plant pathways and to compare pathways between the reference species rice and any other species.Users have the option to download the results of the analysis, along with pathway diagram images as described recently .To learn how to effectively mine data and use the resources and tools available at the Gramene database, open access video tutorials and training material are available via Gramene’s outreach portal and Gramene’s YouTube channel.Gramene strives to provide plant researchers and breeders with the most updated and rich annotated data, tools, and user-friendly resources to support comparative plant genomics and pathway analysis.The contents, tools, and webpage of Gramene are updated three to five times annually.In each release, we add new genome assemblies, update assembly versions and annotations, and add new manually curated and projected pathways.We recommend that our users acquaint themselves regularly with our release notes and new updates to the database.We also host monthly webinars on various topics and welcome suggestions from users. | Gramene (http://www.gramene.org) is an online, open source, curated resource for plant comparative genomics and pathway analysis designed to support researchers working in plant genomics, breeding, evolutionary biology, system biology, and metabolic engineering. It exploits phylogenetic relationships to enrich the annotation of genomic data and provides tools to perform powerful comparative analyses across a wide spectrum of plant species. It consists of an integrated portal for querying, visualizing and analyzing data for 44 plant reference genomes, genetic variation data sets for 12 species, expression data for 16 species, curated rice pathways and orthology-based pathway projections for 66 plant species including various crops. Here we briefly describe the functions and uses of the Gramene database. |
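For users who prefer scripted access over the web interface, gene records of the kind shown on the Gramene search page can in principle be retrieved programmatically. The sketch below is an assumption-laden illustration only: the endpoint URL, query parameters and response fields are placeholders and must be checked against the current Gramene/Ensembl Plants API documentation before use; only the rice gene identifier is taken from the text.

```python
# Assumption-laden sketch of scripted gene lookup. The base URL, query parameters and
# response structure are placeholders -- check the current Gramene / Ensembl Plants API
# documentation for the real endpoints before relying on this.
import requests

BASE_URL = "https://data.gramene.org/search"   # assumed endpoint, verify before use
GENE_ID = "Os06g0611900"                       # rice gene discussed in the text

def fetch_gene_record(gene_id):
    """Query the (assumed) search endpoint for one gene and return the parsed JSON."""
    response = requests.get(BASE_URL, params={"q": gene_id, "rows": 1}, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    record = fetch_gene_record(GENE_ID)
    print(record)  # field names vary; inspect the raw payload before parsing further
```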
590 | Growth of free-standing bulk wurtzite AlxGa1−xN layers by molecular beam epitaxy using a highly efficient RF plasma source | The recent development of group III nitrides allows researchers world-wide to consider AlGaN based light emitting diodes as a possible new alternative deep ultra–violet light source for surface decontamination and water purification .If efficient devices can be developed they will be easy to use, have potentially a long life time, be mechanically robust and will lend themselves to battery operation to allow their use in remote locations.Changing the composition of the active AlGaN layer, will allow one to tune easily the wavelength of the LEDs.This has stimulated active research world-wide to develop AlGaN based LEDs .Such DUV LEDs will also have potential applications for solid state lighting and drug detection.The first successful semiconductor UV LEDs are now manufactured using the AlxGa1−xN material system, covering the energy range from 3.4 up to 6.2 eV.One of the most severe problems hindering the progress of DUV LEDs is the lack of suitable substrates on which lattice-matched AlGaN films can be grown .Currently the majority of AlGaN DUV LED devices are grown on sapphire or AlN.The consequence of a poor lattice match is a very high defect density in the films, which can impair device performance.The lattice mismatch between the substrate and the active AlGaN layer results in poor structural quality of the layers, cracks, and low radiative recombination rates in current DUV LED devices.As a result, AlGaN layers contain a high density of dislocations arising from the large lattice mismatch and the difference in thermal expansion coefficient between the AlGaN layers and sapphire, which results in a low ∼1–10% external quantum efficiency and poor reliability of existing DUV LEDs.DUV LEDs require an AlN content in the mid-range between pure AlN and GaN, and therefore high quality ternary AlGaN substrates may significantly improve the properties of the devices.However, only limited success has been achieved so far in the growth of bulk AlxGa1−xN crystals with a variable AlN content .Molecular beam epitaxy is normally regarded as an epitaxial technique for the growth of very thin layers with monolayer control of their thickness.However, we have used the plasma-assisted molecular beam epitaxy technique for bulk crystal growth and have produced free-standing layers of wurtzite AlxGa1−xN wafers .Thick wurtzite AlxGa1−xN films with an AlN content from 0 to 0.5 were successfully grown by PA-MBE on 2-inch GaAsB substrates.However, in our previous studies the growth rate for AlxGa1−xN films remained below 1 µm/h and this is too slow to make the process commercially viable.Recent years have seen significant effort from the main MBE manufacturers in France, USA and Japan to increase the efficiency of their nitrogen RF plasma sources to allow higher growth rates for GaN-based alloys.All of the manufacturers are exploring the route of increasing the conductance of the aperture plates of the RF plasma cavity in order to achieve significantly higher total flows of nitrogen through the plasma source.For example, in the recent Riber source the conductance of the aperture plate has been increased by increasing the number of 0.3 mm diameter holes to 1200 .With this Riber source we have achieved growth rates for thick GaN layers of up to 1.8 µm/h on 2-inch diameter GaAsB and sapphire wafers .Recently, Riber have again modified the design of the aperture plate of their plasma 
source for even faster growth of GaN layers.The aperture conductance has been increased significantly by an increase in the number of holes, which allows a further increase in the GaN growth rate.First tests of the latest model of Riber RF nitrogen plasma source with 5880 holes in the aperture plate, produced even higher growth rates for thin GaN layers up to 7.6 µm/h, but with nitrogen flow rates of about 25 sccm .In this paper we will describe our recent results on PA-MBE growth of free-standing wurtzite AlxGa1−xN bulk crystals on up to 3-inch diameter substrates using the latest Riber model of the highly efficient nitrogen RF plasma source.Special emphasis in the current study has been made on the detailed structural analysis of AlGaN/GaAsB interface.Wurtzite GaN and AlxGa1−xN films were grown by PA-MBE in a MOD-GENII system .2-inch and 3-inch diameter sapphire and GaAsB were used as substrates.The active nitrogen for the growth of the group III-nitrides was provided by a novel high efficiency plasma source from Riber RF-N 50/63 with 5880 holes in the aperture plate.The source was custom designed at Riber in order to match the dimensions of MOD-GENII Varian system source flanges.The use of an As2 flux of ∼6×10−6 Torr beam equivalent pressure during substrate heating and the removal of the surface oxide from the GaAsB substrates allowed us to avoid any degradation of the GaAs substrate surface.The arsenic flux was terminated at the start of the GaN growth.A thin GaN buffer was deposited before the growth of the AlxGa1−xN layers.In the current study, the AlxGa1−xN layers were grown at temperatures of ∼700 °C.We are not able to use higher growth temperatures due to the low thermal stability of the GaAs substrates in vacuum above 700 °C, even under an As2 flux.The AlxGa1−xN layers with thicknesses up to 100 μm were grown on GaAs substrates, and the GaAs was subsequently removed using a chemical etch to achieve free-standing AlxGa1−xN wafers.From our previous experience with MBE growth of bulk zinc-blende and wurtzite AlxGa1−xN , such thicknesses are already sufficient to obtain free-standing AlxGa1−xN layers.The structural properties of the samples were studied in-situ using reflection high-energy electron diffraction and after growth ex-situ measurements were performed using X-ray diffraction and Transmission Electron Microscopy."A Philips X'Pert MRD diffractometer was used for XRD analysis of the layers.Advanced TEM studies were performed using an FEI Titan microscope operating at 300 kV with a CEOS probe-side corrector, and a JEOL 4000 EX microscope operating at 400 kV.Specimens were prepared for TEM by mechanical polishing, dimple grinding and ion milling with an argon ion beam at an acceleration voltage of 4 kV.We have studied the uniformity of Al incorporation in the AlxGa1−xN layers by secondary ion mass spectrometry using Cameca IMS-3F and IMS-4F systems and using an Oxford Instruments EDX system.The best structural properties of free-standing wurtzite AlGaN layers can be achieved with initiation under Ga-rich conditions, but before the formation of Ga droplets .Therefore, the first step for us with the use of the novel RF plasma source is to establish optimum growth conditions for a given nitrogen flux.In the current study we used a nitrogen flow rate of 6 sccm; which allowed us to use our standard PA-MBE pumping configuration for the MOD-GENII system with a CT-8 cryopump.We have grown GaN layers on 2′′ sapphire wafers to simplify the initiation process.In the case of GaAs 
substrates one need to be very precise at the GaN initiation stages to prevent potential strong meltback etching of the GaAs wafers, The beam equivalent pressure of nitrogen in the chamber during growth did not exceed 2×10−4 Torr.All layers were grown with a fixed RF power of 500 W.We achieved a growth rate of 3 µm/h at a Ga flux of ∼2×10−6 Torr.At the higher Ga fluxes the growth rate remains the same and we observed the formation of Ga droplets on the GaN surface under strongly Ga-rich conditions.We have used slightly Ga-rich conditions, but before the formation of Ga droplets for the growth of thick wurtzite AlxGa1−xN layers onB GaAs substrates.We have shown previously that growth onB orientation allows us to initiate the growth of hexagonal phase material .Wurtzite GaN buffers, 50–200 nm thick, were deposited before the growth of the AlxGa1−xN layers.In MBE, the substrate temperature is normally measured using an optical pyrometer.In the case of transparent sapphire substrates, the pyrometer measures the temperature of the substrate heater, not the substrate surface.Our estimate of the growth temperature on sapphire was based on a thermocouple reading.In the case of GaAs wafers we can measure and control the growth temperature with pyrometer.Therefore, we may have slightly different growth temperatures on sapphire and onB GaAs wafers.High-resolution TEM studies were used to investigate the interface between the GaAs substrate and GaN layer as shown in Fig. 1.We observed zinc-blende GaN crystallites in the wurtzite GaN matrix close to the GaAs substrate interface.These cubic inclusions extend to the first few tens of nanometers into the GaN wurtzite film, before being terminated at basal plane stacking faults, which form boundaries with the wurtzite matrix.We also see the roughening of the surface of the GaAs due to plasma- or melt-back etching of the substrate.Arsenic contamination of the first few nanometers of the layer is possibly responsible for the formation of the zinc-blende grains.From general considerations one might expect that GaN layers grown on a GaAsA surface will exhibit Ga-polarity, but N-polarity will become preferable for the growth on GaAsB substrates.However, as has been shown previously this may depend strongly on the MBE growth conditions .The polarity of our AlGaN and GaN layers grown with high growth rates onB GaAs have been investigated using high resolution Scanning Transmission Electron Microscopy.Fig. 
2 shows HR-STEM images of three areas of the GaN buffer and AlGaN film viewed at atomic resolution.The AlGaN film clearly shows N-polarity due to the relative positions of the individual nitrogen atomic columns with respect to the gallium atom columns in the wurtzite crystal along the direction.The polarity of the AlGaN layer was verified by Convergent Beam Electron Diffraction studies, which confirmed N-polarity in the AlGaN film.However there appear to be isolated regions of mixed polarity in the GaN buffer layer, as shown by two atomic resolution images of different GaN regions which suggest that both N- and Ga-polar regions occur.This may be due to isolated N-polar regions forming by growth on meltback-etched regions of the GaAs substrate, however no inversion domain boundaries were observed in the buffer region, so it is not known whether these mixed polarity regions occur in high densities.The high density of extended defects in the GaN buffer layer resulted in scattering effects in CBED patterns, preventing conclusive CBED information about GaN buffer film polarity to be measured.Based on these results, we then grew thicker wurtzite AlxGa1−xN layers under similar group III-rich growth conditions onB GaAs substrates using the second generation Riber source.Fig. 3 demonstrates that the AlxGa1−xN layer thickness increases linearly with growth time.We have observed the growth rate of ∼2.2 µm/h, which is lower than we can see on sapphire wafers under similar conditions.This is probably result of different growth surface temperature or different Ga re-evaporation on sapphire and GaA substrates, but this question is still under investigation.Fig. 4 shows a 2θ–ω XRD plot for a ∼105 μm thick wurtzite AlxGa1−xN film.In XRD studies we observed a single 0002 peak at ∼35°, consistent with a wurtzite AlxGa1−xN layer.For AlxGa1−xN layers with increasing Al content we observed a gradual shift of the position of the 0002 AlxGa1−xN peak in 2θ–ω XRD plots to higher angles as expected."Using Vegard's Law, we can estimate the composition of the AlxGa1−xN layer shown in Fig. 4 to be x∼0.2.The AlN fraction in this AlxGa1−xN layer was also confirmed by both SIMS and EDX measurements.From high resolution XRD scans we can estimate the zinc-blende fraction, which in this case was below our detection limit.Fig. 
5 shows the full-width-at-half-maximum of the 0002 peak at ∼35° from XRD ω-plots for several wurtzite AlxGa1−xN layers grown on 3-inch GaAs as a function of growth time.The AlxGa1−xN layers were grown at a growth rate of ∼2.2 µm/h and with an AlN content of x∼0.2.The growth time was up to 50 h and the thickness of the layers was up to ∼100 µm.In all of our earlier experiments with the growth of bulk zinc-blende AlxGa1−xN layers, we observed degradation of the crystal quality of the layers with increasing thickness due to a gradual build-up of the concentration of wurtzite inclusions in the zinc-blende matrix.In the current research, the structural quality of the wurtzite AlxGa1−xN layer improves rapidly with increasing layer thickness during the first few hours of growth.This is due to cubic inclusions close to the GaN/GaAs interface reverting to wurtzite after approximately 70 nm of growth.There is also a steady reduction in the density of stacking faults in the film, which are readily generated by growth on mixed phase material close to the GaN/GaAs interface.However, we are still investigating the mechanisms behind that.The structural quality of AlxGa1−xN then slightly degrades during further MBE growth.This may arise because we are probably shifting from the optimum growth temperature and Ga/N flux ratio after the first ten hours of growth, due to depletion of Ga in the Ga SUMO-cell during the long growths with high fluxes of BEP ∼2×10−6 Torr.The depth uniformity of Al incorporation in the AlxGa1−xN layers was studied using SIMS.As SIMS studies show the Al, Ga and N profiles are uniform with depth.Fig. 6 demonstrates Al distribution only in a ∼42 µm-thick AlGaN layer with an AlN of ∼20%.The profile is from the center of the film and there may be small variations in Al:Ga ratio as a function of radial position.There was no significant As incorporation into the bulk of thick AlGaN layers and the detected As was at the background level of the SIMS system, as have been demonstrated previously .We have grown free-standing AlxGa1−xN layers with thicknesses up to 100 μm by PA-MBE using the latest Riber model of fast-growth Riber RF plasma source.We have demonstrated that AlGaN layers grown on GaAsB substrates have N-polarity.Free-standing bulk AlxGa1−xN wafers with thicknesses in the 30–100 μm range may be used as substrates for further growth of AlxGa1−xN-based structures and devices."The novel high efficiency RF plasma source allowed us to achieve such AlxGa1−xN thicknesses on 3-inch diameter wafers in a single day's growth, which makes our bulk growth technique potentially commercially viable. | The recent development of group III nitrides allows researchers world-wide to consider AlGaN based light emitting diodes as a possible new alternative deep ultra–violet light source for surface decontamination and water purification. In this paper we will describe our recent results on plasma-assisted molecular beam epitaxy (PA-MBE) growth of free-standing wurtzite AlxGa1−xN bulk crystals using the latest model of Riber's highly efficient nitrogen RF plasma source. We have achieved AlGaN growth rates up to 3 µm/h. Wurtzite AlxGa1−xN layers with thicknesses up to 100 μm were successfully grown by PA-MBE on 2-inch and 3-inch GaAs (111)B substrates. After growth the GaAs was subsequently removed using a chemical etch to achieve free-standing AlxGa1−xN wafers. 
Free-standing bulk AlxGa1−xN wafers with thicknesses in the range 30–100 μm may be used as substrates for further growth of AlxGa1−xN-based structures and devices. High Resolution Scanning Transmission Electron Microscopy (HR-STEM) and Convergent Beam Electron Diffraction (CBED) were employed for detailed structural analysis of AlGaN/GaAs (111)B interface and allowed us to determine the N-polarity of AlGaN layers grown on GaAs (111)B substrates. The novel, high efficiency RF plasma source allowed us to achieve free-standing AlxGa1−xN layers in a single day's growth, making this a commercially viable process. |
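The composition estimate quoted in the body of this article (x ∼ 0.2 from the 0002 peak near 35°) can be reproduced with a short calculation combining Bragg's law and Vegard's law. The sketch below assumes Cu Kα1 radiation, commonly quoted approximate relaxed lattice constants for GaN and AlN, and a fully relaxed (strain-free) layer, so it is an approximation rather than the authors' exact analysis.

```python
# Back-of-envelope reproduction of the composition estimate above: Bragg's law converts
# the 0002 peak position into the c lattice parameter, and Vegard's law interpolates
# linearly between GaN and AlN. Assumptions: Cu K-alpha1 radiation, approximate relaxed
# lattice constants, and a strain-free layer; not the authors' exact analysis.
import math

WAVELENGTH = 1.5406          # Cu K-alpha1 wavelength in angstrom (assumed)
C_GAN, C_ALN = 5.185, 4.982  # approximate relaxed c lattice constants in angstrom

def aln_fraction_from_0002_peak(two_theta_deg):
    """Estimate x in AlxGa1-xN from the 0002 XRD peak position (degrees 2-theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    d_0002 = WAVELENGTH / (2.0 * math.sin(theta))  # Bragg's law: lambda = 2 d sin(theta)
    c_measured = 2.0 * d_0002                      # d(0002) = c / 2 in the wurtzite cell
    return (c_measured - C_GAN) / (C_ALN - C_GAN)  # linear (Vegard) interpolation

# A 0002 peak near 34.85 degrees 2-theta gives x of roughly 0.2 with these constants.
print(round(aln_fraction_from_0002_peak(34.85), 2))
```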
591 | Evaluation of hepatocyte growth factor as a local acute phase response marker in the bowel: The clinical impact of a rapid diagnostic test for immediate identification of acute bowel inflammation | In order to inhibit disease transmission, patients with diarrhea are isolated at medical centers upon admission.Based on the patient’s medical and epidemiological history, a wide range of tests and examinations may be performed before a definite diagnosis is made .Subsequent treatment may include fluid and electrolyte replacement plus antibiotic treatment for patients with fever and stomach pain.However, appropriate treatment can be delayed for a few serious diseases that include diarrhea as an initial symptom.Such conditions include the onset of inflammatory bowel disease in young patients, colon cancer, and abdominal processes or abscess that cause reactive diarrhea .Despite the growing problem of multidrug-resistant gram-negative bacteria, it is inappropriate to treat self-limiting infectious gastroenteritis with broad-spectrum antibiotics, but this is quite common in medical centers .Various microbiological and immunological tests are performed on stool samples when patients with diarrhea are admitted to the hospital.However, these tests have limited sensitivity with respect to antibiotic consumption and/or low antigen burdens .Hepatocyte growth factor is produced by mesenchymal cells during organ injury.It stimulates cell division and cell motility and promotes normal morphogenic structure in epithelial cells adjacent to injured areas.It also induces the regeneration and repair of damaged tissue .HGF is translated as a single-chain precursor and is activated at the site of injury by proteolytic cleavage, resulting in a double-chained active form of HGF .High levels of systemic HGF have been detected during injuries caused by infection .In bacterial meningitis and pneumonia, there is local production of HGF at the site of infection .To identify the bowel as the focus of inflammation, proteins and cytokines that are produced locally at the site of injury can be detected in feces.HGF is produced both systemically and locally in infectious diseases , and determination of the HGF concentration in feces can be used to identify infectious gastroenteritis.However, there may also be high levels of HGF in feces due to chronic bowel diseases such as colon cancer and inflammatory bowel disease , limiting the specificity of such a test.Furthermore, HGF produced during acute inflammation binds to heparan sulfate proteoglycan with high affinity but exhibits decreased affinity to HSPG when produced during chronic inflammation .Based on these observations, we developed a metachromatic semi-quantitative test to detect the presence of growth factors such as HGF that show affinity to sulfated glycans in feces during infectious gastroenteritis .Determination of fecal pH is a classic method for evaluating signs of malnutrition and infection in feces.Recently, the pH levels in the feces of severely ill patients were found to indicate the severity of a disease or increased mortality .In order to confirm the results from previous studies , we developed a platform that could be used to evaluate whether the determination of substances with binding affinity to sulfated glycans, such as HSPG, could be used to distinguish between the various causes of diarrhea when patients with diarrhea were admitted to the hospital.Dextran sulfate has properties similar to those of HSPG in terms of binding to HGF .We developed a 
new strip test that has two assay surfaces, one for measuring fecal pH and one for detecting the binding affinity of fecal HGF to DS.In the present work, we performed a cohort study in which we assessed patients with symptoms of diarrhea and noted the outcomes during follow-up of up to one year.We evaluated local production of HGF as a local acute phase response marker in the bowel using the newly developed strip test and determined whether use of the test strip could distinguish infectious gastroenteritis versus onset/exacerbation of IBD and bowel cancer in these patients.A total of 513 fecal samples were collected in a blinded fashion from patients with bowel disturbances who contacted health care centers or hospital-connected home health care agencies, or who were admitted to the University Hospital in Linköping or to county hospitals in Norrköping and Motala, Sweden, from March 2012–December 2013.Each patient was followed for up to one year after inclusion in this study.Patients in hospital wards were isolated until they recovered from diarrhea.Stool samples were analyzed using the following routine tests at the Department of Microbiology, University Hospital in Linköping, Sweden:Detection of Clostridium difficile toxin A and B: these toxins were detected using two-step sandwich enzymatic immunoanalysis with fluorescent detection.Isolation of C. difficile from fecal specimens: stool samples were collected using sterile copan eswabs and inoculated on CCFE agar supplemented with cycloserine, cefoxitin, and fructose.Isolation of Salmonella, Shigella, and Campylobacter: stool samples were collected using sterile copan eswabs and cultured on xylose lysine deoxycholate agar and blood agar and incubated in a 5% CO2 incubator.Detection of Enterohemorrhagic Escherichia coli: stool samples were collected using sterile copan eswabs and analyzed by PCR to detect EHEC.Detection of viral agents,Detection of Calicivirus RNA: stool samples were collected in feces collecting tubes without additives and analyzed by PCR to detect Calicivirus RNA.Detection of Rotavirus antigen: stool samples were collected in feces collecting tubes without additives and analyzed by Enzyme Immunoassay to detect the rotavirus antigen.Detection of stool parasites: stool samples were collected in ParasiTrap BIOSEPAR tubes containing formaldehyde-free fixation and transport medium and examined by light microscopy to detect stool parasites.Detection of fecal hemoglobin: qualitative analysis was performed to detect fecal hemoglobin using antibodies developed against human hemoglobin; OC FIT-CHEK, Polymedco.Inc.NY, USA).Other tests: Other X-ray and endoscopic techniques were used as indicated.The ethics committee in Linköping, Sweden, approved the study protocol.The SP Technical Research Institute of Sweden studied the test results and the ultimate patient outcomes to evaluate the sensitivity of the test.Acute infectious gastroenteritis was verified in 131 of the 513 fecal samples that were included in this study.In patients who had negative microbiological test results, the differential diagnosis was assessed by evaluating the course of treatment and the outcome during the 1-year follow-up.Several sub-groups were identified that are described in Table 1 and Fig. 
2.Patients with microbiologically verified infectious gastroenteritis were compared to verified cases of IBD exacerbation/onset and gastrointestinal malignancy.The strip test distinguished infectious gastroenteritis with a sensitivity of 87.9%, a specificity of 90.9%, a positive predictive value of 96.6%, and a negative predictive value of 71.4%.The reference test for C. difficile toxins A and B identified cases with recurrent enteritis with a sensitivity of 66.6% versus 90.9% using the strip test.There was no significant correlation between the strip test results and the presence of blood in feces in the same sample.Additionally, no significant differences were detected in the strip test results obtained from the same versus different batches of the strips.Fecal pH was measured and documented in patients with loose feces at the time of sampling.The patients were followed-up and divided into groups based on laboratory and clinical outcome.The sub-group with short non-recurrent episodes of diarrhea was omitted because no further investigation was needed during the follow-up period to define the cause of the short-term episode of diarrhea.The other cases consisted of three major groups: microbiologically- and clinically-verified infectious gastroenteritis; non-infectious cases; and cases with a generalized inflammatory response.The groups were then compared.The HGF concentration was measured by ELISA in 101 feces samples.Of these 101, 49 were positive using the strip test and 52 were negative using the strip test.The HGF levels were significantly higher in the positive test group.High levels of HGF are produced both locally and systemically in injuries caused by infection .Low levels of serum HGF in patients with pneumonia correlates significantly with poor prognosis , and application of HGF to the site of an injury, such as to the site of a chronic ulcer, accelerates the healing process .The gastrointestinal mucosa has a remarkable ability to repair damage, and growth factors play an important role in the regeneration of injured cells in gastrointestinal organs .Nishimura et al. 
has shown that of the cytokines, HGF is the most potent in terms of accelerating the repair of the damaged monolayer of epithelial cells derived from normal rat small intestine.This study evaluated the ability of a new strip test to determine the binding affinity of acute phase proteins such as HGF to DS as a tool for assessing patients with diarrhea.This strip test showed high sensitivity and specificity for identifying infectious gastroenteritis in patients with diarrhea upon hospital admission.The binding of HGF to HSPG on the cell surface and in the extracellular matrix plays an important role in activating an HGF precursor and in facilitating its interaction with the high-affinity c-met receptor .We observed previously that binding of HGF to both high affinity and low affinity receptors can be studied using a surface plasmon resonance-based system to differentiate HGF that is biologically active during acute inflammation from HGF that is present during chronic inflammatory diseases .We showed that unlike the biologically inactive form of HGF, the biologically active form of HGF has binding affinity to HSPG , and this affinity decreases significantly when DS is added to the samples.Thus, DS has properties similar to those of HSPG in terms of binding to active HGF .In a previous study, we prepared a DS-containing gel and immobilized it on plastic loops.Notably, DS changes the color of methylene from blue to red, so we developed a method that took advantage of this property.In this method, the binding of HGF to DS competes with the interaction of DS and methylene blue and inhibits the color change to red.Using this method, we tested fecal samples from patients and healthy volunteers to investigate the ability of the method to distinguish infectious gastroenteritis with high sensitivity and specificity .We then developed this strip test based on the binding affinity of active HGF to DS.Determination of fecal pH yields important information about ion exchange in the bowel and has been used as a non-specific way to diagnose some bowel infections.This method was recently demonstrated to predict outcome in severely ill patients .Since patients with generalized inflammatory response, or SIRS in the course of severe trauma and/or sepsis, have HGF with high affinity to DS and have high fecal pH , we determined both the binding affinity of HGF to DS and fecal pH. We observed that patients with infectious gastroenteritis had higher fecal pH than those with non-infectious diarrhea and healthy controls.However, fecal pH ⩾ 9.0 was seen in cases with severe colitis and SIRS, and this was associated with significantly increased mortality.Thus, including a pH sensor in the strip test may differentiate cases with self-limiting infectious gastroenteritis from cases at risk of bacterial translocation, septicemia, and SIRS.This would mean that antibiotics could be used in a more directed way.Fecal pH ⩽ 4.0 predicted an unfavorable outcome.Routine diagnostic tests for identifying infectious gastroenteritis have low sensitivity .We observed that the reference test for detecting the C. difficile toxin in feces identified cases with C. 
difficile enteritis with a sensitivity of 66.6%.However, technical improvements and the development of new diagnostic methods have made diagnosis more rapid and accurate.During the development of the strip test, we assessed the concentration of HGF in feces by ELISA.We observed previously that the HGF concentration is significantly higher in feces during infectious gastroenteritis than in feces from diarrhea due to non-infectious causes .Additional studies showed that the concentration of HGF in feces was significantly higher in chronic IBD compared to infectious gastroenteritis .It is complicated to perform ELISAs on stool samples , and it was not possible to perform ELISAs on fresh samples.The strip test was developed based on the affinity of HGF to the receptor and not the concentration of HGF .Thus, the concentration of HGF was determined in some samples, and the stool samples that were negative for HGF on the strip test included samples from cases with IBD as well as with other causes of diarrhea.Therefore the data was not normally distributed.Limitations: the strip test could not differentiate between the various etiologies of infectious gastroenteritis.However, patient isolation and avoiding antibiotic treatment are common management strategies for infectious gastroenteritis.Difficulty in detecting changes in the color is another limitation of the strip test, and one that could be overcome by instrumentation.The performance of the strip test did not change significantly when freeze-thawed feces were tested .The binding affinity to sulfated glycans/HSPG may not be limited to HGF in feces i.e. there may be competitive binding by other proteins as well.Although the strip test showed high sensitivity and specificity in the current investigation, the lack of a gold standard for comparison is a major limitation of this study.Further studies are planned to further assess the performance of the strip test and its possible impact on antibiotic use in different patient groups and diseases.In summary, HGF is a good local acute phase response marker for acute bowel inflammation.We developed a new rapid test for stool specimens that evaluates local production of HGF as a local acute phase response marker in the bowel.We suggest that this method can be used for the simultaneous determination of fecal pH and the binding affinity of fecal HGF to DS in order to assess patients with symptoms of acute diarrhea.The strip test provides useful information for making decisions about patient isolation and for planning appropriate diagnostic procedures and therapy.More data is needed before a test built on this platform can complement or replace currently available tests. | Background: There are no rapid tests that can distinguish contagious gastroenteritis, which requires isolation at its onset, from exacerbation of chronic inflammatory bowel disease (IBD) or bowel engagement in the course of systemic inflammatory response syndrome (SIRS). Hepatocyte growth factor (HGF) is an acute phase cytokine that is produced at the site of injury. It has high affinity to sulfated glycan, and this binding affinity is lost during chronic inflammation. The fecal pH strongly impacts the prognosis for severe bowel disease. We developed a strip test to evaluate HGF as a local acute phase response marker in the bowel. This test assessed the binding affinity of HGF to sulfated glycans in fecal samples and determined fecal pH as an indicator of illness severity. 
Methods: Fresh feces from patients with diarrhea (n = 513) were collected and tested blindly, and information about patient illness course and outcome was collected. Patients were classified based on the focus of inflammation and the cause of the symptoms. Objectively verified diagnoses of infectious gastroenteritis (n = 131) and IBD onset/exacerbation and bowel cancer (n = 44) were used to estimate the performance of the test strip. ELISA was performed on 101 freeze-thawed feces samples to determine the fecal HGF levels. Results: The test rapidly distinguished infectious gastroenteritis from non-infectious inflammatory causes of diarrhea (sensitivity, 87.96%; specificity, 90.9%; positive predictive value, 96.6%; negative predictive value, 71.4%; accuracy, 89.1%). Fecal pH (p < 0.0001) and mortality within 28 days of sampling (p < 0.04) were higher in patients with sepsis/SIRS and diarrhea. The concentration of HGF was higher in strip test-positive stool samples (p < 0.01). Conclusions: HGF is a good local acute phase response marker of acute bowel inflammation. Test-strip determination of the binding affinity of fecal HGF to sulfated glycan was a rapid, equipment-free way to assess patients with diarrhea and to guide the diagnostic and therapeutic approaches on admission. © 2014 The Authors. |
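The accuracy figures reported for the strip test follow directly from a 2 × 2 confusion matrix. The short sketch below shows the standard formulas for sensitivity, specificity, PPV, NPV and accuracy; the counts used are hypothetical placeholders rather than the study data.

```python
# Sketch showing how the quoted sensitivity, specificity, PPV, NPV and accuracy follow
# from a 2x2 confusion matrix. The counts below are hypothetical placeholders, not the
# study data; only the formulas are the point.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return the standard diagnostic accuracy measures (in percent)."""
    return {
        "sensitivity": 100 * tp / (tp + fn),            # true positives among diseased
        "specificity": 100 * tn / (tn + fp),            # true negatives among non-diseased
        "ppv":         100 * tp / (tp + fp),            # positive predictive value
        "npv":         100 * tn / (tn + fn),            # negative predictive value
        "accuracy":    100 * (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical example: 100 infectious and 50 non-infectious cases
print(diagnostic_metrics(tp=88, fp=5, fn=12, tn=45))
```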
592 | Experiential and authentic learning approaches in vaccine management | Many pharmaceutical products including vaccines are time and temperature sensitive and must be stored and transported at controlled temperatures .The increasing portfolio of vaccines and other biotech medicines dictate more effective and efficient operation of complex supply chains.Personnel who handle time and temperature sensitive pharmaceutical products must accommodate their different characteristics; all are sensitive to high temperatures and some highly sensitive to freezing.The recent focus on efficiency has led to increased interest in merging multiple disease-specific supply chains, such as vaccines, maternal and child health medicines, and family planning products, into one integrated supply chain .Although quantification, procurement, and requisition/ordering for products in this integrated supply chain may represent challenges due to very different quantification, demand-planning, procurement mechanisms and processes, the storage and transport of TTSPPs present tremendous opportunities for integration.The variety of products contained in temperature controlled supply chain is immense and is further complicated by each product having its own stability budget.A stability budget considers long term, accelerated, and stress temperature exposure, as well as temperature cycling studies to determine the amount of time out of storage that a drug product may experience without any significant risk to its quality .The stability budget of a product is also considered critical when it comes to access issues such as where cold chain availability is problematic as well as in hard-to-reach geographical areas and in war conditions .To keep these TTSPPs at appropriate temperatures to ensure their quality, a cold chain is designed and implemented as an integrated system of equipment, procedures, records, and activities .When we speak of “pharmaceutical/vaccine product quality”, there is much more that needs to be considered aside from the development, approval and manufacturing aspects - product quality must be viewed in terms of the patient or consumer of the product.All products spend considerable periods of time at storage facilities, in transport between warehouses, at hospitals, pharmacies, health centres and even within the homes of end-users.Therefore just offering a “quality” product to the market is not enough.The product’s quality must be maintained throughout its life until it is consumed .The legal requirements for distribution and handling of TTSPPs are known as good storage and distribution practices .These requirements require that personnel who handle and distribute pharmaceutical products have the education, training and experience required to perform their jobs effectively.In short, they must have expertise.This makes people the most critical element in a cold chain process.Continuous lifelong learning is crucial for professionals who wish to maintain, upgrade and expand their expertise .The importance of offering professional development opportunities for staff is widely recognised across sectors .Increasingly, professional development programmes and courses are offered online.However, the intended outcomes of professional development, online or otherwise, are not always attained and the competencies, skills, knowledge and abilities that the professional development was set out to enhance are all too frequently not transferred into professional practice .Moreover, online professional development 
programmes are often seen as being better suited for transmitting theoretical content rather than supporting the development of practical skills .In order for professional development programmes to lead to sustainable professional growth and transfer of learning, both offline and online learning environments should prepare the learner to “… draw on a range of resources and to adapt learning to complex and ill-structured workplace problems” , rather than simply promote memorising and regurgitating factual knowledge.Expertise is the hallmark of an expert.It includes an in-depth set of knowledge, cognitive and motor skills, as well as the analytical ability to determine how to approach a given situation.Dreyfus and Dreyfus quoted Aristotle in saying that the expert straight away does “the appropriate thing, at the appropriate time, in the appropriate way”.In the context of handling TTSPPs, expertise involves more than just knowing the rules and requirements of national authorities.Rather, it requires that people be able to apply those requirements and solve sometimes very complicated, conflict-filled problems in a way consistent with both the letter and the spirit of the requirements.People involved in distribution, storage, and transportation of supply chains perform a range of activities as described in their job descriptions.Those in operations typically execute procedures and tasks.Professionals in quality and management functions develop, optimize, and monitor system functioning.To identify the best ways of providing opportunities to develop the appropriate knowledge and skills for different jobs requires learning professionals to define the specific competencies required to successfully perform a job.Broadly speaking, those who develop and improve systems require a higher-level set of cognitive skills than those who must consistently and flawlessly execute procedures, an activity that must not be depreciated.Fig. 
1 shows examples of competencies for two different groups involved with TTSPPs and how they align with Bloom’s revised taxonomy .The discussion that follows presents specific technical guidelines/requirements and how they relate to Bloom’s taxonomy.Table 1 lists operational and managerial vaccine management functions as prescribed by the World Health Organization .These functions are incorporated into the WHO effective vaccine management assessment tool that assesses vaccine management functions through a systematic sampling in a country against nine high-level global criteria :Pre-shipment and arrival procedures ensure that every shipment from the vaccine manufacturer reaches the receiving store in satisfactory condition and with correct paperwork.All vaccines and diluents are stored and distributed within WHO-recommended temperature ranges.Cold storage, dry storage and transport capacity is sufficient to accommodate all vaccines and supplies needed for the programme.Buildings, cold chain equipment and transport systems enable the vaccine and consumables supply chain to function effectively.Maintenance of buildings, cold chain equipment and vehicles is satisfactory.Stock management systems and procedures are effective.Distribution between each level in the supply chain is effective.Appropriate vaccine management policies are adopted and implemented.Information systems and supportive management functions are satisfactory.Since the introduction of the EVM assessment between 2009 and 2014, a total of 82 assessments have been conducted globally while 21% of these assessments were reassessments .In these assessments, a broad range of performance scores was observed in each criterion at each level, except storage capacity that had a median score above 80%.The availability of appropriate vaccine management policies was also scored relatively high at each level of the supply chain.Temperature monitoring at the national level, maintenance at lower levels, vaccine distribution at all levels, and stock management at lower levels were found to be the weakest areas.All these vaccine management related performances correspond to higher level of cognitive skills in Bloom’s taxonomy.Compared to the lower level cognitive skills questioned by the EVM assessment tool, higher level cognitive skills are found to be more problematic.Thus, it appears that knowing something is not enough to put it into practice; staff handling TTSPPs require high level cognitive skills and an overall mental model of vaccine management activities in order to analyse, synthesize and evaluate the complex situations to make sound decisions.For example, in the 2009–2014 analysis, the percentage of storekeepers and health workers knowing which vaccines on the schedule can be damaged by freezing was found to be very high.Similarly, health workers knowing how to read a vaccine vial monitor was found to be over 90% at all levels .However, these scores do not correspond to the ability to conduct necessary analysis, synthesis and evaluation to come to a decision under critical and complex conditions.Traditional education and training methods applied to meeting the challenge of vaccine management have rarely been sufficient because they do not reflect the last fifty years of advances in learning theory and design .For example, higher level cognitive skills are best developed when learners engage in solving complex, challenging problems rather than passively attending to messages transmitted by instructors or media .Further, authentic 
learning principles clearly demonstrate that knowledge and skills should be learned in contexts as much like the real- world situations in which the knowledge and skills will eventually be applied if the learning is not to be inert.In addition, transfer of learning from one context to another is very difficult and, therefore, learners must be given ample opportunities to apply their knowledge and skills in multiple contexts and domains, a process fostered by learning design practices derived from experiential learning theory .WHO Global Learning Opportunities between 2004 and 2006 offered vaccine store management and vaccine management courses for selected staff from countries where effective vaccine store management assessments and vaccine management assessments were conducted .As a second step, course graduates were offered the opportunity to attend a vaccine management on wheels course that enables 15 participants with three mentors to travel down the cold chain on a bus .In 2007 the course was extended to cover integrated supply chain and involved representatives as participants from the pharmaceutical, biopharmaceutical sector as well as national regulatory authorities .The “wheels course” encourages participants to make direct observations at the storage, warehousing, distribution, and health care delivery facilities that they visit, as they physically travel with mentors by bus down the length of the cold chain.Throughout the wheels course, guided observation exercises take place at the visited facilities under the supervision of the mentors.Participants are provided with guidance notes and tools to support their critical observations.Participants interact with operational staff and management at these facilities.Presentations and group discussions take place on the bus, in restaurants, and in the open air before and after the visits to the facilities .However, budgetary and logistical considerations limit WHO GLO to offering the experiential wheels course just once a year with only 15 participants.A way of opening up this experiential learning opportunity to more professionals around the world was deemed desirable, and thus in 2010, the very same course was redesigned as an authentic e-learning course and has been offered online since 2012 .During this time, WHO GLO has conducted this conversion of the wheels course to e-learning as an educational design research study that has pursued the twofold goals of developing a more effective approach to e-learning and identifying reusable design principles for future projects of this kind .The wheels course is based on experiential and social learning theories as defined by Kolb and Vygotsky, respectively .Experiential learning can be defined as a direct encounter with the phenomena being studied rather than merely thinking about it or only considering the possibility of doing something about it .Kolb described experiential learning as being an iterative four-phased activity consisting of concrete experience, reflective observation, abstract conceptualization, and active experimentation .A learner can join the process at any of the four phases.Social learning involves learning from others, particularly by observation of role models that can include both those who make mistakes and those who are experts in a field – individuals that Vygotsky identified as “more knowledgeable others”.In the pharmaceutical cold chain management course, activities that foster learning by doing are prevalent, supported by three expert mentors who serve as the 
more knowledgeable others.The GLO/EPELA e-learning course is based on authentic learning principles .One such principle maintains that authentic tasks should be ill-defined to the point that learners must figure out the specific actions needed to complete such tasks rather than simply applying existing rules to do so.These tasks should ideally be anchored in a context that approximates the complexity of the real world.For example, in the online PCCM course, learners work in teams to solve real world problems.Another principle of authentic learning is that authentic tasks should require learners to investigate and accomplish them over a sustained period of time.In the online pharmaceutical cold chain management course, three member teams of learners spend the last five weeks of the twelve-week course collaborating to prepare recommendations for solutions to the real world vaccine management challenges submitted to the course by the public health ministry of a specific country.A third principle maintains that authentic tasks should allow competing solutions and diversity of outcomes.In the online PCCM course, most of the solutions generated by the teams of learners are subjected to expert, peer, and self-review rather than “graded” using a predefined scoring scheme.While some solutions are clearly better than others, creativity is encouraged and there is no penalty for being “wrong,” but feedback is provided so that learners can improve their solutions to complex problems.The GLO wheels and the e-learning courses differ greatly from other courses in the field of vaccine management offered by other organizations.There are no theoretical sessions in the wheels course and “mentors do not lecture others.,Instead, possible solutions to problems discovered through facility visits are discussed at length together.The e-learning course is also unique in that it is not a typical “me and the computer screen” course; there is always a human face that supports participants whenever they need help and encouragement.Research has shown that feedback is the most important factor in any type of learning .In addition, GLO e-learning courses mimic the real world through authentic context and tasks, none of the problems are presented in a “prescribed” manner, each task forces participants to find additional information to solve the problem .Assessment is mainly embedded in the authentic tasks, but we also see what the participants are learning by how they comment on other reports, how they reflect on their experience in diaries, how they express themselves through Flipgrid videos , how they raise issues or contribute to ongoing discussions.We even see how they engage in fun learning activities such as a “scavenger hunt” where participants are given situations to photograph and post them in a blog .The infusion of rigorous individual and group authentic assessments may be the most distinguishing feature that sets GLO/EPELA courses apart from other forms of e-learning .In many traditional e-learning courses, technology is used as a platform through which content is delivered to the participants and participants in turn submit simple assignments or take quizzes to demonstrate their understanding of this delivered content.Traditionally, in this type of “learning from technology” approach, technology is controlled and provided by the teacher .Learning WITH technology, as applied in our courses, is vastly different.In this approach, technology is placed in the hands of the participants to be used as cognitive tools 
for complex tasks.In this approach participants use technology as a tool for solving problems, constructing knowledge, creating meaningful products and collaborating with each other .Authentic learning encompasses a “learning with technology” approach, thus differentiating our programmes from many other e-learning approaches.Though encompassing a broader field than only vaccines, both the wheels and the e-learning courses are well aligned with overall EVM criteria.Table 2 presents the alignments between the objectives of each course and each EVM criterion.As seen in Table 2, both courses focus on the “risk management” approaches in analysing the operational and management functions of a pharmaceutical cold chain system to help participants build a robust mental model of the system.In both courses, there are a series of side products that are produced by participants such as decision trees, detailed processes and flow-charts.Most of these are then refined by the mentors and shared back with the group.This reflects the authentic learning principle that authentic tasks should encourage the development of polished products that are valuable in their own right rather than an exercise or sub-step in preparation for something else .Ideally these products should contribute to the profession of which the learners are a part or even society at large.Fig. 2 is an example of just such a side product refined by the mentors in the e-learning course:For vaccines and other medicines to be safe, pure, effective, and available to those needing them, a high level of concern is being placed on the storage, handling, transportation, and distribution of these products, particularly those that are time and temperature sensitive.While active and passive cooling equipment and monitoring devices are important, it is the various personnel who execute and write procedures, design and operate systems, and investigate problems and help prevent them who need to have the required knowledge and skills so they can effectively perform these activities.In two unique learning solutions developed by WHO/GLO, participants have the opportunity of not just learning about cold chain systems or vaccine management, but, rather, learning to become specialists in these fields through experiential and authentic learning.In this process, participants have the opportunity to address real-life situations in contexts similar to what they may face in their own work environments and develop solutions and critical thinking skills they can apply when they return to their jobs.Interviews conducted with the participants after the completion of the courses indicate that the authentic learning approach and especially engaging in the authentic learning tasks had a beneficial impact on the professional learning of the participants.Several participants emphasised the realism of the tasks, the opportunity to collaborate with colleagues and the support from the mentors as key factors for a successful learning experience.These aspects also differentiated the programmes from a more traditional, content-oriented approach to professional development.In the words of one eLearning course participant, it was “…different from other e-learning courses which are, you know, more theoretical.In this we have both: theory and real practice”.The authentic learning approach also contributed to a high level of learner engagement.Several participants reported that the nature of the tasks challenged them and encouraged them to do their very best.The skills and knowledge 
learned in the programmes have also been transferred into practice in several ways.Not only are the products created during the courses in use in real work environments, but participants have also described improved decision-making, improved contingency planning, increased self-confidence and trust in one’s ability to perform new tasks, as well as a strengthened professional identity.These findings suggest that the authentic learning approach can be effective in developing the higher cognitive skills: analysing, evaluating and creating.All professionals who have graduated from the e-learning programmes described in this paper are now using the skills, knowledge and authentic products created in the programmes in their own professional contexts.These people also continue to be supported through a post-course mentoring programme, ensuring successful learning transfer and continuous professional learning.This professional learning in turn can directly benefit the communities through improved outreach services and increased accessibility and coverage of immunization programmes.As one of the participants concluded: “more children can be vaccinated now”. | A high level of concern is placed on the storage, handling, transportation, and distribution of vaccines and other pharmaceutical products, particularly those that are time and temperature sensitive. While active and passive cooling equipment and monitoring devices are important, it is the various personnel responsible for executing and writing procedures, designing and operating systems, and investigating problems and helping prevent them who are paramount in establishing and maintaining a “cold chain” for time and temperature sensitive pharmaceutical products (TTSPPs). These professionals must possess the required competencies, knowledge, skills and abilities so they can effectively perform these activities with appropriate levels of expertise. These are complex tasks that require the development of higher cognitive skills that cannot be adequately addressed through professional development opportunities based on simple information delivery and content acquisition. This paper describes two unique learning solutions (one on a bus called the “wheels course” and the other online called “e-learning”) that have been developed by WHO Global Learning Opportunities (WHO/GLO) to provide participants with opportunities not just to learn about cold chain systems or vaccine management, but, rather, to develop high levels of expertise in their respective fields through experiential and authentic learning activities. In these interactive learning environments, participants have opportunities to address real-life situations in contexts similar to what they may face in their own work environments and develop solutions and critical thinking skills they can apply when they return to their jobs. This paper further delineates the managerial and operational vaccine management functions encompassed in these two unique learning environments. The paper also describes the alignment of the objectives addressed in the “wheels course” and the e-learning version with effective vaccine management (EVM) criteria as prescribed by WHO. The paper concludes with an example of a real world product developed by course graduates (specifically a decision tree that is now used by some national programmes). 
These types of products, valuable in their own right, often emerge when learning environments based on authentic learning principles are designed and implemented as they were by WHO/GLO. |
593 | Studies of the effects of microplastics on aquatic organisms: What do we know and where should we focus our efforts in the future? | There is increasing scientific and societal concern about the effects of microplastics, commonly defined as plastic particles with sizes below 5 mm, on freshwater and marine organisms.Microplastics can be classified as primary or secondary, depending on the manner in which they are produced.Primary MPs are small plastic particles released directly into the environment via e.g. domestic and industrial effluents, spills and sewage discharge or indirectly.The range of primary MP particle types include fragments, fibres, pellets, film and spheres.Spheres are frequently associated with pharmaceutical and cosmetics industries.Secondary MPs are formed as a result of gradual degradation/fragmentation of larger plastic particles already present in the environment, due to e.g. UV radiation, mechanical transformation and biological degradation by microorganisms.Microplastics in the environment can be further degraded/fragmented to produce nanoplastics, which, when compared to other forms of plastic litter, have largely unknown fates and toxicological properties.The amount of MPs in the aquatic environment continues to increase, in part due to ongoing increases in the production of plastics, with a total global production of 335 M ton in 2016.There are a number of characteristics that make plastics suitable for use in a wide variety of applications, from construction to medicine.These same characteristics highlighted above make the presence of plastics in the environment problematic.Furthermore, plastics may incorporate additional chemicals during manufacture which are added to endow them with specific characteristic but which may be toxic if ingested.Chemicals may also be incorporated/adsorbed by plastics in the environment.Microplastic particles have a large surface area to volume ratio which provides a high association potential for environmental contaminants including polycyclic aromatic hydrocarbons or metals.There are a wide range of plastic polymers which are produced and released to the environment.In Europe, polyethylene comprised 28%, polypropylene 19%, polyvinylchloride 10% and polystyrene 7% of total production.Different plastic polymers have a wide range of densities which influences MP behaviour in the aquatic environment.Furthermore, MPs are found in a wide range of shapes.Differences in shape and density cause MPs to disperse diversely in different compartments of the aquatic environment and influence their availability to organisms at different trophic levels and/or occupying different habitats.For example, pelagic organisms such as phytoplankton and small crustaceans are more likely to encounter less dense, floating MPs while benthic organisms including amphipods, polychaete worms, tubifex worms, molluscs and echinoderms are more likely to encounter MPs that are more dense than water.Both benthic and pelagic fish may ingest MPs directly, or indirectly.Birds and mammals feeding on aquatic organisms or living in aquatic environments are also known to ingest MPs.Microplastics are found in almost all marine and freshwater environments and have been detected in protected and remote areas making their potential pernicious effects a global problem.To date, reviews on MPs in the environment have focused on summarizing properties, sources, fate and occurrence, concentrations, analytical methods and effects on organisms.Despite the available reviews 
concerning environmental concentration and ecotoxicological impact of MPs on aquatic organisms, there is a lack of critical evaluation of the current research trends related to types of MPs detected in aquatic animals versus the type of MP and study organism used in laboratory studies of ecotoxicological effects.Thus, this paper aims to: summarize and discuss the current field and laboratory research trends in terms MP polymer type, shape and size reported and organism group studied; critically review the published studies of ecotoxicological effects of MPs on freshwater and marine biota with respect to the aforementioned criteria and identify promising areas for future research.A survey of the available published peer-reviewed literature was conducted on November 22, 2017 through a bibliographic study using the Thompson Reuters database ISI Web of Science.A combination of keywords was used as criteria.A total of 1637 candidate publications were identified.The abstracts of all candidate articles were read so as to identify relevant field or laboratory studies reporting on MP ingestion or ecotoxicological effects in marine or freshwater aquatic animals.Of the 1637 candidate publications, 157 were retained for further analysis.Available studies were summarized according to the following criteria: type of MPs used and/or reported; shape of MP used and/or reported; MP size range; group of organisms studied; and type of ecotoxicological effect observed.The following list of plastic types was used to classify MP parent materials reported in the literature: polyethylene, polystyrene, polypropylene, polyester, polyvinylchloride, polyamide, acrylic polymers, polyether, cellophane, polyurethane and not specified.The PE family includes both high- and low-density PE and PES plastics such as polylactic acid and polyethylene terephthalate.The selected plastic types include the main groups of MP parent materials reported in e.g. Plastics Europe.When specified, shapes of MPs were classified according to the following list: spheres, fibres, fragments, film, and pellets.Microplastic size was assigned to one or more of the following classes: <50 μm; 50–100 μm; 100–200 μm; 200–400 μm; 400–800 μm; 800–1600 μm; >1600 μm; or not specified.The groups of organisms studied included the following: fish, birds, amphibians, reptiles, mammals, large crustaceans, small crustaceans, molluscs, annelid worms, echinoderms, cnidaria, rotifera and porifera.‘Small crustaceans’ included zooplankton while ‘large crustaceans’ included all other crustacean taxa.Plants and microbes were not included in the results reported here.Ecotoxicological effects enumerated included mortality, reproductive impairment, neurotoxicity, biotransformation of enzymes, genotoxicity, physical effects, behavioural effects, oxidative stress and damage, cytotoxicity, blood/haemolymph effects and increased accumulation of other contaminants.No attempt was made to separate direct ecotoxicological effects from e.g. 
organism responses associated with starvation following MP ingestion.The information in every article was tabulated and summarized in the following manner: An article could report one or more studies.A study was defined as a series of observations of one group of organisms, type of MPs, shape and/or size.Any article reporting data on more than one of the aforementioned categories generated a number of studies equal to the number of elements per premise.For example, if one article reported only on fish it was considered to be one study, but if it reported on both fish and large crustaceans, it was considered to be two studies.Similarly, if one article only reported effects of PE MP, it was considered to be one study, but if it documented effects of both e.g. PE and PS MPs in fish and small crustaceans it was considered to be four studies.Thus, the number of studies presented in the results represents the number of interactions of the aforementioned classification criteria, not the total number of publications.Our search identified 157 published, peer-reviewed articles which documented a total of 612 studies.In Sections 3.2.1–3.2.3 where MPs effects are described, only studies which report MP effects on organisms were considered.Thus, a number of studies of interactive effects of MPs and other contaminants for which no impacts of MPs were identified during our review of candidate literature have not been included here.The types of MPs most commonly reported across field and laboratory studies include PE and PS, followed by PP and PES.Fish were the most commonly studied group of organisms, followed by crustacea, molluscs and annelid worms.There were relatively few studies of other organism groups.Polyethylene was the most common type of MP studied in fish, it was reported in 34 studies, or 12% of the total number of studies identified.Ingestion of PE by fish is both widespread and crosses habitat preferences.For example, Lusher et al. showed that over one third of fish examined in their study had ingested MPs, with pelagic and benthic fish displaying similar gut contents, suggesting either a lack of selectivity or widespread presence of PE in both the water column and sediments.The presence of PE was reported less commonly in other groups of organisms, with 12 studies on molluscs, seven on small crustaceans and four on annelids.There were relatively few studies of PE in vertebrates other than fish.Effects of PS MPs have been reported in studies of fish and small crustaceans.Although crustaceans have been shown to have the ability to distinguish live particles from inert ones, e.g. algae and PS beads, ingestion of PS MPs has been observed in small crustaceans occupying both pelagic and benthic environments.Taxonomic groups in which PS MPs were detected include the marine copepods Acartia spp. and Eurytemora affinis, large crustaceans such as the estuarine mysid Neomysis integer and the crab Uca rapax.Cole et al. reported that dead copepods can have adhered PS fragments, which could contribute to the vertical transport of this type of MP.Nizzetto et al. 
report densities of 1040–1090 kg m−3 for PS and 910–940 kg m−3 for PE.These differences may explain the discrepancy in the number of studies reporting PE MPs in fish and other pelagic organisms versus the number of studies reporting PS MPs in benthic fish and crustaceans.The density of PS MPs is usually greater than PE MPs, so they may be available not only in the water column, but also in the sediment, representing a higher risk for both pelagic and demersal organisms, while PE MPs have a lower density presenting a higher availability in the water column and potentially pose a higher risk for pelagic fish.However, densities of these polymers may change while in the environment as a result of e.g. biofilms or flocculation.The widespread distribution of MPs in aquatic ecosystems and broad range of physicochemical properties makes a wide range of aquatic organisms potentially susceptible to these emerging contaminants.There are a number of ways in which organisms may accumulate MPs.Animals exposed to MPs may incorporate them through their gills and digestive tract.The ingestion may be due to an inability to differentiate MPs from prey or ingestion of organisms of lower trophic levels containing these particles.MPs may also adhere directly to organisms.Despite the ever increasing number of studies of the effects of MPs on aquatic biota, there is a possibility that effect studies may be biased towards to a particular type of polymer without due consideration of reported occurrence in organisms and the environment, estimated release to the environment and bioavailability.The same possibility should also be considered for the model organisms used in laboratory assays.The surveyed publications identified an equivalent number of studies conducted in the field and laboratory.However, there are differences in the groups of organisms studied.Fish are the most studied organism group in the field, whereas small crustaceans are the group most studied in the laboratory.Such differences may be the result of difficulties in maintaining and handling large and/or long-lived organisms under controlled conditions.Unfortunately, a sizeable fraction of the field studies did not report the typology of MPs present in the biota.This is a highly relevant shortcoming considering that the properties of the MP parent material are likely to influence both its physical behaviour in the environment, the potential to adsorb environmental contaminants, and its bioavailability and its effects on organism health.The most common types of MPs reported in biota collected during field sampling include PE, PP, PES, PA and PS MPs.This detection frequency largely reflects rates of plastic production, except that PS
plastics are produced in greater amounts than either PES or PA.While field studies generally reflect reported plastic production rates, these polymers were not studied with similar frequency in laboratory assays.In fact, only 8% of the laboratory studies have been performed with PP MPs whereas PS and PE MPs were used in 40% and 33% of the studies respectively.It becomes apparent that there is a need for further laboratory studies documenting the potential effects of PP MPs, considering that it is one of the most produced and demanded types of plastic, with about 20% of European production, much more than the 7% of production devoted to PS.The shape of MPs reported in organisms collected during field surveys is varied.Fibres and fragments were reported in 23% and 21% of studies, respectively, followed by spheres, film and pellets.However, in laboratory studies, spherical MPs have been the most commonly used shape followed by fibres, fragments and unspecified particles.From this, it can be seen that there is a need to
perform more laboratory studies with fibres and fragments given their frequency of detection in animals collected from the field and their widespread presence in the environment.The MP size used or reported in published studies also varies and is related to MP shape, with almost every shape having been reported across all size ranges.The size ranges 800–1600 μm and 400–800 μm were the most commonly reported in animals collected from the field, followed by 200–400 μm.In the laboratory, smaller particles were used most commonly.This may reflect the difficulty in sampling smaller particles from environmental media as the size of MPs in biological samples collected in the environment may be biased by sampling and detection methodology that seem to contribute to the scarcity of reports of MP with a size < 50 μm.The size range 800–1600 μm, the most commonly reported from field samples of biota represents a very small fraction of the sizes used in the laboratory.This mismatch in size between field observations and laboratory studies demonstrates a clear research gap.However, this discrepancy should be evaluated in light of the relative distribution of MP size fractions reported in relevant compartments of the aquatic environment including floating material and sediments.The first studies concerning the potential ecotoxicological impact of MPs on aquatic organisms were carried out in latter half of the 2000s by Browne et al., who performed a laboratory study in which they observed the translocation of MP particles from the gut to the circulatory system of Mytilus edulis.Since then, the total number of field and laboratory studies involving MPs and their interactions and effects on aquatic organisms has grown significantly, especially after 2012.To date, ecotoxicological studies of MPs have been conducted predominantly using marine as opposed to freshwater organisms.This lack of ecotoxicological knowledge on the behaviour of MPs in the freshwater environment has been commented on by a number of authors.This knowledge gap is of high concern since freshwater organisms are directly affected by terrestrial runoff, wastewater and other discharges potentially containing high levels of MPs and other contaminants.Furthermore, they may encounter more highly contaminated sediments, increasing the likelihood of synergistic effects of MPs with other environmental pollutants.Among the reports of MPs in marine organisms, fish are the most commonly studied group, followed by molluscs, small crustacea, large crustacea annelid worms, mammals and echinoderms, birds and cnidaria, porifera, reptiles and rotifers.Multiple freshwater studies only exist for fish and small crustacea.There are individual studies of MPs in freshwater birds, amphibians, annelid worms and rotifers.The relative paucity of ecotoxicological studies on groups of organisms other than fish and small crustacea in freshwater environments highlights that the effects of MPs on freshwater ecosystems has been under-studied and deserves further attention.Specifically, the effects of MPs on molluscs and freshwater benthic crustaceans is worthy of further study as these groups of organisms may be exposed to high levels of MPs in sediments.To date, the most studied species in the laboratory has been the small freshwater planktonic crustacean Daphnia magna.The number of studies of D.
magna may reflect its widespread use in ecotoxicology laboratories.Other commonly studied species include the freshwater fish Danio rerio, the common goby, a marine fish Pomatoschistus microps, a mollusc, and the annelid lugworm Arenicola marina.Given the differences in habits and physiology of marine and freshwater species, it is not clear to what extent results based on studies of marine organisms can be applied to freshwater species and vice versa.This further highlights the need for additional studies, especially in highly contaminated, low salinity environments such as the Baltic where both marine and freshwater organisms are subject to a high degree of environmental stress.Furthermore, closely related species may show differences in response to MP exposure.Jaikumar et al. showed that three Daphnia species showed different responses to primary and secondary MP exposure, and different interactive effects of MP exposure and thermal stress.While the authors note that their study should be interpreted with caution due to the High MP concentrations used, their results do suggest a need for studies across a wider range of organisms than commonly used ecotoxicological model species such as D. magna and D. rero.A total of 130 studies reporting ecotoxicological effects of MPs on aquatic organisms were identified.Crustaceans were the most commonly studied taxonomic group, followed by fish, molluscs, annelid worms, echinoderms and rotifers.These organism groups occupy a number of positions in aquatic food webs.Fish are generally intermediate/top predators and may ingest MPs either directly or through consumption of prey containing MPs.Small crustaceans are often primary consumers, as are planktonic rotifers.Molluscs include a number of ecologically and commercially important filter feeding organisms.Because of their habitat and feeding behaviour, molluscs and other benthic organism groups such as annelid worms are likely to be affected by MPs.Molluscs include a large number of filter feeding species, with a high tendency for bioaccumulation.Considering that several of these organisms are widely used for food, they are a potential source of MPs or environmental contaminants to humans.The relative lack of studies conducted on other organism groups should be addressed as these animals in these groups are likely to play an important role in aquatic food webs.Specifically, studies targeting mode of action must be conducted to determine whether or not MPs have similar effects on fish as they do on other vertebrate taxa, and whether studies of small crustacea exposed to MPs provide sufficient insight into ecotoxicological effects on other groups of invertebrates.A range of ecotoxicological effects of different MP types have been documented across several groups of organisms.In the following, the duration and/or type of experiment, size of MP and concentration are reported for each study.Documented effects of PE MPs in fish include neurotoxicity, reduction of the predatory performance and efficiency in P. microps.Mazurais et al. reported mortality and induction of the cytochrome P450 in D. labrax.Polyethylene MPs have been shown to affect growth and reproduction of a large freshwater crustacean, the amphipod Hyalella azteca.Several toxic effects related to immune response, oxidative stress and genotoxicity have been reported in molluscs including a study of the marine mussel Mytilus galloprovincialis exposed to PE MPs.Van Cauwenberghe et al. 
observed an increase of energy consumption by the polychaete A. marina when exposed to PE MPs.In echinoderms, PE MPs have been shown to influence larval growth and development of Tripneustes gratilla without affecting its survival.These results were corroborated by Nobre et al. for Lytechinus variegatus larvae.It is noteworthy that these studies have all been conducted at MP concentrations higher than those typically encountered in the aquatic environment.Effects of PP MPs on H. azteca have been reported by Au et al. who demonstrated a higher toxicity for PP than PE MPs.They reported a LC50 = 7.14 × 104 particles L−1 for PP MPs compared to LC50 = 4.64 × 107 particles L−1 for PE MPs.Effects of PS MPs have been widely documented.In studies with the fish D. rerio, PS MPs have been shown to be responsible for the up-regulation of genes involved in the nervous and visual as well as immune system.Studies of small crustaceans include those by Cole et al. and Lee et al. who observed a decrease in survival and fecundity of the marine copepods Calanus helgolandicus and Tigriopus japonicas when exposed to PS MPs.Gambardella et al. and Jeong et al. demonstrated some alteration of enzymes in the small crustaceans Artemia franciscana and Paracyclopina nana.Avio et al. documented similar enzyme alteration effects following PS and PE MP exposure in.the marine mollusc M. galloprovincialis.Additional ecotoxicological effects of exposure to PS MPs was observed in two mollusc species.A study using Scrobicularia plana reported increases in neurotoxicity and genotoxicity.A 25% increase in energy consumption was reported after ingestion of PS MPs by M. edulis, probably associated with an effort to digest inert material and maintain physiological homeostasis.In M. edulis, the transition of MP from the gut to the haemolymph was observed to continue for >48 days after a 3 days exposure.This persistence of MPs in mussel tissues represents a possible source of toxicity to their predators and potentially to humans, and emphasizes the relevance of this topic of research.In the polychaete A. marina ecotoxicological effects of exposure to PS MPs include reduced feeding activity and reduction of lysosomal membrane stability.Van Cauwenberghe et al. also observed an increase of energy consumption by A. marina when exposed to PS MPs.Della Torre et al. described effects of PS MPs on gene expression in the echinoderm Paracentrotus lividus, including an up-regulation of the Abcb1 gene responsible for protection and multi-drug resistance.In rotifers, a decrease in growth rate and fecundity was observed following exposure to PS MPs.Studies with A. marina subject to sediment PVC exposure showed a depletion of lipid reserves and an inflammatory response.Overall, documented effects of MPs on aquatic organisms include reduction of feeding activity, oxidative stress, genotoxicity, neurotoxicity, growth delay, reduction of reproductive fitness and ultimately death.The MP concentrations used in sediment ecotoxicological studies are of a similar order of magnitude to the highest field reported concentrations.Hurley et al. 
report maximum sediment concentrations of ~440,000 particles m−2, and an average of 16,000 particles m−2.However, concentrations of suspended MPs used in ecotoxicological studies are generally higher than average surface water concentrations summarized by Hurley et al.This is a significant concern as unrealistically high concentrations of MPs in experimental studies may lead to erroneous or misleading conclusions about the risks posed to aquatic organisms.There are a large number of studies on the combined effects of MPs with other environmental contaminants.These studies are motivated by the situation found in the environment where organisms are simultaneously exposed to several contaminants.The main goal of these studies is to investigate if MPs can interact positively or negatively with the ecotoxicological effects of other contaminants.There may be synergistic effects of MPs and other contaminants on organism health, or MPs may function as a transport vector for other contaminants.We identified 59 studies of interactive effects of other environmental contaminants with MPs.Most studies had been performed on fish, with some studies of molluscs, crustacea and annelid worms.Studies involving fish include e.g.Crustaceans were studied by Watts et al. and Tosetto et al.Molluscs studies included e.g.Annelida were studied by Besseling et al.Unfortunately, Herzke et al. did not specify the type of MP used in their study of legacy POP accumulation in the seabird Fulmaris glacialis so it was not possible to include this study in Fig. 5.Studies examined the combined effects of MPs with legacy POPs, endocrine disrupting compounds, metals, antibiotics and herbicides.Legacy POPs included PAHs, polychlorinated biphenyls, polybrominated diphenyl ethers and dichlorodiphenyltrichloroethanes.Metals included silver, chromium VI and nickel.Effects of MPs and the antibiotics celafaxin and triclosan as well as the herbicide paraquat were documented.Endocrine disrupting compounds included the pharmaceutical 17α-ethinylestradiol and Bisphenol-A.Slightly more than half the studies were conducted using PE MPs.Polystyrene, PVC and PP MPs were also reported.Adverse effects of MPs with chromium including neurotoxicity and mortality were reported in a study of P. microps, suggesting that this metal can adhere to MP surface and that populations from two estuaries can be differently influenced by this combined exposure.Studies on the combined effects of MPs and POPs exposure include e.g. PCBs, DDT and PAHs.Both genotoxicity and reproductive effects were often reported in this group of studies.Chen et al. and Sleight et al. studied the interactions of MPs and the 17α-ethinylestradiol, a synthetic hormone which can function as an EDC.They documented genotoxicity, reproductive and behavioural effects in the fish D. rerio.Browne et al. documented increased mortality of the annelid worm A. marina exposed to PVC MPs and the antibiotic triclosan.Fonte et al. showed a range of effects including neurotoxicity and changes in enzyme activity when the fish P. 
microps was exposed to PE MPs and the antibiotic celafaxin.Accumulation of MP-associated contaminants may result in an increase of the potential risk of contaminant accumulation for higher trophic levels including humans.The available data emphasizes the negative effects of those interactions and reveals the importance of studies focusing on the combined effects of MPs and other environmental contaminants, after short and long-term exposure periods, since the consequences of these combinations are still poorly known.There is an absence of studies that ascertain MPs capability to adsorb other contaminants once inside the organisms, which may represent a positive outcome.However, there are likely to be significant analytical challenges associated with conducting such studies.Furthermore, there is a lack of information about the interactive effects of MPs and other contaminants on freshwater organisms.Considering that there are many freshwater organisms with considerable trophic and commercial importance, this approach should be one of the directions that the study of MPs should take.In our search there was only one article that indicates that MPs had no combined effects with the other tested contaminant, in this case gold nanoparticles.While it is difficult to demonstrate, it is possible that this lack of studies in the literature reflects a bias against publishing negative results.A different study demonstrates that the effect of bisphenol A in immobilization in D. magna decreases in the presence of PA particles.Teuten et al. studied the effect of adding “clean” plastic to a sediment-water system with A. marina contaminated with phenanthrene as a model compound, using an equilibrium partitioning approach.They concluded that plastic addition would reduce bioavailability to A. 
marina due to scavenging of phenanthrene by the plastic.It has been discussed that MP ingestion may increase bioaccumulation for some chemicals in the mixture yet decrease the body burden of these chemicals if they have opposing concentration gradients between plastic and biota lipids, demonstrating the need of more studies about on the interactive effects of MPs and other environmental pollutants.The occurrence and accumulation of MPs in the aquatic environment is nowadays an undeniable fact.It is also undeniable that a large number of organisms are exposed to these particles and that this exposure may cause a variety of effects and threaten individuals of many different species, the ecosystems they live in and, ultimately, humans.The potential deleterious effects of MPs on aquatic biota have been recognized by the scientific community as demonstrated by the increasing number of studies in the last years focussing primarily on marine biota.However, the effects of MPs on freshwater organisms are much less well known.Overall, results suggest a knowledge gap on the effects of PE MPs on other organisms beyond fish and PS MPs on all groups of organisms except fish and small crustaceans.Polyethylene and PS MPs are clearly the most studied type of polymers, although others, specifically PP, PES and PA, may represent a similar danger for aquatic life.Clearly, more research is needed to confirm this issue.While spherical particles are most commonly used in laboratory studies, fibres and fragments are the most common types of shapes detected in organisms collected from field samples and the common size of these particles varies between 800 and 1600 μm.Based on the results of this work we point out some issues that need to be considered, in order to better understand the problematic of MPs in aquatic systems:Perform more studies of MP effects at environmentally relevant concentrations;,Study the effects of PP, PES and PA MPs in all groups of animals considered in this paper since these particles, namely PP, are manufactured in large scale and detected in aquatic organisms in the higher quantities than PS;,Perform more laboratory studies targeting a range of organism groups with the most common shapes and size range of MPs found in biological samples from the field;,Perform more field and laboratory studies with freshwater organisms;,Investigate the mechanisms by which MPs affect other groups of organisms to better determine the relevance of studies on small crustacea for invertebrates in general;,Understand the mechanisms by which PE affects other groups of organisms beyond fish and PS MPs on other groups, excepting fish and small crustaceans;,Take into account the interactive effects of MPs and other contaminants;,Assess the ecotoxicity of MPs in more environmentally relevant conditions, such as multispecies exposures and mesocosms.Encourage the publication of negative results,More investigations are needed in the coming years.They will be fundamental to understand the mechanism or mechanisms by which MPs affect aquatic organisms so as to credibly address the real impacts of these micro-contaminants on the environment.The following are the supplementary data related to this article.Frequency of occurrence of the different microplastic parent materials in studies involving field organisms and laboratory studies.Every bar has the total number of studies.Studies were defined according the type of study and parent material.Plastic types enumerated include PU-Polyurethane; CP-Cellophane; PT- Polyether; 
AC-Acrylic; PA-Polyamide; PVC-Polyvinylchloride; PES-Polyester; PP-Polypropylene; PS-Polystyrene; PE-Polyethylene.Distribution of microplastics sizes per shape of microplastics and type of study.Every bar has the total number of studies.Studies were categorized according to size class, type of study and the shape of microplastics.Supplementary tables documenting effects of microplastics on aquatic biota, combined effects of microplastics and other contaminants and the references used for populating figures 4, 5 S1 and S2.Supplementary data to this article can be found online at https://doi.org/10.1016/j.scitotenv.2018.07.207. | The effects of microplastics (MP) on aquatic organisms are currently the subject of intense research. Here, we provide a critical perspective on published studies of MP ingestion by aquatic biota. We summarize the available research on MP presence, behaviour and effects on aquatic organisms monitored in the field and on laboratory studies of the ecotoxicological consequences of MP ingestion. We consider MP polymer type, shape, size as well as group of organisms studied and type of effect reported. Specifically, we evaluate whether or not the available laboratory studies of MP are representative of the types of MPs found in the environment and whether or not they have reported on relevant groups or organisms. Analysis of the available data revealed that 1) despite their widespread detection in field-based studies, polypropylene, polyester and polyamide particles were under-represented in laboratory studies; 2) fibres and fragments (800–1600 μm) are the most common form of MPs reported in animals collected from the field; 3) to date, most studies have been conducted on fish; knowledge is needed about the effects of MPs on other groups of organisms, especially invertebrates. Furthermore, there are significant mismatches between the types of MP most commonly found in the environment or reported in field studies and those used in laboratory experiments. Finally, there is an overarching need to understand the mechanism of action and ecotoxicological effects of environmentally relevant concentrations of MPs on aquatic organism health. |
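The following short Python sketch is not part of the original record; the function and variable names are hypothetical. It illustrates the study-counting convention described in the methodology of record 593 above, where each article is expanded into one "study" per combination of the classification criteria it reports (e.g. PE and PS MPs tested in fish and small crustaceans count as four studies).

```python
from itertools import product
from collections import Counter

def expand_article(article):
    """Expand one coded article into its constituent 'studies'.

    `article` maps each classification criterion (e.g. polymer type,
    organism group) to the list of categories the article reports.
    One study is counted per combination of categories, matching the
    counting convention described in the record above.
    """
    criteria = sorted(article)  # fixed key order so study tuples are comparable
    return list(product(*(article[c] for c in criteria)))

# Hypothetical coding of two surveyed articles
articles = [
    {"polymer": ["PE"], "organism": ["fish"]},                             # 1 study
    {"polymer": ["PE", "PS"], "organism": ["fish", "small crustaceans"]},  # 4 studies
]

studies = [s for article in articles for s in expand_article(article)]
print(len(studies))                                # 5 studies from 2 publications
print(Counter(polymer for _, polymer in studies))  # studies per polymer type
```

Counted this way, the number of studies tabulated always meets or exceeds the number of publications surveyed, which is why the record above reports 612 studies drawn from 157 articles.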
594 | Wet impregnation of a commercial low cost silica using DETA for a fast post-combustion CO2 capture process | Post-combustion CO2 capture is considered a promising solution to control CO2 emissions from large fixed industrial sources.Post-combustion CO2 capture processes include physical absorption, chemical absorption, adsorption with solid sorbents, and gas separation by membranes.The most developed technology for post-combustion carbon capture is chemical absorption with aqueous alkanolamines.However, the process is not completely ready for application to power plants because of several drawbacks, such as high equipment corrosion, amine degradation by SO2, NO2 and O2 present in flue gases, high capital costs, high energy requirements for regeneration of amine absorbents, and low absorption efficiency.Adsorption is considered a promising technology for many reasons, including the potential for high capacity and selectivity, fast kinetics, good mechanical properties of sorbents and stability after repeated adsorption-desorption cycles, and low corrosion of equipment.A wide range of solid sorbents have been investigated for post-combustion capture, including carbon-based sorbents, zeolites, hydrotalcites, and metal-organic frameworks.However, each type of adsorbent has its limitations; for example, carbon-based sorbents and hydrotalcites tend to have low capacities and selectivities, while zeolites, hydrotalcites and MOFs suffer from poor performance in humid flue gases.Moreover, MOFs are not particularly robust and are still expensive.Another option, introduced by Xu et al., is to try and combine the best features, and limit the worst, of amine absorption and solid adsorbents by incorporation of nitrogen functional groups within a solid material to increase specific adsorption sites.In general there are two different methods: physical impregnation of amine into porous materials, and chemical grafting of amine onto a porous material's surface.Normally, physical impregnation is simply carried out by wet-impregnation with an amine-solvent mixture; the solvent is then evaporated after impregnation has occurred.The most common materials used for wet-impregnation are ordered mesoporous silicas due to their large surface area and pore volume, such as MCM-41, MCM-48, SBA-15, SBA-16, and KIT-6.PEI is the most commonly used amine for wet-impregnation due to its low volatility and hence good stability.DETA, DIPA, AMPD, TEA, PEHA, TEPA, MEA, AMP, DEA and EDA have also been used in wet-impregnation studies.Table 1 summarises the CO2 capture capacity, the drop in capacity after 4 consecutive adsorption-desorption cycles, and the amount of CO2 desorbed after 10 min of regeneration, for a wide range of expensive and low cost support materials impregnated with different amine types.It indicates that the amines used have mainly been PEI, MEA, DETA, TEA and TETA.Impregnation tests have focused on high molecular weight amines such as PEI because they are considered more stable.No work has been reported on DETA supported on a low-cost non-structured silica gel, because DETA is a lower molecular weight amine.However, DETA combines primary and secondary amine groups, has a small molecule size, low molecular weight and low viscosity, and therefore can be easily loaded into porous materials.Previous work on CO2 absorption using bulk liquid DETA has demonstrated that it exhibits fast mass transfer rates, good CO2 capture capacity, and a good cyclic capacity compared to other amines often used
in absorption capture processes.In addition, the heat of absorption for CO2 in DETA is lower than for MEA, so the energy required for its regeneration should be lower.Moreover, Zhao et al. have recently investigated the CO2 uptake of DETA-impregnated titania-based sorbents, and microporous titania composite sorbents.They found that the CO2 adsorption capacity of these adsorbents was higher than the analogous DETA-impregnated SBA-15 adsorbents.Much of this previous work has focused on reaching the highest quantity of amine loaded into the structured and ordered mesoporous silica-based materials mentioned above.However, these silica materials are not yet made in large quantities and so are relatively expensive.Therefore, information about their regenerability and the drop in capture capacity is incomplete.The present work provides insights into the use of moderate quantities of amine loaded into a low-cost silica gel rather than structured commercial silica supports, such as MCM-48 or KIT-6.Overall, we produce a low cost and effective CO2 adsorbent.However, due to its low molecular weight, DETA might leach or be degraded during the cyclic capture process.Therefore, this work also addresses the mechanisms of CO2 capture in this material and its stability under repeated adsorption/desorption cycles.The support material for liquid amine impregnation employed in this work was a micro- and mesoporous commercial silica gel purchased from Fischer Scientific with a particle size in the range of 0.2–0.5 mm.Diethylenetriamine and methanol were used for sorbent preparation.Ultrahigh purity gases were used for all measurements.For the wet impregnation procedure, the desired amount of DETA was dissolved in 7 g of methanol under stirring for about 15 min, after which 3.5 g of silica gel was added to the solution.The resultant slurry was stirred for about 30 min, and then dried at 40 °C overnight at atmospheric pressure.The prepared adsorbents were denoted as FS-DETA-X%, where X represents the loading of DETA as a weight percentage of the original silica gel sample.The weight ratio of amine to silica was set at 0, 10, 20, 30, 40, and 80%, respectively.The methanol/silica weight ratio was 2:1 in all the samples.The samples obtained were characterized in terms of specific surface area, porosity, pore size distribution, thermal stability and CO2 capture performance.The prepared samples were characterized in terms of texture by means of N2 adsorption/desorption isotherms at −196 °C.The N2 isotherms were measured in a Quantachrome Autosorb iQ2 apparatus.Prior to any measurement the samples were out-gassed for approximately 12 h under vacuum.The original silica gel sample was out-gassed at 110 °C with a heating rate of 5 °C/min.The wet impregnated samples were out-gassed at 30 °C with a heating rate of 1 °C/min.The BET surface area, total pore volume and pore size distribution were determined from the N2 isotherms at −196 °C before and after loading the DETA.The surface areas were calculated using the Brunauer-Emmett-Teller equation, and the pore volume was calculated from the adsorbed nitrogen after complete pore condensation at P/P0 = 0.9905 by applying the so-called Gurvich rule.The pore size distribution was calculated by using the DFT method and the DA method for comparison.The thermal and physical properties of the silica gel support before and after its impregnation with 10% of DETA were characterized by thermal gravimetric analysis.About 35 mg of sample was heated under an inert atmosphere of N2 up to 600
°C with a heating rate of 10 °C/min.The capacity and kinetics of CO2 adsorption of the silica gel before and after impregnation with DETA were measured with the thermogravimetric analyser.Pure CO2 was used for the adsorption step, whereas pure N2 was used as a purging gas for CO2 desorption.In a typical experiment, 30 mg of adsorbent was placed in the crucible and heated up to 100 °C in a N2 stream for 60 min to remove the CO2 adsorbed from the air as well as all moisture.Afterwards the temperature was decreased to 25 °C and maintained for 120 min before changing the gas to CO2.The adsorbents were then exposed to CO2 for 250 min.A temperature-programmed CO2 adsorption test was then conducted with a slow heating rate of 0.5 °C/min from 25 °C to 100 °C.The CO2 capture capacity at 25 °C was calculated from the weight gained after exposure to CO2, and was expressed in mg of CO2 per g of sorbent.The variation of capture capacity with temperature was obtained from the temperature-programmed CO2 adsorption step.The cyclic adsorption capacity was also evaluated at 60 °C by means of 4 consecutives adsorption/desorption cycles in the TGA.Prior to the first cycle, samples were heated up to 100 °C in a N2 stream for 60 min.Afterwards the temperature was cooled down to the desired temperature and kept within the N2 stream for 120 min before the change to CO2 for another 120 min.The regeneration step was run by simple flushing of N2 and the temperature was maintained constant at 60 °C.The total-cycle time was set at 240 min.The CO2 capture capacity for each cycle was calculated from the mass gained by the samples after they were exposed to the pure CO2 stream for 120 min.The initial mass was considered as the mass after the drying step for all the cycles.Adsorption kinetic data of the prepared adsorbents is essential to understand the overall mass transfer of CO2, as it will directly influence the duration of the adsorption process, the adsorber size requirements, consequently, the capital costs of the carbon capture unit.There is a wide variety of kinetic models with different degree of complexity in literature .The most common approach to use these models is to fit the experimental data to the conventional kinetic models, and then select the model with the best fit.In this study, two of the most common theoretical kinetic models were employed to interpret the interactions between adsorbate and adsorbents, as well as the adsorption rate performance.Intraparticle diffusion model was also employed to explore the CO2 diffusion along the treated and non-treated amorphous silica support.The fitting of the models was done by using the adsorption data which is the initial stage of the curves in Fig. 2.After the 70 s, the temperature increase and the desorption procedure started.For the calculation of the rate constants, k1 and k2, the experimental qe value was considered when fitting the models to the experimental kinetic data with the objective of getting the most realistic rate constant values.To complete the kinetic study of the adsorbent support prior and after the amine impregnation, CO2 intraparticle diffusion during adsorption has been evaluated by employing the intraparticle diffusion model, proposed by Weber and Morris .To apply the model, qt was plotted against t1/2 to get the straight line .In this case, the multi-linearity in qt vs. 
t1/2 was observed.During the first step the instantaneous adsorption occurs on the external surface; the second step is when the gradual adsorption takes place, where intraparticle diffusion is controlled; and the third step is the final equilibrium step, where the adsorbate moves slowly from larger pores to micropores causing a slow adsorption rate.Accordingly, Ri can be expressed as the ratio of the initial CO2 adsorbed amount to the final adsorbed amount.The implication of the obtained initial adsorption behaviour will be analysed and described along with the characteristics curves based on the IPD model in Section 4.3.1.In this work the second step of adsorption has been evaluated by the IPD model to further study the control of intraparticle diffusion on the CO2 adsorption in relation with the amine loaded on the silica support.The goodness of fit of the IPD model with the experimental results was evaluated using the nonlinear coefficient of determination.Thermal stability and DETA loading were studied from the weight loss of the sorbents measured during the temperature-programmed desorption experiments within the TGA.The evolution of the mass loss and the first derivative of the mass loss versus time and temperature are displayed in Fig. 2a and b respectively, for both the original silica gel and the impregnated FS-DETA-10%.For the pure silica gel support, the weight decrease started at 425 °C, and the total mass loss was gradual.In the case of FS-DETA-10%, the TG and DTG curves clearly show the existence of four thermal degradation regions: 48–53 °C, 60–100 °C, 130–425 °C, and 425–600 °C.The first region was detected in the first 2 min of the experiment.The second region was detected between the third minute and the 10th minute, with the maximum DTG observed at 67 °C after 4 min from the beginning of the experiment.The mass losses at a temperature up to 100 °C, which corresponds to the regions and, is attributed to the desorption of moisture and CO2 previously adsorbed from the ambient, and to the evaporation of the solvent trapped in the pores during the wet impregnation procedure.The third region of mass loss was detected with a sharp peak in the rate of weight loss in the temperature range from 130 °C to 425 °C, and a maximum velocity of mass loss of 0.84 % min−1 was detected at 235 °C.The mass loss in the region is attributed to the evaporation of the amine.After this maximum, further increase in temperature resulted in the loss of the remaining amine.Finally, the last region of mass loss was observed from 425 °C to 600 °C and is attributed to degradation of the silica gel.The total mass loss observed at 600 °C for FS-DETA-10% was 10.7%, discounting already the weight loss due to moisture and methanol in the previous regions at temperature below 130 °C and the total mass loss due to the degradation of the pure silica FS at 600 °C.Therefore, this mass lost is almost equal to the amount of amine loaded into the silica gel.This confirms that the wet impregnation procedure is sound.The thermal stability exhibited by the FS-DETA-10% is in agreement with the results found by Zhao et al. for DETA-impregnated TiO2 particles .Zhao et al. reported that the molecular weight of the amine used affects the strength of interaction between the amine compounds and the support material, and hence amines with higher molecular weight evaporate at higher temperatures .Considering the boiling point of the bulk DETA, it can be observed from Fig. 
2a and b that the thermal stability of the amine is increased when it is impregnated in the solid support material.This occurs because DETA molecules strongly adhere to the porous surface of the silica gel, by either van der Waals forces or hydrogen bonding.Accordingly, it can be concluded that the studied DETA-impregnated silica gel displayed good thermal stability below 130 °C.Nitrogen adsorption-desorption isotherms for silica gel before and after its impregnation with different DETA loadings are shown in Fig. 3.The corresponding surface area, total pore volume, micropore volume, and average pore diameter are summarised in Table 2.Figs. 4a and 4b show the pore size distribution and the cumulative pore volume of FS, before and after the impregnation, calculated by the DFT method.Fig. 4c displays the PSD calculated by the DA method.Fig. 5 displays the BET surface area, total pore volume, and micropore volume as function of DETA loading.The adsorption-desorption isotherms present a loop when P/P0 > 0.4, and correspond to the type IV isotherm in the IUPAC classification.This type of isotherm is a typical characteristic of mesoporous solids, and the hysteresis loop is associated with capillary condensation in the larger mesopores.The total amount of N2 adsorbed decreases with increasing amine loading into the silica gel support.The PSD presented in Fig. 4a shows that the first peak of sample FS corresponding to micropores is considerably reduced after impregnation with 10 wt.% of DETA.The PSD becomes narrower and centred at larger average pore size diameter after impregnation because the amine preferentially fills micropores after impregnation.At higher amine loadings the micropores are completely filled and also the fraction of filled mesopores increases.Consequently, there is a significant reduction in pore volume and surface area as amine loading increases, but the adsorption-desorption isotherm type is preserved as type IV for all the FS-DETA sorbents.Thus, the available pore volume, Vp, and surface area were reduced from 0.52 cm3/g to 604 m2 g−1 respectively to a minimum of 0.06 cm3 g−1 and 27 m2 g−1 respectively after impregnation with 80% DETA, which indicates that most of the original silica support pores were blocked with amine.Conversely, the considerable remaining porosity of FS-DETA-10% could be beneficial for CO2 diffusion and the adsorption-desorption process.The pore volume decrease observed in Table 2 from the values estimated by the Gurvich’s rule is in agreement to the cumulative pore volume estimated by the DFT method.In this section we evaluate the effect of DETA loading and temperature on the CO2 capture capacity, the kinetics of CO2 adsorption, and the cyclic performance of the FS-DETA series.Fast adsorption and desorption kinetics are desired for CO2 sorbents since a faster cycle time leads to smaller equipment and lower capital costs.Fig. 
6 shows the adsorption kinetics of CO2 onto silica gel before and after its impregnation with different DETA loadings measured at 25 °C.It can be observed that CO2 adsorption displays two different stages.Once the sorbents were exposed to the pure CO2 stream, a sharp weight gain occurred in less than 1 min, in which impregnated samples reached around 70% of their capacity.This fast process is then followed by a much slower adsorption process over the remaining 249 min in which the CO2 uptake increased to the maximum observed.This two-stage adsorption process showed similar trends in all the amine impregnated samples.For FS, CO2 adsorption reached the maximum quickly, and the second stage was not obvious.The adsorption kinetics of samples prepared in this work, i.e. FS-DETA-10%, which reached 70% of its capacity after 47 s, is comparable with that reported for some other adsorbents in literature, but it is several times faster than many others, like the impregnated MCM-41-PEI-50%, which reached 70% of its capacity after 5.7 min , or the surfactant-promoted hierarchical porous silica monoliths which completed the first stage in about 5 min .Table 4 summarises the time taken to reach 50%, 70%, 80% and 90% of the maximum adsorption capacity for the most common PEI-loaded support materials, and the materials prepared in this work.Table 3 shows the time required for the regeneration of CO2 in DETA-FS samples by flushing a N2 stream at 60 °C.The 80% of the CO2 was desorbed after 8–9 min flushing N2 on FS-DETA-10%.The rapid adsorption kinetics found in the first stage as well as the quick regeneration at relatively low temperature of the samples tested in this work, are considerable advantages for their practical applications in short adsorption-desorption cycles.To study the kinetic of CO2 adsorption, we have used three kinetic models to explain experimental results.Fig. 
8 presents the experimental CO2 uptake as a function of time for the amine-impregnated and non-impregnated silica gel support at 25 °C, as determined using the gravimetric method, along with the corresponding profiles predicted by the pseudo-first- and pseudo-second-order kinetic models. Table 5 summarises the values of the kinetic model parameters and the associated errors calculated by Eq. Table 5 reveals that the kinetic rate constants for both models, k1 and k2, varied with the DETA loading. The favourable adsorption kinetics observed with the pseudo-first-order model for the 10% DETA loading can be attributed to faster diffusion of CO2 molecules inside the pores, explained by the higher selectivity and chemical attraction towards the functionalised surface of the silica gel support. The increase in the mass transfer coefficient is reflected not only in the faster kinetic rate constant, k1, but also in the sharper slope of the kinetic curves. However, for DETA loadings of 20% and 30%, the kinetic rate constant, k1, decreased. This is attributed to amine blockage of a fraction of the pores. Finally, for a DETA loading of 40%, the highest kinetic rate constant, k1 = 9.06E−02 s−1, was obtained, which indicates that the micropores were completely filled with DETA. The pseudo-first-order kinetic model fitted the data well over the entire adsorption period considered. In contrast, the pseudo-second-order kinetic model overestimated the CO2 uptake in the initial stage but underestimated it in the flatter region. As a consequence, lower errors were observed for the pseudo-first-order kinetic model than for the pseudo-second-order kinetic model. Results shown in the Supplementary information indicate that the equilibrium adsorption capacities predicted by the pseudo-second-order kinetic model are overestimated relative to the experimental values, but are in better agreement with them than those estimated by the pseudo-first-order kinetic model, which underestimated the equilibrium adsorption capacities. Furthermore, better fittings were obtained for the pseudo-second-order kinetic model, based on its higher R2 values compared to those obtained for the pseudo-first-order kinetic model. These results indicate that when the experimental data were fitted with the models over the entire time range, and the equilibrium adsorption capacity was estimated along with the rate constant by using Solver, the pseudo-second-order kinetic model gave better estimations. This contrasts with the better fit of the pseudo-first-order kinetic model obtained when only the first time interval was fitted using the experimental qe values. Briefly, when the experimental data are considered at the very first stage of the adsorption process, the model that best describes them is the pseudo-first-order kinetic model. However, when the entire time range of the experimental adsorption process is modelled, the pseudo-second-order kinetic model better describes the experimental results, indicating that the adsorption process is controlled by chemical interactions at longer exposure times of the samples to CO2. The linearized plot of qt vs. t1/2 based on the IPD model for the adsorption of CO2 on the silica support before and after its impregnation is shown in Fig. 9.
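To make the kinetic analysis concrete, the following minimal sketch (Python, assuming NumPy and SciPy are available) shows how the pseudo-first-order, pseudo-second-order and Weber-Morris intraparticle diffusion fits discussed here could be reproduced from a TGA-derived uptake curve qt(t). The synthetic uptake data, the stage-2 cut-off and the form of the Ri expression (Ri = 1 − C/qe, the initial-adsorption-factor convention consistent with the zone classification discussed in the next paragraph) are illustrative assumptions, not the authors' actual processing code.

```python
# Minimal sketch: fitting pseudo-first-order (PFO), pseudo-second-order (PSO)
# and Weber-Morris intraparticle diffusion (IPD) models to a CO2 uptake curve.
# qt is the uptake (mg CO2 / g sorbent) at time t (s); qe_exp is the
# experimental equilibrium uptake, kept fixed when extracting k1 and k2,
# as described in the text.
import numpy as np
from scipy.optimize import curve_fit

# Illustrative uptake data (replace with the TGA-derived qt(t) trace).
t = np.linspace(1, 70, 70)                      # s, initial adsorption stage
qe_exp = 45.0                                   # mg/g, experimental equilibrium uptake
qt = qe_exp * (1.0 - np.exp(-0.08 * t)) + np.random.normal(0, 0.3, t.size)

def pfo(t, k1):
    """Pseudo-first-order: qt = qe * (1 - exp(-k1 t)), qe fixed to experiment."""
    return qe_exp * (1.0 - np.exp(-k1 * t))

def pso(t, k2):
    """Pseudo-second-order: qt = qe^2 k2 t / (1 + qe k2 t), qe fixed to experiment."""
    return qe_exp**2 * k2 * t / (1.0 + qe_exp * k2 * t)

def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

(k1,), _ = curve_fit(pfo, t, qt, p0=[0.05])
(k2,), _ = curve_fit(pso, t, qt, p0=[1e-3])
print(f"k1 = {k1:.3e} 1/s, R2 = {r_squared(qt, pfo(t, k1)):.4f}")
print(f"k2 = {k2:.3e} g/(mg s), R2 = {r_squared(qt, pso(t, k2)):.4f}")

# Weber-Morris IPD model fitted to the gradual (second) adsorption stage:
# qt = kd * sqrt(t) + C, with Ri = 1 - C/qe taken as the initial adsorption factor.
stage2 = t > 10                                 # illustrative cut-off for stage 2
kd, C = np.polyfit(np.sqrt(t[stage2]), qt[stage2], 1)
Ri = 1.0 - C / qe_exp
print(f"kd = {kd:.3f} mg/(g s^0.5), intercept C = {C:.2f} mg/g, Ri = {Ri:.2f}")
```

In this sketch the rate constants are obtained with qe fixed to its experimental value, mirroring the fitting strategy described above, while the IPD slope and intercept come from a simple linear fit of qt against t1/2 over the diffusion-controlled stage.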
The kinetic parameters of the IPD model are summarised in Table 6. The model has been fitted to the second stage of the adsorption process, which is characterized by gradual adsorption under intraparticle diffusion control. None of the straight lines pass through the origin; all present a positive intercept. This is because of the existence of instantaneous adsorption, which implies that some amount of CO2 was adsorbed onto the exterior surface of the adsorbent within a short period of time. The intercept increases for the least impregnated sample, FS-DETA-10%, compared with the non-impregnated silica support, FS. However, it can be seen that the initial adsorption decreases with increasing DETA loading beyond this point. The sample with the lowest impregnation loading has the largest initial adsorption, and the sample impregnated with the highest percentage of amine, FS-DETA-40%, exhibits the lowest initial adsorption value. The slopes obtained for the DETA-impregnated samples are higher than the slope obtained for the original support, FS. The slopes of FS-DETA-20% and FS-DETA-30% are close, revealing very similar rates of IPD for the adsorption of CO2 molecules. This observation is in agreement with the values obtained for the IPD rate constant, kd. To interpret the Ri values obtained from the IPD model, the classification proposed by Feng-Chin et al. has been considered in this work. Ri is classified into four different zones depending on its value: 1 > Ri > 0.9 is called weak initial adsorption; 0.9 > Ri > 0.5, intermediate initial adsorption; 0.5 > Ri > 0.1, strong initial adsorption; and Ri < 0.1, approaching complete initial adsorption. The Ri values obtained for the samples prepared in this work correspond to zone 3 for FS and FS-DETA-10%, and to zone 2 for the samples with higher amine loadings, i.e. FS-DETA-20%, FS-DETA-30% and FS-DETA-40%. A lower impregnation percentage gives smaller Ri values, meaning a stronger initial adsorption behaviour. In other words, adding a small amount of DETA (10%) increases the initial CO2 adsorption relative to the bare support because the amine covers the walls of the pores without blocking them. Loadings from 20% up to 40% result in a lower initial adsorption capacity, or a higher Ri value, which indicates some intraparticle diffusion limitation that can be attributed to pore blocking by the amine. Adsorption in these systems occurs under intraparticle diffusion control during almost the whole process. In contrast, for FS and FS-DETA-10% the kinetics of adsorption are controlled by adsorption on the external surface, which is accelerated by the stronger attraction of the CO2 molecules towards the amine groups and by the still-open pore channels through which the gas molecules travel. In practice, the IPD model is suitable for describing the present experimental data, as the correlation coefficient ranges from 0.9629 to 0.9857. The CO2 capture tests for the FS-DETA series showed that, in general, DETA impregnation can significantly increase the CO2 adsorption capacity. The addition of 10 wt.% of DETA to the original silica resulted in a 65% increase of the CO2 adsorption capacity at 25 °C. However, the CO2 adsorption capacity decreases with increasing DETA loadings beyond 10%. This is in agreement with the results reported previously for PEI-containing samples. It is useful to compare with other support materials used for wet impregnation with amines. We see that the uptake of FS-DETA-10% is higher than the capacity of Si-MCM-41-PEI-50% reported previously by Xu et
al. at the same conditions.In that work the CO2 adsorption capacity of the original silica MCM-41 was increased by 5.6 mg-CO2/g-ads after the impregnation with 50 wt.% of PEI.In the present work, the quantity of amine added was much lower, 10 wt.%, and the CO2 capture capacity increment with respect to the support material was nearly three times higher.Accordingly, the amine efficiency in our prepared materials was much higher than that exhibited by the Si-MCM-41-PEI-50% under the same conditions.Our results are also in line with those of Zhao et al. who impregnated SBA-15 with 10, 30 and 50 wt.% of DETA.They also obtained a lower CO2 capture capacity for SBA-15-30% than for SBA-15-10% at 30 °C for a mixture of 10% of CO2 in N2 at 1 atm .This particular result is somewhat better than ours presented here given the lower partial pressure of CO2 in their work, but the cost of the silica supports must also be taken into account.The CO2 adsorption performance of the impregnated materials at different temperatures gives essential information about the best temperature at which the sorbents should be used for CO2 capture.As the reaction of amines with CO2 is exothermic , an increase in temperature could reduce the capture capacity of the impregnated samples.Fig. 10 shows the CO2 capture capacity with respect to temperature, in the range from 25 °C to 100 °C.We find the CO2 capture capacity decreases with increasing temperature for all the studied samples, but there are small differences in the series.For instance, in the case of FS and FS-DETA-10%, the reduction of CO2 uptake with temperature is faster than that obtained for FS-DETA-20% and FS-DETA-40%.The profile curves of FS-DETA-10% and FS-DETA 40% cross at 78 °C.In order to avoid high energy penalty costs, apart from a high CO2 capture capacity and a fast adsorption-desorption kinetics, CO2 sorbents should be stable and regenerated at low temperature.In this section, the stability and cyclic behaviour of FS-DETA-10% and FS-DETA-40% were investigated via four consecutives CO2 adsorption-desorption cycles, during 850 min at 60 °C.The adsorption temperature was selected at 60 °C since it is the most common temperature used in chemical absorption processes using aqueous alkanolamines.The carbon capture stability during the 4 cycles can be observed in Fig. 11, and the calculated CO2 capture capacities during each cycle for FS-DETA-10% and FS-DETA-40% at 60 °C are shown in Fig. 12.The CO2 capture capacity remained substantially constant after four cycles, with a relatively insignificant drop of 0.36% between the first and the fourth cycle in the case of FS-DETA-10%.However, a drop of 4.9% in the CO2 capture capacity between the first and the fourth cycle was observed for FS-DETA-40%.The drop exhibited by FS-DETA-40% could be due to imperfect regeneration, or possibly leaching of DETA from the support material during the adsorption or regeneration processes.For comparison, a drop of 4% in the carbon capture capacity after 10 cycles was considered acceptable for MC-PEI-65% by Jitong and Wang , whereas a drop of 7% in the carbon capture capacity observed on 50-TEPA-TiO2-Based composite sorbents under 10% CO2/N2 at 75 °C was associated with the continuous volatilisation of the impregnated TEPA during the first 4 cycles .Regeneration of FS-DETA-10% also occurs more quickly than for FS-DETA-40%.From Fig. 
11 and Table 3 we see that 50% of the adsorbed CO2 was rapidly desorbed from FS-DETA-10% after purging for 1.2 min, and desorbed from FS-DETA-40% after purging for 2.5 min.Moreover, 80% of the CO2 was released from FS-DETA-10% after 8 min of purging, and was released from FS-DETA-40% after 27 min of purging.Essentially, desorption rates peak during the first minute and a slow tail is displayed afterwards for both samples.This observation agrees with the influence of temperature on the rate of CO2 uptake observed in the CO2 temperature-programmed adsorption experiments explained in Section 4.3.2.Table 3 shows the working capacity calculated by the difference of CO2 capture at the 120-minute adsorption and desorption steps.It is almost constant for each cycle for FS-DETA-10%, while a slight decrease with each cycle, in the region of 1.5% per cycle, is observed for FS-DETA-40%.Once again, this gradual decrease could be caused by incomplete regeneration of the sorbent after each cycle or by leaching of amine from the sorbent.In Table 1 can be observed that modified as-prepared mesoporous silica SBA-15) with 50% TEPA loaded leads to a remarkably high adsorption capacity for this mesoporous silica–amine composite.Even higher capacities are found for TEPA-impregnated As-MCM-41-TEPA-60%, mesoporous silica capsules MC400/10-TEPA-83% at 75 °C and 1 bar CO2 partial pressure, and TEPA/DEA-impregnated SBA-15 at 75 °C and 0.1 bar CO2 partial pressure .However, MC400/10-TEPA-83% lead to a drop of 60% of the CO2 capacity after 50 cycles , and TiO2-TEPA-50% lead to the highest drop observed in the table after 4 TSA cycles .Consequently, stability of materials impregnated with TEPA is the biggest problem for these high-capacity materials.In contrast, the lowest CO2 capture capacity drop after 4 cycles was observed with Si-MCM-41-PEI-50% and with FS-DETA-10% prepared in this work.Accordingly, DETA-impregnated amorphous silica gel prepared in this study shows very good stability and it can be compared with the support materials impregnated with the most stable amine.The capture capacity exhibited by the materials prepared in this study is similar to that obtained by DETA-impregnated activated alumina, TiO2-TEPA-10%, and activated carbon AC-TEPA-10%, but higher than that displayed by Si-MCM-41-PEI-50% at 25 °C, and the activated carbons: AC-TEA, AC-AMP, AC-PEI-50%, AC-PEI-30% and AC-PEI-25%.Although the capture capacity for the materials prepared in this study is not as high as for the structured silicas, FS-DETA-10% and FS-DETA-40% present very easy and fast regeneration of CO2 after 10 min flushing N2 without an increase of the temperature.The amount of CO2 desorbed after 10 min flushing N2 at 60 °C for FS-DETA-10% and FS-DETA-40% is higher than that obtained for Si-MCM-41-PEI-30% by passing N2 at 100 °C, Cariat G10 Silica-PEI-67% at 60 °C, KIT-6-PEI-50% at 50 °C and zeolite 13X at 75 °C.Regeneration of impregnated silicas prepared in this work can be achieved in a matter of minutes: i.e. 
2 min were required to desorb the 80% of the CO2 from FS-DETA-10% at low temperature.Among the variety of amines tested on commercial low cost supports, DETA-impregnated activated carbon and aluminium oxide showed the highest capture capacity of the series .In the current study, FS-DETA-10% shows similar capacity to DETA-impregnated alumina loaded with 40%, but slightly lower than activated carbon loaded with 27% DETA.However, the drop in capacity measured on the DETA-impregnated activated carbon resulting after 3 vacuum swing adsorption cycles is high, and not comparable at all with the drop in capacity after TSA cycles presented in Table 1.Briefly, the low cost silica used in this work present similar CO2 capture capacity as some low cost supports such as commercial activated carbons, activated aluminas and silicas impregnated with low molecular weight amines.Moreover, the stability and regenerability displayed by the FS-DETA series seems to be reasonably favourable.Here we present a consistent interpretation of all these results.We should first mention that the FS material used here is expected to consist of packed and fused silica nanospheres, in the region of 5 nm in diameter.This type of structure leads naturally to the appearance of micropores, formed by the ‘wedges’ between adjacent spheres, and mesopores that represent longer range packing structure.Also, it is known that the reaction of amines with CO2 in the absence of water leads to the formation of carbamate and hydronium ion pairs.Due to their ionic nature, these reaction products are likely to be more viscous, and perhaps even solid, at high conversions relative to the reactants.A widely accepted mechanism for wet impregnation of silica by amines is that amines diffuse into the pore space of the solid support and interact with hydroxyls on the silica surface, establishing hydrogen bonds.This presents as strong surface adsorption, leading to the preferential filling of micropores with amine at low loadings.Larger pores are gradually filled with increased amine loading.This is entirely consistent with the PSDs observed in Fig. 4a.Due to the strong amine-silica interaction, and the cooperative effect of adjacent pore walls in micropores, the amine in these micropores is relatively stable with very low vapour pressure and evaporation from the pore.Thus, leaching from micropores at low amine loadings is much reduced compared to the rate of leaching from mesopores at higher amine loadings.However, it is also important to consider pore connectivity and pore accessibility.When only micropores are filled, i.e. 
when the wedges between silica nanospheres are filled, there is little effect on the pore connectivity or accessibility.However, as the proportion of filled mesopores increases, more and more of the pore network becomes blocked by amine and inaccessible to gases with poor diffusion through the amine.When CO2 is introduced into the material, it reacts with surface layers of amine leading to the production of more viscous reaction products.The diffusivity of CO2 can be assumed to be reduced in these layers, leading to a slowing down of kinetics.Therefore, as the loading of amine increases, less and less pore space is effectively available to CO2 due to kinetic restrictions.In principle, the equilibrium adsorption of CO2 should increase with increasing loading of amine.However, the reduced availability of pores and worsening kinetics prevent this.Consequently, two kinetic processes with very different timescales are observed for CO2 adsorption and desorption.CO2 absorption occurs quickly into the outer surface of amine leading to the formation of viscous reaction products.The topology of this surface coarsens with increased amine loading, leading to reduced CO2 uptakes at short times with increased amine loading.The second process is slow diffusion of CO2 or carbamate through this surface to the less accessible regions of the pore network.The same processes operate in reverse during desorption.This kind of two-stage adsorption-desorption process has been reported before in other impregnated sorbents .At higher temperatures two key effects combine to produce the observed results.First, higher temperatures lead to lower equilibrium CO2 capture due to the exothermic nature of the reaction of amines with CO2.However, there is improved diffusion of CO2 through DETA-filled pores because of the lower DETA viscosity at higher temperature.Consequently, the CO2 capacity appears to decrease more slowly with increasing temperature for higher amine loadings.The carbon capture characteristics of low cost diethylenetriamine impregnated amorphous silica gel has been reported in this work for the first time.The composite material was found to be stable at temperatures below 130 °C.The CO2 capture capacity was enhanced by 65% after impregnation of the amorphous silica support with 10 wt.% of DETA.Reduced performance in every respect was found at higher loadings.We interpret this reduced performance in terms of increased pore-blocking by DETA with increasing DETA loading.Although low molecular weight amines have previously been reported as unsuitable for post-combustion CO2 capture applications due to poor stability, this work has demonstrated that a low cost silica gel impregnated with 10 wt.% of DETA displays fast kinetics and relatively stable cyclic CO2 adsorption/desorption performance, at least over 4 cycles.Additionally this work has demonstrated that regeneration of the FS-DETA-10% is easily and rapidly reached by only flushing with N2, with no need to increase the temperature.Nevertheless a further study needs to be done in order to optimise the regeneration conditions for the best cyclic performance.In conclusion, fast kinetics and the cyclic stability make DETA impregnated silica gel a promising candidate in cyclic CO2 capture processes.The short cycle time and low regeneration temperature, which is significantly lower than the temperature typically used to regenerate solid amine sorbents, normally in the range of 100–140 °C , imply potential energy savings.Additionally, the advantage of the wet 
impregnation procedure over amine grafting is its simple route of preparation, i.e. the complex and expensive multi-step preparation of grafted amines is avoided here. Moreover, the estimated total cost of preparation of these materials could be much lower due to the low cost of the materials used. All of these results are very promising and make these DETA-impregnated silica gels candidates for further study for their potential application in cyclic adsorption-desorption post-combustion CO2 capture processes to serve power and industry. | This work presents an economic and simple method to manufacture low-cost but effective adsorbents for CO2 capture by impregnating DETA onto a low-cost, porous silica gel. The results demonstrate that the low-cost silica gel impregnated with the low-molecular-weight amine is stable and works well at temperatures up to 130 °C. The developed adsorbent has fast adsorption kinetics and can be regenerated at a low temperature. This will significantly reduce the energy used to desorb CO2, and therefore the energy penalty. The effect of amine loading on the textural properties, thermal stability, and CO2 capture performance of the impregnated silica gel is also reported in this paper. A 10% amine loading gives the best porosity, stability and the highest adsorption capacity. |
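As a closing illustration for this adsorbent study, the sketch below (Python/NumPy) shows one way the per-cycle capture capacity, working capacity and cycle-to-cycle capacity drop reported above could be computed from a TGA mass trace, following the definitions given earlier (capture capacity from the mass gained after the 120-min CO2 step relative to the dry mass; working capacity from the difference between the adsorption and desorption steps). All numbers, names and step boundaries are hypothetical placeholders rather than measured data or the authors' code.

```python
# Minimal sketch: per-cycle CO2 capture and working capacities from a TGA trace.
# Assumes mass (mg) sampled at known times (min) and known step boundaries:
# end of drying, then for each cycle an adsorption end and a desorption end.
import numpy as np

def cycle_capacities(time_min, mass_mg, t_dry_end, cycle_ends):
    """cycle_ends: list of (t_adsorption_end, t_desorption_end) in minutes.
    Returns (capture, working) capacities in mg CO2 per g of dry sorbent."""
    m0 = np.interp(t_dry_end, time_min, mass_mg)         # dry sorbent mass
    results = []
    for t_ads, t_des in cycle_ends:
        m_ads = np.interp(t_ads, time_min, mass_mg)
        m_des = np.interp(t_des, time_min, mass_mg)
        capture = (m_ads - m0) / m0 * 1000.0              # mg/g gained in CO2 step
        working = (m_ads - m_des) / m0 * 1000.0           # mg/g released by N2 purge
        results.append((capture, working))
    return results

# Illustrative numbers only (not measured data):
time_min = np.array([0, 60, 180, 300, 420, 540, 660, 780, 900], float)
mass_mg  = np.array([30.0, 29.2, 29.2, 30.3, 29.3, 30.28, 29.3, 30.27, 29.31])
caps = cycle_capacities(time_min, mass_mg, t_dry_end=60,
                        cycle_ends=[(300, 420), (540, 660), (780, 900)])
drop = (caps[0][0] - caps[-1][0]) / caps[0][0] * 100.0
print(caps, f"capacity drop over the cycles: {drop:.1f}%")
```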
595 | High-resolution 3D MR Fingerprinting using parallel imaging and deep learning | MR Fingerprinting is a relatively new imaging framework for MR acquisition, reconstruction, and analysis, which can provide rapid, efficient and simultaneous quantification of multiple tissue properties from a single acquisition.Compared to conventional MR imaging approaches, MRF uses pseudorandomized acquisition parameters to generate unique signal signatures for different tissue types and retrieve quantitative tissue parameters using a template matching algorithm.Since its introduction in 2013, this technique has been successfully applied for quantitative imaging of multiple human organs including the brain, abdominal organs, heart, and breast.The quantitative tissue properties obtained using MRF, such as T1 and T2 relaxation times, have been demonstrated to provide new insights into improved tissue characterization and disease diagnosis.Specifically for brain imaging, MRF has been applied for longitudinal characterization of brain development in early childhood, differentiation of brain tumor types, and improved detection and diagnosis of epileptic lesions.While previous studies have paved the way for clinical applications of MRF, to broadly adapt MRF into clinical examinations, MRF must meet several prerequisites, including whole-organ coverage, a sufficiently high resolution and a reasonable data acquisition time.To this end, significant efforts have been made to extend the original 2D MRF approaches to 3D imaging.The extension to 3D MRF can potentially provide a higher spatial resolution and better tissue characterization with an inherently higher signal-to-noise ratio, which is favorable for pediatric neuroimaging and clinical diagnosis of small lesions in the brain.For 2D MRF acquisitions, thousands of MRF time frames are typically acquired for tissue characterization and each time frame/image is acquired with only one spiral interleaf, which is highly undersampled.To maintain the high scan efficiency in 3D MRF, only one spiral arm is acquired for each partition in a 3D image and the same spiral arm is acquired along the partition direction within the volume.While MRF has demonstrated high efficiency compared to conventional quantitative imaging approaches, 3D MRF with a high spatial resolution still requires lengthy acquisition times, especially for whole-brain coverage, making it impractical for clinical applications.Therefore, multiple algorithms have been developed for accelerated 3D MRF acquisitions.Since MRF is already highly undersampled in-plane with only one spiral readout, current approaches to accelerate 3D MRF have been focused on 1) acceleration along the partition direction and 2) reduction of MRF time frames for tissue characterization.One recent study utilized an interleaved sampling pattern along the partition-encoding direction to uniformly undersample the MRF dataset.With a reduction factor of 3, whole-brain coverage was achieved with a spatial resolution of 1.2 × 1.2 × 3 mm3 in less than 5 min.Cartesian GRAPPA combined with sliding window reconstruction has also been developed to reconstruct uniformly undersampled 3D MRF datasets.Liao et al. 
demonstrated an improved accuracy as compared to the aforementioned interleaved sampling approach and a whole-brain scan with a spatial resolution of 1.2 × 1.2 × 2 mm3 was achieved in ~3.5 min.Acceleration of 3D MRF with higher spatial resolutions poses more technical challenges due to reduced SNR and has not been extensively explored and validated.One important feature of MRF compared to most other quantitative MRI methods is the utilization of template matching for tissue characterization.However, this approach is relatively slow and requires a large amount of memory to store both the image dataset and the MRF dictionary.More importantly, it relies on a comparison of global appearances of signal evolution from each pixel and hence does not take full advantage of the useful information acquired in MRF.Consequently, more than 1000 time points are typically required for accurate tissue characterization using template matching, which prolongs the MRF acquisition.Advanced post-processing methods, capable of extracting valuable information in local regions of each signal evolution and measuring from neighboring pixels, can largely improve the performance of MRF in post-processing and therefore, reduce scan time.Deep learning is an ideal solution for information retrieval from MRF measurements.Recent developments in machine learning have indicated that it is possible to accelerate 2D MRF acquisition using deep learning neural networks.For example, Cohen et al. have developed a 4-layer neural network to extract tissue properties from a fully-sampled MRF dataset and the results demonstrated a 300–5000 fold improvement in processing speed.Hoppe et al. have developed a convolutional neural network to exploit the correlation in the temporal domain and their results demonstrated that accurate tissue mapping can be achieved from MRF images with undersampled aliasing artifacts.To further utilize correlated information in the spatial domain, Fang et al. 
proposed a deep learning model with two modules, a feature extraction module and a spatially-constrained quantification module, to improve the performance of tissue characterization. Instead of relying on information from a single pixel, a patch of 64 × 64 pixels was used as the input for convolutional neural network training, and experimental results with a highly undersampled dataset demonstrated that accurate T1 and T2 quantification can be achieved with an up to 8-fold acceleration in scan time. This suggests that more advanced features can be extracted from MRF measurements using a CNN. Currently, these methods have been applied with 2D MRF approaches, and their utility for 3D MRF remains to be investigated. Compared to 2D MRF, the application of deep learning to 3D MRF faces more technical challenges in acquiring the training dataset for the CNN model. An appropriate training dataset needs to provide 1) CNN model input, containing MRF signal evolutions with a reduced number of time frames, which mimics the real accelerated scans, and 2) ground-truth tissue property maps with decent image quality and no relative motion with respect to the input MRF signal. While retrospective data undersampling can be used to achieve this goal for 2D MRF, this approach is not generally applicable for 3D MRF acquisitions. This study aimed to combine state-of-the-art parallel imaging and deep learning techniques to develop a rapid 3D MRF method capable of achieving 1-mm isotropic resolution and whole-brain coverage within a clinically feasible time window. A new 3D MRF sequence specifically designed to acquire the training dataset for 3D acquisitions was developed. Our results demonstrate that the trained CNN model can be applied to obtain high-resolution T1 and T2 maps from prospectively accelerated 3D MRF acquisitions. All MRI experiments in this study were performed on a Siemens 3T Prisma scanner with a 32-channel head coil. A 3D MRF sequence based on the steady-state free precession readout was used for volumetric brain MRF with 1-mm isotropic resolution. Compared to the original 2D MRF method, a linear slice-encoding gradient was applied for volumetric encoding and the 3D MRF dataset was acquired sequentially through partitions. The data acquisition for each partition was performed with pseudorandomized flip angles and a highly undersampled spiral readout, similar to the standard 2D MRF. The same acquisition parameters, such as the flip angle pattern and in-plane spiral readouts, were repeated for each partition in the 3D sampling scan, and a constant waiting time of 2 s was applied between partitions for longitudinal recovery. Other imaging parameters included: FOV, 25 cm; matrix size, 256; TR, 9.1 ms; TE, 1.3 ms; flip angle range, . A total of 768 time frames were acquired to generate the MRF signal evolution, and the acquisition time including the 2-s waiting time was ~17 s per partition. Similar to the standard MRF method, the acquired MRF signal evolution from each voxel was matched to a pre-defined MRF dictionary to extract quantitative T1 and T2 relaxation times. The MRF dictionary was generated using Bloch equation simulations with the actual acquisition parameters. A total of ~23,000 entries are contained in the MRF dictionary, which covers a wide range of T1 and T2 values. The step sizes of T1 in the dictionary were 10 ms between 60 and 2000 ms, 20 ms between 2000 and 3000 ms, 50 ms between 3000 and 3500 ms, and 500 ms between 4000 and 5000 ms. The step sizes of T2 were 2 ms between 10 and 100 ms, 5 ms between 100 and 200 ms, and 10 ms between 200 and 500 ms.
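A minimal sketch of how a dictionary grid with these step sizes can be assembled, and how dot-product template matching selects the best entry, is given below (Python/NumPy). The signal model shown is only a placeholder: the actual dictionary is generated with Bloch equation simulations of the FISP-MRF acquisition, and the restriction to pairs with T2 ≤ T1 is an assumption made here so that the grid size lands near the reported ~23,000 entries.

```python
# Minimal sketch: T1/T2 dictionary grid assembly and dot-product template matching.
# simulate_fisp() is a placeholder for the Bloch/EPG simulation of the actual
# acquisition parameters (flip angle train, TR, inversion, waiting time, etc.).
import numpy as np

def grid(segments):
    """segments: list of (start, stop, step); returns the concatenated grid values."""
    return np.unique(np.concatenate([np.arange(a, b + s, s) for a, b, s in segments]))

t1_values = grid([(60, 2000, 10), (2000, 3000, 20), (3000, 3500, 50), (4000, 5000, 500)])  # ms
t2_values = grid([(10, 100, 2), (100, 200, 5), (200, 500, 10)])                            # ms

def simulate_fisp(t1, t2, n_frames=768):
    """Placeholder signal model; the real dictionary uses Bloch simulations."""
    tr = 9.1  # ms
    t = np.arange(n_frames) * tr
    return np.exp(-t / t2) * (1 - np.exp(-t / t1))

# Build dictionary only for physically sensible pairs (assumed: T2 <= T1).
pairs = [(t1, t2) for t1 in t1_values for t2 in t2_values if t2 <= t1]
D = np.stack([simulate_fisp(t1, t2) for t1, t2 in pairs])
D /= np.linalg.norm(D, axis=1, keepdims=True)            # normalise each entry

def match(signal):
    """Return the (T1, T2) of the dictionary entry with the largest inner product."""
    s = signal / np.linalg.norm(signal)
    return pairs[int(np.argmax(np.abs(D @ s)))]

print(match(simulate_fisp(1200.0, 80.0)))                 # -> close to (1200, 80)
```

In practice the matching is run voxel-by-voxel over the reconstructed time series, which is exactly the per-pixel, global-comparison step that the deep learning approach described next is designed to replace.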
In this study, parallel imaging along the partition-encoding direction was applied to accelerate the 3D MRF acquisition. A previously introduced interleaved sampling pattern was used to undersample the data in the partition direction. Parallel imaging reconstruction, similar to the through-time spiral GRAPPA technique, was applied to reconstruct the missing k-space points with a 3 × 2 GRAPPA kernel along the spiral readout × partition-encoding directions. The calibration data for the GRAPPA weights were obtained from the center of k-space in the partition direction and integrated into the final image reconstruction to preserve tissue contrast. One challenge for parallel imaging reconstruction with non-Cartesian trajectories, such as the spiral readout used in the MRF acquisition, was to obtain sufficient repetitions of the GRAPPA kernels for robust estimation of the GRAPPA weights. Similar to the approach used in the spiral GRAPPA technique, eight GRAPPA kernels with a similar shape and orientation along the spiral readout direction were used in this study to increase the number of kernel repetitions. After GRAPPA reconstruction, each MRF time point/volume still had one spiral arm in-plane, but all missing spiral arms along the partition direction were filled, as illustrated in Fig. 2. Aside from acceleration using parallel imaging, we further leveraged deep learning to reduce the acquisition time by extracting the features in the acquired MRF dataset that are needed for accurate tissue characterization. To describe the workflow of the 3D MRF, we first briefly review how deep learning has been integrated into a 2D MRF framework. As introduced by Fang et al., the ground-truth tissue property maps were obtained using the template matching algorithm from an MRF dataset consisting of N time frames. One way to accelerate MRF with deep learning is to achieve a similar tissue map quality with only the first M time points. To train the CNN model, the MRF signal evolution from M time points was used as the input of the CNN network, and the desired output was set as the ground-truth tissue maps obtained from all N points. To ensure data consistency and minimize potential motion between the network input and output, the input data of M points were generally obtained from retrospective undersampling of the reference data with all N points. For 2D measurements, given that a certain delay time generally exists between scans for system adjustments and other purposes, it is reasonable to assume that brain tissues reach a fully longitudinally recovered state and each acquisition starts from a longitudinal magnetization of 1. Therefore, the retrospectively undersampled MRF data and the data from prospectively accelerated cases should have the same signal evolution for the same tissues, independent of the spin history from the previous scan. The CNN parameters determined in this manner can be directly applied to extract tissue properties from the prospectively acquired dataset. While a similar method as outlined above for 2D can potentially be applied to reduce data sampling and accelerate 3D MRF, important modifications are required due to the additional partition encoding. For a 3D MRF acquisition, a short waiting time was applied between the acquisitions of different partitions for longitudinal relaxation. As a result, the magnetization at the beginning of each partition acquisition depends
on the settings used in acquiring the previous partition, including the number of MRF time frames.Under this circumstance, the retrospectively shortened signal evolution with M time points does not agree with the signal from prospectively accelerated scans.Consequently, the CNN model trained using the aforementioned 2D approach is not applicable for the prospectively accelerated data.In order to train a CNN model for prospectively accelerated 3D MRF data, a new 3D MRF sequence was developed in this study.The new pulse sequence had a similar sequence structure as the standard 3D MRF with the exception that an extra section was inserted to mimic the condition of the prospectively accelerated MRF acquisition.This additional section consisted of a pulse sequence section for data sampling of the first M time points followed by a 2-sec waiting time.The purpose of this additional section was to ensure the same magnetization history as that in a real accelerated case so that the first M time points acquired in the second section matched with the data in the actual accelerated scan.With this modification, the MRF data obtained in the second section of the new sequence can 1) provide reference T1 and T2 maps as the ground truth for CNN training using all N time points and 2) generate retrospectively shortened MRF data as the input for training.Since the purpose of the additional section was to create a magnetization history, no data acquisition was needed for this section.Here, we named the new 3D MRF sequence for deep learning as 3DMRF-DL and the standard sequence without the additional section as 3DMRF-S.Before the application of the developed 3DMRF-DL method for in vivo measurements, phantom experiments were performed using a phantom with MnCl2 doped water to evaluate its quantitative accuracy.T1 and T2 values obtained using 3DMRF-DL were compared to those obtained with the reference methods using single-echo spin-echo sequences and the 3DMRF-S method.Both the 3DMRF-DL and 3DMRF-S methods were conducted with 1-mm isotropic resolution and 48 partitions.The reference method with spin-echo sequences was acquired from a single slice with a FOV of 25 cm and a matrix size of 128.To use the 3DMRF-DL method to establish CNN for prospectively accelerated 3D MRF scans, the optimal number of time points needs to be determined.A testing dataset from five normal subjects using the 3DMRF-S method was acquired for this purpose.Informed consent was obtained from all the subjects before the experiments.The 3DMRF-S scan was performed with 1-mm resolution covering 96 partitions and 768 time points.Reference T1 and T2 maps were obtained with the template matching method and used as the ground truth for the CNN network.To identify the optimal number of time frames, the CNN model was trained with various M values using retrospective undersampling.Since the determination of optimal time frame is also coupled with the settings of parallel imaging, the extracted input data was also retrospectively undersampled along the partition direction with reduction factors of 2 or 3 and then reconstructed with parallel imaging.MRF data containing nearly 400 input datasets from four subjects was used for network training and the data from the remaining subject was used for validation.The validation dataset was applied to evaluate the performance of the model after the network training was completed.This information was not utilized to fine-tune the model during the training process.T1 and T2 maps obtained from various time points 
and reduction factors were compared to the ground truth maps and normalized root-mean-square-error values were calculated to evaluate the performance and identify the optimal time point for the 3DMRF-DL method.A leave-one-subject-out cross validation was performed across all five subjects.One thing to note is that the CNN model training in this step cannot be applied to prospective accelerated data as discussed previously.After the determination of the optimal number of time points for the accelerated scans, in vivo experiments were performed on a separate group of subjects including seven normal volunteers to evaluate the proposed method using parallel imaging and deep learning.For each subject, two separate scans were performed.The first scan was acquired using the 3DMRF-DL sequence with 144 slices.A total of 768 time points was acquired and no data undersampling was applied along the partition direction.For the second scan, the 3DMRF-S sequence was used with prospective data undersampling, which includes sampling with a reduced number of time points and acceleration along the partition direction.Whole brain coverage was achieved for all subjects.CNN model was then trained using the same approach outlined above using the data acquired with the 3DMRF-DL sequence.This trained model can be directly applied to extract T1 and T2 maps from the second prospectively accelerated scan using the 3DMRF-S sequence.A leave-one-subject-out cross-validation was used to obtain T1 and T2 values from all seven subjects, namely no data from the same subject was used for the training and validation simultaneously.After tissue quantification using CNN, brain segmentation was further performed on both datasets to enable comparison of T1 and T2 values obtained from the two separate scans.Specifically, T1-weighted MPRAGE images were first synthesized based on the quantitative tissue maps.These MPRAGE images were used as the input and subsequent brain segmentation was performed using the Freesurfer software.Based on the segmentation results, mean T1 and T2 values from multiple brain regions, including white matter, cortical gray matter and subcortical gray matter, were extracted from each subject and the results were compared between the two MRF scans.The T1 and T2 quantification with the proposed method was further validated with standard relaxometry methods.The prospectively undersampling 3DMRF-S method with 192 time points was acquired on one subject and T1 and T2 maps with whole-brain coverage were extracted using the CNN network.A standard 2D inversion-recovery sequence with eight inversion times was applied to obtain the T1 maps from the same subject.The T2 values were obtained using a standard 2D single-echo spin-echo sequence with six TE values.For the standard methods, the in-plane resolution was 1 mm and slice thickness was 5 mm.The 3DMRF-S sequence was first applied to identify the optimal number of MRF time points and parallel imaging settings along the partition direction.MRF measurements from five subjects were retrospectively undersampled and reconstructed using parallel imaging and deep learning modeling.The results were further compared to those obtained using template matching alone or template matching after GRAPPA reconstruction.Fig. 
4 shows representative results obtained from 192 time frames with an acceleration factor of 2 in the partition-encoding direction. With 1-mm isotropic resolution, significant residual artifacts were noticed in both T1 and T2 maps processed with the template matching alone. With the GRAPPA reconstruction, most of the artifacts in T1 maps were eliminated, but some residual artifacts were still noticeable in T2 maps. Comparing the two approaches, the quantitative maps obtained with the proposed method with both GRAPPA reconstruction and deep learning modeling presented a similar quality to the reference maps. The lowest NRMSE values were obtained with the proposed method among all three methods. These findings are consistent for all other numbers of time points tested, ranging from 96 to 288, as shown in Fig. 5. Fig. 6 shows representative T1 and T2 maps obtained using the 3DMRF-S method and different numbers of time points. With an acceleration factor of 2 along the partition-encoding direction, high-quality quantitative maps with 1-mm isotropic resolution were obtained for all cases. When the number of time points increased, NRMSE values decreased for both T1 and T2 maps. However, this improvement in tissue quantification was achieved at the cost of more sampling data and thus longer acquisition times. With the current design of the 3DMRF-S sequence, the sampling time for 150 slices was increased from 4.1 min to 8.2 min when the number of time frames increased from 96 to 288. Compared to the case with a reduction factor of 2, some residual aliasing artifacts were noted in the quantitative maps obtained with a reduction factor of 3. A leave-one-subject-out cross validation was performed for this experiment and consistent findings were achieved across all five tested subjects. In order to balance the image quality and scan time, a reduction factor of 2 with 192 time points was selected as the optimal setting for in vivo testing using the 3DMRF-DL approach.
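For reference, a minimal sketch of the normalized root-mean-square-error metric used in these comparisons is given below (Python/NumPy). The normalisation by the dynamic range of the reference map and the optional brain mask are assumptions, since the exact NRMSE convention is not spelled out in the text.

```python
# Minimal sketch of an NRMSE metric for comparing reconstructed T1/T2 maps
# against the ground-truth (template-matching, 768-time-point) maps.
import numpy as np

def nrmse(estimate, reference, mask=None):
    """Root-mean-square error normalised by the reference dynamic range."""
    est = np.asarray(estimate, dtype=float)
    ref = np.asarray(reference, dtype=float)
    if mask is not None:                       # e.g. a boolean brain mask
        est, ref = est[mask], ref[mask]
    rmse = np.sqrt(np.mean((est - ref) ** 2))
    return rmse / (ref.max() - ref.min())

# Hypothetical usage: t1_cnn and t1_ref are 3D maps in ms, brain_mask a boolean volume.
# print(nrmse(t1_cnn, t1_ref, brain_mask))
```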
Before the application of the 3DMRF-DL sequence for in vivo measurements, its accuracy in T1 and T2 quantification was first evaluated using phantom experiments, and the results are shown in Fig. 7. A total of 768 time points were acquired for both the 3DMRF-DL and 3DMRF-S sequences. For the 3DMRF-DL method, the first section contains 192 time points, as determined in the previous experiments. The T1 and T2 values obtained using the 3DMRF-DL method were consistent with the reference values for a wide range of T1 and T2 values. The average percent errors over all seven vials in the phantom were 1.7 ± 2.2% and 1.3 ± 2.9% for T1 and T2, respectively. The quality of the quantitative maps also matches the results acquired using the 3DMRF-S sequence. The NRMSE value between the results from the two 3D MRF approaches was 0.062 for T1 and 0.046 for T2. Based on the optimal time points and undersampling patterns, the 3DMRF-DL method was used to establish a CNN network for prospectively accelerated 3D MRF data. The experiments were performed on seven subjects. Two MRF scans were acquired for each subject, including one with all 768 time points using the 3DMRF-DL sequence and the other with only 192 points and the prospectively accelerated 3DMRF-S sequence. With the latter approach, about 160–176 slices were acquired for each subject to achieve whole-brain coverage, and the acquisition time varied between 6.5 min and 7.1 min. Leave-one-subject-out cross validation was performed to extract quantitative T1 and T2 values for all subjects, and the quantitative maps from both scans were calculated. Representative T1 and T2 maps obtained from the prospectively accelerated scan are presented in Fig. 8. Some residual artifacts were noted in the images acquired with the GRAPPA + template matching approach but were minimized using the proposed method combining GRAPPA with deep learning. The quantitative maps obtained from a similar slice location using the 3DMRF-DL method were also plotted for comparison. While the subjects could have moved between the two separate scans, a good agreement was found in both brain anatomy and image quality. Representative T1 and T2 maps obtained using the accelerated scans from three different views are shown in Fig. 9. The results further demonstrate that high-quality 3D MRF with 1-mm resolution and whole-brain coverage can be achieved with the proposed approach in about 7 min. In addition, the time to extract tissue properties was also largely reduced to 2.5 s/slice using the CNN method, which represents a 7-fold improvement compared to the template matching method. While all processing times were calculated based on computations performed on a CPU, further acceleration in processing time can be achieved with direct implementation on a GPU card. Representative segmentation results based on the MRF measurements are presented in Fig.
10.The quantitative T1 and T2 maps obtained using both the 3DMRF-DL sequence and the prospectively accelerated 3DMRF sequence are plotted, along with the synthetic T1-weighted MPRAGE images and brain segmentation results.Different brain regions, such as white matter, gray matter, and the thalamus, are illustrated with different colors in the maps and the segmentation results matched well between the two MRF scans.Based on the brain segmentation results, quantitative T1 and T2 values from multiple brain regions are summarized in Table 1.Compared to the reference values obtained using the 3DMRF-DL sequence with 768 time points, accurate tissue parameter quantification was achieved with the proposed rapid 3D MRF method and the mean percentage error was 1.0 ± 0.7% and 1.9 ± 0.7% for T1 and T2, respectively.We further compared the T1 and T2 quantification obtained using the proposed method and the standard relaxometry methods.Two representative slices are shown in Fig. 11.Quantitative T1 and T2 values were also extracted through a region-of-interest analysis and the results are presented in Table 2.Despite the difference in slice thickness between the methods, the quantitative T1 and T2 results are generally in a good agreement in most of the brain regions.In this study, a rapid 3D MRF method with a spatial resolution of 1 mm3 was developed, which could provide whole-brain quantitative T1 and T2 maps in ~7 min.This is comparable or even shorter to the acquisition time of conventional T1-weighted and T2-weighted images with a similar spatial resolution.By leveraging both parallel imaging and deep learning techniques, the proposed method demonstrates improved performance as compared to the previously published methods.In addition, the processing time to extract T1 and T2 values was accelerated by more than 7 times with the deep learning approach as compared to the standard template matching method.Two advanced techniques, parallel imaging and deep learning, were combined to accelerate high-resolution 3D MRF acquisitions with whole brain coverage.The 3D MRF sequence employed in this study was already highly accelerated for in-plane encoding with only one spiral arm acquired.Therefore, more attention was paid to apply parallel imaging along the partition-encoding direction to further shorten the scan time.In addition, CNN has been shown capable of extracting features from complex MRF signals in both spatial and temporal domains that are needed for accurate quantification of tissue properties.This has been well demonstrated in the previous 2D MRF studies.With 3D acquisitions, spatial constraints from all three dimensions were utilized for tissue characterization.The integration of advanced parallel imaging and convolutional neural networks leads to 1) drastically reduced the amount of data needed for high-resolution MRF images and 2) extract more advanced features, achieving improved tissue characterization and accelerated T1 and T2 mapping using MRF.In addition to shortening MRF acquisitions in the temporal domain, the deep learning method also helps to eliminate some residual artifacts in T2 maps after the GRAPPA reconstruction.This is consistent with recent findings that deep learning methods can be applied to reconstruct undersampled MR images and provide comparable results to conventional parallel imaging and compressed sensing techniques.Parallel imaging along partition direction was applied to accelerate 3D MRF acquisition with 1-mm isotropic resolution.Our results and others have shown that 
with such a high spatial resolution, the interleaved undersampling pattern with template matching does not resolve the aliasing artifacts in 3D imaging.By leveraging the sliding window reconstruction approach, previous studies have applied Cartesian GRAPPA to reconstruct a 3D MRF dataset and a reduction factor of 3 was explored with the same spatial resolution.In the current study, an advanced parallel imaging method similar to spiral GRAPPA was used.To compute GRAPPA weights, the calibration data was acquired from the central partitions and integrated into the image reconstruction to preserve tissue contrast.This approach does not rely on the sliding window method, which could potentially reduce the MRF sensitivity along the temporal domain.With the proposed approach, high-quality quantitative T1 and T2 maps were obtained with a reduction factor of 2 and some artifacts were found with a higher reduction factor of 3.The difference at the higher reduction factor, as compared to findings in the previous study, is likely due to different strategies to accelerate data acquisition.Specifically, only 192 time points were acquired to form the MRF signal evolution in this study, while ~420 points were used in the previous study.More time points can be utilized to mitigate aliasing artifacts in the final quantitative maps, but at the cost of a longer sampling time for each partition and thus a longer total acquisition time.A modified 3DMRF-DL sequence was developed to acquire the necessary dataset to train the CNN model that can be applied to prospectively accelerated 3D MRF data.With the 3DMRF-S sequence, a short waiting time was applied between the acquisitions of different partitions for longitudinal relaxation.Due to the incomplete T1 relaxation with this short waiting time, the retrospectively shortened dataset acquired with this sequence does not match the prospectively acquired accelerated data even with the same number of time points.One potential method to mitigate this problem is to acquire two separate scans, one accelerated scan with reduced time points and the other with all N points to extract ground truth maps.However, considering the long scan time to obtain the ground truth maps, this approach will be sensitive to subject motion between scans.Even a small motion between the two scans could misalign the corresponding tissue property maps and potentially lead to an incorrect estimation of parameters in the CNN model.Image registration can be applied to correct motion between scans, but variations could be introduced during the registration process.The proposed 3DMRF-DL method provides an alternative solution for this issue and generates the necessary data without the concern of relative motion in the CNN training dataset.While extra scan time is needed for the additional pulse sequence section, the total acquisition time is identical to that of acquiring two separate scans to address this issue.In the proposed 3DMRF-DL sequence, a preparation module containing the pulse sequence section for the first M time points was added before the actual data acquisition section.One potential concern with this approach is whether one preparation module will be sufficient to generate the same spin history as the prospectively accelerated scans.Previous studies have shown that when computing the dictionary for 3D MRF, a simulation with one such preparation module is sufficient to reach the magnetization state for calculation of the MRF signal evolution in actual acquisitions.Our simulation confirms that the signal evolution obtained from the 
proposed 3DMRF-DL method matched well with the prospectively accelerated 3DMRF-S method.All of these findings suggest that the one preparation module added in the 3DMRF-DL sequence is sufficient to generate the magnetization state as needed.To implement the 3DMRF-DL method to acquire the training dataset, the number of time points, which is also the number of time points used in the prospectively accelerated scan, needs to be determined first.This can be achieved by acquiring multiple training datasets with different M values and comparing the quantitative results in between.However, the data acquisition process for this approach is time-consuming and the comparison is also sensitive to the quality of different training datasets.Alternatively, this optimal M value can be identified using the 3DMRF-S method as performed in this study.While small variation exists in MRF signal acquired with this approach as compared to the 3DMRF-DL method, the difference is relatively small in terms of MRF signal evolution and magnitude.Therefore, it is reasonable to use the 3DMRF-S method to estimate the optimal M value.Only one training dataset is needed and retrospective data shortening can be performed to compare the results from different time points.Subject motion in clinical imaging presents one of the major challenges for high-resolution MR imaging.Compared to the standard MR imaging with Cartesian sampling, MRF utilizes a non-Cartesian spiral trajectory for in-plane encoding, which is known to yield better performance in the presence of motion.The template matching algorithm used to extract quantitative tissue properties also provides a unique opportunity to reduce motion artifacts.As demonstrated in the original 2D MRF paper, the motion-corrupted time frames behaved like noise during the template matching process and accurate quantification was obtained in spite of subject motion.However, the performance of 3D MRF in the presence of motion has not been fully explored.A recent study showed that 3D MRF with linear encoding along partition-encoding direction is also sensitive to motion artifacts, and the degradation in T1 and T2 maps is likely dependent on the magnitude and timing of the motion during the 3D scans.The 3D MRF approach introduced in this study will help to reduce motion artifacts with the accelerated scans.The lengthy acquisition of the training dataset acquired in this study is more sensitive to subject motion.While no evident artifacts were noticed with all subjects scanned in this study, further improvement in motion robustness is needed for 3D MRF acquisitions.Another factor that could influence the accuracy of MRF quantification is the selection of T1 and T2 step sizes in the dictionary.Since not every possible combination of T1 and T2 values are included in the dictionary, this may affect the values in the ground truth maps and therefore the values derived with the CNN method.Future studies will be performed to evaluate the effect of step sizes on the accuracy of tissue quantification using the proposed method.There are some limitations in our study.First, the network structure and training parameters for the deep learning approach were largely adopted from the previous 2D MRF study.While a similar MRF acquisition method was used, differences in spatial resolution, SNR, and the number of time points could potentially introduce differences in the optimized CNN setting.Moreover, in considering the memory size in the GPU card, MRF data from three contiguous slices were used as the 
input in this study.Input dataset with different number of slices or a true 3D CNN network could influence the network performance and future studies will be performed to optimize the CNN model specifically for 3D MRF acquisitions.In addition, some acquisition parameters, such as the 2-sec waiting time between partitions, were also adopted from previous studies and these can be further optimized in future studies.Second, all experiments and development were performed on normal subjects and its application for patients with various pathologies needs to be evaluated.Finally, the current study focused on quantification of tissue T1 and T2 values while MRF has been applied for quantification of many other tissue contrasts including T2*, diffusion, perfusion, and magnetic transfer.Longer acquisition times are required to extract more tissue properties with MRF and the application of the proposed method to accelerate these acquisitions will be explored in the future.In conclusion, a high-resolution 3D MR Fingerprinting technique, combining parallel imaging and deep learning, was developed for rapid and simultaneous quantification of T1 and T2 relaxation times.Our results show that with the integration of parallel imaging and deep learning techniques, whole-brain quantitative T1 and T2 mapping with 1-mm isotropic resolution can be achieved in ~7 min, which is feasible for routine clinical practice.This work was supported in part by NIH grants EB006733 and MH117943. | MR Fingerprinting (MRF) is a relatively new imaging framework capable of providing accurate and simultaneous quantification of multiple tissue properties for improved tissue characterization and disease diagnosis. While 2D MRF has been widely available, extending the method to 3D MRF has been an actively pursued area of research as a 3D approach can provide a higher spatial resolution and better tissue characterization with an inherently higher signal-to-noise ratio. However, 3D MRF with a high spatial resolution requires lengthy acquisition times, especially for a large volume, making it impractical for most clinical applications. In this study, a high-resolution 3D MR Fingerprinting technique, combining parallel imaging and deep learning, was developed for rapid and simultaneous quantification of T1 and T2 relaxation times. Parallel imaging was first applied along the partition-encoding direction to reduce the amount of acquired data. An advanced convolutional neural network was then integrated with the MRF framework to extract features from the MRF signal evolution for improved tissue characterization and accelerated mapping. A modified 3D-MRF sequence was also developed in the study to acquire data to train the deep learning model that can be directly applied to prospectively accelerate 3D MRF scans. Our results of quantitative T1 and T2 maps demonstrate that improved tissue characterization can be achieved using the proposed method as compared to prior methods. With the integration of parallel imaging and deep learning techniques, whole-brain (26 × 26 × 18 cm3) quantitative T1 and T2 mapping with 1-mm isotropic resolution were achieved in ~7 min. In addition, a ~7-fold improvement in processing time to extract tissue properties was also accomplished with the deep learning approach as compared to the standard template matching method. All of these improvements make high-resolution whole-brain quantitative MR imaging feasible for clinical applications. |
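The discussion above contrasts the conventional dictionary (template) matching step with the CNN-based mapping that replaces and accelerates it. For orientation, a minimal sketch of that baseline matching step is given below; it is not the authors' implementation, and the function name, array shapes, and variable names are illustrative assumptions.

```python
import numpy as np

def match_mrf_dictionary(signals, dictionary, t1_values, t2_values):
    """Baseline MRF template matching: assign to each voxel the T1/T2 pair of
    the dictionary entry whose simulated signal evolution has the highest
    normalized inner product with the measured evolution.

    signals    : (n_voxels, n_timepoints) measured MRF signal evolutions
    dictionary : (n_entries, n_timepoints) simulated signal evolutions
    t1_values, t2_values : (n_entries,) tissue parameters per dictionary entry
    """
    # Normalize both measured and simulated evolutions so the match reflects
    # signal shape rather than overall scale.
    d_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s_norm = signals / np.linalg.norm(signals, axis=1, keepdims=True)

    # Inner product of every voxel against every dictionary entry, then pick
    # the best-matching entry per voxel.
    corr = np.abs(s_norm @ np.conj(d_norm).T)   # (n_voxels, n_entries)
    best = np.argmax(corr, axis=1)
    return t1_values[best], t2_values[best]
```

The exhaustive voxel-by-dictionary comparison is what makes this step slow for large dictionaries, which is consistent with the more than 7-fold reduction in processing time reported above when it is replaced by a trained network.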
596 | Causal Mechanistic Regulatory Network for Glioblastoma Deciphered Using Systems Genetics Network Analysis | Glioblastoma multiforme is the most common brain tumor and is nearly uniformly fatal.Development of new therapeutics has been slow and difficult, in part because GBM is a complex and heterogeneous disease.One possible strategy to achieve complete and durable remission is to tailor a combination of drugs that targets multiple vulnerabilities in a patient’s tumor.What is needed to test this strategy is an approach that navigates the large space of possible drug combinations and prioritizes specific drug combinations based on the molecular signatures of a patient’s tumor.We hypothesized that knowledge of the detailed architecture of transcription factor and microRNA regulatory interactions in the form of a transcriptional regulatory network would provide the mechanistic details required to prioritize combinatorial interventions.Both TFs and, more recently, miRNAs have been used as therapeutic targets.In fact, consistent with the situation ∼20 years ago, therapies targeting TFs still comprise 14% of the top 50 best-selling FDA-approved drugs in 2014.Additionally, therapies targeting TFs and miRNAs have the potential for a broader effect than those targeting a single gene, as these regulators control many genes associated with diverse oncogenic biological processes.Previous efforts on the inference of TRNs for cancers have relied on the discovery of correlates or mutual information between different features within multi-omics datasets from patient tumors.Additionally, the integration of genetic markers with expression data has been used to infer causal relationships that explain the flow of information from a somatic mutation or copy number variation to its downstream effect on gene expression traits.We and others have used mechanism-based strategies to link TFs and miRNAs to co-regulated sets of genes through enrichment of physical binding site sequences in the regulatory regions of co-expressed genes in cancers.Many of these approaches are complementary and have yet to be integrated into a unified TRN inference pipeline.Until now the synthesis of mechanistic and causal inference approaches has been difficult, owing to the lack of high-quality mechanistic regulatory inference approaches and large-scale multi-omic datasets.Here we addressed the first issue of inferring high-quality mechanistic regulatory interactions by using recently published compendia of TF binding motifs and Encyclopedia of DNA Elements data to develop a TF-target gene database, which is a key input for TRN inference methods we developed previously.The second issue has been addressed by agencies like The Cancer Genome Atlas, having created large-scale multi-omic datasets for a wide array of cancers.Thus, the availability of the prerequisite tools and datasets enabled us to develop the Systems Genetics Network Analysis pipeline.The pipeline integrates correlative, causal, and mechanistic inference approaches to infer the causal flow of information from mutations to regulators to perturbed gene expression patterns across patient tumors.Importantly, because the algorithms behind each component of the SYGNAL pipeline have been rigorously tested and validated in prior studies, we were able to focus on validating the TF and miRNA regulatory predictions, demonstrating how these predictions from patient data can be used to prioritize single and combinatorial interventions and showing how the GBM TRN can be used to 
glean biological insights with a vignette focusing on the regulation of tumor lymphocyte infiltration in GBM.To enable the inference of mechanistic TF-mediated regulation of co-expressed transcripts, we constructed a database of TF-to-target gene interactions.The TF-to-target gene interactions were discovered by intersecting the locations of 2,331 unique DNA recognition motifs for 690 TFs across the human genome and ENCODE-determined 8.4 million genomic sites with digital genomic footprints across 41 diverse cell and tissue types.A DGF is experimental evidence that a DNA-binding protein was bound to a genomic location, and, when coincident with a motif instance, it suggests an interaction of a specific TF with that genomic location.We discovered 17,415,125 genomic locations within the optimal promoter region of human genes that matched significantly to a TF DNA recognition motif.The 3,505,491 motif instances that overlapped by at least 1 bp with a DGF were used to construct a map of interactions between the 690 TFs and 18,153 genes.We then systematically evaluated the sensitivity and specificity of the inferred regulatory interactions by comparing the predicted TF-target gene interactions against a gold standard physical map of protein-DNA interactions for 125 different TFs, constructed from 148 chromatin immunoprecipitation sequencing experiments across 68 cell lines.Specifically, we tested the ability of the inferred regulatory interactions to predict the TF that was targeted for ChIP from ChIP-seq-enriched genomic locations in each experiment.We chose this comparison as it mirrors how the interactions are used to infer TF-mediated regulation of co-expressed genes.First, we established that the optimal promoter region for predicting TF-DNA interactions using this approach was ±5 kb from the TSS, by systematically analyzing specificity and sensitivity of predictions across increasing promoter lengths compared to a core promoter.Next, we demonstrated that the sensitivity and specificity of predicting TF-target gene interactions improves significantly when motif instances are filtered based on DGF locations.Notably, TF-target gene interactions accurately predicted the immunoprecipitated TF even in the 48 cell lines and tissues that were not represented within the ENCODE compendium of DGF profiles.This result demonstrated that the collection of DGF profiles from 41 cell types within ENCODE had captured transcriptional regulation by most TFs across most cell types, including those that were not DGF profiled.Importantly, the specific cell type and context for a given set of TF-target gene interactions can be recovered post hoc by analyzing the patterns of conditional co-expression of the target genes.We accomplished this using the set-enrichment scoring module in the cMonkey2 biclustering algorithm, which discovers the most enriched TF and trains each bicluster by preferentially retaining and adding co-expressed genes with the enriched TF’s binding sites.This approach with cMonkey2 also can be used to discover miRNA-mediated regulation using an miRNA-target gene database as input for the set-enrichment scoring module.We have used patient data for GBM to develop the SYGNAL pipeline by integrating the methodology for constructing a mechanistic TF-target gene interaction database with previously developed multi-omics data mining methodologies.The SYGNAL pipeline constructs a TRN in four steps that are described briefly here and in detail in the Supplemental Experimental Procedures.First, 
simultaneous dimensionality reduction and inference of mechanistic TF- and miRNA-mediated regulation of biclusters are based on the enrichment of a regulator’s binding sites using the cMonkey2 biclustering algorithm.Second, post-processing of biclusters provides additional information about regulators, enrichment with functional categories, association with hallmarks of cancer, and association with patient survival.Filtering on the post-processed features ensures co-expression quality and disease relevance.Third, causal regulatory interactions are inferred, linking somatically mutated genes or pathways with the modulation of a TF or miRNA to the regulation of a downstream bicluster based on the fitting of casual graphical models that integrate genomic and transcriptomic data.In the fourth and final step, we integrate the mechanistic and causal predictions for TF- and miRNA-mediated regulation of biclusters.We applied the SYGNAL pipeline to multi-omics data from TCGA for GBM across 422 patients and nine normal post-mortem controls to infer an integrated TF and miRNA regulatory network.The TCGA multi-omics data were refined at each omic level to enhance the signal-to-noise ratio.We discovered 500 biclusters of genes that were significantly co-expressed across different subsets of patient tumors and were disease relevant.The SYGNAL pipeline also inferred causal influences for somatically mutated genes and pathways on the expression of TFs and miRNAs, which in turn were predicted to modulate the expression of co-regulated genes within one of the 500 biclusters.Using this approach, somatic mutations within 34 genes and 68 pathways were causally associated, through TFs and miRNAs, to the differential regulation of disease-relevant genes.Notably, nine of the 34 mutated genes are well-known driver mutations in GBM: AHNAK2, EGFR, IDH1, MLL3, NF1, PIK3CA, PIK3R1, PTEN, and RB1.The SYGNAL pipeline-derived network identified additional GBM driver mutations in 25 genes and 68 pathways that putatively act via modulating the activity of TFs and miRNAs, which in turn regulate the expression of 5,193 disease-relevant genes associated with patient survival and/or hallmarks of cancer.Thus, the SYGNAL pipeline provides the means to synthesize genotype, gene expression, and clinical information into a TRN with both mechanistic and causal underpinnings to explain how specific mutations act through TFs and miRNAs to generate disease-relevant gene expression signatures observed within patient data.As part of the post-processing, we extended predicted influences of a TF to its paralogs by assuming that the motifs within a TF family would not vary significantly.Inclusion of TFs via expanded family memberships resulted in an ∼1.5-fold increase in the number of TFs that were incorporated into the network with both mechanism-based and causality-based evidence for regulation.Thus, the gbmSYGNAL network predicted at least one TF or miRNA as a regulator responsible for co-regulating genes within 237 of the 500 disease-relevant biclusters.To test gbmSYGNAL predictions, we extracted phenotype data for 1,445 TF knockouts from our recent genome-wide CRISPR-Cas9 screen, where we assayed consequences of each perturbation on the proliferation phenotype of two patient-derived glioma stem cell isolates and two control neural stem cell lines.In total, knockout of 387 TFs significantly altered proliferation in glioma stem cell isolates, of which the effects of knocking out 158 TFs were glioma specific.The gbmSYGNAL network was 
significantly enriched with 26 TFs that had altered proliferation phenotype in the CRISPR-Cas9 screen.Notably, 13 of these TFs altered proliferation only in the glioma stem cell isolates.The observation that 86% of the CRISPR-Cas9 TF knockouts had phenotypes in only one glioma stem cell isolate underscores the known variability of such studies, because of the extensive genetic heterogeneity across GBM tumors.Specifically, knockout of a particular TF will only show a phenotype in an appropriate genetic context, i.e., a patient-derived cell line in which the specific TF-associated TRN is perturbed.We expect that future studies with patient-derived glioma stem cell isolates with a different spectrum of mutations will provide appropriate context in which knockouts of additional TFs in the gbmSYGNAL network will alter proliferation.Thus, the CRISPR-Cas9 screen provided an unbiased phenotypic demonstration that the gbmSYGNAL network had deciphered disease-relevant transcriptional regulatory interactions directly from patient data.In addition, three independent sources of evidence also supported biologically meaningful roles in GBM for a significant fraction of TFs in the gbmSYGNAL network: 8 of the 74 TFs were also previously implicated in GBM by a regulatory network of 53 TFs that was inferred using a different dataset and a different set of algorithms; according to the DisGeNET database of disease-to-gene associations, 16 of the 74 TFs have important functions in GBM; and 33 of the 74 TFs were differentially expressed in at least one GBM subtype relative to post-mortem controls.In summary, the gbmSYGNAL network implicated 74 TFs in the regulation of 3,170 GBM-relevant genes.Of the 74 TFs, 58 had not been previously associated with GBM.We incorporated miRNA regulation into the gbmSYGNAL network by integrating the Framework for Inference of Regulation by miRNAs into cMonkey2 using the set-enrichment scoring module.In the context of transcriptional regulation, miRNAs are known predominantly for their ability to repress transcript levels; therefore, we limited miRNA regulatory predictions to models where the miRNA had a repressive effect.Altogether, 37 miRNAs were implicated in the regulation of genes within disease-relevant biclusters, either because their binding sites were enriched in the 3′ UTRs of co-expressed genes within disease-relevant blusters or because somatic mutations in the miRNAs were causally associated with disease-relevant expression changes.Four miRNAs were implicated by both inference procedures.Several independent lines of evidence supported the biological and disease significance of the miRNAs in the gbmSYGNAL network: 28 miRNAs were implicated in GBM in manually curated databases of miRNAs dysregulated and causally associated with human diseases; perturbations of seven miRNAs have been shown to alter cancer phenotypes in GBM; and 25 miRNAs were also differentially expressed in at least one GBM subtype relative to post-mortem controls.That 28 of the 37 miRNAs have been implicated as dysregulated or causally associated with GBM demonstrates the ability of the SYGNAL pipeline to recapitulate known regulatory interactions, and the remaining nine miRNAs demonstrate the potential to discover new biology.We next screened miR-223 and miR-1292 for effects on proliferation or apoptosis in a primary astrocyte cell line and two GBM-derived cell lines.We tested both miRNA’s potential role in regulating proliferation and apoptosis by introducing miRNA mimics to simulate overexpression 
and miRNA inhibitors for knockdown.Overexpression of miR-223 led to significantly lowered proliferation and increased apoptosis in normal human astrocytes.However, miR-223 overexpression marginally increased apoptosis and had little effect on proliferation in the two GBM cell lines.Thus, miR-223 does not appear to be an important factor for proliferation or apoptosis in the GBM cell lines we tested, although it may be important in other GBM cell lines or for other cancer phenotypes.Knockdown of miR-1292 significantly reduced proliferation in normal human astrocytes and the U251 glioma cell line.miR-1292 was expressed at appreciable levels across all three cells lines, but expression data for this miRNA across patient tumors were not available.Thus, predicted influence of miR-1292 was based entirely on the discovery of its binding site in the 3′ UTRs of genes within disease-relevant biclusters.Taken together, the gbmSYGNAL network recapitulated much of what was known about miRNA regulation in GBM and identified ten miRNAs not previously associated with GBM, of which the effects of miR-1292 were experimentally validated.Nearly 40% of all biclusters in the gbmSYGNAL network were predicted to be under combinatorial control of two or more regulators.Using GBM patient tumor expression data and bidirectional stepwise linear regression, we constructed an additive combinatorial regulatory model that best explains the expression for each of the 93 bicluster eigengenes.There was significant evidence that 87 of the 93 biclusters were putatively governed by an additive combinatorial regulatory scheme including two or more regulators.Of the 87 additive combinatorial models of bicluster regulation, 58 included two regulators, 17 included three regulators, ten included four regulators, and two included five regulators.In the combinatorial models there were 54 TFs and 31 miRNAs that integrated into 45 TF-TF, 17 miRNA-miRNA, and 25 TF-miRNA combinatorial regulatory interactions.We provide the same analyses above with correction for bicluster redundancy in Table S14, and the similarity demonstrates that bicluster redundancy has not biased these analyses.Even though biclusters might be redundant, the subtle distinctions may reflect real differences between patients and processes, and future work can address this redundancy through ensemble-based methods that assign confidence metrics to gene co-occurrence across biclusters.The 54 TFs in the combinatorial models included 23 of the 26 TFs in the gbmSYGNAL network with significantly altered proliferation in glioma stem cell isolate CRISPR-Cas9 knockouts and all 13 TFs with glioma-specific proliferation effects.This suggests that a majority of the TFs involved in combinatorial regulatory interactions are functional and disease relevant.Additionally, 44% of TF-TF, miRNA-miRNA, and TF-miRNA pairs within combinatorial models had significant binding site co-occurrence within the corresponding regulatory regions of bicluster genes, suggesting that the predicted combinatorial regulators are directly interacting with regulatory regions of the same genes and thereby mediating their co-expression.The ability of the SYGNAL pipeline to uncover combinatorial regulatory interactions not only provides deeper understanding of GBM etiology but also enables strategies for combinatorial interventions.It has been demonstrated that combinations of master regulators can be used to predict synergistic compound pairs.Therefore, we explored whether combinatorial regulation in the 
gbmSYGNAL network can facilitate discovery of combinatorial interventions that lead to additive or synergistic outcomes.From the list of 87 predicted combinatorial regulatory models, we selected four pairwise TF combinations that maximized coverage of the following four different criteria: their location in the combinatorial network, the increase in variance explained by the combinatorial model, whether there are known interactions among the TFs, and whether there is a significant co-occurrence of binding sites among the TFs.We then assayed the effect of double knockdowns of the four pairwise combinations in all three cell lines on proliferation and apoptosis.We used the Bliss independence model to assess the extent to which combinatorial effects deviated from an additive model as follows: additive, combined effect is indistinguishable from the expected additive effect; antagonistic, combined effect is less than the expected additive effect; or synergistic, combined effect is greater than the expected additive effect.Double knockdown of ETV6 and NFKB1 synergistically reduced proliferation in the U251 GBM cell line.Double knockdown of CEBPD and CEPBE resulted in an additive decrease in apoptosis in the U251 GBM cell line.Finally, double knockdowns of IKZF1-IRF1 and ELF1-PPARG had antagonistic effects on proliferation and apoptosis, respectively.Our results suggest that the topology of combinatorial regulatory interactions in the gbmSYGNAL network could potentially accelerate the discovery of synergistically acting drug combinations.To elucidate the mechanism underlying the synergistic interaction between ETV6 and NFKB1, we analyzed the genome-wide transcriptional consequences of single and double knockdown of the two TFs in U251 cells.As expected, transcript levels of both TFs were reduced when knocked down, individually or in combination.Consistent with their predicted roles as activators, knockdown of each TF led to significant downregulation for a large number of genes and significantly fewer genes were upregulated.The downregulated genes were significantly enriched with predicted targets of the perturbed TF.In addition, a common set of 247 genes was downregulated in both knockdowns, suggesting a significant overlap in the regulatory networks of the two TFs.However, there was not a significant amount of ETV6 and NFKB1 motif co-occurrence in the 247 genes, suggesting that their combinatorial influence may be more complicated than binding to the same promoters.Relative to the single knockdowns, the double knockdown of ETV6 and NFKB1 resulted in the upregulation of a significantly larger number of genes and downregulation of only 22 genes.A significant fraction of the upregulated genes in the double knockdown were downregulated in the single TF knockdowns.Notably, 48 upregulated genes in the double knockdown were among the 247 genes that were downregulated by single knockdown of both TFs.This reversal in direction of differential expression for 210 genes and the upregulation of an additional 228 genes were unexpected given the consequences of single knockdowns for the TFs.The precise mechanism for this synergistic anti-proliferative interaction was not readily discernible from the transcriptome changes, and it is unlikely that we could have predicted the impact of the double knockdown from the single knockdowns.While effects like this are to be expected in a massively combinatorial non-linear network, we have shown that knowledge of the topology of regulatory interactions can facilitate the 
selection of synergistically acting TFs and miRNAs.It has been shown that the simultaneous knockdown of an oncogene mRNA and inhibition of its protein activity using a drug can lead to a synergistic effect.Therefore, we systematically screened for synergistic phenotypic effects of combining miRNA mimics and established inhibitor therapies that were predicted to target the same oncogene in the gbmSYGNAL network.Inhibitors targeting 49 oncogenes have been considered in treating GBM.The gbmSYGNAL network included 18 of these 49 oncogenes, five of which were predicted to be regulated by at least one miRNA.We assayed the consequence of single treatments for the six miRNA mimics and seven inhibitors on proliferation and apoptosis across the HA, T98G, and U251 cell lines.For these studies, we specifically screened for significant anti-proliferative and pro-apoptotic effects, as these are the desired therapeutic responses when treating cancers.All inhibitors, as expected, and three miRNAs had significant anti-proliferative effects in at least one cell line.Six inhibitors had significant pro-apoptotic effects in at least one cell line, whereas of the miRNA mimics only miR-892b had a significant pro-apoptotic effect in HA and T98G cells.Together the single-agent screens identified six inhibitor-miRNA combinations targeting three oncogenes that could be tested for synergistic anti-proliferative effects and two inhibitor-miRNA combinations targeting two oncogenes that could be tested for synergistic pro-apoptotic effects.We selected romidepsin-miR-486-3p for further experimentation because romidepsin had the strongest effects on proliferation and apoptosis in every cell line, which explains why it is an attractive therapeutic candidate.In the gbmSYGNAL network, both romidepsin and miR-486-3p target HDAC5, which is upregulated in GBM patient tumors and known to increase proliferation of GBM cells.Therefore, we hypothesized that the potentially synergistic effect of romidepsin and miR-486-3p on HDAC5 would generate a stronger and longer-lasting treatment.We generated dose-response curves for romidepsin and miR-486-3p in the U251 cell line.Then we designed a 6 × 6 dose-response matrix with a range of concentrations centered on the IC50 of each therapeutic agent.Four different combinations from this dose-response matrix generated synergistic effects.Significant synergy was observed for romidepsin concentrations between 0.167 and 0.634 nM and miR-486-3p concentrations between 0.5 and 4.6 nM.Maximal synergy was observed with a combination of 0.634 nM romidepsin and 4.6 nM miR-486-3p mimic, which generated an effect size that was equivalent to 1.75-fold higher concentration of single treatment with 1.1 nM romidepsin.The effect size of this combination also was very similar to the effects of 1.85 nM romidepsin that previously was observed to be anti-proliferative and pro-apoptotic in GBM cell lines.This demonstrates that the gbmSYGNAL network can facilitate discovery of combinations of inhibitors and miRNAs that act synergistically on cancer phenotypes of GBM cell lines.Applied in a high-throughout framework, this approach could, in turn, aid in the prioritization of future studies on delivery and dosing that together will help to assess the therapeutic potential of selected combinations, such as ETV6-NFKB.Finally, we demonstrate how the gbmSYGNAL network knits together layers of biological and clinical data into a cohesive platform for making deeper and more meaningful insights.For example, the gbmSYGNAL 
network links somatic driver mutations in either NF1 or PIK3CA to the upregulation of the TF IRF1 that activates the expression of 27 genes within the bicluster PITA_282, which is associated with increased tumor lymphocyte infiltration and a worse prognosis.This was particularly interesting because both NF1 and PIK3CA are known GBM driver mutations.Upregulation of IRF1 led to increased expression of the bicluster PITA_282 genes and subtracting out the activation by IRF1 removed the causal influence of somatic mutations from NF1 and PIK3CA.Incorporation of somatic homozygous deletion of NF1 into these analyses reinforces these findings.Furthermore, the IRF1 DNA recognition motif MA0050.1 was enriched within the promoters of 25 of the 27 genes, suggesting that IRF1 directly regulates these genes through binding to their promoter sequences.Based on the structure of the combinatorial regulatory network, IRF1 is a hub because it was included in 12 combinatorial models with as many distinct regulators, suggesting it may have additional functions when paired with other TFs.Knockout of IRF1 in the CRISPR-Cas9 screen led to increased proliferation in the 0827 glioma stem cell isolate.Rank ordering of the patient tumors based on the median expression of PITA_282 bicluster genes enriched for specific GBM subtypes in the tails of the distribution.We found that the proneural subtype was highly enriched in the bottom quintile and the mesenchymal subtype was highly enriched in the upper two quintiles.Additionally, the PITA_282 bicluster was significantly associated with patient survival, where patients with tumors in the upper quintile had shorter survival on average relative to patients whose tumors were in the bottom quintile.The PITA_282 bicluster was associated with four hallmarks of cancer: tumor-promoting inflammation, evading immune detection, self-sufficiency in growth signals, and insensitivity to antigrowth signals.More specifically, twelve of the 27 PITA_282 genes are involved in MHC class I antigen processing and presentation machinery.Thus, we find that increased MHC class I APM is associated with reduced survival of GBM patients.A similar trend was observed in medulloblastoma where increased MHC class I APM was associated with unfavorable prognostic marker expression.We then asked whether higher MHC class I APM expression in patient tumors had any impact on tumor lymphocyte infiltration as measured by pathological assessment.Tumors with tumor-infiltrating lymphocytes had significantly increased IRF1 expression, and 15 of the 27 genes in PITA_282 had significantly increased expression with increased numbers of tumor-infiltrating lymphocytes.The SYGNAL pipeline integrated multiple layers of biological and clinical data into the gbmSYNAL network, and this allowed us to explain how somatic mutations in NF1 and PIK3CA upregulate IRF1, which in turn activates the expression of downstream target genes that are associated with increased lymphocyte infiltration and worse patient survival.We developed the TF-target gene interaction database and the SYGNAL pipeline to construct TRNs that model the influence of somatic mutations on TFs and miRNAs and, consequently, their downstream target genes.The SYGNAL pipeline is powerful because it is rooted in an integrative model that stitches together multi-omic and clinical patient data and incorporates mechanistic regulatory interactions, which provide the means to maneuver the system back into a more healthy state.Using the rich multi-omic TCGA GBM dataset, 
we constructed the gbmSYGNAL network, and thereby we discovered 67 regulators that, to our knowledge, have not been linked to GBM-associated co-expression signatures.It is attractive to use small RNA molecules in cancer therapy, because they modulate the expression of a specific regulator and, thereby, predictably impact many downstream oncogenic genes and processes.Network understanding of a complex disease, such as GBM as has been generated in this work, provides a platform for the prioritization of TFs, miRNAs, drugs, and their combinations as an alternative to unconstrained high-throughput screens.Our results combined with findings from recent work demonstrate that it is feasible to predict synergistic compound pairs, and our discovery of a synergistic anti-proliferative effect from few tests provides proof of principle for potentially using this approach to discover tailored combinatorial therapies matched to the characteristics of a patient’s disease.This strategy should be broadly applicable, as the tools developed to construct the gbmSYGNAL network can be used to construct similar TRNs for any human disease directly from cross-sectional patient cohort data that include a compendium of transcriptome profiles.The discovery of inhibitor-miRNA combinations using the gbmSYGNAL network took advantage of a similar principle that synergy can emerge by combining an miRNA mimic with an inhibitor to target the same oncogene.Using this principle, we discovered a synergistic interaction between romidepsin and miR-486-3p, which can be attributed to the fact that they both target HDAC5 in the gbmSYGNAL network.Such synergistic combinations could address at least two issues in using romidespin for cancer therapy.First, the short half-life of romidepsin in patients poses a significant challenge to keep the dosage at a level that is needed to effectively treat tumors.Therefore, combinations with other therapies that increase the efficacy of romidepsin could lengthen the effective treatment window and potentially lead to better therapeutic outcomes.Second, the synergism generates similar efficacy at a lower inhibitor dosage, which could, in turn, help to increase the specificity of the combination treatment and lessen the toxic side effects present at higher doses.We also demonstrated how the gbmSYGNAL network could be used to glean new biological insights by providing meaningful linkages across GBM driver mutations, differential regulation of regulators and their downstream genes associated with two hallmarks of cancer, and a cancer phenotype and clinical outcome.It was known previously that mutations in NF1 significantly increased the number of tumor-infiltrating lymphocytes and that tumor-infiltrating lymphocytes were enriched in the mesenchymal subtype.However, the mechanism by which NF1 mutations affected lymphocyte infiltration into tumors was not known.Through the gbmSYGNAL network, we were able to provide a plausible mechanism for IRF1, a TF that is characterized by being an integral part of the immune response, to regulate antigen processing and presentation genes, which could feasibly modify the recruitment of lymphocytes and other immune cells to the tumor.TCGA has assembled clinical, transcriptomic, and genomic data for a large cohort of patient GBM tumors, with the hope that they will catalyze innovative treatments and cures.The challenge with these data has been that each patient tumor contributes a single snapshot that alone is insufficient to provide insight into causal or mechanistic 
underpinnings of the disease within that patient.We hypothesized that patients with similarly perturbed oncogenic processes would have conserved genomic and molecular patterns, based on which they could be sub-grouped to provide the statistical power required to map the underlying dysfunctional network and identify points of intervention to halt oncogenic processes.This was the impetus for developing the TF-target gene database and the SYGNAL pipelines.Through our studies applying these tools to GBM, we have demonstrated that multi-omic data from each patient can be stitched together to gain clinically/biologically meaningful insights and that the network structure and integration with orthogonal information can be used to discover intervention points that can lead to synergistic interactions.Our SYGNAL pipeline can become a data integration platform that explains the etiology of a disease and provides the knobs that can be turned to maneuver the system back to a more healthy state.Detailed methods are described in the Supplemental Experimental Procedures, and parameters for algorithms and programs can be found in Table S21.Regulatory sequences for each gene were acquired from the University of California, Santa Cruz human genome release hg19.Unique TF DNA recognition motifs were collected from a public DNA recognition motif repository, a private DNA recognition motif repository, protein-binding microarray DNA recognition motif repository, and a recent study where they used high-throughput SELEX sequencing to discover DNA recognition for most human TFs.DGFs aggregated across all tissue and cell lines were acquired from ENCODE.A gene was considered a target of a TF if it had at least one significant motif instance in its cis-regulatory regions that overlapped with a DGF by at least one base pair.The genomic locations bound by 71 TFs in 148 ChIP-seq experiments were downloaded from the UCSC genome browser.Overlap p values of each TF versus each ChIP-seq TF-bound gene set were used to compute the sensitivity and specificity for predicting the TF that was immunoprecipitated in ChIP-seq studies.The final database of TF-target gene interactions and the location of their motif instances can be downloaded from http://tfbsdb.systemsbiology.net.All TCGA data were acquired from the Broad Firehose.Validation cohort data were either downloaded from GEO: GSE7696 and GSE16011 or EMBL-EBI ArrayExpress: E-MTAB-3073.The SYGNAL pipeline is described briefly in the main text and in detail in the Supplemental Experimental Procedures.The gbmSYGNAL network can be explored and downloaded from http://glioma.systemsbiology.net.We tested for significant evidence of combinatorial regulation using bidirectional stepwise linear regression, and we computed the significance of the increase in variance explained using ANOVA F-test.Co-occurrence of TF and miRNA-binding sites was computed using a hypergeometric overlap p value.C.L.P and N.S.B. conceived the project, designed the SYGNAL pipeline, and interpreted results of application to GBM.C.L.P. and D.J.R. developed the SYGNAL pipeline.C.L.P., B.B., and S.R. collected GBM data for SYGNAL network construction.C.L.P constructed the gbmSYGNAL network and conducted subsequent comparisons.C.M.T., Y.D., and P.J.P. performed the CRISPR-Cas9 knockout screen.C.L.P., S.O., and Z.S. performed all transient transfection experiments, including microarray and miRNA-seq.C.L.P. and N.S.B. wrote the manuscript.All authors contributed to preparation of the manuscript. 
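The methods above state that binding-site co-occurrence, and the overlap of predicted TF target sets with ChIP-seq-bound gene sets, were scored with a hypergeometric overlap p value. A minimal sketch of such a test using SciPy follows; the function name and the example set sizes are hypothetical and are not taken from the study, although the 18,153-gene universe matches the size of the TF-target gene database described above.

```python
from scipy.stats import hypergeom

def overlap_pvalue(n_universe, targets_a, targets_b):
    """Hypergeometric p-value for the overlap of two gene sets drawn from a
    common universe, e.g. genes carrying binding sites for two regulators.
    Returns P(overlap >= observed) under random, independent draws."""
    a, b = set(targets_a), set(targets_b)
    k = len(a & b)  # observed overlap
    # sf(k - 1) gives P(X >= k) for a hypergeometric variable with population
    # size n_universe, len(a) success states, and len(b) draws.
    return hypergeom.sf(k - 1, n_universe, len(a), len(b))

# Hypothetical example: two regulators with 400 and 350 predicted targets
# sharing 60 genes out of an 18,153-gene universe.
# p = overlap_pvalue(18153, targets_tf_a, targets_tf_b)
```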
| We developed the transcription factor (TF)-target gene database and the Systems Genetics Network Analysis (SYGNAL) pipeline to decipher transcriptional regulatory networks from multi-omic and clinical patient data, and we applied these tools to 422 patients with glioblastoma multiforme (GBM). The resulting gbmSYGNAL network predicted 112 somatically mutated genes or pathways that act through 74 TFs and 37 microRNAs (miRNAs) (67 not previously associated with GBM) to dysregulate 237 distinct co-regulated gene modules associated with patient survival or oncogenic processes. The regulatory predictions were linked to cancer phenotypes using CRISPR-Cas9 and small RNA perturbation studies, which also demonstrated GBM specificity. Two pairwise combinations (ETV6-NFKB1 and romidepsin-miR-486-3p) predicted by the gbmSYGNAL network had synergistic anti-proliferative effects. Finally, the network revealed that mutations in NF1 and PIK3CA modulate IRF1-mediated regulation of MHC class I antigen processing and presentation genes to increase tumor lymphocyte infiltration and worsen prognosis. Importantly, SYGNAL is widely applicable for integrating genomic and transcriptomic measurements from other human cohorts. |
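The synergy calls summarized above (e.g., ETV6-NFKB1 and romidepsin-miR-486-3p) rest on the Bliss independence comparison described earlier, in which an observed combined effect is judged against the effect expected if the two treatments acted independently. A minimal sketch of that comparison follows; the function names and the tolerance argument, which stands in for the study's statistical testing on replicate measurements, are illustrative assumptions rather than the authors' implementation.

```python
def bliss_expected(effect_a, effect_b):
    """Expected combined fractional effect (0-1 scale) of two treatments
    acting independently under the Bliss independence model."""
    return effect_a + effect_b - effect_a * effect_b

def classify_combination(observed, effect_a, effect_b, tol=0.0):
    """Label a combination by comparing the observed combined effect with the
    Bliss expectation; `tol` is a placeholder for a proper statistical test."""
    expected = bliss_expected(effect_a, effect_b)
    if observed > expected + tol:
        return "synergistic"
    if observed < expected - tol:
        return "antagonistic"
    return "additive"

# Hypothetical example: single knockdowns that reduce proliferation by 20% and
# 30% are expected to give a 44% reduction if they act independently; an
# observed 60% reduction would therefore be scored as synergistic.
```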
597 | Calibration and field evaluation of the Chemcatcher® passive sampler for monitoring metaldehyde in surface water | Metaldehyde is a solid, synthetic, neutral, non-chiral tetramer of acetaldehyde and is used as a potent molluscicide.It is the active ingredient in most formulated slug pellets used commonly to eliminate infestations of slugs and snails on crops such as barley, oilseed rape and wheat .It has been used for this purpose since the early 1940s.The amount of metaldehyde used in pellets varies between 1.5, 3.0 or 4.0% by weight.In the United Kingdom, it is estimated that 80% of arable farmers use metaldehyde, with ~ 460 t applied to fields between 2012 and 2015 .Metaldehyde is predominantly used in the early autumn to winter months when molluscs thrive in the wetter conditions .Once applied to soil, metaldehyde degrades to acetaldehyde and CO2, with a half-life reported to vary between 3 and 223 days .Metaldehyde is polar and highly water soluble , with a low tendency to bind to soil .As a consequence, it readily runs off from land and enters surface waters particularly after rainfall events.Once in the aquatic environment, the degradation of metaldehyde is slowed significantly , hence, it is considered a semi-persistent pollutant.The impact of metaldehyde in the aquatic environment has been reviewed recently .Metaldehyde is detected regularly in surface waters in the UK with concentrations fluctuating seasonally."Frequently the concentration of metaldehyde exceeds the European Union's Drinking Water Directive limit of 0.1 µg L−1 for any pesticide which is legally binding) .Problems arise when such surface water bodies are used as capitation sources for potable drinking water supplies.Metaldehyde has also been detected in ground water, above the PCV .Due its physicochemical properties metaldehyde is difficult to remove from water using conventional drinking water treatment processes, such as granular or powdered activated carbon beds .Whilst advanced treatment processes have potential to remove metaldehyde, these are expensive to operate commercially .Therefore, alternative strategies or substituting metaldehyde for different molluscicides are needed in order to protect river catchments .Key to the successful delivery of these remedial environmental actions is the establishment of an effective surface water quality-monitoring programme for metaldehyde.Typically, monitoring programmes rely on the collection of infrequent spot samples of water followed by analysis in the laboratory.The effectiveness of this approach is limited, particularly where concentrations of pollutants fluctuate significantly over short periods of time, such as those associated with the sporadic application of pesticides.In order to gain a better temporal resolution, different approaches are required.Automated devices allow for the frequent collection of water samples and can provide a higher temporal resolution.This equipment, however, has a high capital cost, requires regular maintenance and can be subject to damage or theft in the field .The use of passive sampling devices can overcome many of these drawbacks, as they are relatively low-cost, non-mechanical, require no external power and are easily deployable in many field conditions.A wide range of passive sampling devices is available to monitor different classes of organic pollutants found in surface waters .These include semi-permeable membranes devices, polymer sheets or Chemcatcher® for non-polar pollutants and the polar organic chemical 
integrative sampler , o-DGT and the polar version of the Chemcatcher® for polar pollutants.Samplers comprise typically of an inert body housing a receiving phase selective for the compounds of interest, which is usually overlaid by a thin diffusion-limiting membrane.Devices can be deployed for extended periods where analytes are continually sequestered from the environment.Depending on the deployment regime, samplers can yield the equilibrium or the time-weighted average concentration of a pollutant .The former requires knowledge of sampler/water partition coefficient for the analyte of interest .In order to measure the TWA concentration, the compound specific sampler uptake rate) is required.Rs is determined typically in laboratory or in situ field calibration experiments.Mathematical models can also be used to predict uptake based on physicochemical properties .We describe the development and evaluation of a new variant of the Chemcatcher® passive sampler for monitoring metaldehyde in surface water.This comprised a hydrophilic-lipophilic-balanced Horizon Atlantic™ HLB-L disk as the receiving phase overlaid with a polyethersulphone membrane.The Rs of metaldehyde was measured in laboratory and field calibration experiments.The performance of the device for measuring the concentration of metaldehyde was evaluated over a two week period alongside the collection of spot water samples at a number of riverine sites in eastern England, UK.To our knowledge this is the first time a passive sampling device has been used to quantify the concentrations of metaldehyde in surface water.The device has the potential to be used in river catchment programmes to monitor the impact of this molluscicide and to provide improved, cost-effective information for the future development of environmental remediation strategies.Unless otherwise stated, chemicals and solvents were of analytical grade or better and were obtained from Sigma-Aldrich.Ultra-pure water was obtained from an in-house source and was used in all laboratory procedures.Metaldehyde and deuterated metaldehyde-d16 were purchased from Sigma-Aldrich and Qmx Laboratories Ltd. respectively.All glassware and apparatus were cleaned by soaking in 5% Decon 90 solution overnight, then washed with water and rinsed with methanol.Calibration standards and test solutions were prepared as described by .Three component PTFE Chemcatcher® bodies were obtained from A T Engineering.Components were cleaned initially by soaking overnight in a 2% Decon 90 solution and rinsed with water.This was followed by immersion in an ultrasonic bath, rinsed with water and dried at room temperature.Horizon Atlantic™ hydrophilic-lipophilic balanced extraction disks were used as the receiving phase.Disks were washed by soaking in methanol overnight.Disks were then placed in an extraction manifold and pre-conditioned using methanol followed by water and stored in water prior to use.PES sheet was obtained from Pall Europe Ltd. 
and was used as the diffusion-limiting membrane.PES membrane circles were punched by hand from the sheet and soaked in methanol overnight to remove traces of polyethylene glycol oligomers present as an artifact of the manufacturing process .Afterwards, membranes were rinsed in water and then stored submerged in water until use.Devices were prepared by placing a HLB-L disk followed by the PES membrane onto the Chemcatcher® supporting plate, ensuring that no air bubbles were trapped in the interstitial space.The two components were secured in place by a retaining ring, which was tightened sufficiently in order to make a watertight seal.Assembled samplers were kept submerged in water prior to use in order to prevent the HLB-L disks drying out.Performance reference compounds were not used.HLB-L disks were removed carefully from exposed samplers using solvent rinsed stainless steel tweezers with the PES membrane being discarded.The disks were placed onto solvent rinsed aluminium foil and allowed to dry at room temperature.The dried disks were placed in an extraction funnel manifold and metaldehyde eluted with methanol into a pre-washed glass vial.HPLC grade water was added and the solution evaporated to ~ 0.5 mL using a Genevac ‘Rocket’ centrifugal rotary evaporator.The extract was transferred to a silanised glass vial and the volume adjusted to ~ 1 mL by the addition of methanol.Metaldehyde was quantified in all water samples by liquid chromatography tandem mass spectrometry using an Agilent 1200RR LC system coupled to an Agilent 6460 tandem mass spectrometer.The instrument was interfaced with an on-line solid-phase extraction system fitted with a Waters Oasis® HLB cartridge.The full analytical procedure has been described by Schumacher et al. .Metaldehyde in extracts obtained from Chemcatcher® samplers was analysed using a similar procedure with the following modification.One hundred µL of extract,was added to a silanised glass auto-sampler vial containing water and 20 µL of internal standard solution and then analysed as for the water samples.Preliminary experiments to investigate the sorption and recovery of metaldehyde from the HLB-L disks were undertaken.A river water sample collected as below was spiked with metaldehyde to give environmentally relevant concentrations of 300 and 600 ng L−1 and extracted under gravity using a pre-conditioned HLB-L disk held in an extraction funnel manifold.The above procedure was repeated with a second sample of river water from the same source.Metaldehyde was eluted and analysed as described above.A 14-day laboratory calibration experiment was undertaken to determine the sampler uptake rate for metaldehyde.Three hundred and fifty L of water was collected into a ~ 400 L pre-cleaned polypropylene vessel from the River Lliedi, Felinfoel near Llanelli,.The river water was stored in a temperature controlled room and left to equilibrate prior to use.This value was selected, as it is typical of the temperature of rivers in the UK during late autumn to winter when metaldehyde is most prevalent in surface waters.The concentration of metaldehyde found in the river water was below the limit of quantification .Uptake rate was measured in a calibration rig similar to that described by Vrana et al. 
, but using a semi-static system rather than a flow-through design.A pre-cleaned glass tank containing a rotatable PTFE carousel for holding up to 14 Chemcatcher® samplers on two layers was filled with 16 L of river water and allowed to pre-condition.Afterwards, the tank was drained and 14 devices placed into the carousel.The tank was refilled with river water that had been spiked with metaldehyde, to give a nominal concentration of 1.7 µg L−1.This concentration was chosen in order to sequester sufficient metaldehyde on the disk to enable quantification at early time points during the calibration experiment.This concentration is often exceeded in river catchments impacted by the molluscicide .Using an overhead stirrer, the carousel was rotated at a speed of 20 rpm; giving a linear water velocity of ~ 0.2 m s−1 over the face of the sampler bodies.This rotation speed was considered representative of water velocity at the riverine sites used for the subsequent field trials.Spiked water in the tank was drained and replenished every 24 h so as to ensure a relatively constant concentration of metaldehyde throughout the experiment.The concentration of metaldehyde in solution was measured before and after each tank replenishment in order to monitor the stability of the analyte during the trial.The small well on top of the Chemcatcher® body ensured that the PES membrane remained wet during these emptying and refilling operations.One Chemcatcher® was removed from the carousel after exposures of 8, 24, 48, 72, 96, 120, 144, 168, 192, 216, 240, 264, 288, 336 h.A ‘dummy’ PTFE body was inserted into the position of each sampler removed from the carousel so as to maintain consistent hydrodynamic conditions in the tank.The temperature of the water was monitored throughout the duration of the study.A blank sampler exposed to the laboratory atmosphere was used to account for any background contamination during each operation.The mass of metaldehyde accumulated in the HLB-L disk from each exposure time was measured using the analytical procedure described in Sections 2.3 and 2.4.These data were used to calculate RS.PES membranes from the deployed Chemcatcher® samplers were also extracted and analysed using the same procedures.Two types of field tests were undertaken alongside spot water sampling at several riverine locations in the east of England, where oil seed rape is grown extensively.These sites are known to be impacted by inputs of metaldehyde sometimes exceeding the PCV for drinking water.Firstly, Rs for the Chemcatcher® was measured ‘in-field’ at a site where the concentration of metaldehyde was known to be relatively constant.Here three replicate samplers were deployed for 14 days at a feeder tributary to a reservoir in the Anglian region between 4th–18th September 2015.Secondly, the performance of the sampler was evaluated at three sites on the River Gwash between 4th September-12th November 2015.Samplers were deployed for five successive periods of 14 days at each of the three locations.Triplicate Chemcatcher® samplers were used for each field deployment.In order to protect the devices they were placed inside a bespoke stainless steel cage.A chain was used to secure the cage to a mooring point along the river.This equipment ensured that the samplers remained fully submerged during the deployment period.Upon retrieval, the well in the body of the Chemcatcher® was filled with river water and sealed with the transport lid.Samplers were transported to the laboratory in cool boxes and stored at ~ 4 °C 
until analysis.At each location, a field blank sampler was exposed during deployment and retrieval operations and was analysed as per the experimental samplers.Spot samples of river water were collected into pre-cleaned, screw-topped polyethylene terephthalate bottles at set periods during the sampler deployments and stored at ~ 4 °C until analysis.Extraction and analysis were performed as described in Sections 2.3 and 2.4.The TWA water concentration was estimated from the linear-uptake relationship Cw = (MS − M0)/(RS t), where MS = mass of analyte in the Chemcatcher® receiving phase disk after exposure time t, M0 = mass of analyte in the receiving phase disk of the Chemcatcher® field blank, and RS = sampler uptake rate of the analyte.For laboratory and ‘in-field’ calibration studies, Rs can be calculated from this relationship using the slope (mass accumulated per unit time) of the regression of the mass in the sampler upon time and the concentration in the water.Values for Rs can then be used in field trials to estimate Cw, which corresponds to the TWA concentration of the chemical over the deployment period (an illustrative numerical sketch of this calculation is given further below).The use of HLB-L disks as a receiving phase for the Chemcatcher® is new.This sorbent comprises a specific ratio of two monomers, hydrophilic N-vinylpyrrolidone and lipophilic divinylbenzene, and provides high capacity for the retention of a wide range of polar analytes.Its use with the Chemcatcher® for sequestering a wide range of pharmaceuticals and personal care products in waste water has been described previously.This sorbent has been used extensively in the POCIS for monitoring a wide range of polar pollutants.The POCIS uses a loose HLB sorbent powder held between two PES membranes.The material can move and sag towards the base of the device during deployments, altering the effective sampling area and hence uptake rates.This impacts on the robustness of the device.The use of a commercially available bound receiving phase sorbent can overcome this issue and gives better reproducibility.As metaldehyde is a highly polar substance it was important to investigate its retention behaviour and recovery from the HLB-L disk.Results from batch extraction tests using spiked river water showed that this sorbent material was effective at retaining metaldehyde and that the compound could subsequently be eluted readily using methanol.Average recoveries for the duplicate river water samples spiked at 300 ng L−1 were 95.5% and 98.2% and at 600 ng L−1 were 92.7% and 95.5%.These data indicated that this disk could be used as a receiving phase in the Chemcatcher® for the sequestration of metaldehyde.The water temperature and concentration of metaldehyde in the test tank were stable over the 14-day period of the trial.The mean concentration measured each time before the tank was drained was 1.72 µg L−1.The mean concentration measured each time after the tank was re-filled was 1.74 µg L−1.A two-sample t-test showed that there was no significant difference between these two concentrations.A simple linear regression of the mass of metaldehyde accumulated in the disk on time of exposure was highly significant and gave a good fit.The slope of the linear regression was 1.13 ng h−1, giving RS = 15.7 mL day−1.This would represent ~ 220 mL of water cleared by the sampler over a typical 14-day field deployment.Unlike with many non-polar pollutants, longer field deployments for such highly mobile and often sporadic polar contaminants are unwarranted when investigating inputs into river catchments.The mass of metaldehyde found in the laboratory blanks was below the LoQ of the instrumental method.The intercept was −6.55 h and was not significantly different from zero, indicating no
lag phase in the uptake of metaldehyde caused by sorption of analyte to the polymeric diffusion limiting membrane.The absence of a lag phase was substantiated as no metaldehyde was detected in the PES membranes from the deployed Chemcatcher® samplers.There is limited RS data using the HLB-L disk as a receiving phase for the Chemcatcher®.Using such a device, Petrie et al. determined the RS values for 59 polar organic micropollutants over a 9-day deployment in wastewater effluent.Sampler uptake rates ranged from 10 to 100 mL day-1.Ahrens et al. using an alternative receiving phase determined under laboratory conditions the Chemcatcher® uptake rates for 124 pesticides.RS values varied between < 1–150 mL day-1.Oasis® HLB sorbent has been used with the pharmaceutical variant of the POCIS to sequester a wide range of polar pollutants and their associated sampler uptake rates determined in the laboratory .This wide variation in measured sampler uptake rates is a function of the physicochemical properties of the analyte and the conditions used for the calibration experiment.Taking these factors into consideration the sampler uptake rate measured for metaldehyde in our laboratory study falls within the range of previously reported RS values for polar chemicals.The concentration of metaldehyde found in spot samples of water collected at the in-field calibration site on days 1, 10 and 14 was 35.2, 37.6 and 46.6 ng L−1, respectively.The mass of metaldehyde accumulated in the receiving phase of the Chemcatcher® sampler after the 14 day deployment was 9.7, 9.8 and 10.3 ng.Using an average aqueous concentration over the exposure period this corresponded to RS = 17.4, 17.6 and 18.6 mL day−1 for each device.Metaldehyde measured in the blank samplers was below the LoQ of the analytical method.The RS values obtained using the two different approaches to calibration were in good agreement.A small variation between the RS values can be expected.The water temperature in the laboratory tank was maintained at ~ 5 °C, whilst the water temperature at the riverine site during early autumn was ~ 13–14 °C.Higher temperatures increase the rate of diffusion and hence the uptake rate of an analyte and may account for the slightly higher RS value found for the in-field study.Additionally, the water velocity in the laboratory study was maintained at ~ 0.2 m s-1 and it is unlikely that a similar degree of turbulence appertained throughout the duration of the in-field calibration.However, the effect of water temperature and flow on the uptake of a wide range of polar analytes by the POCIS has been shown to be relatively small .One solution to overcome issues associated with the variation of RS with changing environmental conditions during field deployments is the use of PRCs.The effectiveness of this concept for use with polar passive samplers is not fully proven and alternative solutions such as the use of passive flow monitors and increasing membrane resistance have been suggested and warrant further study .The time period of the trial coincided with the agricultural use of metaldehyde within the catchment.The concentration of metaldehyde measured in the eleven spot samples of water taken during the three field trials is shown in Fig. 
1.The values found were variable, ranging from ~ 30-2900 ng L−1 and are representative of a river catchment in the UK impacted by high use of the molluscicide .Higher peak concentrations were evident as the trial progressed, corresponding to increased application of metaldehyde to land for crop protection.On 28th October 2015 the concentration of metaldehyde found in spot samples of raw water at the three River Gwash sampling sites was between ~ 10–30 times the permitted PCV for drinking water).Rainfall over this period is shown in Fig. 1.The rainfall fluctuated, with a number of dry periods.It is difficult to link directly concentrations of metaldehyde found in the rivers to rainfall events during the trial as there is a number of additional influential factors within the catchment that need to be taken into consideration .Deployment of the Chemcatcher® samplers was restricted to 14 days as inputs of metaldehyde into river catchments are known to be episodic .It was estimated that even for short periods of time sufficient sequestration of metaldehyde would be obtained for quantitative analysis.Additionally, restricting deployments to two weeks limited the degree of biofouling on the PES membrane of the sampler.TWA concentrations of metaldehyde were calculated using Eq. and the RS value measured in the laboratory calibration experiment.The data for the three different field deployments are shown in Fig. 1.At all sites, there was an increase in the TWA concentrations as the trial progressed.The amount of metaldehyde found in the field blank samplers was below the LoQ.It is difficult to compare directly the water quality data obtained using the two monitoring techniques, particularly where the concentration of pollutant is episodic .Firstly, there is no information on how the concentration of metaldehyde varied in the time interval between collections of spot water samples.Secondly, recent evidence from field trials has shown polar passive samplers are unable to completely integrate stochastic events with rapidly changing concentrations of pollutants .In this case the relatively low Rs values obtained for polar compounds may lead to an under-sampling of a pollution event.Due to its high polarity it is expected that metaldehyde will be freely dissolved in the water column, with no binding to particulate or dissolved organic matter present.During the first two weeks of the trial at all three locations there was good agreement between the data obtained by the two monitoring methods.At later periods when there was evidence of significant stochastic inputs of metaldehyde into the catchment, this was reflected in higher TWA concentrations found using the Chemcatcher®.Here where there was an exceedance of the PCV found in spot samples this was also shown in the TWA values.One approach to improve the comparability of the data obtained by the two techniques is to increase the frequency of spot water sampling or the use of other monitoring methods such as time-triggered automated samplers or on-line systems .These solutions, however, are expensive to employ within remote river catchments.There has been recent interest in the use of passive sampling devices to detect pesticide inputs into river catchments.Such devices can provide information on the spatio-temporal occurrence, frequency and fluxes of pollutants within a river catchment.This information can assist in the development of remediation and risk assessment strategies .Understanding diffuse and sporadic sources of pollutants within river 
catchments is important where downstream waters are abstracted for use in the production of potable supplies.This is important for chemicals that are recalcitrant to remove to concentrations below the PCV using conventional drinking water treatment processes .Such processes are expensive to operate and it is more cost effective to prevent the input of specific pollutants at source.Deployment of Chemcatcher® devices in a river catchment in eastern England impacted by agricultural use of metaldehyde showed that they provide complimentary information to the currently used infrequent spot sampling procedures.Data from this study shows that the Chemcatcher® can have a role in river catchment investigations in identifying sources and fluxes of this problematic pesticide, particularly at locations where surface waters are abstracted for subsequent use in the production of potable supplies.Devices can also provide information useful in the management of designated Drinking Water Protected Areas and on the effectiveness of long-term remediation strategies.Further work using the Chemcatcher® to address these applications is presently on-going at a number of drinking water supply companies in the UK. | Metaldehyde is a potent molluscicide. It is the active ingredient in most slug pellets used for crop protection. This polar compound is considered an emerging pollutant. Due to its environmental mobility, metaldehyde is frequently detected at impacted riverine sites, often at concentrations above the EU Drinking Water Directive limit of 0.1 µg L−1 for an individual pesticide. This presents a problem when such waters are abstracted for use in the production of potable water supplies, as this chemical is difficult to remove using conventional treatment processes. Understanding the sources, transport and fate of this pollutant in river catchments is therefore important. We developed a new variant of the Chemcatcher® passive sampler for monitoring metaldehyde comprising a Horizon Atlantic™ HLB-L disk as the receiving phase overlaid with a polyethersulphone membrane. The sampler uptake rate (Rs) was measured in semi-static laboratory (Rs = 15.7 mL day−1) and in-field (Rs = 17.8 mL day−1) calibration experiments. Uptake of metaldehyde was linear over a two-week period, with no measurable lag phase. Field trials (five consecutive 14 day periods) using the Chemcatcher® were undertaken in eastern England at three riverine sites (4th September-12th November 2015) known to be impacted by the seasonal agricultural use of metaldehyde. Spot samples of water were collected regularly during the deployments, with concentrations of metaldehyde varying widely (~ 0.03–2.90 µg L−1) and often exceeding the regulatory limit. Time weighted average concentrations obtained using the Chemcatcher® increased over the duration of the trial corresponding to increasing stochastic inputs of metaldehyde into the catchment. Monitoring data obtained from these devices gives complementary information to that obtained by the use of infrequent spot sampling procedures. This information can be used to develop risk assessments and catchment management plans and to assess the effectiveness of any mitigation and remediation strategies. |
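To make the TWA calculation described above concrete, a minimal Python sketch is given below. It is illustrative only: it applies the blank-corrected linear-uptake relationship with the laboratory-derived RS, and the input masses are hypothetical placeholders rather than measured values.

```python
# Illustrative sketch of the time-weighted average (TWA) concentration calculation
# for a Chemcatcher-type passive sampler, Cw = (Ms - M0) / (Rs * t).
# All numerical inputs below are hypothetical placeholders, not measured values.

def twa_concentration_ng_per_l(m_sampler_ng, m_blank_ng, rs_ml_per_day, days):
    """Blank-corrected TWA water concentration in ng/L."""
    volume_cleared_ml = rs_ml_per_day * days       # water volume effectively sampled
    net_mass_ng = m_sampler_ng - m_blank_ng        # analyte mass corrected for field blank
    return (net_mass_ng / volume_cleared_ml) * 1000.0  # ng/mL -> ng/L

# Example: ~10 ng accumulated over a 14-day deployment, Rs = 15.7 mL/day (laboratory value)
print(twa_concentration_ng_per_l(10.0, 0.0, 15.7, 14))  # ~45.5 ng/L
```

With roughly 10 ng accumulated over 14 days, comparable to the masses reported for the in-field calibration, the sketch returns a concentration of the same order as the spot-sample values measured at that site.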
598 | Thickness effect on mechanical behavior of auxetic sintered metal fiber sheets | As a typical open-cell cellular material, auxetic sintered metal fiber sheets are three-dimensional fiber network architecture manufactured by sintering stacked layers of sheets with stochastically distributed in-plane fibers.The base fibers can be flexibly selected to meet various needs, which are made of copper , titanium , carbon nanotube , and steel .In particular, a number of stainless steel fibers exhibit superior performance such as high temperature resistance, corrosion resistance, high surface to mass ratio and permeability.As the cell size can be accurately controlled with different fibers of diameter ranging from several to several hundred micrometers, they are widely applied in filtration and separation , gas infiltration , catalyst support , biomaterials , heat transfer , and sound absorption .In addition, it has been shown that auxetic fiber networks can have various promising mechanical properties over traditional open-cell cellular materials in terms of high specific stiffness and strength, high shear and indentation resistance , larger fracture toughness and enhanced energy absorption properties , among others.Such properties make auxetic materials attractive for a wide range of applications including fabrics and textiles engineering and defense industries .To facilitate applications, a number of experimental, micro-mechanical, and numerical simulations studies have been conducted on the stiffness, strength, fracture and constitutive properties of MFSs ."Carbon nanotube sheets and metallic fiber networks can exhibit negative Poisson's ratios. "Jayanty et al. measured the out-of-plane Poisson's ratio of sintered metallic fiber sheets along with their composites and found that the auxeticity in the out-of-plane Poisson's ratio is significant. "They also revealed by numerical simulations that the negative out-of-plane Poisson's ratio of MFSs depends on the extent to which they are compressed during the manufacturing process .Delannay and Zhao et al. predicted theoretically for stochastic metal fiber networks.Zhao et al. found that with small angle between the fibers and plane, mechanical properties of the fiber network materials tend to stabilize and significantly improve.A number of work has been done on size effects of random fiber networks, including the influence of different fiber diameters, different length-to-diameter ratios, and different relative densities on the mechanical performance of the material .Ma et al. established three-dimensional stochastic fiber networks with cross-linkers to investigate elastic-plastic behavior.In particular, Neelakantan et al. manufactured fiber networks by sintering 316L fibers produced by a bundle-drawing process with solid-state technology .The mechanical parameters of MFSs of different thickness were measured with in-plane tensile and modal vibration testing, and revealed that the auxetic effect of MFSs is due to the straightening of fibers, whose kinks are induced during manufacture process by the applied pressure."Furthermore, it is found that the Poisson's ratio decreases from −14.2 to −7.4 with increasing thickness from 1 mm to 5 mm, suggesting significant thickness effect. 
"Reentrant structure is the fundamental model for negative Poisson's ratio cellular materials including not only MFSs, but also 2D honeycombs and 3D negative Poisson's ratio open-cell foams .The mechanisms of the auxeticity are similar in spite of different materials: the initial reentrant struts and cell walls are straightened when subjected to tension, which is widely accepted."However, based on the previous research finding on the honeycombs and foams with negative Poisson's ratio, there is no evidence that the straightening of reentrant micro-structures can result in such considerable auxetic effect and thickness effect.There may co-exist another undiscovered deformation mode.Both the underlying mechanism and the influence of great auxetic effect on in-plane tensile mechanical behaviors are not fully understood to date and require further investigation.The present study investigates the deformation and failure mode of the fibers of MFSs subjected to in-plane tension, as well as their influence on the mechanical performance."A theoretical model with reentrant fiber deformation mechanism is established to predict the Poisson's ratio in the initial loading stage.Subsequently, the thickness effect of sample on in-plane tensile capacity of fiber network materials is examined with digital image correlation.In addition, with the aid of synchrotron radiation X-ray, the changes in nodes and angles between fibers and layer plane are measured at different loading stages.A different umbrella-like fiber deformation mode is observed, based on which the effect of the newly proposed mode on mechanical performance of samples of different thickness is analyzed.Furthermore, the uniformity of the fiber network from the manufacture process is assessed, providing solid support for design and optimization of fiber networks in industrial applications.The metal fiber sheets considered in the present study are sintered from commercially available 316L stainless steel fibers produced by bundle-drawing process , in which thousands of metal fibers in a bunch are repeatedly stretched.The as-received long fibers with diameter 12 μm are cut short to length 10–20 mm in laboratory, which is the optimized choice balancing performance of single fiber and the fiber distribution uniformity during the air-laid web forming process .Single fiber layers with thickness of about 0.1 mm were produced using this technology, so that the short metal fibers within each layer were randomly distributed.Finally, a number of fiber layers were stacked one upon another and sintered in a vacuum furnace at 1250 °C for 4 h, we adopted fixed volume sintering method, in which a constant compressive pressure was applied in the layer thickness direction to ensure bonding quality of the fiber joints.More details of the materials processing method are available in Zhao et al. .A typical MFS is illustrated in Fig. 1, showing clearly a layer-by-layer feature.The coordinate system is defined in such a manner that the axes x and y denote the in-plane directions while the axis z refers to the out-of-plane direction.An X-ray tomographic reconstruction is also incorporated in Fig. 
1, demonstrating that the fibers are randomly distributed within each layer and the inclination angles of the fibers measured from the z direction are very small .Consequently, the produced MFSs are transversely isotropic.To explore the thickness effect of auxetic behaviors of MFSs, seven groups of MFS samples with relative density 0.15 and different thickness were fabricated and cut with electro-discharge machining method, in which at least three samples were tested for each thickness.To explore the thickness effect on the auxetic behaviors of MFSs subjected to uniaxial tension, dog-bone samples with thickness 2–20 mm were prepared, as shown in Fig. 1.The gauge size of the samples is 20 mm × 12.5 mm × t, t being the sample thickness.As the MFSs are transversely isotropic, both in-plane and out-of-plane uniaxial tension test were carried out.Fig. 2 illustrates setups for in-plane and out-of-plane tension test and samples were bonded to the fixture with epoxy resin.For a specific combination of certain relative density, loading direction and sample thickness, six samples were tested and averaged results are reported.The tests were performed using a servo-hydraulic material testing machine, in accordance with the ASTM standard D3552-96R02.Displacement was controlled at a nominal rate of 0.6 mm/min to realize quasi-static loading.It is worth noting that there are inevitable defects, including fracture of fibers, debonding of fibers at certain nodes for metal fiber sheets.The nonlinearity in the initial loading stage is caused by the isolated yielding of such defects before global yielding of the entire sample, resulting in seemingly lower stiffness than its true value.This mechanism for the initial nonlinearity was firstly discussed by Sugimura et al. and McCullough et al. when investigating the mechanical properties of metal foam, which is also a typical cellular solid."They suggested a method to determine the true stiffness: perform some loading and unloading, the unloading modulus at 0.2% strain is taken as the Young's modulus, which is remarkably greater than the slope of the nonlinear initial loading curve. 
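As a rough illustration of this unloading-modulus procedure, the following sketch fits a straight line to an unloading branch of a stress-strain record; the strain-stress pairs are invented placeholders, not measurements from this study.

```python
import numpy as np

# Hypothetical stress-strain samples from an unloading branch recorded near 0.2% strain.
# Strain is dimensionless, stress in MPa; the values are placeholders for illustration.
strain = np.array([0.0020, 0.0018, 0.0016, 0.0014, 0.0012])
stress = np.array([13.3, 12.0, 10.6, 9.3, 7.9])

# The unloading (elastic) modulus is the slope of stress vs. strain over this segment.
slope_mpa, intercept_mpa = np.polyfit(strain, stress, 1)
print(f"Unloading modulus ~ {slope_mpa / 1000:.2f} GPa")  # prints ~6.75 GPa for these placeholder data
```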
"In the present study, the Poisson's ratios were measured at longitudinal strain 0.2%.The digital image correlation method with a 3D non-contact optical measurement system was employed to measure the strain field of the sample surfaces during loading.To facilitate the measurement, the surfaces were painted in black and white to form random but unique speckles.After test, post-processing the recorded images of the deformed samples, measurement zone within the gauge area of the tested sample is determined.The averaged various strains can be calculated by using Aramis system, in which the longitudinal and transverse strains are of interest in the present study thus plotted.One advantage of DIC is that both global and local strain field on the sample surfaces can be obtained simultaneously.With the 3D DIC, the measured strains and the applied loading can be associated to simultaneously investigate the deformations of two adjacent sides of MFSs .However, when subjected to large deformation, the strain measurement is affected by factors such as local detachment of the painted speckle, local deformation exceeding calculation threshold due to surface cracks.In such situation, the average strain in the gauge area in the advanced stage cannot be accurately measured.A majority of the MFS application lies in the filtration core for filtering and separation of gas, as well as liquid filtration.They deform under the long duration load induced by fluid, and are considered failure and replaced if hole size decreases and affects the accuracy of filtering.To this end, our research emphasis is placed on the MFS response subjected to initial loading till yielding, such as accurate measurement of mechanical parameters in elastic range.The stress decreases rapidly after tensile limit is exceeded.When the MFS ultimately fractures is not our concern.To examine the fiber deformation modes during different loading stages when the MFS was subjected to in-plane tension, some samples were cut from the same MFS of thickness 10 mm, and more than three samples were tested for each of the four stages, as shown in Fig. 1.Sample 1 was undeformed, while Sample 2 and 3 were loaded to 0.2% and 10% strain, respectively.Sample 4 is the one loaded to ultimate failure.After test, a part of dimension Lx × Ly × Lz = 3 mm × 3 mm × 4 mm was cut with EDM from the samples in the same gauge area and examined with CT scanning.For extraction of architectural parameters, a sub-volume of Lx × Ly × Lz = 2 mm × 2 mm × 3 mm was analyzed to avoid edge effects from EDM.As stainless steel fibers are strong in X-ray absorption, it is difficult to obtain reconstruction images of samples with industrial CT.To this end, synchrotron radiation X-ray at Shanghai Synchrotron Radiation Facility was employed to establish the projection image, from which the micro-structure of the fibers was reconstructed with synchrotron X-ray computed tomography technique.The method of data analysis and reconstruction was detailed by Zhao et al. .The synchrotron radiation CT experiment was conducted at the BL13W1 beamline in SSRF .The samples were placed 10 cm away from the CCD camera, which is an array of 2048 × 1024 pixel and 1.625 μm for each pixel.720 radiographs were taken at regular intervals over rotation of 180°, each with an exposure time of 4 s and beam energy 40 KeV.The schematic of the SR-CT experiment is shown in Fig. 
3.The in-plane and out-of-plane performance subjected to different loading conditions such as moduli and strength were analyzed in detail in Zhao et al.Measured typical stress-strain relations of an MFS with relative density 0.15 and sample thickness 10 mm are shown in Fig. 4 and for uniaxial in-plane and out-of-plane tensions, respectively, where the insets show the enlarged views of the small strain regions.One can see that the stress-strain curves suggest an initial elastic response followed by a long range of strain hardening for both in-plane and out-of-plane tension."However, the in-plane response is significantly stiffer than the out-of-plane one, as is evident from the corresponding unloading Young's moduli Ey = 6.63 GPa and Ez = 0.17 GPa.Here, subscripts denote the loading direction."The trajectories of transverse and longitudinal strain of an MFS of relative density 0.15 and thickness 10 mm are shown in Fig. 5 and, corresponding to the in-plane and out-of-plane tension, respectively, from which the Poisson's ratios can be calculated, i.e., vyx = − εx/εy = 0.21 and vyz = − εz/εy = − 0.18 in-plane and vzy ≈ vzx = − εy/εz = − 0.005 out-of-plane.Fig. 6 shows a 3D tomographic reconstruction of the fiber network micro-structures of the MFS.Details of the tomographic calculation and visualization are available in Zhao et al. .The reentrant feature of the fiber networks is illustrated in Fig. 6 and, which accounts for auxetic behavior of various cellular materials including foams and carbon nanotube sheets , among others.The reentrant micro-structures in MFSs are believed to be responsible for the observed auxetic behavior in the initial deformation stages of the in-plane and out-of-plane tension, whose mechanism is schematically illustrated in Fig. 7."Initially, the reentrant feature results in negative Poisson's ratio upon tension. "However, with further loading, the auxeticity diminishes, giving rise to zero and even positive Poisson's ratio. "To quantify the auxeticity, an analytical model based on beam theory is established to determine the out-of-plane Poisson's ratio in the initial loading stage with small strain.While the reentrant deformation mechanism of MFSs is widely accepted, the accurate measurement of the geometrical parameters in the model such as average segment angle θ0, fiber diameter D and average edge length L, remains a complicated and difficult problem.In the present study, 3D geometrical model of the MFSs micro-structure is established and each statistical parameter of the model is obtained with the aid of tomography reconstruction.However, these parameters are easily affected by the manufacture process.For instance, insufficiently scattered fibers compress the adjacent fibers, resulting in large inclination angle.Moreover, unevenly distributed fibers, instead of single fibers arranged layer by layer, will make it difficult to determine the statistical average edge length.In the current study, the use of short fiber of small diameter, along with air laid method for spreading the fibers, greatly reduces the scatter of the obtained geometrical parameters.The average angle of the MFSs of relative density 0.15 between the fiber segments and the xy plane was measured as 0.51°, based on the analysis of the reconstructed micro-structure."Then the Poisson's ratio is calculated as vzy ≈ − 0.006, which favorably agrees with the measured vzy = − 0.005, shown in Fig. 5. 
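The step from measured strain pairs to Poisson's ratios can be summarised in a short sketch. The strain values below are illustrative and merely chosen to reproduce the ratios reported above; this is not the authors' analysis code.

```python
# Poisson's ratios from paired DIC strain readings, nu_ij = -eps_j / eps_i,
# evaluated at a longitudinal strain of 0.2% as described in the text.
# The strain values are illustrative, chosen to reproduce the reported ratios.

def poisson_ratio(eps_loading_direction, eps_transverse):
    return -eps_transverse / eps_loading_direction

eps_y = 0.0020     # applied in-plane longitudinal strain (0.2%)
eps_x = -0.00042   # in-plane transverse strain   -> nu_yx ~ 0.21 (contraction)
eps_z = 0.00036    # out-of-plane strain          -> nu_yz ~ -0.18 (auxetic expansion)

print(round(poisson_ratio(eps_y, eps_x), 2))   # 0.21
print(round(poisson_ratio(eps_y, eps_z), 2))   # -0.18
```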
"The negative transverse Poisson's ratios vyz and vzy indicate the MFS is auxetic.A consistent check of the experimental results can be made with the symmetry condition of elastic stiffness matrix requiring vyz/Ey = vzy/Ez ."With the measured values of Young's moduli and Poisson's ratios, one can have vyz/Ey = − 0.27 and vzy/Ez = − 0.29.Therefore, the consistent condition is approximately satisfied, which provides added confidence in the obtained experimental results.In addition, Fig. 7 shows that, subjected to out-of-plane tension, the sample first expands in the in-plane direction, giving rise to an auxetic behavior, and then contracts upon further loading.It is clearly seen that the behavior in the out-of-plane case is in sharp contrast to the in-plane case in which the auxetic behavior under in-plane tension is retained for the initial loading regime.To explore the thickness effect on the auxetic behavior of MFSs, a series of dog-bone samples with different thickness were manufactured for this study, where the relative density was fixed 0.15.Fig. 8 shows the contours of the transverse strain field εz of the samples of different thickness, with longitudinal in-plane tensile strain εy = 0.2%.It is noted that the transverse deformation of samples of thickness from 2 mm to 8 mm is highly localized.However, the transverse strain for sample thickness 10 mm, 15 mm and 20 mm becomes uniform.This finding is consistent with the results in Fig. 9.The measured strain trajectories of samples with different thickness subjected to uniaxial in-plane tension are plotted in Fig. 9."It can be seen that the Poisson's ratio varies with the sample thickness, vyz ranging from −0.17 to −4.15 with t increasing from 2 mm to 20 mm.Moreover, Fig. 9 shows that the slope of the εz − εy curves for relatively thin samples increases with increasing tensile strain.According to the in-situ tension test by Neelakantan et al. , the main cause of auxetic effect in fiber sheets lies in that the bent fibers, when straightened due to tension, exert pushing force to neighboring fibers.As the tensile loading further increases, the originally bent fibers are progressively straightened and the auxetic effect is expected to decline.However, Fig. 9 shows an opposite trend."Furthermore, if the mechanism of straightening of reentrant fibers dominates the fiber deformation, the Poisson's ratio should be very small, based on both the observation and theoretical prediction in the present study.Thus, there may exist another fiber deformation mode working together with the fiber straightening effect to cause large auxeticity.To reveal the underlying mechanism of the unexplained observation, a new test with CT reconstruction technique was designed and carried out.Four bone-shape samples with the same dimension, cut from the same piece of MFS with relative density 0.15, were stretched in-plane to nominal strain 0%, 0.2%, 10% and ultimate fracture, respectively, as presented in Fig. 10.To observe deformation patterns and extract geometric parameters, a piece of MFS Lx × Ly × Lz = 2 mm × 2 mm × 3 mm in the gauge area in each sample was scanned with SR-CT, whose micro-structure was then reconstructed.The reconstruction of MFS pieces in Fig. 
10 and show that the micro-structure difference between the unloading and 0.2% strain pieces is trivial.A majority of the fibers are uniformly distributed and the angle between the fiber axial and xy plane are small.Subjected to 10% tensile strain, many fibers in the cut Sample 3 are bent, instead of being straightened in the loading direction.Furthermore, it is obvious from Fig. 10 that a considerable amount of fibers near the fracture part of Sample 4 are bent, according to the CT reconstruction.From Fig. 10 the statistics of the fiber joint numbers and the angle between the fiber and xy plane, one can observe that with increasing tensile loading, the number of fiber joints per unit volume gradually decreases.In particular, the node number declines to nearly half its original value at the sample fracture.The average angle between the fibers and xy plane slightly decreases at the initial loading stage then significantly increases.The observations are remarkably different from the previous finding which accounts for the auxeticity as a result of the straightening of bent fibers .Therefore it is believed that in addition to the effect of straightening of bent fibers, there must co-exist another deformation mode."If not, the absolute value of the negative Poisson's value should remarkably decreases, which is not the case.In addition, the standalone mechanism of straightening of reentrant fibers cannot reasonably account for the observation that subjected to large strain 10% or beyond, the average angle between the fibers and z direction drastically increases, shown in Fig. 10 and.Further examination of Fig. 10 reveals a new mechanism of fiber deformation: with increasing tensile loading, an increasing number of fiber nodes fail, resulting in weakened constraint between adjacent fiber layers and local shear occurs.Subsequently, the shear leads to local buckling of some fibers and put adjacent fibers outward, like unfolding an umbrella.As shown in Fig. 11, it is inevitable that some parts of fibers are thinner than normal during manufacturing process.Fig. 11 are the SEM images of an MFS sample fracture.The blue framed enlarged view was taken from the top of the fracture, in which the tensile failure is in a layer by layer manner and the fiber orientation is in line with the loading direction.The red framed enlarged SEM side view of the fracture was taken from in-plane direction.The fibers around the fracture were stretched and bent.The orientation is totally different from the in-plane tension, which implies that they are caused by the shear of the adjacent fiber layers.The deformation mechanism of fiber response is schematically illustrated in Fig. 11, the deformation model consists of straight and wavy fibers, the black line denotes the middle layer with defects, and brown lines denote neighboring fiber layers.Blue fibers between black line and brown lines are linked to the inner and outer layers through joints.Take the neighboring three fiber layers to analyze.During the initial tensile stage, wavy fibers are straightened along the loading direction and their adjacent fibers are pushed away), resulting in the auxetic effect.The straightening of reentrant fibers is the main mechanism accounting for the auxeticity.Illustrated in Fig. 
10, the defective black fiber layer gradually fractures with increasing tensile loading, together with failure of the joints between fibers.Subsequently, the non-uniform distribution of stress among layers results in inter-layer slippage between the inner and outer layers.This leads to local shear deformation of the blue fibers, inducing remarkable local lateral deformation such as large-angle bending of fibers.As the deformed shape, illustrated by the red dashed line on the blue fibers, is similar to an umbrella, we term this mechanism the “umbrella effect”.This microscopic mechanism results in macroscopic lateral expansion of the sample, which dominates the increasing out-of-plane deformation at relatively large tensile strain.As demonstrated earlier in Fig. 9, observation of the ratio of longitudinal strain to lateral strain of the MFS samples of 2 mm, 4 mm, 6 mm, and 8 mm indicates that the gradient decreases with increasing thickness, which implies that the critical load threshold required for the umbrella effect increases with increasing sample thickness.Furthermore, the curves for 10 mm, 15 mm, 20 mm suggest that the umbrella effect is suppressed during the initial elastic loading stage, in which the auxeticity is attributable solely to the stretching of reentrant fibers.The fact that the elastic Poisson's ratio of the samples of thickness 10 mm, 15 mm and 20 mm is the same (−0.17) further confirms this mechanism.In fact, once the loading exceeds the umbrella-effect threshold during the plastic deformation stage, the umbrella effect occurs; it dominates the auxeticity in plastic deformation and diminishes with increasing sample thickness.In this section, the effect of sample thickness on the out-of-plane Poisson's ratio and in-plane Young's modulus is further investigated.Fig. 12 compares the in-plane stress-strain relation of MFS samples of thickness 2–20 mm.It is found that the Young's modulus, strength and ductility of MFS samples significantly increase with increasing thickness from 2 mm to 8 mm.However, the stress-strain curves of samples with thickness greater than 10 mm almost coincide with each other.The experimental results of the Poisson's ratio and Young's moduli measured for MFS samples of different thickness are summarized in Fig. 13, along with data obtained by Neelakantan et al.It can be seen from Fig. 13 that the thickness effect on the auxetic effect is pronounced for t less than 8 mm.The results by Neelakantan et al. were obtained for a stainless steel sheet with fiber diameter 40 μm and relative density 0.15, using dog-bone samples with thicknesses of 1 mm, 2 mm, and 5 mm.It can be seen from Fig. 13 that the magnitudes of the negative Poisson's ratios reported by Neelakantan et al. are remarkably greater than those measured in the present study.Fig. 13 shows that the Young's moduli of MFS samples significantly increase with increasing sample thickness from 2 mm to 8 mm.The Young's moduli and out-of-plane Poisson's ratio appear to be insensitive to thickness beyond a certain value.The umbrella effect is increasingly restrained with greater sample thickness, and its influence converges beyond a certain thickness.This observation can be accounted for with an X-ray tomography approach.The left part of the enlarged view of Fig.
10 suggests that the fiber nodes near the sample surface are more likely to fail upon loading, due to less constraint compared to those in the sample interior.Therefore umbrella effect with locally buckled fibers more likely occurs, accounting for a considerable portion of the lateral expansion subjected to tension.The right part of the enlarged view reveals the umbrella effect in the sample interior.Due to the strong constraint from the adjacent layers, the densely packed fibers effectively suppress the umbrella effect, leaving a relatively small amount of fibers to buckle and decreasing contribution to lateral expansion.As shown in Fig. 10, on no account can we ignore the suppression of brown layers on the umbrella effect of the blue layers, whose effect increases with increasing numbers of fiber layers and increases the threshold of umbrella effect during the initial loading stage.However, with the advancement of loading, an increasing number of fiber layers fracture and more joints between fibers detach, greatly diminishes the suppression effect.Eventually, the umbrella effect will dominate the auxeticity."As shown in Fig. 13, the measured out-of-plane Poisson's ratio and in-plane Young's modulus of 2 mm MFS samples are νyz = − 9.8 and Ey = 1.78 GPa by Neelakantan et al. , but νyz = − 4.07 and Ey = 3.78 GPa in the current study.To address this discrepancy, four types of fibers are categorized based on the inclination angle between the fibers and xy plane, with the aid of CT reconstruction.As shown in Fig. 14 and, large inclination angle is observed at two ends of the fibers for the first type bowline and the second type slant line, while nearly negligible small inclination angle is detected at two ends of fibers for the third type serpentine line and the fourth type straight line.Careful examination into the bowline fibers reveals that the significant variation of inclination angle is mainly caused by the bundle effect of neighbor fibers marked green in the same direction ①).If the fibers are not separate individually during the fabrication of MFSs, they are likely to yield to the direction of its neighbors while randomly orientated fibers are not.Therefore, for bowline and slant line type of fibers with large inclination angle at two ends, pronounced auxeticity is observed as these fibers tend to push their adjacent fibers aside resulting in in-plane expansion, when subjected to in-plane tension.As a result, material strength and stiffness are weakened due to the dominant flexural deformation.In contrast, as shown in Fig. 14 ③ and ④, with fibers randomly, statistically uniformly separated, orientated, connected and piled in layers, the flexural deformation of fibers would become non-significant, even negligible, resulting in less auxetic characteristics but remarkably higher strength and stiffness."The unique mechanism of the negative Poisson's ratio of MFSs significantly differs from that of other materials and structures with negative Poisson's ratio such as honeycombs and metal foams: due to the large length-to-diameter ratio, each bow line or slant line fiber in the former connects to hundreds of other “elements” and may push a considerable amount of fibers outward when stretched, while each fiber in the latter only connects to one “element” and can only moderately push the adjacent fibers outward when subjected to tension. 
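A possible way to automate such a classification from CT-extracted fiber centerlines is sketched below. This is a hypothetical post-processing step, assuming that centerline point coordinates are available; it is not the workflow used in the study, and the example coordinates are invented.

```python
import numpy as np

def end_inclination_deg(points):
    """Inclination of the two end segments of a fiber centerline relative to the
    xy (layer) plane, in degrees. points: (n, 3) array of x, y, z coordinates."""
    p = np.asarray(points, dtype=float)
    angles = []
    for seg in (p[1] - p[0], p[-1] - p[-2]):          # first and last centerline segments
        in_plane = np.hypot(seg[0], seg[1])           # projected length in the xy plane
        angles.append(np.degrees(np.arctan2(abs(seg[2]), in_plane)))
    return angles

# Invented example: a fiber whose two ends tilt out of the layer plane ("bow line" type)
fiber = [(0.0, 0.0, 0.02), (0.1, 0.0, 0.0), (0.9, 0.0, 0.0), (1.0, 0.0, 0.02)]
print(end_inclination_deg(fiber))   # both ends ~11 degrees -> flagged as large end inclination
```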
"To sum up, the unusually large negative Poisson's ratio is a result of non-uniform distribution of fibers and the umbrella effect, at the price of reduced in-plane mechanical performance.To enhance the in-plane performance, the fibers should be distributed as uniformly as possible and the sample thickness should be sufficient."Therefore MFS of various thickness can be designed for either high performance or large negative Poisson's ratio, or a balanced approach for a wide spectrum of practical needs.A systematic test was carried out for MFS samples with different thickness.With the aid of CT reconstruction technique, the following conclusions can be reached:A new mechanism of auxeticity is revealed: the defected fiber layer gradually fractures with increasing tensile loading, subsequently, the uninform distribution of stress among layers results in inter-layer slippage.This leads to local shear deformation of fibers, which linked to the inner and outer layers through joints, and putting adjacent fibers outward, like unfolding an umbrella.This mechanism remarkably increases the lateral expansion thus greater auxeticity.However, this effect further loosens the structures of fibers within MFSs, significantly diminishes various in-plane performance."The umbrella effect accounts for the non-uniform through-thickness distribution of Poisson's ratio.The thinner the sample subjected to in-plane tension, the easier the fiber deformation in umbrella mode.On the contrary, the umbrella effect is suppressed in thicker MFS samples: which results in steady mechanical performance.The thicker the sample, the higher the threshold required for the umbrella effect.The auxeticity of MFSs attributes to the combined effect of straightening of reentrant fibers and umbrella effect.For samples with uniformly distributed fibers and thickness greater than a critical threshold, the straightening of reentrant fibers plays a major role in the initial elastic loading stage, while the umbrella effect gradually dominates the further plastic loading stage.However, for thin samples, both effects are equally occurred in the initial stage.Compression during sintering of multiple uniformly distributed fiber layers results in slight bending of fibers, forming typical reentrant fiber feature within MFSs accounting for the auxeticity."According to theoretical prediction, the out-of-plane negative Poisson's ratio relates to the fiber inclination angle, and is a relatively small value.However, the gathering of a bundle of fibers in parallel, significantly bend some adjacent fibers and decrease the node numbers of the fiber network.This non-uniform fiber structure increases the auxeticity, but diminishes the in-plane performance such as rigidity and strength. | Sintered metal fiber sheets (MFSs) made by sequential-overlap method are transversely isotropic open-cell cellular materials with paper-like fiber network architectures, which exhibit auxeticity and are promising for various potential applications due to the reentrant micro-structure. The thickness effect on the out-of-plane auxeticity (negative Poisson's ratio) of MFSs samples of 2–20 mm thick subjected to in-plane tensile loading is investigated with digital image correlation technique. Furthermore, the deformation modes of fibers within MFSs during various loading stages are examined with X-ray tomography. 
It is found that in addition to the straightening of reentrant fibers, fiber layers with defects and joints failure induced slippage between adjacent layers leads to local shear and results in unique umbrella-like local deformation termed umbrella effect, which gradually dominates the auxeticity during tensile loading. Although remarkably increasing lateral deformation, the umbrella effect significantly diminishes the in-plane mechanical performance such as rigidity and strength. In particular, this effect is suppressed by sample thickness: the overall performance tends to stabilize with sample thickness greater than a certain value, provided that the MFS is uniform with all fibers randomly distributed. The finding facilitates wider application of auxetic MFSs with further understanding on the relationship between the thickness effect and performance. |
599 | Cost-based analysis of autonomous mobility services | Autonomous vehicles are expected to revolutionize mobility by turning cars into mobility robots and allowing more dynamic and intelligent forms of public transportation.A multitude of transport services are conceivable with AVs, yet it is largely unclear which ones will prevail.Besides travel time, reliability and comfort, price is the key attribute of a transport service.Therefore, predicting level of acceptance and resulting competitiveness of future AV operational models requires knowledge about their cost structures.The validity of scenarios, simulations and conclusions of such studies relies heavily on accuracy of assumptions about the absolute and relative competitiveness of new transport services compared to current offerings.Better estimates of absolute competitiveness thus allow better estimates of mode choice, induced demand and spatial distribution of travel demand - in short: future travel behavior.First cost estimates of future transport services with AVs were proposed by Burns et al.For three different cases, they calculated the cost, per trip, of a centrally organized system of shared AVs, which would replace existing transport services.Their estimates are based on different cost categories, which capture fixed and variable costs.They concluded that such systems could provide “better mobility experiences at radically lower cost”.In the case of a shared AV system for a small to medium town, they found the cost of driverless, purpose-built vehicles to be 0.15 US$ per trip-mile."In a second approach, Fagnant and Kockelman considered the external costs of today's private transport system to calculate AVs' potential benefits, which they found to be substantial.In a following paper, Fagnant and Kockelman focused on possible prices for users of a centrally organized, shared AV system.By assuming an investment cost of 70000 US$ and operating costs of 0.50 US$ per mile for AVs only, they found that a fare of 1.00 US$ per trip-mile for an AV taxi could still produce a profit for the operator."This is a higher price level than in Burns et al., but still very competitive compared to today's transport options.Litman introduced additional factors into the discussion, like cleaning costs of shared vehicles.He estimates costs based on different categories.For some values, however, the paper remains unclear about sources, or uses ballpark estimates.For example, it assumes shared autonomous vehicles cost more than car-sharing, but less than driver-operated taxis.Building on the work above, Johnson estimated the price of shared AVs to be 0.44 US$ per trip-mile.For purpose-built shared AVs used as pooled taxis, they estimate the price per trip-mile as only 0.16 US$.They use detailed cost categories to estimate the total cost, but do not fully specify the sources of the numbers.It is, therefore, difficult to reproduce and understand their estimates."In contrast to earlier studies, however, they compare and validate their calculations against today's private cars.Less rigorous and detailed, but more transparent estimates are provided by Stephens et al. and Friedrich and Hartl.Stephens et al. 
find the lower-bound cost of fully autonomous vehicles used with ride-sharing to be less than 0.20 US$ per passenger-mile and the upper bound to be 0.30 US$ per passenger-mile.This encompasses a range similar to Friedrich and Hartl, who assume 0.15 € per passenger-km for a ride-sharing scheme in an urban area in Germany.Stephens et al., however, do not differentiate between private and commercially offered vehicles and Friedrich and Hartl focus their cost analysis on the ride-sharing service only.Costs of US$0.30 per passenger-mile are also estimated by Johnson and Walker in a less rigorous, but more detailed approach.With a less detailed approach for the Netherlands, Hazan et al. estimate that fully-autonomous vehicles in a ride-sharing scheme can be operated at costs as low as 0.09 € per passenger-km - i.e. at lower cost than rail services.1,Overhead costs of shared services were neglected in all cases, which is a major limitation given the new service market in the transport sector; for example, Lyft or Uber, which - in their definitions of their services - provide only the overhead of shared transport services, but no actual transport service.As outlined above, earlier approaches to determining the cost structures of operational models for AVs were incomplete for both the diversity of possible operational models and cost components.This research addresses this gap by conducting a comprehensive, bottom-up calculation of the respective cost structures of fully autonomous) vehicles for various operational models, such as dynamic ride-sharing, taxi, shared vehicle fleets or line-based mass transit.The chosen methodology allows determination of different cost components’ importance and differentiation of vehicle automation effect on individual cost components.This research focuses on passenger transportation.Freight transport, where AVs will undoubtedly also cause major disruptions, cannot be investigated in this paper.The remainder of this paper is structured as follows: Section 2 presents a bottom-up determination of operating cost for a variety of vehicle systems in various situations, while Section 3 studies their respective utilization for different use-cases.Based on this, the cost structures of different operational models are calculated.The results, including a robustness analysis against different assumptions of key variables, as well as the impact of autonomous vehicle technology on future transport systems are presented in Section 5.Section 6 then goes one step further by assuming a future with autonomous-electric vehicles and studying the prospects of different modes under these circumstances.Finally, in Sections 7 and 8, insights gained through this research are discussed and suggestions for further research are given.This research covers three generic operational models:line-based mass transit,taxi,In this context, line-based mass transit uses full-size buses or trains running along predefined lines on a fixed schedule.Taxi represents a taxi or ride-hailing scheme, as it is known today, where transport may be offered as individual service providing private ride, or pooled services in which multiple travelers may be bundled into one vehicle.2,Private cars are owned by private persons and are solely used by themselves, or their family and friends.As detailed below, for the generic operational models taxi and private cars, different vehicle types were considered.Although many further variations of operational models can be hypothesized, their cost structures are assumed to be 
close to one of those three generic models.Various indicators can be used to represent the competitiveness of a service.The most important dimensions are:cost of production vs. prices,vehicle kilometers vs. passenger kilometers,full cost of a trip vs. direct cost only.While the cost of production is relevant for fleet operators to meet demand most efficiently, prices can be assumed to be a key attribute of customer mode choice.The two indicators can be converted into each other by considering taxes, payment fees and profit margins.Similarly, vehicle kilometers can be planned by the operator, whereas passenger kilometers take demand reaction to the service into account.Finally, the direct cost of a trip is the operating cost for a ride from point B to point C, while the full cost of a trip also includes a possible empty access trip from point A to B."While the first measure determines the customer's willingness to pay, an operator must cover the full cost of the trip. "Pursuing a bottom-up approach; in this research, first, individual cost components are determined for different vehicle types, then operating costs are determined from an operator's perspective and after that, travel behavior impact is estimated using prices.The respective cost components are obtained in two steps: First, based on manufacturer data and additional sources, fixed and variable vehicle costs are determined for the case of private ownership and use of the vehicle.In a second step, variations are introduced into the calculation to cover the case of commercial ownership and shared use of vehicles."Then, using a separate approach, the cost components of today's line-based mass transit are established.Eventually, the effects of vehicle electrification and automation are estimated for individual cost components, allowing calculation of overall operating costs and required minimum charges.Fixed vehicle costs depend substantially on the vehicle type.In this research, four general vehicle categories are considered:Solo: One-seat urban vehicle,Midsize: Standard four-seat all-purpose car,Van: Large eight-seat all-purpose car,Minibus: Minibus with 20-seats with small trunk."For each of these categories, example vehicle acquisition cost was obtained from the car manufacturer's website for a model with a medium level of optional equipment.It should be noted that the costs mentioned in this and the next section include Swiss VAT of 8%.Depreciation is split into a fixed part, attributable to aging of the vehicle and a variable part from its usage.For the fixed part, it is assumed that the vehicle depreciates one tenth of its acquisition cost every year, independent of mileage.The variable part is explained at the beginning of the next section.It should be noted, however, that the calculations do not reflect costs for private owners who prefer to drive relatively new cars.Furthermore, as it is the purpose of the paper to derive an internal cost calculation, cash flow calculations, such as the repayments of the necessary loans, are not explicitly considered.The interest amount was determined based on an annuity loan with an interest rate of 7.9% and a five-year credit period.Processing fees for the borrower are ignored.Insurance rates were determined using the cheapest fully comprehensive insurance according to the internet comparison service Comparis for vehicles registered in the Canton of Zurich.The rates reflect the cheapest offer for non-business customers with 25 years of driving experience without accidents and 30000 driven 
The rates reflect the cheapest offer for non-business customers with 25 years of driving experience without accidents and 30000 driven kilometers per year. It should be noted that the policy with 15000 km per year is only 10 CHF cheaper. Taxes were obtained using a web tool provided by the Canton of Zurich. For parking costs, the average for private cars in Switzerland was used. For tolls, the price of the Swiss motorway permit sticker was used for solo, midsize and van vehicles. Minibuses are subject to a lump-sum heavy vehicle charge. The resulting monetary values are presented in Table 1.

Variable costs of private vehicles were estimated for the four vehicle types. For depreciation and maintenance cost, the average for the Swiss car fleet was used and scaled by the price of the car. For a midsize car, this results in a fixed depreciation of 3500 CHF per year and an additional 11.67 CHF for every 100 km driven. It is clear that this procedure underestimates value losses in the first years and overestimates them in later years. As this paper compares average cost figures, this nonlinear nature of value losses is ignored. It is further assumed that a private car is cleaned eight times a year, based on a median value between 6 and 10 in Germany. The associated cost was estimated based on the price list of a self-service car-wash facility in Zurich. Concerning tires, it was assumed that two sets of tires and two annual tire changes would allow for 50000 km of driving. Fuel costs were given by urban fuel consumption as reported by the manufacturer at a fuel price of 1.40 CHF per liter. Given that there are no distance-based tolls for passenger cars in Switzerland at this time, a zero toll has been used here. The resulting monetary values are presented in Table 2.

Commercial fleet operators benefit from various discounts on fixed and variable vehicle costs due to scale effects. By reviewing their platform transactions, Blens assessed the average discount granted to commercial customers for thirty popular company cars. The discounts range between 8.5% and 30.5%, with a median of 21%. As the number of vehicles bought by fleet operators should be substantially larger than for an average company, a general discount of 30% on the vehicle price is assumed. Due to more intense use, commercially used vehicles are further assumed to be written off over 300000 km, rather than over ten years. Moreover, it is assumed that insurance rates for fleet operators are 20% lower, reflecting discounts typically available for group insurances.
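As a hedged illustration of how these variable cost components add up, the R sketch below computes a per-kilometer figure; the cleaning, tire and fuel-consumption values are placeholders chosen for the example, while only the 11.67 CHF per 100 km depreciation increment and the 1.40 CHF fuel price are taken from the text.

```r
# Minimal sketch: variable cost per kilometer of a privately owned midsize car.
# Cleaning, tire and consumption values are assumptions for illustration.
variable_cost_per_km <- function(km_per_year,
                                 var_depreciation_per_100km = 11.67,  # CHF, from the text
                                 cleanings_per_year = 8,
                                 cost_per_cleaning = 15,              # assumed CHF per self-service wash
                                 tire_cost_per_50000km = 1200,        # assumed CHF for two sets + changes
                                 fuel_l_per_100km = 6.5,              # assumed urban consumption
                                 fuel_price = 1.40) {                 # CHF per liter, from the text
  depreciation <- var_depreciation_per_100km / 100
  cleaning     <- cleanings_per_year * cost_per_cleaning / km_per_year
  tires        <- tire_cost_per_50000km / 50000
  fuel         <- fuel_l_per_100km / 100 * fuel_price
  depreciation + cleaning + tires + fuel                              # CHF per km, tolls assumed zero
}

variable_cost_per_km(km_per_year = 15000)
```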
Given that the German car rental company Sixt SE issues bonds at 1.125% p.a. with six years' duration, the corporate interest rate is set at 1.5%; the credit period is assumed to be three years. In addition, maintenance and tire costs are assumed to be 25% lower due to better conditions for bulk buyers. For fuel costs, a 5% reduction is assumed based on typical group discounts. Based on Schlesiger, parking costs for fleet operators are assumed to be 133% higher than for private drivers. In addition, VAT is deducted where appropriate, as costs in the previous section are based on gross prices of products and services. It is assumed that customers pay less attention to third parties' property, leading to substantially higher cleaning costs. Based on the experience of a car-sharing operator, it was assumed that vehicles need to be cleaned after every 40th trip. If a car is not automated, the costs per driver hour are estimated at 35 CHF, based on the average yearly salary of Swiss taxi drivers and the calculation tool of Braendle, which helps determine labor costs for a company based on gross income. Further cost components include overhead costs and vehicle operations costs per vehicle and day for commercially operated on-demand services. These figures are assumed to depend on the fleet size and composition. An analysis based on US data is presented in Appendix A. It suggests that, for a case in Switzerland, approximately 14 CHF per vehicle-day can be assumed for overhead and 10 CHF per vehicle-day for operations costs. In addition to operating costs, user prices take into account a profit margin of 3%, the Swiss VAT of 8%, and a payment transaction fee of 0.44%. The profit margin estimation is based on the study of SCI Verkehr, which reports that the median profit margin for German logistics companies is between 2% and 4%.

For a fair comparison of different modes, the full production costs of today's public transportation services were estimated before direct subsidies. Given the large fleet sizes of most public transportation operators, it was assumed that the administrative overhead share in the full costs is independent of fleet size. Therefore, overhead costs are assumed to be already incorporated in the full production cost per kilometer and are not treated separately. Operating costs for passenger rail are based on the annual report of the SBB for the year 2015. The cost per train kilometer is stated as 31.40 CHF/km. This number should serve as a ballpark estimate, given that it only reflects the average across various train types and routes. Although the figure includes track fees, it should be highlighted that track fees in Switzerland are subsidized. Among local public transportation providers, tight competition hinders transparency in reported business results.
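The conversion from operator cost to user price described above can be summarized in a small R helper; applying the three surcharges multiplicatively is an assumption for illustration.

```r
# Minimal sketch: user price per passenger-kilometer from operator cost,
# applying the 3% profit margin, 8% Swiss VAT and 0.44% payment transaction fee.
price_per_pkm <- function(cost_per_pkm, margin = 0.03, vat = 0.08, payment_fee = 0.0044) {
  cost_per_pkm * (1 + margin) * (1 + vat) * (1 + payment_fee)
}

price_per_pkm(0.41)  # illustrative operating cost in CHF per passenger-kilometer
```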
To attempt as accurate an estimate as possible, the framework introduced by Frank et al. was adapted using Swiss salaries; this was then applied to an urban and a regional transit provider. The results were cross-validated with data on average kilometer costs of 98 urban and 787 regional public transport lines across Switzerland from 2012. It should be noted that trams are not part of the current analysis. For bus and rail transportation, no scale effects are assumed because such effects are already incorporated in public transport providers' reported cost structures.

In this research, two technological advances are expected to have a substantial impact on the cost structures of vehicles and transport services: electric propulsion technology and vehicle automation. The battery is one of the main cost drivers of electric vehicles. As multiple batteries may be needed during the lifetime of a vehicle, it was decided to add the depreciation of the battery to the maintenance costs. It is thus assumed that the purchase price of an electric vehicle is similar to that of its conventional counterpart. Saxton analyzed the battery capacity of used Tesla Roadsters in relation to their age, mileage and climate conditions. He found that only mileage showed a significant correlation and that most Tesla Roadsters retain a battery capacity of 80–85% after 160000 km. As fleet vehicles are written off over 300000 km, it is assumed that a battery needs to be replaced every 150000 km. Furthermore, a McKinsey analysis concludes that the average production cost of an electric car battery is 227 US$ per kWh. Including a profit margin of 3% and taking into account the Swiss VAT of 8%, this amounts to 252 CHF/kWh for the customer. Taking the Volkswagen e-Golf with a battery of 24.2 kWh as a reference, additional maintenance costs of 0.04 CHF/km are calculated. However, Diez reports that the remaining maintenance costs for electric vehicles are 35% lower than for conventional vehicles. In total, maintenance costs therefore increase by 28%. Given that maintenance costs are adjusted to the different vehicle types, it is assumed that this increase covers the different battery capacities. Nevertheless, it needs to be emphasized that prices for batteries are likely to decline in the future. In fact, Nykvist and Nilsson even highlight that past predictions about today's costs of battery packs have been too pessimistic. Furthermore, according to an internet comparison service, insurance fees are 35% lower for the e-Golf than for a comparable gasoline Golf. This ratio has thus been assumed for all vehicle categories. In addition, electric vehicles are exempted from road tax in the Canton of Zurich. Fuel costs are 50% lower, based on a comparison of the Golf and e-Golf energy consumption at current fuel and electricity prices.

Impacts of AV technology are less clear. It was assumed that the necessary technology would increase the vehicle price by an average of 20%, leading to higher acquisition cost, interest cost and depreciation. Due to more balanced driving, it is further assumed that automation lowers fuel costs by 10%. Due to more considerate automatic driving, it is expected that autonomous vehicles will need less maintenance for traditional car components. However, since it can be expected that the new sensors themselves will need periodic maintenance, we do not assume different figures for total maintenance costs. Based on earlier research, it was assumed that safer driving would lower insurance rates by 50%.
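A compact way to express these adjustments is to apply them as multiplicative factors to a vector of per-kilometer cost components, as in the R sketch below; the component names and baseline values are assumptions for illustration, while the factors correspond to the percentages given above.

```r
# Minimal sketch: apply the electrification and automation adjustments described
# above to a named vector of per-kilometer cost components (baseline values assumed).
adjust_components <- function(components, electric = TRUE, automated = TRUE) {
  if (electric) {
    components["maintenance"] <- components["maintenance"] * 1.28  # net +28% incl. battery depreciation
    components["insurance"]   <- components["insurance"]   * 0.65  # -35% insurance
    components["fuel"]        <- components["fuel"]        * 0.50  # -50% energy cost
  }
  if (automated) {
    components["capital"]   <- components["capital"]   * 1.20      # +20% purchase price
    components["fuel"]      <- components["fuel"]      * 0.90      # -10% through smoother driving
    components["insurance"] <- components["insurance"] * 0.50      # -50% insurance
  }
  components
}

base <- c(capital = 0.20, maintenance = 0.08, insurance = 0.05, fuel = 0.09)  # assumed CHF/km
adjust_components(base)
```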
This is regarded as conservative, as today's Tesla Autopilot is reported to have already decreased accident rates by 40%. The authors acknowledge, however, that this estimate is highly uncertain, given the profound changes ahead for the insurance industry, which are beyond the scope of this research.

To estimate the effect of vehicle automation on trains, the number of full-time equivalents of active train drivers in passenger rail was multiplied by the average personnel cost of train drivers, including employer outlay. The resulting total salary sum was then divided by the total annual operating cost of SBB passenger rail in 2015. Assuming train drivers will no longer be needed, this corresponds to a 4.7% decrease in cost per kilometer. Given that railway lines in Switzerland already operate on electrified tracks, no further impact through electric propulsion is assumed. To estimate the impact of electric propulsion and automated vehicle systems on bus operations, the framework by Frank et al. was used. Following the assumptions on private and taxi operations, it was assumed that electric propulsion halves the fuel cost. Based on information from the Swiss Federal Office for Transport, a bus driver's salary amounts to 55% of the total cost. As with trains, it is assumed that costs decrease by this share through automation. Automation technology and electric propulsion are not expected to have substantial impacts on the fixed and variable cost of public bus and train services, because automation technology is already pre-installed or would not represent a substantial increase in the purchase price of a vehicle. Moreover, it is assumed that systems will continue to be operated in the same manner as today, so that the impact on administration costs will be minimal.

Based on the cost factors provided above, comparable costs and prices of vehicles and transport services are calculated. The primary results are cost per vehicle-kilometer, cost per seat-kilometer, cost per passenger-kilometer, and price per passenger-kilometer. Cost thus represents the production costs for the fleet operator, while price also includes a profit margin for the provider, the VAT, and a payment transaction fee. The cost-calculation framework allows for specification of the vehicle fleet, the expected usage of this fleet and the operational model, including expected revenue.
- Vehicle fleet: the fleet is specified by the type of vehicles, the number of those vehicles and their features.
- Expected usage: usage is differentiated between peak, off-peak and night usage. For all three cases, the average number of operating hours, relative active time, average occupancy, average speed, average passenger trip length, relative empty rides, relative maintenance rides and relative maintenance hours are required. Expected usage is further specified in Section 3.
- Operational model: besides private vehicle ownership, two forms of public transport are differentiated here: dynamic shared fleet operation and line-based mass transit.
The expected usage parameters result in an average number of kilometers travelled per day, plus the number of passengers, including passenger kilometers and passenger hours per day. The cost calculation framework is implemented in the programming language R with an input interface in Microsoft Excel. The cost calculation software framework is available from the authors on request. Fellow researchers are encouraged to reproduce the cost estimates presented in this paper, but also to estimate costs for different situations, use cases and other possible future transport services with AVs.
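The text above names the framework's primary outputs; a minimal R sketch of how such outputs could be derived from cost and utilization parameters is given below. The function name, argument set and example values are assumptions and do not reproduce the actual framework.

```r
# Minimal sketch of how the framework's primary outputs could be derived from
# cost and utilization parameters (all names and values are illustrative).
service_costs <- function(fixed_cost_per_day, variable_cost_per_km,
                          km_per_day, seats, occupancy, empty_share,
                          margin = 0.03, vat = 0.08, payment_fee = 0.0044) {
  cost_per_vkm <- variable_cost_per_km + fixed_cost_per_day / km_per_day
  cost_per_skm <- cost_per_vkm / seats
  # occupancy refers to revenue kilometers; empty rides carry no passengers
  cost_per_pkm  <- cost_per_vkm / (seats * occupancy * (1 - empty_share))
  price_per_pkm <- cost_per_pkm * (1 + margin) * (1 + vat) * (1 + payment_fee)
  c(vkm = cost_per_vkm, skm = cost_per_skm, pkm = cost_per_pkm, price = price_per_pkm)
}

# Example: hypothetical automated midsize taxi
service_costs(fixed_cost_per_day = 40, variable_cost_per_km = 0.15,
              km_per_day = 250, seats = 4, occupancy = 0.35, empty_share = 0.15)
```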
Average cost per user depends not only on vehicle characteristics, but also on service efficiency, i.e. vehicle utilization, empty travel and overhead. To approximate these values, this section first presents the utilization cases differentiated in this paper and then describes their average usage as assumed for the Zurich, Switzerland region. A summary of the complete parameter set can be found in Appendix C.

In this paper, three spatial and three temporal cases were differentiated for private and shared services, including taxis. The three spatial cases are:
- Urban: trips starting and ending in an urban area and shorter than 10 km.
- Regional: trips starting and ending outside of an urban area and shorter than 50 km.
- Overall: any trips shorter than 200 km, independent of the area.
For each of these spatial cases, the following three temporal cases were defined:
- Peak: trips beginning or ending between 7am and 8am or between 5pm and 6pm.
- Off-peak: non-peak trips, which begin or end between 8am and 5pm.
- Night: neither peak nor off-peak trips.
For each of the above use cases, the following parameters were assumed for the Zurich, Switzerland region. Average operation hours denote the time a taxi or a service is available. For private vehicles in Switzerland, the average usage time was determined for each of the above cases based on the Swiss transportation microcensus (Swiss Federal Office for Spatial Development, 2012). This suggests 0.32 h during peak, 0.7 h during off-peak, and 0.3 h during the night. Conventional mobility services were assumed to stay online for 20 h per day. The live time would span the peak hours, the off-peak hours and a part of the night. To account for maintenance issues, 5% of the live time was subtracted during peak and off-peak times and 20% during the night. In contrast, professional AV services would be operated throughout the day, except for maintenance. Given the limited information available about public transportation, average daily values were determined without a temporal differentiation. On average, 19 h of operation were assumed for bus and rail services to account for the official public transport night break from 00:30 to 05:30 in the Zurich, Switzerland region. This probably overestimates vehicle operation hours, as each individual vehicle is not operated for the full 19 h. It is thus a conservative estimate for public transportation. Relative active time was estimated at 85%, based on the official schedule. Average occupancy and speed of public transport vehicles were derived from the Bundesamt für Verkehr; for urban buses, an average occupancy of 22.42% and a speed of 21.31 km/h were reported, for regional buses 12.6% and 20.89 km/h, and for regional trains a speed of 37.82 km/h. Average regional train occupancy was 22.7% according to the SBB annual report. Empty rides and maintenance rides were assumed to be already accounted for in the average occupancies and were thus set to 0%. A full table presenting the various cases and corresponding values is included in Appendix C. The usage values presented here are averages based on the current market situation and price structure. If lower costs were assumed, these values would probably be too high for occupancies and too low for trip lengths. Combining the cost structures with the utilization parameters allows the calculation of average cost values for each use case.
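For illustration, the public transport parameters listed above can be collected in a small table; the data structure below is an assumption, while the numerical values are those quoted in this section.

```r
# Minimal sketch: a compact usage-parameter table for the public transport cases
# described above (values from this section; the table structure itself is assumed).
usage <- data.frame(
  mode         = c("city_bus", "regional_bus", "regional_train"),
  operation_h  = c(19, 19, 19),         # hours of operation per day
  active_share = c(0.85, 0.85, 0.85),   # relative active time
  occupancy    = c(0.2242, 0.126, 0.227),
  speed_kmh    = c(21.31, 20.89, 37.82)
)

# Vehicle kilometers per day implied by these parameters
usage$vkm_per_day <- usage$operation_h * usage$active_share * usage$speed_kmh
usage
```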
In a first step, the results were validated against data on current transportation services. An average private car in Switzerland costs 0.71 CHF per kilometer. Our own cost calculations resulted in a cost of 0.69 CHF per kilometer for a conventional private midsize car with average usage as described above. Considering also that the shares of the different cost components show only minimal differences, the framework is considered to correctly calculate cost structures for private vehicles. In Zurich, taxi prices are regulated, with a base price of 8.00 CHF plus 5.00 CHF per kilometer. UberPop fares consist of a base price of 3.00 CHF plus 0.30 CHF per minute and 1.35 CHF per kilometer. For a conventional midsize car used as a taxi in an urban setting, the framework returns costs of 3.38 CHF per kilometer, which can be assumed to be in the correct range, given the taxi and UberPop charges cited above. The calculation of cost values for city and regional buses was calibrated to the reported full cost per vehicle kilometer. The original reference values are 7.14 CHF per kilometer for city buses and 6.70 CHF per kilometer for regional buses. The cost of rail services was derived from SBB AG as 31.40 CHF per kilometer for trains. In both cases, given the lack of an additional, independent source of information, no actual validation is possible.
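For context, the regulated taxi tariff and the UberPop fare formula quoted above can be evaluated for a sample trip with a short R sketch; the 5 km trip length and 20 km/h average speed are assumptions made for this example.

```r
# Minimal sketch: fare per kilometer implied by the regulated Zurich taxi tariff
# and the UberPop fare formula for a sample urban trip (length and speed assumed).
fare_per_km <- function(distance_km, speed_kmh, base, per_km, per_min = 0) {
  minutes <- distance_km / speed_kmh * 60
  (base + per_km * distance_km + per_min * minutes) / distance_km
}

# 5 km urban trip at an assumed average speed of 20 km/h
fare_per_km(5, 20, base = 8.00, per_km = 5.00)                  # regulated taxi tariff
fare_per_km(5, 20, base = 3.00, per_km = 1.35, per_min = 0.30)  # UberPop-style fare
```

Under these assumptions, the framework's 3.38 CHF per kilometer lies between the two fare levels, which is consistent with the plausibility argument made above.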
After validation in Section 4, the framework presented above was used to estimate the cost of future transport services. Given the large number of operational model combinations, vehicle types, and spatial classifications, only a selection is presented below, representing the use cases offering the most important insights. A more complete list with other combinations can be found in Appendix D.

In a first step, the framework was used to analyze the impact of vehicle automation on the cost structures of various mobility services. To better understand the impact of autonomous vehicle technology, the cost structures of different operational models were compared. First, conventional private vehicles and conventional taxi fleet vehicles without pooling were studied. Fig. 1 presents the cost components for the case of conventional midsize vehicles in an urban setting with and without vehicle automation. As shown in Fig. 1, there is a general difference between the two operational models. While the operating cost of a private vehicle is mostly determined by its fixed, car-related costs, the operating cost of current taxi fleets is mostly determined by the driver's salary. Administration costs, as well as car-related costs, play a minor role. Vehicle automation drastically changes the cost structures of the different services. In general, three effects are at work: on the one hand, autonomous vehicle technologies raise the vehicle purchase price, but on the other hand, they reduce operating cost through lower insurance fees, maintenance and fuel costs. In addition, they allow taxi fleets to operate without drivers, thus cutting their main cost component. It is interesting to note that for privately used cars, the first two effects cancel out, so that running costs remain largely unchanged. The third effect does not apply to private vehicles, because there is no direct monetary gain through their automation. Due to substantially higher utilization, reductions in variable and labor costs more than outweigh the increases in fixed costs for ride-hailing or taxi vehicles. In particular, driverless technology is the key factor in substantially lower production costs for such fleets. It is assumed that the operating costs of a ride-hailing or taxi vehicle would plummet from 2.73 CHF/km to 0.41 CHF/km. Given the absence of a driver and fellow passengers, however, customers of such taxi services are expected to show more irresponsible behavior in the vehicle, resulting in faster soiling of the vehicle. To estimate this effect, the minimum cleaning efforts of current car-sharing schemes were used. As shown in Fig. 1, even these minimum assumptions result in substantial cleaning efforts, which would rapidly account for almost one third of an automated taxi's operating costs. Combined with an estimated share of 20% due to overhead cost, this means that more than half of autonomous vehicle fleets' operating costs will be service and management costs. Hence, by optimizing their operations processes, providers may realize substantial efficiency gains, allowing them to increase their respective market share. However, even without such additional measures, autonomous vehicle technologies would allow fleets to operate at lower costs than private vehicles.
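A rough back-of-the-envelope check of the cleaning share can be done in R: with a cleaning after every 40th trip, an assumed 25 CHF per cleaning and an assumed 5 km average urban trip, the cleaning component alone is on the order of 0.12 CHF/km, broadly consistent with the roughly one-third share of the 0.41 CHF/km figure reported above.

```r
# Minimal sketch: per-kilometer cleaning cost of an automated taxi.
# Cost per cleaning and average trip length are assumptions; the cleaning
# interval of 40 trips is taken from the assumptions stated earlier.
cleaning_cost_per_km <- function(cost_per_cleaning = 25, trips_per_cleaning = 40, avg_trip_km = 5) {
  cost_per_cleaning / (trips_per_cleaning * avg_trip_km)
}

cleaning_cost_per_km()  # roughly 0.12 CHF/km under these assumptions
```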
As outlined above, the effect of vehicle automation on the cost structure of future transport services is substantial. Fig. 2 shows the effect of automation on today's main operational modes for the urban and the regional case. Given the dominant role of midsize cars in today's transport system, private and taxi operational models are assumed to be conducted using midsize vehicles. Up to this point, no electrification of vehicles is considered, in order to isolate the effect of automation. Fig. 2 shows that, without automation, the private car has the lowest operating cost per passenger kilometer. Because of the paid driver, taxi services are substantially more expensive. In the current transportation system, they are used for convenience or in situations without alternatives, not because of their cost competitiveness. Non-automated urban buses and regional rail lines operate at similar costs per passenger kilometer as private cars. The picture changes substantially with the automation of vehicles. While the cost of private cars and rail services changes only marginally, autonomous driving technology allows taxi services and buses to be operated at substantially lower cost, even more cheaply than private cars. In an urban setting, taxis become cheaper than conventional buses, yet they remain more expensive than automated buses. The absolute cost difference between buses and taxis, however, is reduced substantially through automation, from 2.20 CHF/pkm to 0.17 CHF/pkm for individual taxis. Even in relative terms, automated taxis will be only 71% more expensive for individual use and 21% more expensive for pooled use than automated buses. In regional settings, defined as suburban and exurban trips, automated taxis and buses become cheaper than private vehicles and rail services. Here, pooled taxis are the cheapest mode, followed by individual taxis. In a regional setting, based on operating cost, automated buses and trains no longer seem to be competitive.

Calculation of the presented cost structures relies on a number of assumptions which, given the limited state of knowledge today, have different degrees of certainty. To analyze how robust the results are to changes in the assumed cost components, three of the variables with the highest uncertainty were varied in the calculation. The selected variables are: vehicle sticker price, overhead costs for fleet operators and relative active time of the vehicles. To reduce the complexity of the problem, changes in each of the three variables were analyzed uni-dimensionally only. The resulting elasticities for a 10% increase in each of the three variables are presented in Table 4. They are in line with Fig. 1 in that salaries are the key cost driver for conventional services, whereas for autonomous vehicles, hardware is also important. As shown in the table, changes in the sticker price of a vehicle have a substantial impact only for privately used vehicles. For fleet vehicles, in contrast, neither the vehicle price nor changes in the overhead cost show a substantial impact. Increases in the relative active time of a vehicle show a substantial effect on the operating cost. Although the actual effect of changes in the relative active time is non-linear, because fixed costs are distributed over the relative active time, the linear approximation can be assumed valid between values of 30% and 90% active time. It is therefore argued that, for this range, combined changes in the three variables can be approximated by adding up the individual effects. When comparing the different modes, the results show that autonomous buses and solo vehicles benefit most from increased active times and reduced overhead costs, whereas in the conventional scheme, all systems benefit comparably. Yet, when compared to Fig. 2, substantial disruptions in the three variables would be required to actually change the ratios of the costs of the different vehicle types. It can thus be assumed that the cost structures presented above also robustly represent the different vehicle types' relative order.
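The uni-dimensional sensitivity analysis can be mimicked with a finite-difference elasticity on the illustrative service_costs() helper sketched earlier; the variable chosen and the baseline values are again assumptions, not the inputs behind Table 4.

```r
# Minimal sketch: finite-difference elasticity of cost per passenger-kilometer
# with respect to one input, reusing the illustrative service_costs() helper.
elasticity <- function(cost_fn, base_args, var, delta = 0.10) {
  perturbed <- base_args
  perturbed[[var]] <- perturbed[[var]] * (1 + delta)
  c0 <- do.call(cost_fn, base_args)["pkm"]
  c1 <- do.call(cost_fn, perturbed)["pkm"]
  as.numeric((c1 - c0) / c0) / delta
}

base <- list(fixed_cost_per_day = 40, variable_cost_per_km = 0.15,
             km_per_day = 250, seats = 4, occupancy = 0.35, empty_share = 0.15)
elasticity(service_costs, base, "fixed_cost_per_day")  # sensitivity to daily fixed/overhead cost
```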
Based on the cost structures described above, suitable market niches are studied for different operational models with automated vehicles. Besides private vehicle ownership, midsize taxis and line-based public transport, a fleet of shared solo vehicles is considered. Taking into account ongoing developments, it is assumed that, by the time autonomous driving technology is introduced, all vehicles will be equipped with electric propulsion. Here, as opposed to the previous two sections (Section 4 and Section 5), the perspective changes from supplier to user, represented by a change from cost analysis to price analysis. The price consists of the operator cost plus the expected profit margin, VAT and fees. Obviously, this does not apply to private vehicles, as there the user is also the operator. Fig. 3 summarizes the prices per passenger kilometer for the above modes in an urban and a regional setting with average usages as described in Section 3. It indicates the future competitive situation. The cost of private vehicles is differentiated between fixed and variable cost, as private users often consider only the immediate out-of-pocket cost in their short-term mode choice. In both settings, it can be observed that a service with shared aSolo vehicles is not substantially cheaper than aTaxis, because in Switzerland today, the average occupancy of a midsize vehicle ranges between 1.34 and 1.46 persons per ride. This is enough to make aTaxis competitive with shared aSolos per passenger-kilometer. It can also be observed that, while city buses remain substantially cheaper than aTaxis, aTaxis can provide services for around 80% of the regional bus service price. In summary, aTaxis are very competitive, especially in regional settings, if the full cost of private vehicles is considered. Together with aSolos, they are the cheapest mode in a regional setting, ranking among the cheapest individual service options in an urban setting. Quite like today, private vehicles represent the cheapest option if only immediate out-of-pocket costs are considered. In particular, in an urban setting, they incur about two thirds of what has to be charged for line-based city-bus services and about 40% of the price of autonomous taxis. It should be mentioned here that the above refers to relative differences in prices. Even if these differences can be substantial, in absolute terms they are still small, which makes the value of travel time savings a substantially more important factor in mode choice.

Having shown that fleet vehicles can be offered at competitive prices, in a next step, different vehicle sizes are compared. To that end, minimum prices per passenger kilometer were calculated for different vehicle types and demand levels. In this context, demand levels are defined as passengers per vehicle per main load direction. It is further assumed that the vehicles operate as part of larger fleets. Therefore, no additional scale effects arise when adding more vehicles to a specific route, and the operational model from above is used.
The results are presented in Fig. 4. It becomes immediately clear that electric propulsion and self-driving technologies allow a substantial decrease in prices for all modes. This decrease, however, does not affect all modes in the same way, as shown in Section 5. In fact, the most substantial gains are achieved for shared midsize vehicles, for which the price per passenger kilometer falls by 78% to 0.24 CHF/pkm at full load. This way, the price gap between a city bus and a midsize taxi at full load decreases from 0.95 CHF/pkm to 0.18 CHF/pkm. Interestingly, autonomous-electric vans and minibuses are not substantially less expensive than midsize vehicles when operated at full load. The detailed cost values can predict the viability of different autonomous vehicle operational models at different levels of demand. For example, the data shows that autonomous solo vehicles are the cheapest mode only for low-demand origin-destination relations. With an average occupancy of two on a given route, a midsize car would already be more efficient. On the high-demand side of the spectrum, for demand levels of more than fifteen simultaneous passengers per main load direction, a city bus is more economical than a midsize car. The threshold between a minibus and a city bus is at 21 passengers. The average occupancy of a city bus today is 22.42 passengers.
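The demand thresholds discussed above can be explored with a small R sketch that evaluates price per passenger-kilometer as a function of simultaneous passengers per main load direction; the per-vehicle-kilometer cost figures below are illustrative assumptions, not the values behind Fig. 4, so the resulting break-even point only approximates the reported thresholds.

```r
# Minimal sketch: price per passenger-kilometer versus demand level, used to locate
# break-even demand between vehicle sizes. Once demand exceeds a vehicle's seats,
# it is assumed that additional vehicles run at full load.
price_curve <- function(cost_per_vkm, seats, passengers,
                        margin = 0.03, vat = 0.08, fee = 0.0044) {
  load <- pmin(passengers, seats)
  cost_per_vkm / load * (1 + margin) * (1 + vat) * (1 + fee)
}

demand   <- 1:25
midsize  <- price_curve(cost_per_vkm = 0.45, seats = 4,  passengers = demand)  # assumed aTaxi cost
city_bus <- price_curve(cost_per_vkm = 1.70, seats = 60, passengers = demand)  # assumed aBus cost
demand[which(city_bus < midsize)[1]]  # first demand level at which the bus is cheaper
```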
The results of this research help to understand the roles that the various modes may play in a future transport system, encouraging future research on AVs and the study of possible implementations in a more efficient way. In particular, this research shows that private cars still represent an attractive option in the era of autonomous vehicles, as out-of-pocket costs for the user are lower than for most other modes. This is in the range of the $0.15/mile Burns et al. found for shared AVs and the $0.16/mile Johnson found for purpose-built pooled aTaxis, but those studies neglected important cost factors, for example cleaning. Compared to the assumption by Fagnant and Kockelman of a $1.00/mile price for shared AVs, even the full cost of private vehicle ownership might be competitive. In fact, buying an autonomous vehicle could be regarded as an investment in a private mobility robot, which can be used both for chauffeur services and for errands; such a purchase would therefore be even more attractive than a conventional vehicle. Hence, it can be expected that a substantial number of people will value the private use of a mobility robot and will agree to pay the associated premium. Additionally, traditional car manufacturers are strongly motivated to maintain the current emotional connection many people have to their cars. In conclusion, even costs for shared AVs as low as those estimated by Burns et al. and Johnson might not be low enough to end the reign of the private car.

In contrast to private vehicle ownership, the results of this research suggest that current line-based public transportation will probably be subject to adjustments beyond its automation. Although its operation will become cheaper, pooled taxi schemes and other new forms of public transportation will emerge as new and serious competition. The results of this paper suggest, however, that the situation is not as clear as suggested by, for example, Hazan et al. If the full cost of shared vehicles is considered, combined with the low average occupancy achievable with pooling even in an urban setting, mass transit public transport is still competitive, especially in urban settings and on high-demand relations, and even more so if unprofitable low-demand relations can be served with more flexible services, thus increasing average occupancy. The situation gets even more interesting if one considers that today's form of public transportation in Switzerland receives subsidies for approximately 50% of its operating costs. In principle, with autonomous driving technologies and constant levels of subsidies and demand, operators would be able to offer line-based mass-transit public transportation for free. However, public money, if available, would be much more wisely spent on new forms of public transportation promising not only lower operating costs, but also higher customer value by offering more comfort, faster travel times and fewer transfers. Hence, it is likely that particularly rural or tangential relations will be served by new forms of AV-based public transportation.

Shared solo vehicles are seen by some researchers as an ideal first-and-last-mile complement to mass transit public transportation. Offering direct point-to-point service for single travelers, they promise short access and travel times. However, they may not be designed to carry baggage and are not much less expensive to operate than shared midsize vehicles. Policies aside, it is therefore more likely that fleet operators will opt for a homogeneous midsize or van fleet to serve individual travelers as well as smaller groups. One vehicle size functions well, particularly because, assuming acceptably low waiting times, few relations see a demand as low as only one traveler per time interval.

Fleets of aTaxis are another mode often proposed as the new 'jack of all trades' in transport: offered at such low prices that they will replace every known mode. As this study shows, however, this picture changes if full costs, including overhead, parking, maintenance and cleaning, are considered. Just these factors, neglected in most studies on the topic so far, contribute two thirds of the total cost of aTaxis. It is thus no surprise that the costs of shared services determined here are substantially higher than those in previous work (Johnson; Stephens et al.; Hazan et al.). Because shared AVs are still very economical to operate, and OEMs might also have an interest in shared services, these business models might still have a bright future. Shared fleets of aTaxis also have the advantage that their usage can be controlled and restricted, especially compared to privately owned vehicles. This minimizes the liability risk for OEMs and mobility providers as long as AVs are not yet an established and proven technology. Car-pooling might lower prices if high average occupancies can be achieved.
High occupancies, however, come with more detours, longer individual travel times and more strangers in the same intimate environment of a car. Especially the latter factor might be an obstacle to the success of pooled midsize or van fleets. Although people usually prefer the privacy of their own vehicle to the anonymity of a bus ride, many find sharing a vehicle with strangers burdensome. In this respect, more research is required to better understand customer preferences and to design pooled vehicles accordingly, because, if successful, ride-sharing with AVs will definitely have a number of benefits. A further challenge for shared services will be to find a solution for maintaining vehicle cleanliness. The analysis above revealed that, even with low cleaning frequencies and costs, cleaning is the single largest contribution to the operating cost of autonomous taxi schemes. Any higher level of soiling may even endanger their competitiveness through high costs or low service standards.

The analysis presented in this paper is, to date, the most comprehensive approach to estimating the operating costs of future autonomous transportation services. It includes several aspects previous studies have overlooked or assumed negligible, and it draws a clearer and/or different picture of the future transportation system than earlier works. While this detailed approach reveals new insights, it also comes with various limitations. First, it is likely that vehicle automation will change the demand for certain kinds of infrastructure, like parking, which could also have an effect on assumed prices. It is also possible that the introduction of automated vehicles induces changes in overall mobility behavior, which will require the government to take measures to counterbalance negative effects. The impact of governmental policy measures, external subsidies, company-internal cross-subsidies and special pricing strategies, however, has been ignored in this paper; attention to policies was restricted to those already implemented today. Extended services, such as non-driving personnel in the vehicles or premium vehicles, have been ignored too. Such extras might be part of an improved customer experience, or due to legal requirements. While such measures, policies and extra services are expected to substantially influence the price structure and thus the competitive situation, their cost will probably be passed along to the user. It follows that they would not substantially influence the basic cost structure of the different services for the provider. Given their unpredictable nature, and that they are often designed based on analyses like the one presented in this paper, it was decided to leave these questions open for future research.
Scenario-based analyses of possible measures, e.g. to achieve socially optimal prices or to incorporate externalities in the prices, as well as to investigate different price vs. service combinations, could further be a topic for future work. It should be mentioned that bus and train capacities are likely to be adjusted after automation. While the influence of different occupancy rates has been analyzed in Section 6.2, that analysis does not account for different vehicle purchase prices and increased vehicle management costs for growing fleets. It is anticipated that increased resource scheduling flexibility in a system without drivers will lead to financial savings. As extensive analysis would be required to quantify these savings, these effects would exceed the scope of this paper. While the cost structures of cars, whether private or shared, could be analyzed with a high degree of detail, the overhead costs of shared mobility services are well-kept secrets within the respective companies. Furthermore, the data suggests that these figures vary substantially among companies of similar size, indicating that further factors like internal organisation and the detailed business case play an important role. Accordingly, estimates of the overhead cost of shared services and of the total costs of public transport services must be treated with caution. The same applies to the cost effects of new technologies. Electric cars are already on the market and thus estimates of their cost effects are reliable and well grounded. The effects of automation, however, are uncertain. Admitting this, the approach presented in this paper identifies the different factors resulting in the observed total cost. Then, the effect of autonomy is estimated for each cost factor separately, resulting in more precise and reliable estimates. Where available, these estimates were based on values reported in the literature. The authors are aware, however, that even such detailed estimates' accuracy is questionable. Correspondingly, usage values in this study are based on the current market situation and price structures. If radically lower costs are assumed, these values are likely to change substantially. Once travel behavior impacts and usage patterns of such schemes become clearer, the framework introduced in this research can be used for a scenario-based analysis of their cost structures and will probably yield more accurate results. Until then, the reader should be aware of these limitations when interpreting the results presented in this paper.

This paper presents a detailed cost estimation for current and future transport modes, with special consideration of automated vehicles. It is based on a detailed reconstruction of the cost structures of different transport services as far as data are available, and on best-knowledge estimates otherwise. This analysis goes beyond earlier assessments of future modes' cost structures in both its level of detail and its rigor. The framework was validated and delivers new insights into automation's impacts on different transport services and vehicle categories. For example, it is clear that fleets of shared autonomous vehicles may become cheaper than other modes in relative terms, but in absolute numbers, the difference will be small. Thus, there will still be competition from other modes and even from private car ownership, which may well persist beyond the dawn of autonomous vehicle technologies by offering the luxury and convenience of a personal mobility robot. On the other hand, this research was able to confirm expectations that conventional forms of public transportation may face fierce competition in the new era.
Importantly, this research also revealed that the success of shared AV fleets may well depend on a factor which has previously been ignored: cleaning efforts. According to our findings, developing viable business models for shared AV fleets will entail solutions that require customers to behave appropriately while on board and/or allow vehicles to be cleaned and repaired efficiently and at low cost. Based on an exclusively cost-based approach, this research was able to clarify use cases of future modes of transportation, although many open questions remain and require further research. For example, actual mode choice is determined not only by cost, but also substantially by travel time and comfort, as well as by other factors like the perception of transfers, waiting times, etc., none of which were investigated in this paper. Therefore, the proposed schemes need to be implemented in a field trial or simulation approach to better understand the size of the respective market segments in a realistic environment. Moreover, due to its complexity, long-distance travel could not be covered in this research, but it is known to play an important role in mobility tool choices. Another future research area is the investigation of a re-sized, line-based transit system resulting from the automation of buses. If the driver wage is no longer part of the cost structure, it might be worthwhile to operate buses with smaller capacities and higher frequencies. Not only is demand bundling, where possible, more economical than point-to-point service, there is also a user preference for high-frequency, line-based service over dynamic services. Current passenger statistics of the City of Zurich make a first approximation of the utilization possible. Nonetheless, the cost ramifications of legal requirements, as well as the infrastructure necessary for mass transit, need further consideration that would exceed the scope of this paper.